Isolation of a Novel Reassortant Highly Pathogenic Avian Influenza (H5N2) Virus in Egypt
Highly pathogenic avian influenza (HPAI) H5N1 and H5N8 viruses have been endemic among domestic poultry in Egypt since 2006 and 2016, respectively. In parallel, the low pathogenic avian influenza H9N2 virus has been endemic since 2010. Despite the continuous circulation of these subtypes for several years, no natural reassortant had been detected among the domestic poultry population in Egypt. In this study, an HPAI (H5N2) virus was isolated from a commercial duck farm, providing evidence of the first natural reassortment event in domestic poultry in Egypt. The virus arose through genetic reassortment between avian influenza viruses of the H5N8 and H9N2 subtypes circulating in Egypt. The exchange of the neuraminidase segment and the high number of acquired mutations might be associated with an alteration in the biological propensities of this virus.
Introduction
Highly pathogenic avian influenza (HPAI) H5N1 and low pathogenic avian influenza (LPAI) H9N2 viruses have been endemic among domestic poultry in Egypt since 2006 and 2010, respectively [1]. In late 2016, an HPAI H5N8 virus of clade 2.3.4.4 (group B) was first reported among wild birds in Egypt [2]. Since 2016, HPAI H5N8 viruses have been reported in different geographical regions across the country in both commercial and backyard bird sectors [3,4], where six genotypes have been detected in both wild and domestic birds [3][4][5]. Moreover, Egypt has reported the highest number of human cases of HPAI H5N1 virus worldwide and is one of only three countries (Egypt, China, and Bangladesh) that have reported LPAI H9N2 infections in humans [1,6]. Recently, simultaneous detection of the three subtypes (H5N1, H5N8, and H9N2) has been described on a poultry farm in Egypt [7]. The co-circulation of these three subtypes increases the risk of generating reassortants with unpredictable phenotypic properties, including an increased potential threat to human populations. Despite the co-circulation of HPAI H5N1 and LPAI H9N2 viruses for more than eight years, no natural reassortant between these two subtypes has been detected among the domestic poultry population in Egypt. However, the risk of natural reassortment has increased since the introduction of the HPAI H5N8 virus in 2016 [1]. This study reports and analyses the genetic and phylogenetic features of the first natural reassortant detected in domestic poultry in Egypt.
Samples
On 31 December 2018, twenty oropharyngeal and cloacal swabs were collected from 90-day-old Muscovy ducks on a commercial farm (5000 birds) in Mansoura city, Dakahlia governorate, Egypt. Samples were taken prior to slaughter as part of an active surveillance program conducted by the National Laboratory for Veterinary Quality Control on Poultry Production (NLQP) and the General Organization for Veterinary Services. The ducks had been vaccinated with an H5N1 vaccine at 7 days of age and were apparently healthy, showing no signs of disease. Collected samples were submitted to the NLQP for virus identification and isolation. All experiments in this study were conducted in accordance with the ethically approved protocol (AHRI-18032019) of the Animal Health Research Institute, Giza, Egypt.
RNA Extraction and Molecular Diagnosis
Viral RNA was extracted from the pooled samples using the QIAamp Viral RNA Mini Kit (Qiagen GmbH, Hilden, Germany) according to the manufacturer's instructions. Pooled samples were tested by one-step reverse transcription-quantitative polymerase chain reaction (RT-qPCR) (Qiagen GmbH, Hilden, Germany) targeting the M gene of influenza type A viruses [8] on the Mx3005P qPCR System (Agilent, Santa Clara, CA, USA). Avian influenza virus (AIV)-positive RNA was subtyped for hemagglutinin (HA) and neuraminidase (NA) using specific subtyping RT-qPCRs [9,10].
Virus Isolation
Virus isolation was performed by inoculation into the allantoic cavity of 10-day-old specific pathogen free (SPF) embryonated chicken eggs (ECE) according to the World Organization for Animal Health (OIE) diagnostic manual [11]. Collected allantoic fluid was tested using a hemagglutination assay and specific RT-qPCRs [9,10]. HA-positive allantoic fluids were stored at −80 °C.
Gene Sequencing
The complete genome sequences of the Egyptian virus A/duck/Egypt/VG1099/2018 (EG-VG1099) were amplified by RT-PCR with the SuperScript III One-Step RT-PCR System with Platinum® Taq DNA Polymerase (Invitrogen, Waltham, MA, USA) using primers described previously [12,13]. The gene-specific RT-PCR amplicons were size-separated by agarose gel electrophoresis, excised, and purified from gels using the QIAquick Gel Extraction Kit (Qiagen GmbH, Hilden, Germany). Purified PCR products were used directly in cycle sequencing reactions with the BigDye Terminator v3.1 Cycle Sequencing Kit (Applied Biosystems, Foster City, CA, USA). Reaction products were purified using Centrisep spin columns (ThermoFisher, Waltham, MA, USA) and sequenced on an ABI 3500XL Genetic Analyzer (Life Technologies, Carlsbad, CA, USA). Sequences generated in this study were submitted to the Global Initiative on Sharing All Influenza Data (GISAID) platform under accession numbers EPI1387245-52.
Genetic and Phylogenetic Characterization
The obtained sequences were assembled and edited using Geneious® software, version 11.0.5 [14]. A Basic Local Alignment Search Tool (BLAST) search was performed on the Global Initiative on Sharing All Influenza Data (GISAID) platform, and sequences of representative H5N8, H9N2, and other closely related viruses were retrieved from the GISAID database. Alignment and identity matrix analyses were performed with Multiple Alignment using Fast Fourier Transform (MAFFT) [15]. Phylogenetic analyses used a maximum likelihood approach, with the best-fit substitution models selected under the Bayesian Information Criterion (BIC), in IQ-TREE software version 1.1.3 [16]. Trees were viewed and edited using FigTree v1.4.2 software (http://tree.bio.ed.ac.uk/software/figtree/).
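As a rough illustration of this workflow, the sketch below chains MAFFT and IQ-TREE (v1.x) per gene segment via Python's subprocess module. It assumes both tools are installed and on the PATH, and the file names are placeholders rather than the authors' actual commands; in IQ-TREE 1.x, `-m TEST` performs standard model selection (BIC by default) and `-bb 1000` adds ultrafast bootstrap support.

```python
# Minimal sketch of the alignment + maximum-likelihood tree workflow.
# Assumes MAFFT and IQ-TREE (v1.x) are on the PATH; file names are placeholders.
import subprocess

segments = ["HA", "NA", "PB2", "PB1", "PA", "NP", "M", "NS"]

for seg in segments:
    fasta = f"{seg}_sequences.fasta"      # query + GISAID reference sequences
    aligned = f"{seg}_aligned.fasta"

    # Multiple sequence alignment with MAFFT (--auto picks a suitable strategy)
    with open(aligned, "w") as out:
        subprocess.run(["mafft", "--auto", fasta], stdout=out, check=True)

    # ML tree with IQ-TREE: "-m TEST" selects the best-fit substitution model
    # (BIC by default), "-bb 1000" adds ultrafast bootstrap support values
    subprocess.run(["iqtree", "-s", aligned, "-m", "TEST", "-bb", "1000"],
                   check=True)
    # The resulting .treefile for each segment can then be opened in FigTree
```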
Virus Isolation
Pooled samples were positive for the influenza A H5N2 subtype and negative for H7 and H9, as well as for N1 and N8. The ducks were apparently healthy, with neither disease symptoms nor abnormal mortality. The virus was isolated, confirmed, and named A/duck/Egypt/VG1099/2018 (EG-VG1099).
Genetic and Phylogenetic Characterization
The complete genome sequences of the Egyptian virus EG-VG1099 were obtained and submitted to the Global Initiative on Sharing All Influenza Data (GISAID) platform under accession numbers EPI1387245-52. Nucleotide identity analysis showed that the hemagglutinin (HA) gene segment of EG-VG1099 was closely related to HPAI H5N8 viruses isolated in Europe in 2016-2017, sharing 98% nucleotide identity with A/Eur_Wig/NL-Greonterp/16015653-001/2016; the same 98% identity was recorded with the A/duck/Egypt/F446/2017 (H5N8) virus. The neuraminidase (NA) gene of the EG-VG1099 virus shared 97% nucleotide identity with the Egyptian LPAI H9N2 virus A/chicken/Egypt/D10700/2015 (Table 1). A possible reassortment scenario for the novel Egyptian HPAI H5N2 virus is illustrated in Figure 1C. Phylogenetically, the HA gene of EG-VG1099 clustered with A/duck/Egypt/F446/2017 and revealed close relatedness with H5N8 viruses frequently found in Europe in 2016-2017 (Figure 2). The HA gene of the EG-VG1099 virus possessed multiple basic amino acids, "PLREKRRKR/GLF," at the HA cleavage site, indicating high pathogenicity of this virus. The EG-VG1099 virus exhibited R76S, S98R, A138S, A160V, and R173Q amino acid substitutions (H3 numbering) in its HA protein, distinguishing it from previously reported HPAI H5N8 viruses of clade 2.3.4.4 (group B). Substitutions at positions S98R, A138S, and A160V have previously been reported to be related to virulence and host specificity [17,18]. The NA protein of the emerged HPAI H5N2 virus was distinguished from the Egyptian LPAI H9N2 viruses (2010-2016; no published sequence available thereafter) by 43 nucleotide mutations, twelve of which were non-synonymous, encoding the amino acid substitutions F37L, T43A, V50E, K143E, R199K, M210I, L211I, R283Q, R288I, R344V, N384K, and K415R. Positions 143 and 344 are located in antibody recognition sites of the NA protein [19,20]. However, none of the remaining amino acid substitutions have been described in association with resistance to neuraminidase inhibitors or are located in the hemadsorption site. Further, the Egyptian EG-VG1099 possessed no mutations at known molecular features associated with virulence or host adaptation, such as E627K and D701N in PB2, N375S and L598P in PB1, or V100A in PA [17]. However, the N30D and T215A amino acid substitutions in the M1 protein and P42S in the NS1 protein (previously reported in the Egyptian H5N8 viruses) were observed, suggesting that the virus could exhibit increased virulence in mammals [17].
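To make substitution lists like those above concrete, here is an illustrative Biopython sketch (not the authors' pipeline) that reports amino acid differences between two codon-aligned coding sequences. The FASTA file names are hypothetical, and published positions often follow mature-peptide or H3 numbering, so an offset may be needed relative to this naive index.

```python
# Illustrative sketch: listing amino acid substitutions between two
# codon-aligned CDSs with Biopython. File names are hypothetical placeholders.
from Bio import SeqIO

def substitutions(ref_cds, query_cds):
    """Translate two equal-length, codon-aligned CDSs and report differences
    in the conventional Ref-Position-Query notation (e.g. F37L)."""
    ref_aa, query_aa = ref_cds.translate(), query_cds.translate()
    return [f"{r}{i + 1}{q}"
            for i, (r, q) in enumerate(zip(ref_aa, query_aa)) if r != q]

ref = SeqIO.read("H9N2_NA_reference.fasta", "fasta").seq   # e.g. D10700/2015
query = SeqIO.read("EG-VG1099_NA.fasta", "fasta").seq
print(substitutions(ref, query))  # would include e.g. K143E, R344V, ...
```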
Discussion
Currently, Egypt faces endemic co-circulation of HPAI (H5N1, H5N8) and LPAI (H9N2) viruses, and simultaneous detection of the three subtypes has been reported [7]. The co-circulation of these three subtypes raises concerns about the generation of a new subtype/genotype with unpredictable properties, including an increased potential threat to humans [1]. However, the nucleotide identity of the EG-VG1099 HA gene to other Egyptian HA genes seems low, and it is reasonable to suggest that the new reassortant EG-VG1099 resulted from reassortment between an HPAI H5N8 virus of clade 2.3.4.4 (group B) and an Egyptian LPAI (H9N2) virus of the G1-like lineage (Figure 1C). This was supported by the phylogenetic relatedness of the NA gene segment exclusively with viruses isolated from Egypt; however, the circulation of an additional variant of H5N8 in Egypt cannot be excluded. So far, no natural reassortant between the Egyptian H5N1/H9N2 subtypes has been detected [21], and the H5N2 virus that emerged in this study indicates a higher reassortment compatibility between the Egyptian H5N8 and H9N2 viruses than between the Egyptian H5N1 and H9N2 viruses. However, we cannot exclude the detection of new genotypes with different gene constellations in the upcoming period. This highlights the significance of obtaining whole genome sequences of the circulating HPAI H5N1/H5N2/H5N8 viruses in Egypt.
Further, several amino acid substitutions associated with either virulence or host adaptation were observed in the newly detected Egyptian HPAI H5N2 virus (EG-VG1099). Amino acid substitutions (S98R, A138S, and A160V) in the antibody recognition sites of the HA were observed; these mutations have been described to enhance the binding of AIV H5 to the α2,6 sialic acid receptor and to increase virulence in mammalian animal models [17,18]. Further, the N30D and T215A amino acid substitutions in the M1 protein of the EG-VG1099 virus have been reported to enhance the virulence of the HPAI H5N1 virus in mice [17]. In addition, P42S in the NS1 protein has also been found to increase the virulence of the HPAI H5N1 virus in mice [17]. Interestingly, a high number of non-synonymous nucleotide mutations (n = 12) in the NA was noticed. Only positions 143 and 344 are located in the antibody recognition sites of the NA protein and have been reported in the literature to be related to antigenic drift/escape mutants [19,20]; none of the rest have been described in association with resistance to neuraminidase inhibitors or are located in the hemadsorption site. At this point, it cannot be excluded that the new HPAI H5N2 subtype reported in this study may be associated with an alteration in biological propensities. Hence, future studies in birds (e.g., chickens, ducks) and mammalian models (e.g., ferrets, mice) are recommended to provide experimental data on the fitness and virulence of this virus subtype. In addition, in vivo and in vitro studies comparing the newly detected H5N2 virus to a closely related H5N8 virus are required to determine any alteration in the phenotypic properties between the two viruses.
Moreover, our study underlines the importance of active surveillance in the timely detection of new AIV subtypes. It can be expected that continuing surveillance activities might lead to the detection of additional new cases; hence, surveillance should be strengthened, with special emphasis on the regions surrounding the outbreak. Interventions, such as improved biosecurity practices in poultry production enterprises and in live bird trading and marketing in Egypt, should be considered to reduce the dissemination of AIV and the chance of reassortment arising through co-infection.
"Environmental Science",
"Medicine",
"Biology"
] |
Multifunctional Gas and pH Fluorescent Sensors Based on Cellulose Acetate Electrospun Fibers Decorated with Rhodamine B-Functionalised Core-Shell Ferrous Nanoparticles
Ferrous core-shell nanoparticles consisting of a magnetic γ-Fe2O3 multi-nanoparticle core and an outer silica shell have been synthesized and covalently functionalized with Rhodamine B (RhB) fluorescent molecules (γ-Fe2O3/SiO2/RhB NPs). The resulting γ-Fe2O3/SiO2/RhB NPs were integrated with a renewable and naturally-abundant cellulose derivative (i.e. cellulose acetate, CA) that was processed in the form of electrospun fibers to yield multifunctional fluorescent fibrous nanocomposites. The encapsulation of the nanoparticles within the fibers and the covalent anchoring of the RhB fluorophore onto the nanoparticle surfaces prevented the fluorophore's leakage from the fibrous mat, thus enabling stable fluorescence-based operation of the developed materials. These materials were further evaluated as dual fluorescent sensors (i.e. ammonia gas and pH sensors), demonstrating a consistent response at very high ammonia concentrations (up to 12000 ppm) and a fast, linear response in both alkaline and acidic environments. The superparamagnetic nature of the embedded nanoparticles provides a means of controlling the electrospun fiber morphology via magnetic field-assisted processes, as well as an additional means of electromagnetic manipulation, making possible their use in a wide range of sensing applications.
exhibiting enhanced thermal stability and magnetic properties for potential use in high-temperature magnetic sensing and microwave absorption applications 30 .
The general advantages of combining superparamagnetic Fe3O4 nanoparticles with fibrous materials designed for use in sensing applications include, among others, the inherent ability of magnetic Fe3O4 nanoparticles to act as effective gas sensors 32 and the possibility of fiber alignment via magnetic field-assisted electrospinning 33 . Concerning the latter, it has been demonstrated that fiber alignment enhances the sensing properties compared to randomly oriented analogues 34,35 . Furthermore, the magnetic properties of the produced electrospun mats could provide additional functionality in their manipulation and processing. More precisely, they may enable the magnetic bonding of the fibrous mats on suitable electromagnetic holding platforms, the controllable remote heating of materials for specific sensing applications, or the efficient collection of mats by magnetic means from remote areas in various sensing applications 36 . Additionally, magnetic electrospun mats with sensing capabilities could be employed as overlayers on various optical platforms in integrated optics or optical fibers. Such a thin magnetic overlayer in the proximity of an optical waveguide could operate as a sensing element for magnetic measurands that alter the surrounding refractive index, leading to purely photonic interrogation with simultaneous monitoring of pH conditions. Other applications may also include the measurement of magnetic fields by the induced deflection of fibers or waveguide cantilevers in the presence of magnetic fields, retaining at the same time the chemical sensing capability. Furthermore, in the presence of high-frequency electric fields, the resulting remote heating of overlaid mats could alter the refractive index properties of optical waveguides, thus allowing the optical sensing of fields or temperature in magnetic hyperthermia applications 36 and in cases where pH monitoring would also be necessary.
Herein, ferrous core-shell nanoparticles consisting of a magnetic γ-Fe2O3 multi-nanoparticle core and an outer SiO2 shell have been synthesized and further functionalized with Rhodamine B (RhB) fluorescent molecules (γ-Fe2O3/SiO2/RhB NPs). The latter were covalently bound in the matrix of the silica shell. The resulting γ-Fe2O3/SiO2/RhB NPs were further incorporated within cellulose acetate (CA) electrospun fibers to yield fluorescent multifunctional fibrous nanocomposites. Electrospun fibers with embedded fluorescent moieties designed for use in fluorescence sensing are considered highly advantageous compared to their film analogues due to their larger surface-to-volume ratios. In previous reports on fluorescent-functionalized electrospun polymer fibers, the fluorophores were either covalently attached onto the polymer backbone [37][38][39][40][41] or incorporated as dopants within the fibers [42][43][44][45][46] .
In one such example referring to the doping of polymer nanofibers with RhB, the fluorescent dye was added into a poly(ether sulfone) solution prepared in N,N-dimethylacetamide and the mixture was electrospun to obtain fluorescent, RhB-doped nanofibers that were further evaluated as metal ion (Cu2+) fluorescent sensors in aqueous media 47 . Furthermore, electrospun polymer fibers doped with RhB derivatives have been successfully used as highly efficient turn-on fluorescent sensors for the detection of Hg2+ ions 48,49 .
In the present study, the use of γ-Fe2O3/SiO2/RhB NPs as dopants in electrospun fibers is considered advantageous compared to other fabrication routes reported so far, since the leakage of the RhB fluorophore from the core-shell NPs is prevented by its covalent anchoring onto the nanoparticle surfaces. In contrast, small fluorescent molecules introduced within polymer fibers as dopants are only held onto the polymer chains via weak van der Waals interactions, often resulting in their desorption from the polymer matrix and, consequently, a decrease in the fluorescence efficiency of the fibers. In addition, the covalent anchoring of RhB molecules onto the nanoparticle surfaces and the blending of the γ-Fe2O3/SiO2/RhB NPs with the fibrous CA matrix suppress self-quenching phenomena, whereas by covalently integrating RhB within the silica shell, fluorescence quenching is further prevented by avoiding direct contact with iron oxides.
Besides the above, the use of a renewable, naturally-abundant acetylated cellulose derivative as a polymer matrix exhibiting biocompatibility, biodegradability and environmental friendliness, combined with the magnetic character of the inorganic γ-Fe2O3/SiO2/RhB NP additives enabling magnetic separation under an external magnetic field, constitutes an additional benefit of the multifunctional fibrous nanocomposite fluorescent sensor described in this study.
Two different fabrication protocols were followed for the preparation of CA fibers doped with the γ-Fe2O3/SiO2/RhB NPs. In the first route the nanoparticles were sprayed on top of the fibrous mat, while in the second approach a nanoparticle suspension in the CA polymer solution was directly electrospun to yield the final product. The dual sensing capability (i.e. ammonia gas and pH sensing) of the developed fibrous nanocomposites was investigated for both types of the produced nanocomposites. In both cases, a good response for ammonia gas sensing was shown at high NH3 concentrations. The presented multifunctional electrospun ammonia sensors show high tolerance to poisoning and saturation effects, providing a linear response up to 12000 ppm. Consequently, they could be extremely valuable for the detection of high levels of ammonia (concentrations above 1000 ppm) at industrial facilities employing ammonia transfer lines and dense storage spaces. Moreover, low-cost sensing elements or materials can then be replaced after logging an ammonia leakage event.
In the case of pH sensing, the fibrous nanocomposites in which the NPs were embedded inside the fibers were more robust compared to the fibers having the NPs deposited via spraying onto their surfaces, with the former demonstrating a fast and linear response in both alkaline and acidic environments with good reversibility.
Experimental
Materials. For the synthesis of the γ-Fe2O3/SiO2/RhB NPs, reagent grade chemicals were used as received from the manufacturers. Iron (III) sulfate hydrate, iron (II) sulfate heptahydrate (ACS, 99%), citric acid (CA, 99%), tetraethoxysilane (TEOS, 99.9%) and NH4OH (28-30%) were supplied by Alfa Aesar (Lancashire, UK). Acetone (AppliChem GmbH) and absolute ethanol (Carlo Erba, reagent - USP) were used as received. The reaction between RhB and APS was first carried out in a DCM/DMF = 4/1 mixture overnight at room temperature. RhB (0.00933 mmol) was dissolved in the solvent mixture (0.5 mL) and then APS (0.186 mmol) was added. Subsequently, the volatile solvent was removed using nitrogen flow and the product (RhB-APS) was mixed with TEOS for the silica coating. The general synthetic route is described as follows: First, 100 mg of PVP was dissolved in 200 mL of ethanol containing 450 mg of nanoparticle clusters. Subsequently, 1.2 mL of TEOS and RhB-APS were mixed and then added to the suspension, followed by the addition of 2.0 mL of aqueous ammonia solution. The reaction mixture was stirred using a 2-cm-wide glass propeller at 300 rpm for six hours at room temperature and, upon completion, the product was washed first with ethanol and then three times with distilled water. Finally, an additional thin layer (a few nanometers thick) of non-fluorescent silica was deposited using the same synthetic protocol, with the only difference being the amount of TEOS, i.e. 0.15 mL of TEOS in the absence of RhB-APS. The prepared γ-Fe2O3/SiO2/RhB NPs were dispersed in ethanol-based stable colloidal suspensions at a concentration of 26 mg/mL.
Fabrication of γ-Fe2O3/SiO2/RhB NPs-functionalized electrospun CA fibers. Initially, CA (2.5 g) was dissolved in acetone (20 mL) upon stirring at ambient conditions and the obtained colourless, transparent, homogeneous solution was loaded into the syringe of the electrospinning set-up. All electrospinning experiments were performed at room temperature. Equipment included a controlled-flow, four-channel volumetric microdialysis pump (KD Scientific, Model: 789252), a syringe with a connected spinneret needle electrode, a high-voltage power source (10-50 kV) and a custom-designed grounded target collector inside an interlocked Faraday enclosure safety cabinet. Systematic parametric studies were carried out by varying the applied voltage, the needle-to-collector distance, the needle diameter and the flow rate so as to determine the optimum experimental conditions for obtaining nanoparticle-free CA fibers. The electrospinning conditions used for obtaining continuous, nanoparticle-free fibers were the following: flow rate: 5.9 mL/hr; applied voltage: 15 kV; needle-to-collector distance: 10 cm; needle diameter: 16G.
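For quick reference, the optimized spinning conditions above can be collected into a small configuration object; this is a sketch only, with key names invented here rather than taken from any instrument API.

```python
# Optimized electrospinning conditions reported above, gathered into a
# configuration dict. Key names are illustrative, not an instrument API.
ELECTROSPINNING_CONFIG = {
    "polymer_solution": "cellulose acetate, 12.5% w/v in acetone",
    "flow_rate_mL_per_h": 5.9,
    "applied_voltage_kV": 15,
    "needle_to_collector_cm": 10,
    "needle_gauge": "16G",
    "temperature": "ambient",
}

for key, value in ELECTROSPINNING_CONFIG.items():
    print(f"{key}: {value}")
```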
Fabrication of CA/γ-Fe2O3/SiO2/RhB NPs nanocomposite fibers was carried out by following two different experimental protocols. In the first, a CA fibrous mat (19.5 mg) was placed inside a Petri dish and the nanoparticle suspension (1 mL of nanoparticle suspension in ethanol, concentration: 26 mg/mL) was sprayed on top of the fibrous mat. Afterwards, the fibers were placed in a laboratory fume hood until complete drying. The second experimental protocol involved the dropwise addition of 1 mL of the aforementioned nanoparticle suspension to the CA polymer solution prepared in acetone (polymer solution concentration: 12.5% w/v) during stirring at ambient temperature, and subsequent electrospinning of the resulting suspension under electrospinning conditions identical to those applied in the case of the CA fibers, to obtain CA/γ-Fe2O3/SiO2/RhB NPs nanocomposite fibers.
Materials characterization.
For the TEM investigations, the γ-Fe2O3/SiO2/RhB NPs were deposited by drying a dispersion on a copper-grid-supported, perforated, transparent carbon foil and analysed by TEM (JEOL JEM-2100) operated at 200 kV. The magnetic properties of the samples were measured at room temperature by vibrating sample magnetometry (VSM) (Lake Shore 7307 VSM); the dry, γ-Fe2O3/SiO2/RhB NPs-decorated CA fibers (10-15 mg) were placed into the VSM system prior to measurements. The hydrodynamic size distributions of the core-shell nanoparticles were measured using dynamic light scattering (DLS, Fritsch, Analysette 12 DynaSyzer, Germany).
The morphological characteristics of the produced fibers, prepared in the absence and presence of the γ-Fe2O3/SiO2/RhB NPs, were determined by scanning electron microscopy (SEM) (Vega TS5136LS, Tescan). The samples were gold-sputtered (Emitech K575X Turbo Sputter Coater) prior to SEM inspection.
Fluorescence microscopy was further used for visualizing the pristine and γ-Fe2O3/SiO2/RhB NPs-modified CA electrospun fibers. The samples were placed on glass slides, covered with coverslips and documented under an Olympus fluorescence microscope (U-RLF-T model). The fluorescence intensity of the pure CA fibers was determined using the FITC filter (excitation: 490 nm, emission: 520 nm), whereas for the γ-Fe2O3/SiO2/RhB NPs-modified fibers a CY3 filter was used (excitation: 552 nm, emission: 570 nm). Images were analyzed using the cellSens software.
Ammonia and pH sensing apparatus. In order to measure the samples' response to ammonia vapors and different pH values, the γ-Fe2O3/SiO2/RhB NPs-functionalized CA electrospun fibers were placed in a cuvette holder having two perpendicular light paths, specially designed for free-space applications. An all-solid-state 532 nm laser was used to excite the RhB moiety through a 400 μm core multimode silica optical fiber. The emitted fluorescence was filtered by a Thorlabs FGL550 longpass filter with a 550 nm cutoff wavelength that blocks the excitation wavelength. It was then collected by a 600 μm core multimode optical fiber and analyzed by a Thorlabs CCS200 spectrometer. Both SMA-terminated optical fibers were connected to the cuvette with SMA fiber adapters having mounted collimators. A schematic representation of the setup is shown in Fig. 1A.
For gas ammonia detection, the above-described cuvette holder containing the sample was placed in a custom-made sealed testing chamber of 4.3 L volume. A Peltier element was used for the evaporation of drops of 25% w/v ammonia solution, as schematically depicted in Fig. 1B. For pH detection, aqueous solutions of different pH values were used. HCl solutions with pH values ranging from 1 to 5 and NaOH solutions ranging from 8 to 13 were inserted into the cuvette where the sample was placed, in order to evaluate the sample's response.
Results and Discussion
Synthesis and characterization of γ-Fe2O3/SiO2/RhB NPs. Fluorescent silica-coated nanocrystal clusters with an approx. 20 nm thick silica shell were prepared as fluorescent and magnetic labels for the produced CA electrospun fibers (γ-Fe2O3/SiO2/RhB NPs). The magnetic cores of the γ-Fe2O3/SiO2/RhB NPs were prepared by the self-assembly of approximately a hundred superparamagnetic maghemite nanocrystals (size ∼10 nm). The darker magnetic nanocrystal cores can be clearly distinguished from the brighter amorphous silica shell in the TEM images (Fig. 2A,B). The size of the γ-Fe2O3/SiO2/RhB NPs determined from the TEM images (>100 particles counted) was found to be ∼130 ± 30 nm. The γ-Fe2O3/SiO2/RhB NPs showed superparamagnetic properties with a saturation magnetization Ms of ∼37 Am2 kg−1 (Fig. 2C). DLS measurements of the γ-Fe2O3/SiO2/RhB NPs in an ethanol suspension (1.0 mg mL−1) showed a narrow hydrodynamic-size distribution with an average size of ∼151 nm (SD = 2.8%) (Fig. 2D).
The γ-Fe2O3/SiO2/RhB NPs form stable colloidal suspensions, and this is verified by the fact that their DLS-determined size is in close agreement with the size determined by TEM.
In order to investigate the influence of the RhB loading on the nanoparticles' fluorescence properties, γ-Fe2O3/SiO2/RhB NPs with a lower RhB amount (i.e. only 3.11 µmol of RhB instead of 9.33 µmol per 450 mg of γ-Fe2O3/SiO2/RhB NPs; see the Materials section) were synthesized. As seen in Fig. 2E, the emission spectra of the γ-Fe2O3/SiO2/RhB NPs with high RhB loading showed higher relative intensity compared to the low-RhB-loading clusters, as expected. Most importantly, no shift in the maximum emission wavelength was observed upon altering the RhB loading.
In order to examine whether the optical response of RhB is influenced by the covalent linkage to the silica shell, the maximum emission wavelengths of the γ-Fe2O3/SiO2/RhB NPs and of the RhB aqueous solution (free RhB) were compared. As seen in Fig. 2F, the maximum emission wavelength of the RhB aqueous solution was recorded at 585 nm, while after covalent anchoring to silica, the wavelength shifted to 580 nm. It is noteworthy that when RhB was simply added to the TEOS (silica precursor) without the pre-formation of a covalent bond with the amino-silane molecule (APS), the product (silica-coated nanoparticle clusters) was not fluorescently labelled and all RhB molecules were removed during vigorous washing.
Fabrication of γ-Fe2O3/SiO2/RhB NPs-functionalized electrospun CA fibers. Electrospinning was first employed in the fabrication of pure CA fibers. A schematic of the electrospinning set-up used is provided in Fig. 3. The experimental conditions employed for obtaining nanoparticle-free CA electrospun fibers are given in the experimental section.
The morphological characteristics of the produced CA fibers were investigated by SEM. As seen in the SEM images in Fig. 4, the CA fibers had a belt-like (ribbon-like) morphology and a random orientation, and were characterized by a relatively broad diameter distribution within the micrometer size range. Ribbon-like morphologies in electrospun fibers based on cellulose and cellulose derivatives have been previously reported 56 . Based on earlier reports, the CA fiber morphology can be altered by changing the solvent system and the polymer solution concentration. Under certain experimental conditions (i.e. a specific solvent system, polymer solution concentration and optimum electrospinning parameters) the generation of cylindrical CA fibers is also feasible 57 .
As already mentioned in the experimental section, the fabrication of CA electrospun fibers decorated with γ-Fe2O3/SiO2/RhB NPs was accomplished by following two different synthetic routes involving: (a) the deposition of the nanoparticles onto the fibers' surfaces via spraying (denoted as sprayed magnetic fibers) and (b) the mixing of the nanoparticle dispersion prepared in ethanol with the CA acetone solution followed by electrospinning (denoted as electrospun magnetic fibers).
RhB was chosen to be covalently linked onto the nanoparticles' surfaces due to its high photostability compared to other fluorescent dyes introduced in previous studies as active moieties in optical sensing applications 58 . Earlier studies in our group involved the incorporation of fluorescein (FL)-functionalized core-shell ferrous nanoparticles (γ-Fe2O3/SiO2/FL NPs) within CA electrospun fibers following the spray deposition methodology, which, however, demonstrated low photostability, in agreement with previous reports 59 . The γ-Fe2O3/SiO2/RhB NPs-functionalized CA fibers were visualized by SEM and fluorescence microscopy. As seen in Fig. 4(B,C), in both cases the presence of the nanoparticles on the fibers' surfaces can be clearly seen.
Transmission electron microscopy (TEM) analyses confirmed the presence of γ-Fe2O3/SiO2/RhB NPs along the nanocomposite fibers (Fig. 5). Comparing the TEM images of the electrospun magnetic fibers (Fig. 5A-C) with those of the sprayed magnetic fibers (Fig. 5E-G), the presence of the γ-Fe2O3/SiO2/RhB NPs exclusively on the fibers' surfaces can be observed in the second case, in contrast to the electrospun magnetic fibers where the nanoparticles are mainly accumulated within the fibers. Moreover, the synthetic protocol significantly affected the mean fiber diameter: the electrospun-magnetic-fiber approach yielded thinner fibers (547 nm in diameter) while the sprayed-magnetic-fiber approach yielded thicker ones (770 nm).
In the case of the electrospun materials, the core-shell nanoparticle morphology could be clearly resolved in images at higher magnifications (Fig. 5B,C). The γ-Fe2O3/SiO2/RhB NPs are relatively homogeneously distributed along the fibers, while on the nanoscale there is some segregation, with up to a dozen nanoparticles present in small aggregates.
Fluorescence microscopy was used to verify the fluorescence properties of the multifunctional nanocomposite fibers. Figure 6 provides the fluorescence images of the produced materials. The fluorescence microscopy images show that fluorescence is not homogeneous over the entire sample, particularly in the case of the electrospun magnetic fibers (Fig. 6B). According to the TEM data (provided in Fig. 5), the sprayed magnetic fibers have γ-Fe2O3/SiO2/RhB NPs anchored all over their external surfaces. In contrast, in the case of the electrospun magnetic fiber analogues, the nanoparticles embedded within the CA fibers form clusters, and nanoparticle-free regions along the fibers can be clearly observed, resulting in a fluorescence "inhomogeneity" along the fibers.
The fluorescence efficiency of the γ-Fe2O3/SiO2/RhB NPs anchored onto the CA fiber surfaces was investigated by photoluminescence spectroscopy at a 520 nm excitation wavelength (Fig. 7). The emission wavelength was recorded at 574 nm, in agreement with previous reports placing the emission wavelength of RhB within 574-577 nm [60][61][62] .
The magnetic behaviour of the nanocomposite membranes was investigated by VSM at room temperature. Figure 8 presents the magnetization versus applied magnetic field strength plots for the two types of γ-Fe2O3/SiO2/RhB NPs-functionalized CA nanocomposite fibers. As seen in the plots, both systems exhibited superparamagnetic behavior at ambient temperature, demonstrated by the symmetrical sigmoidal shape of the magnetization curves and the absence of a hysteresis loop. The sprayed magnetic fibers had a higher saturation magnetization value (Ms ∼ 0.76 Am2 kg−1) compared to the electrospun magnetic fibers (Ms ∼ 0.20 Am2 kg−1) due to the presence of the non-magnetic coating (cellulose acetate) around the NPs embedded within the fibers (in contrast to the NPs deposited onto the fiber surfaces), resulting in a decrease in magnetization due to quenching of surface effects 63 . Moreover, based on the VSM magnetic measurements, the amounts of the γ-Fe2O3/SiO2/RhB NPs in the functionalized CA nanocomposite fibers were estimated to be ∼2.1 wt.% and ∼0.5 wt.% for the sprayed magnetic fibers and the electrospun magnetic fibers, respectively. These differences explain the differences observed in the photoluminescence spectra corresponding to the two cases (Fig. 7).
Gas (ammonia) and pH sensing. Both electrospun and sprayed magnetic fibers were evaluated at different gas ammonia concentrations and pH values. The emission spectrum of the sample was stable upon continuous illumination, showing none of the self-quenching mechanisms observed with other fluorescent moieties such as anthracene 41 . For the measurements, the integration time of the spectrometer was set to 10 s, and a rolling average over the last 3 acquisitions was used. The sample was exposed for 30 s for each data point, showing a fast response to the measurands. As each individual measurement corresponds to 30 s, this can serve as an estimate of the response time, which is comparable to other electrospun-based ammonia sensors 64,65 , where the response times range between 50 s and 350 s, as well as to fluorescence-based sensors 66 . In that sense, the response of the materials presented herein can be considered fast. To ensure the reliability of each recorded value, the sample was illuminated for 10 more minutes before changing the NH3 concentration or pH, showing no further change. This stability demonstrates that under continuous excitation the properties of the material at constant ammonia concentration did not deteriorate, and that the response can be reliably attributed to the specific measuring conditions. When exposed to NH3 vapors, RhB undergoes structural changes resulting in the generation of a non-fluorescent lactone (Fig. 9) 67 . The latter explains the reduction observed in the fluorescence intensity upon exposure of the nanocomposite γ-Fe2O3/SiO2/RhB NPs-functionalized fibrous mats to NH3 (Fig. 10A,C). Figure 10A shows the fluorescence spectrum of the electrospun magnetic fibers taken at different ammonia concentrations. A reference intensity value (I_ref) was taken before inserting the ammonia vapors. The fibers' response is defined as:

R = (I_ref − I_sig) / I_ref (1)

where I_sig is the intensity measured at a given ammonia concentration. Figure 10B shows the response corresponding to the 577 nm peak, demonstrating clear ammonia sensing for concentrations up to 11000 ppm.
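A minimal numeric sketch of this readout, assuming the reconstructed form of Eq. (1) above and the 3-acquisition rolling average described in the text; the Gaussian spectra and variable names below are synthetic placeholders, not the acquisition code used with the CCS200 spectrometer.

```python
# Sketch of the response readout: peak intensity at 577 nm averaged over the
# last 3 acquisitions, then the relative quenching of Eq. (1).
import numpy as np

def peak_intensity(wavelengths, spectra, peak_nm=577.0):
    """Mean intensity at the peak wavelength over the last 3 acquisitions."""
    idx = np.argmin(np.abs(wavelengths - peak_nm))
    return np.mean([s[idx] for s in spectra[-3:]])

def response(i_ref, i_sig):
    """Fractional fluorescence quenching relative to the ammonia-free baseline."""
    return (i_ref - i_sig) / i_ref

# Synthetic demo spectra (Gaussian emission band centered at 577 nm)
wavelengths = np.linspace(550, 650, 200)
baseline = [np.exp(-((wavelengths - 577) / 15) ** 2) for _ in range(3)]
exposed = [0.6 * s for s in baseline]   # quenched spectra under NH3

i_ref = peak_intensity(wavelengths, baseline)
i_sig = peak_intensity(wavelengths, exposed)
print(f"response = {response(i_ref, i_sig):.3f}")   # -> 0.400
```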
Figure 9. Sensing (fluorescence quenching) mechanism of RhB molecules undergoing structural changes when exposed to NH3 vapours, resulting in the generation of the non-fluorescent lactone form.
The sprayed magnetic fibers, decorated on their external surfaces with γ-Fe2O3/SiO2/RhB NPs, also show a clear response to the ammonia vapours, as presented in Fig. 10C,D for NH3 concentrations up to ~13000 ppm. The detection of high NH3 concentrations can be attributed to the large surface-to-volume ratio of the electrospun fibers. Previous studies have shown that, due to their large surface-to-volume ratio, electrospun fiber sensors exhibit enhanced sensitivity 67 as well as quenching efficiency 38 compared to thin films. The overall factor that enables the high-concentration detection is actually the total number of sensing elements, namely the γ-Fe2O3/SiO2/RhB NPs. Their total number in a specific volume of the material is determined by the three-dimensional (3D) fibrous form and the concentration of the functionalised nanoparticles. However, the surface enhancement enabled by the fibrous morphology is the dominant factor, which can be considered a nonlinear scaling factor in the three-dimensional space in which the nanofibrous material is organised. The arrangement of nanoparticles along a linearly shaped fiber scales linearly, but in the 3D fibrous form the total effect can eventually be characterised by a nonlinear scaling or enhancement factor.
The response of the sprayed magnetic fibers is ~5% lower compared to the electrospun magnetic fibers. This could be attributed partially to the fact that in the former case the NPs can easily detach from the fibers' surfaces before and during the measurements because they are only weakly attached to them.
The sample starts to quench above ~12000 ppm and the response curve reaches a plateau. Therefore, the sensor can be used for NH3 concentration detection up to ~12000 ppm. Above this limit, the sensor cannot be reliably used, as it becomes permanently "poisoned" and its response irreversible.
Furthermore, our experimental data show no consistent reversibility in high-concentration ammonia sensing. This can be attributed to the high ammonia concentration, as heavy loading (with ammonia) tends to destabilize the sensor, resulting in poor reversibility and smaller relative signal changes 68 . However, despite the non-reversible nature of the demonstrated behaviour, there are several sensing and detection applications where reversibility is not required, as the purpose of the sensing element is to successfully log/register a specific high concentration of ammonia. Such cases are important in ammonia leakage monitoring, especially at very high concentrations that can become toxic. At the current stage of development, detection of high ammonia concentrations (above 1000 ppm) could be applied at industrial facilities employing ammonia transfer lines and dense storage spaces, as the presented sensors withstand poisoning and saturation effects and can thus provide a linear response up to 12000 ppm. Low-cost sensing elements or materials can then be replaced after logging an ammonia leakage event.
Although the physical form of the electrospun fibrous materials is suitable for high-concentration gas detection, it is not considered ideal for the efficient collection of fluorescent light that could enable low detection limits. This is because the material arrangement in the measuring cell could be vulnerable to external factors like vibrations, air flow, etc., resulting in possible displacements that may significantly alter the optimal excitation and collection light angles. Therefore, for optimization of the sensing performance, the development of a more robust, customised, miniaturised measuring apparatus is required in future studies. Furthermore, modification of the electrospun fibrous materials towards rigid fibrous mats will facilitate, in future studies, a highly controllable characterization process at lower ammonia concentrations, also improving the optical excitation and collection system.
The γ-Fe2O3/SiO2/RhB NPs-functionalized CA fibers were also evaluated as pH sensors in aqueous solutions with different pH values. Both electrospun and sprayed magnetic fibers were tested, but only the former exhibited a consistent response/behaviour in aqueous environments. On the contrary, detachment of the γ-Fe2O3/SiO2/RhB NPs decorating only the surface of the sprayed magnetic fibers was observed when the latter were immersed in aqueous solutions, indicating the limited robustness of this system compared to its electrospun analogue. These results may be attributed to the weak nanoparticle/polymer matrix interactions arising from the surface functionalization via spray deposition, which is further supported by the TEM analysis provided in Fig. 5E-G.
In Fig. 11A, the fluorescence intensity of the electrospun magnetic fibers immersed in acidic aqueous solutions with different pH values, ranging from 5 to 1, is presented. The fibers were first immersed in a pH 5 solution and subsequently in solutions with lower pH values. As the pH decreases, the fluorescence intensity clearly increases.
The electrospun magnetic fibers were also tested as fluorescent sensors in alkaline aqueous solutions with pH values ranging from 8 to 13. The fibers were first immersed in a pH 8 solution and subsequently in solutions with higher pH values. As the pH increases, the fluorescence intensity decreases (Fig. 11C,D). The intensity of the 580 nm peak is linearly dependent on the pH value, with R-squared values of 0.9577 and 0.8817 for the acidic and alkaline environments, respectively (Fig. 11B,D). The slopes for the acidic and alkaline environments are −0.00158 and −0.00165, respectively, with corresponding standard errors of 0.0001917 and 0.0003028. The fact that the standard errors are less than 20% of the slope values strongly indicates that the correlation is linear. It should be noted that, because the responses in alkaline and acidic environments were characterized in two different sets of experiments due to current experimental limitations, the measurements cannot be directly compared or related in terms of measured intensity, owing to different placement conditions of the electrospun material in the measuring apparatus. However, the behaviour of the material is consistent over the entire pH range.
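The slope/standard-error check above translates directly into a few lines of scipy; the intensity values below are made-up placeholders chosen only to have the reported sign and scale, not the measured data.

```python
# Linear-fit check analogous to the one described above, using
# scipy.stats.linregress. The intensity values are illustrative placeholders.
import numpy as np
from scipy import stats

ph = np.array([8.0, 9.0, 10.0, 11.0, 12.0, 13.0])             # alkaline series
peak_580nm = np.array([0.992, 0.990, 0.989, 0.987, 0.985, 0.984])

fit = stats.linregress(ph, peak_580nm)
print(f"slope = {fit.slope:.5f} +/- {fit.stderr:.5f}")
print(f"R^2 = {fit.rvalue ** 2:.4f}")

# Criterion used in the text: the standard error of the slope should stay
# below 20% of its absolute value for the linear correlation to be trusted.
assert fit.stderr < 0.2 * abs(fit.slope)
```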
The samples were also evaluated for pH sensing reversibility by alternating between solutions with pH values of 2 and 7. The sample exhibited reversible on/off switchable fluorescence emission, as presented in Fig. 12, in accordance with the literature 69,70 . It is noteworthy that the γ-Fe2O3/SiO2/RhB NPs-functionalized electrospun magnetic fibers, characterised for pH sensing 6 months after production, exhibited a measurable response over a relatively wide pH range between 2 and 7, together with consistent reversibility, demonstrating their stability and long-term functionality. The relatively small degradation of the response signal could be attributed to a drift or poisoning effect, or to combined random changes in the environmental conditions and the setup's parameters during the long duration (~120 min) of the experiment. Despite this degradation, the discrimination between the two pH values is still very clear and reliable.
To summarize the ammonia sensing performance, the sprayed magnetic fibers showed a ~5% lower response compared to their electrospun analogues, partially due to nanoparticle concentration differences between the two samples, deriving from the fact that in the case of spray deposition the NPs detach from the fibers' surfaces before and during the measurements because they are only weakly attached to them.
Further to the above, it should be underlined that the absolute NP concentration and the resulting photoluminescence (Fig. 7) of the two different fiber types should not be related to their sensing responses, as the response is calculated relative to a reference intensity value (Eq. 1) connected to the absolute initial photoluminescence. The absolute photoluminescence is reflected only in the absolute values of the measured intensity in the graphs in Fig. 10A,C, which are indeed higher in the case of the sprayed magnetic fibers (Fig. 10C). Therefore, it is noteworthy that the response of the electrospun magnetic fibers (Fig. 10B) is higher despite the fact that the sensing NPs are embedded in the hosting fibers and are not directly exposed to ammonia gas, as the NPs deposited onto the fibers' surfaces via spraying are. The adsorption of ammonia in the electrospun magnetic fibers thus proves equivalent (or even more efficient) compared to direct sensing on the sprayed NPs, because of the minimal dimensions of the fibers. Furthermore, the stability of the NPs is retained, since they are protected within the fibers and not directly exposed to external degradation factors. Based on the above, the sprayed magnetic fibers were found to be less robust as pH fluorescent sensors, owing to the fact that the γ-Fe2O3/SiO2/RhB NPs decorating only the fibers' surfaces were detached upon immersion in aqueous solutions, indicating their ineffectiveness in applications involving aqueous environments.
Conclusions
In this work, the fabrication and characterization of cellulose acetate electrospun fibers doped with γ-Fe2O3/SiO2/RhB NPs is reported. Two different fabrication protocols were followed. In the first synthetic approach, the γ-Fe2O3/SiO2/RhB NPs were sprayed on top of the fibrous mat, while in the second, a NP suspension prepared in CA acetone solution was directly electrospun. In the latter case, due to the encapsulation of the NPs within the fibers and the covalent anchoring of the RhB fluorophore onto the NP surfaces, the fluorophore's leakage from the fibrous mat is prevented, thus enabling stable fluorescence-based operation of the developed materials.
In terms of ammonia sensing performance, the sprayed magnetic fibers show a response ~5% lower compared to the electrospun magnetic fibers. It should be stressed that the electrospun magnetic fibers exhibit an equivalent or even higher response despite the fact that the sensing γ-Fe2O3/SiO2/RhB NPs are embedded in the hosting fibers and not directly exposed to ammonia gas, as in the case of the sprayed analogues, while, as mentioned above, the embedded NPs also retain much better stability since they are protected within the fibrous polymer matrix.
The electrospun nanocomposite fibers were evaluated for both ammonia gas and pH sensing. Due to the large surface-to-volume ratio of the functionalized fibrous mats, high ammonia concentrations up to 12000 ppm were detected. Furthermore, the electrospun magnetic fibers showed a fast and linear response to aqueous solutions over a wide pH range spanning acidic to alkaline environments. However, the stability of the sprayed magnetic fibers is limited due to the detachment of the γ-Fe2O3/SiO2/RhB NPs from their surfaces.
"Materials Science",
"Chemistry",
"Engineering"
] |
Early-life gut dysbiosis linked to juvenile mortality in ostriches
Imbalances in the gut microbial community (dysbiosis) of vertebrates have been associated with several gastrointestinal and autoimmune diseases. However, it is unclear which taxa are associated with gut dysbiosis, and if particular gut regions or specific time periods during ontogeny are more susceptible. We also know very little of this process in non-model organisms, despite an increasing realization of the general importance of gut microbiota for health. Here, we examine the changes that occur in the microbiome during dysbiosis in different parts of the gastrointestinal tract in a long-lived bird with high juvenile mortality, the ostrich (Struthio camelus). We evaluated the 16S rRNA gene composition of the ileum, cecum, and colon of 68 individuals that died of suspected enterocolitis during the first 3 months of life (diseased individuals), and of 50 healthy individuals that were euthanized as age-matched controls. We combined these data with longitudinal environmental and fecal sampling to identify potential sources of pathogenic bacteria and to unravel at which stage of development dysbiosis-associated bacteria emerge. Diseased individuals had drastically lower microbial alpha diversity and differed substantially in their microbial beta diversity from control individuals in all three regions of the gastrointestinal tract. The clear relationship between low diversity and disease was consistent across all ages in the ileum, but decreased with age in the cecum and colon. Several taxa were associated with mortality (Enterobacteriaceae, Peptostreptococcaceae, Porphyromonadaceae, Clostridium), while others were associated with health (Lachnospiraceae, Ruminococcaceae, Erysipelotrichaceae, Turicibacter, Roseburia). Environmental samples showed no evidence of dysbiosis-associated bacteria being present in either the food, water, or soil substrate. Instead, the repeated fecal sampling showed that pathobionts were already present shortly after hatching and proliferated in individuals with low microbial diversity, resulting in high mortality several weeks later. Identifying the origins of pathobionts in neonates and the factors that subsequently influence the establishment of diverse gut microbiota may be key to understanding dysbiosis and host development.
Introduction
The composition of the microbial community in the gastrointestinal tract of animals ("the gut microbiome") is extremely important for host fitness and health [1]. Imbalances in the gut microbiome, commonly referred to as gut dysbiosis, have been widely associated with a variety of gastrointestinal and autoimmune diseases such as type 1 diabetes, Crohn's disease, inflammatory bowel disease, ulcerative colitis, and multiple sclerosis [2][3][4][5][6]. Dysbiosis is typically characterized by loss of beneficial microorganisms, proliferation of pathobionts (opportunistic microorganisms), and a reduction in overall microbial diversity [7,8]. Transplants of gut microbiota from mice with gastrointestinal disease have been shown to result in similar disease symptoms in recipients, suggesting a strong causal effect of gut dysbiosis on host health [9,10]. Inflammation of the gastrointestinal tract is often associated with gut dysbiosis, which in turn alters the intestinal mucus layer and epithelial permeability resulting in increased susceptibility to infection, sepsis, and organ failure [11][12][13].
When and where imbalances in gut microbiota originate is unclear. The diversity and composition of microbes differ markedly across the length of the gastrointestinal tract [14,15], and it is possible that certain gut regions may act as sources of pathobionts, radiating out to disrupt other parts of the gut. For example, some areas might be more susceptible to pathogenic overgrowth due to low microbial diversity and reduced resilience [16]. Alternatively, dysbiosis may occur throughout the gastrointestinal tract or develop from diverse communities that harbor more pathobionts. Pinpointing when groups of bacteria start to proliferate in different regions of the gut has been difficult because most studies have used cross-sectional sampling (one sample per individual). As a result, it remains unclear whether bacteria associated with dysbiosis are always present in low abundance, or whether dysbiosis is linked with a sudden influx of foreign microbes from an external source.
An additional problem has been to establish whether certain groups of bacteria are consistently involved in dysbiosis across diverse host species. The vast majority of microbiome studies, and specifically those on dysbiosis, have focused on humans and laboratory mice [7]. This research has shown that certain bacterial taxa seem to be routinely associated with dysbiosis across species and individuals. For example, in inflammatory bowel disease, one of the most common indicators of dysbiosis is elevated levels of Enterobacteriaceae (Gammaproteobacteria) [10,17,18], and a reduction of Ruminococcaceae and Lachnospiraceae (Clostridia) [6,19]. Whether these patterns extend across more distantly related species and outside laboratory settings is unclear, especially for nonmammalian organisms.
In this study, we examined a novel vertebrate host system, the ostrich (Struthio camelus), to understand patterns of gut dysbiosis and its role in the widespread mortality that occurs in captive populations. For example, commercially farmed ostriches suffer from exceptionally high and variable mortality rates during their first 3 months of life [20,21]. While the causes of mortality are mostly unknown, several candidate pathogens associated with enterocolitis have been reported, for example Escherichia coli, Campylobacter jejuni, Pseudomonas aeruginosa, Salmonella spp., Klebsiella spp., and multiple Clostridium spp. [22][23][24][25][26]. However, whether variation in mortality is due to infection of specific pathogens or the result of microbiome dysbiosis has not yet been established. The studies investigating causes of mortality in ostrich chicks have so far used bacterial culture or species-specific DNA primers [22][23][24][25][26]. These methods can be useful to detect the presence of targeted microorganisms, but searching for a particular culprit may yield ambiguous answers if pathobionts exist in the normal gut microbiota of the host and only exhibit pathogenic tendencies when the community is disturbed [27]. In addition to a high mortality rate, ostriches exhibit large variation in microbial composition between individuals and across gut regions [28]. Because these animals have only been reared in captivity for a very short time relative to other farmed animals (< 120 years) [29], they exhibit several of the advantages of a wild study system (high genetic variation, nondomesticated social groups) while still allowing for controlled conditions and ease of sampling.
Ostrich chicks (n = 234) were hatched and raised in four groups under standardized conditions and studied for 12 weeks to investigate gut dysbiosis and mortality patterns. We evaluated the gut microbiota of 68 individuals that died from suspected enterocolitis within 3 months after hatching (referred to as "diseased") and compared it to 50 individuals that were euthanized as age-matched healthy controls (referred to as "controls"). Age-matched controls were crucial for establishing the characteristics of normal gut microbial communities and how they changed throughout host development. The microbial composition of the ileum, cecum, and colon were characterized to determine the pattern of dysbiosis in different regions of the gastrointestinal tract. Fecal samples collected at 1, 2, 4, and 6 weeks of age from the control and diseased individuals, together with 25 additional individuals that survived the whole period, were analyzed to identify the time point when dysbiosis-related features emerge. Finally, samples from food, water, and soil substrate were examined to evaluate potential sources of dysbiosis-associated bacteria.
Results and discussion
Mortality and dysbiosis in different gut regions during ontogeny
Mortality of juvenile ostriches occurred throughout the entire 12-week study period but was highest between 4 and 8 weeks of age, with a peak at 6 weeks (Fig. 1b).
Individuals with disease followed the growth curve of all other individuals before rapidly dropping in weight prior to death (Fig. 1c, d). The cause of the weight reduction is unknown, but diseased individuals were observed to stop eating and drinking, and in some cases suffered from diarrhea, so dehydration and wasting are likely explanations. In total, 40% of all chicks died of suspected disease (68/170, excluding 60 controls and 4 injured individuals). Post-mortems of diseased and control individuals revealed that mortality was associated with extensive inflammation of the gastrointestinal tract (Fig. 1e; Figure S1). The gut inflammation scores of diseased individuals (mean ± SD for ileum = 3.1 ± 1.0, cecum = 2.0 ± 1.3, colon = 2.0 ± 1.2) were substantially higher than those of control individuals (ileum = 0.4 ± 1.0, cecum = 0.04 ± 0.29, colon = 0.08 ± 0.45) (Figure S1).
The structure of the microbiota of diseased and control individuals differed markedly in all three gut regions (Fig. 2, Figure S2, Table 1). Specifically, there were significant differences in the microbial community distances (obtained with both Bray-Curtis (BC) and weighted UniFrac (wUF) measures) between diseased and control individuals, controlling for age, sex, group, and time since death (Table 1). However, the two measures revealed contrasting patterns: Bray-Curtis distances were greatest in the ileum, decreasing towards the lower gut (cecum-colon), whereas weighted UniFrac distances were greatest in the colon, decreasing towards the ileum (Table 1). Sex, group, and time since death had no significant effects on any of the distance measures of the microbiome in any of the gut regions (Table 1).
Taxa associated with disease in the ileum
To better understand the microbial dissimilarities between diseased and control individuals, we evaluated the taxonomic composition of all gastrointestinal regions. The ileum showed the most striking evidence of dysbiosis (Fig. 4). Control individuals had a diverse community of different bacterial classes in the ileum, whereas diseased individuals displayed a bloom of Gammaproteobacteria and a major reduction in Bacilli and other rarer classes. A detailed investigation of the families belonging to Gammaproteobacteria showed an almost complete dominance of Enterobacteriaceae in the diseased ileum samples, while the control individuals harbored a diverse set of Gammaproteobacteria families (Figure S5).
The Gram-negative Enterobacteriaceae is a large family that is well known for encompassing several intestinal pathogens and pathobionts, and it is frequently observed at higher abundances in hosts with gut dysbiosis [10,17,18]. There were 19 operational taxonomic units (OTUs; sequences with 100% nucleotide identity) associated with Enterobacteriaceae in the ileum, and BLAST searches against the NCBI nucleotide database matched a wide range of genera, including Escherichia, Klebsiella, Shigella, Salmonella, Yokenella, Citrobacter, Enterobacter, Cronobacter, Atlantibacter, Pluralibacter, Leclercia, and Kluyvera. Previous studies have shown that various members of the Enterobacteriaceae family often co-occur and bloom simultaneously during dysbiosis [3,31], which is consistent with our results.
Another key characteristic of dysbiosis in the ileum was that certain individuals had microbiomes almost entirely comprised of Clostridia, a pattern not observed in any control individuals (Figure 4). The families of Clostridia showed further striking taxonomic patterns in diseased individuals, including a major increase of Peptostreptococcaceae and a marked reduction of Ruminococcaceae and other rare families (Figure S5). The Peptostreptococcaceae family was represented by six OTUs in our data, and BLAST searches yielded matches to various species of Paeniclostridium, Paraclostridium, and Clostridium. The most prevalent of these OTUs matched Paeniclostridium sordellii, a bacterium known to have virulent strains that cause high morbidity and mortality through enteritis and enterotoxaemia in both humans and animals [32,33].
Next, we identified specific OTUs associated with dysbiosis by performing negative binomial Wald tests of bacterial abundances, while controlling for the age of the hosts.
Taxa associated with disease in the cecum and colon
Examining the relative abundances of bacterial classes in the cecum and colon showed that control individuals were largely similar, exhibiting a relatively stable microbiome composition across hosts and ages. However, there were major disruptions in the microbial composition of both gut regions in diseased individuals (Fig. 4). Similar to the ileum, the Gammaproteobacteria were more prevalent in the cecum and colon of diseased individuals, but a reduction in Clostridia and an increase in Bacteroidia constituted the most prominent differences. Further taxonomic analyses of Bacteroidia showed that the family Porphyromonadaceae had proliferated in the cecum and colon of diseased individuals (Figure S5). This family encompassed two species in our data, Parabacteroides distasonis and Dysgonomonas sp., which are commonly found in normal gut microbiota [34]. However, P. distasonis has previously been identified as a colitis-promoting species in mice [35] and Dysgonomonas members are known to be associated with cachexia and intestinal inflammation [36]. Differential abundance tests identified large similarities in the dysbiosis patterns of the cecum and colon, as 50 out of the 56 (89%) OTUs that were more abundant in the diseased colon samples were also more abundant in the diseased cecal samples (Fig. 5; Tables S2-S3). In addition, 15 out of these OTUs (39%) were also significantly overrepresented in the ileum (Table S1). The most significant OTU in the cecum (q = 1.2e−53) and colon (q = 2.4e−56) was absent in control individuals but abundant in diseased individuals (Tables S2-S3). This OTU, which was also highly significant in the ileum (q = 3.4e−21), had a 100% match against Clostridium paraputrificum, a known human pathogen associated with sepsis and necrotizing enterocolitis [37][38][39]. C. paraputrificum has also been experimentally studied in gnotobiotic quails, where it caused lesions and haemorrhages in the gut lining associated with enterocolitis [40].
Taxa associated with health in different gut regions
The ileum of diseased individuals showed large reductions in certain bacteria compared to controls (Fig. 4), mainly Bacilli, a class in which Turicibacteraceae and Lactobacillaceae were the most common families. Turicibacteraceae included two significant OTUs from Turicibacter (Table S1), which showed decreased abundances in diseased ilea. Turicibacter has been shown to be highly heritable in humans and mice where it is in direct contact with host cells of the small intestine [48]. This genus has been associated with both health and disease, but is often found to be depleted in animals with diarrhea and enteropathy [49][50][51].
One of the most striking differences in both the cecum and colon of the diseased individuals was a substantial reduction of the Bacteroidia family, S24-7 (Figure S5). Little is known about S24-7, despite it being a prominent component of the normal vertebrate gut microbiota [52]. Nevertheless, studies of mice have reported a potentially beneficial effect of S24-7, with abundances often being reduced in diseased hosts [53,54]. The majority of OTUs with reduced abundances in the colon of diseased individuals were also underrepresented in the cecum (15 out of 19; 79%), indicating large-scale depletion of potentially health-associated bacteria throughout the hindgut. These OTUs belonged to taxa such as Lachnospiraceae (e.g., Coprococcus, Blautia), Ruminococcaceae (e.g., Ruminococcus), S24-7, Erysipelotrichaceae, Clostridium, Anaeroplasma, Turicibacter, Methanobrevibacter, Akkermansia muciniphila, and several unknown Clostridiales (Fig. 5; Tables S2-S3).
While 15 OTUs were found to be significantly overrepresented in all three gut regions of diseased individuals, only a single OTU was significantly underrepresented in all gut regions of diseased individuals. This OTU matched the butyrate-producing genus Roseburia, which has repeatedly been associated with health. For example, lower abundances of Roseburia spp. have been discovered in humans with ulcerative colitis, inflammatory bowel disease, irritable bowel syndrome, obesity, hepatic encephalopathy, and type 2 diabetes [2,[55][56][57], and in pigs with swine dysentery [58]. These results support the idea that Roseburia, and many other taxa previously found to be negatively associated with disease, are not specific to mammalian dysbiosis patterns; rather, their depletion is a unifying feature of dysbiosis across phylogenetically distant hosts such as humans and ostriches.
Disruption of the gut microbiota in the weeks preceding death
To establish whether dysbiosis occurs immediately before death or results from imbalances emerging earlier in life, we examined the microbiota of fecal samples that were repeatedly collected prior to death. We found that chick survival up to 4 weeks of age was not related to alpha or phylogenetic diversity of bacteria earlier in life (Table S4). However, the probability of surviving beyond 6 weeks was predicted by higher alpha diversity at 2 weeks of age (Cox's hazard ratio (HR): 0.57±0.25, p < 0.05), but lower alpha diversity at 4 weeks of age (HR: 4.02±0.59, p < 0.05), and lower phylogenetic diversity at 2 and 4 weeks of age (HR 2 weeks: 1.40±0.15, p < 0.05; HR 4 weeks: 1.88±0.24, p < 0.01) (Figure S6; Table S4). These results suggest that individuals with low microbial alpha diversity at 2 weeks of age were susceptible to colonization by distinct phylogenetic groups of bacteria, which increased their risk of mortality in the subsequent weeks.
Next, we examined whether the abundances of bacterial families that differed between diseased and control individuals could predict patterns of future mortality in the weeks leading up to death. There was only weak evidence that higher abundances of Lactobacillaceae at 2 weeks of age and of Turicibacteraceae at 4 weeks of age positively influenced survival (Figure S7; Table S4). The abundances of Peptostreptococcaceae and S24-7 beyond 6 weeks of age were also associated with increased subsequent survival, although not significantly (Table S4). However, there were very strong associations between the abundances of Peptostreptococcaceae and S24-7 during the first week of life and mortality at all subsequent ages, even after controlling for the abundances of these bacterial families at later ages (Peptostreptococcaceae HR range: 1.65±0.13 to 1.73±0.16, all p values < 0.001; S24-7 HR range: 1.24±0.11 to 1.60±0.21, all p values < 0.05) (Fig. 6; Table S4). This result suggests that the timing of proliferation of certain bacterial groups, such as Peptostreptococcaceae and S24-7, may be key to host fitness, with higher abundances at early ages potentially having detrimental effects even if the same bacterial groups are beneficial at later ages. It further supports the notion that the first days after hatching are a critical period that determines whether microbial imbalances ensue, which can increase mortality even weeks to months later.
Environmental sources of gut bacteria
Finally, we evaluated potential environmental sources of the microbes present in the gut of control and diseased individuals. Samples were collected from water, food and soil substrate during the study period and analyzed with SourceTracker [59]. There was essentially no contribution from the water supply (0.1-0.4%) or from the soil (0.2-0.7%) to the gut microbiota of either diseased or control individuals (Fig. 7). Instead, the majority of gut bacteria were from unknown sources (89.9%). Some microbial sequences present in food overlapped with OTUs found in the ileum and colon. However, these were predominantly in control individuals, which may be explained by healthy individuals eating more than sick individuals (Fig. 7). These findings indicate that contaminated food or water were unlikely sources of bacteria associated with mortality.
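SourceTracker itself fits a Bayesian mixing model in R. As a rough, hypothetical illustration of the underlying idea only (not the method used in this study), one can estimate the fraction of a gut community attributable to each sampled source with a non-negative least-squares mixture, treating the residual as the "unknown" fraction; all variable names below are placeholders.

```python
# Rough sketch of source apportionment via non-negative least squares.
# food_profile, water_profile, soil_profile, and gut_profile are assumed
# to be 1-D arrays of relative OTU abundances over a shared OTU set.
import numpy as np
from scipy.optimize import nnls

sources = np.column_stack([food_profile, water_profile, soil_profile])
weights, _ = nnls(sources, gut_profile)    # mixture weights constrained >= 0
known = weights.sum()
print({"food": weights[0], "water": weights[1], "soil": weights[2],
       "unknown": max(0.0, 1.0 - known)})
```

Unlike SourceTracker, this sketch does not model sequencing noise or enforce that proportions sum to one, so it should be read only as an intuition pump.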
Our environmental sampling scheme does not exclude the possibility that there are other environmental sources of pathogenic bacteria. For example, several species of wild birds, including Cape sparrows, Cape weavers, masked weavers, red bishops, and quelea, were frequently observed in the chicks' outdoor enclosures. Sampling water, food, and soil every 2 weeks may also not have been frequent enough to detect the transient presence of bacteria in the environment or transmission events that occur sporadically. Nevertheless, our longitudinal fecal microbiome analyses suggest that dysbiosis arises early in life from taxa already present in the gut, rather than from the sudden acquisition of new taxa. Little is known about the microbiomes of eggs, parents, or the hatching environment for this species, but this is an obvious avenue for future research that may help to identify ways of controlling the prevalence of problematic bacteria during early life. For this study, chicks were reared in isolation from adults because this facilitates management and handling. However, this approach prevents interactions between chicks and parents that may be important for the early establishment of gut microbiota. For instance, coprophagy (feeding on feces) has been shown to be important in the development of microbiota in other animals [60], and ostrich chicks are known to be coprophagic [61]. Providing access to adults (or at least their feces) may allow chicks to seed their microbiome early in life with a balanced and diverse bacterial community, possibly preventing future proliferation of problematic bacteria. This idea, however, remains to be experimentally tested.
Conclusions
Our study shows that severe disruption of gut bacterial communities is associated with high levels of mortality in developing ostrich chicks. Large-scale shifts in taxon composition, low alpha diversity, and multiple differentially abundant OTUs underlie the dysbiosis pattern seen in diseased individuals. Several taxa associated with disease were disproportionately proliferated in the ileum, cecum, and colon (e.g., Enterobacteriaceae, Peptostreptococcaceae, Porphyromonadaceae, Clostridium, Paeniclostridium), whereas other taxa were associated with health (e.g., S24-7, Lachnospiraceae including Roseburia, Coprococcus, and Blautia, Ruminococcaceae, Erysipelotrichaceae, and Turicibacter). Dysbiosis was particularly pronounced in the ileum and in individuals that died at early ages, showing that disruptions to gut microbiota develop in a distinct spatial and temporal manner. The establishment of some of the pathogenic bacteria occurred prior to 1 week of age, which predicted patterns of mortality several weeks later. Yet the rearing environment showed no evidence of pathogenic sources. A striking feature of the dysbiosis we observed is that many of the implicated harmful and beneficial bacteria have been found to have similar effects in a diverse set of vertebrate hosts, including humans. This pattern suggests that there is a high degree of evolutionary conservatism across some host-microbe associations and that further studies on different vertebrate species may contribute to a general understanding of gut dysbiosis.
Experimental setup
Ostrich eggs were collected over a period of seven days at the Western Cape Department of Agriculture's ostrich research facility in Oudtshoorn, South Africa, and artificially incubated beginning 19 August 2014 to synchronize hatching around 30 September 2014. A total of 234 ostrich chicks hatched and were randomly divided into four groups of approximately 58 chicks each and monitored from day-old until 12 weeks of age. The groups were kept in indoor pens of approximately 4 × 8 m in the same building with access to outdoor enclosures during the day, weather permitting. To reduce potential environmental variation on the development of the gut microbiota, all individuals were reared under standardized conditions with ad libitum food and water during daytime. Multiple feeding stations were present in the pens to ensure all chicks could feed freely. The chicks were fed a balanced plant-based pelleted and crumbed diet normally given to ostrich chicks (consisting primarily of corn, soybean, and alfalfa; details in supplementary tables of [30]), and were kept in an area completely separate from adult ostriches. No medicines were given to the chicks during the study period.
Sample collection
A total of 68 individuals died of suspected enterocolitis during the 12-week period, which we have referred to throughout the text as "diseased." Many of these chicks exhibited characteristic sickness behavior shortly before dying (poor appetite, inactivity, listlessness, depressed posture). Additionally, every other week, ten chicks (2-3 individuals from each group) were randomly selected for euthanization and dissection to act as age-matched controls for the diseased individuals that died. The control individuals were euthanized at 2, 4, 6, 8, 10, and 12 weeks of age by a licensed veterinarian who severed the carotid artery. Four individuals sustained leg or eye injuries and were removed from the study and excluded from all analyses. The contents of the ileum, cecum, and colon of all control and diseased individuals were sampled during dissection and collected in empty 2 ml microtubes (Sarstedt, cat. no. 72.693). To minimize contamination between samples and individuals, lab benches and surfaces were routinely sterilized with 70% ethanol, and dissection equipment was cleaned with hot water and 70% ethanol and placed in the open flame of a Bunsen burner between each sample collection. During dissections, the time since death (in hours) was recorded (mean time = 6.3 h). When chicks were found dead in the morning, a conservative estimate of the time since death was assigned: approximately 12 h for individuals that were cold and 2 h for those still warm. Control individuals also varied in time since death because they were euthanized simultaneously and dissected sequentially.
In addition to the intestinal samples, we routinely collected fecal samples from live individuals at 1, 2, 4, 6, 8, 10, and 12 weeks of age. This sampling was conducted on all chicks up to the point of death (diseased and control individuals) and on the chicks that survived the full period (survivors; n = 102). Fecal samples were collected in empty 2 ml microtubes 1 day before scheduled euthanizations of control individuals took place. Weight measurements of all individuals were obtained at hatching, during each fecal collection event and immediately prior to dissection. Environmental samples were collected throughout the experiment by wetting sterile cotton swabs in phosphate-buffered saline (PBS) and swabbing food, drinking water, and the soil/floor of the ostrich chicks' enclosures during each sampling event. All samples were frozen at − 20°C after collection.
During dissections, photographs of the gastrointestinal tract of each individual were taken and later scored for inflammation on a scale from 0 to 4: 0 = no visible inflammation, 1 = minor inflammation, 2 = intermediate inflammation, 3 = major inflammation, and 4 = extreme and severe inflammation. The author (E.V.) performing the inflammation assessment was blind to whether individuals had been euthanized or had died (control/diseased). Twenty-three measures (7% of 323) were given a score of NA because it was not possible to assess the inflammation (e.g., the gut region was not properly visible on the photograph) (Table S5).
DNA sequencing
We prepared sample slurries based on the protocol in [62] and extracted DNA using the PowerSoil-htp 96-well soil DNA isolation kit (Mo Bio Laboratories, cat. no. 12955-4) as recommended by the Earth Microbiome Project (www.earthmicrobiome.org). Libraries were prepared for amplicon sequencing of the V3 and V4 regions of the 16S rRNA gene using Illumina fusion primers containing the target-specific primers Bakt_341F and Bakt_805R [63] according to the Illumina 16S Metagenomic Sequencing Library Preparation Guide (Part # 15044223 Rev. B). The samples were sequenced as 300 bp paired-end reads over three sequencing runs on an Illumina MiSeq at the DNA Sequencing Facility, Department of Biology, Lund University, Sweden. We sequenced a total of 323 ileum, cecum, and colon samples from all individuals that died (n = 68) and from euthanized (control) individuals at 2, 4, 6, 8, and 12 weeks of age (n = 50 in total; 10 individuals per week, excluding samples taken at 10 weeks of age due to the limited number of deaths of diseased individuals at this time point; Table S5). We also sequenced a total of 378 fecal samples from weeks 1, 2, 4, and 6: 181 from the diseased individuals, 99 from control individuals, and 98 from survivors (Table S6). The sequence data from fecal samples of control individuals and survivors have been used in a previous study, which evaluated the maturation of fecal microbiomes in healthy chicks during the full 3-month period [30]. Finally, we sequenced 24 environmental samples (8 food, 8 water, 8 soil) collected during weeks 2, 4, 6, and 8, and 4 negative samples (blanks) (Table S5).
Data processing
Primers were removed from reads using Trimmomatic (v. 0.35) [64] and reads were quality-filtered using the script multiple_split_libraries_fastq.py in QIIME (v. 1.9.1) [65]. Bases with a Phred score < 25 at the 3′ end of reads were removed and samples were demultiplexed. Forward reads were retained for downstream analyses due to lower base quality in reverse reads. Amplicon sequence variants (ASVs) were clustered in Deblur (v. 1.0.0) [66] and taxonomy was assigned using the RDP classifier (v. 2.2) [67]. ASVs are referred to as operational taxonomic units (OTUs) in this study for consistency with previous ecological and evolutionary research. In Deblur, the minimum-reads option was set to 0 to disable automatic filtering, and all sequences were trimmed to 220 bp. We used the OTU table produced after both positive and negative filtering, which removes reads containing PhiX or adapter sequences and retains only 16S sequences. PCR-originating chimeras are filtered inside Deblur by default [66]. We removed all OTUs that were classified as mitochondria or chloroplasts, present in the negative samples, present in only one sample, or with a total sequence count of less than 10. We further filtered out all samples with a total sequence count of less than 500, resulting in 7 ileal and 3 environmental samples being excluded. The average read count per OTU was 1005.9 for the intestinal samples and 944.2 for the fecal samples.
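The OTU-table filtering steps above can be summarized in a short script. The following pandas sketch is our own illustration with hypothetical file and column names; only the thresholds (singletons, total count < 10, sample depth < 500) come from the text.

```python
import pandas as pd

otus = pd.read_csv("otu_table.csv", index_col=0)   # rows = OTUs, columns = samples
tax = pd.read_csv("taxonomy.csv", index_col=0)     # per-OTU taxonomy strings
neg = ["blank1", "blank2", "blank3", "blank4"]     # negative controls (blanks)

is_organelle = tax["taxonomy"].str.contains("mitochondria|chloroplast", case=False)
in_negatives = (otus[neg] > 0).any(axis=1)
samples = otus.drop(columns=neg)
singleton = (samples > 0).sum(axis=1) <= 1         # appears in only one sample
rare = samples.sum(axis=1) < 10                    # total sequence count < 10

filtered = samples.loc[~(is_organelle | in_negatives | singleton | rare)]
filtered = filtered.loc[:, filtered.sum(axis=0) >= 500]   # drop low-depth samples
```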
Data analyses
All statistical analyses were performed in R (v. 3.3.2) [68], and all plots were made using ggplot2 [69]. A phylogenetic tree for the UniFrac and phylogenetic diversity measures was made with FastTree [70]. Bray-Curtis and weighted UniFrac [71] distances between microbiomes were calculated in phyloseq (v. 1.19.1) [72] and examined using a PERMANOVA with the adonis function in vegan (v. 2.4-2) [73]. Age effects on the microbiome were evaluated by fitting a linear term and a quadratic age term with Z-transformed values. Beta diversity was tested with a multivariate homogeneity of groups dispersions test using the betadisper function in vegan [73]. We calculated alpha diversity using Shannon's H index and phylogenetic diversity using Faith's weighted abundance of phylogenetic diversity. Variation in diversity was analyzed using a GLM with a Gaussian error distribution, health status (control versus diseased), age, and their interaction as fixed effects. Separate GLMs were used for each gut region.
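In Python, the distance-based part of this workflow can be sketched with scikit-bio. Note that the published models were fitted with vegan's adonis, which accommodates covariates (age, sex, group, time since death); the one-factor PERMANOVA below is only a simplified stand-in, and `filtered` and `metadata` are the hypothetical objects from the previous sketch.

```python
from skbio.diversity import beta_diversity
from skbio.stats.distance import permanova

counts = filtered.T.values                 # samples x OTUs
ids = list(filtered.columns)
bc = beta_diversity("braycurtis", counts, ids=ids)

# Single-factor PERMANOVA on health status (control vs diseased).
result = permanova(bc, grouping=metadata.loc[ids, "status"], permutations=999)
print(result["test statistic"], result["p-value"])
```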
To evaluate bacterial abundances, we first modelled counts with a local dispersion model and normalized per sample using the geometric mean, according to DESeq2 [74]. Differential OTU abundances between control and diseased individuals were subsequently tested in DESeq2 with a negative binomial Wald test while controlling for the age of individuals and with the beta prior set to false [74]. Results for the specific comparisons were extracted (e.g., "ileum control" versus "ileum diseased") and p values were corrected with the Benjamini and Hochberg false discovery rate for multiple testing. OTUs were considered significantly differentially abundant if they had an adjusted p value (q value) < 0.01. Environmental samples were analyzed with SourceTracker [59].
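The geometric-mean normalization referred to above is DESeq2's median-of-ratios method. A minimal numpy sketch of the size-factor computation follows (a simplified variant that skips zeros OTU-wise, closer to DESeq2's "poscounts" option than to its strict default); the Wald test itself is best left to DESeq2 or pyDESeq2.

```python
import numpy as np

def size_factors(counts):
    """counts: OTUs x samples matrix of raw integer counts."""
    logc = np.log(counts.astype(float))
    logc[np.isinf(logc)] = np.nan                  # treat zeros as missing
    log_geomean = np.nanmean(logc, axis=1)         # per-OTU log geometric mean
    ratios = logc - log_geomean[:, None]           # log(count / geometric mean)
    return np.exp(np.nanmedian(ratios, axis=0))    # one size factor per sample

sf = size_factors(counts_matrix)                   # counts_matrix: hypothetical input
normalized = counts_matrix / sf                    # counts on a common scale
```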
To estimate the ages at which diversity and bacterial taxa predicted survival, we analyzed the fecal samples using Cox proportional hazards models in the R package survival (v. 2.44-1.1) [75]. These models examine whether explanatory variables are associated with a greater risk (hazard ratio > 1) or lower risk (hazard ratio < 1) of mortality. Separate models were fitted for each measure of diversity and each bacterial family, with their measurements at weeks 1, 2, 4, and 6 fitted as explanatory variables (Table S7). Ages later than week 6 were not included because there was little variation in mortality after this time (Fig. 1). Because individuals that died very early in life had missing data for later time points, it was not possible to include all explanatory variables simultaneously without restricting the data to individuals that survived past week 6. Therefore, measures from each age were entered into models sequentially in chronological order (e.g., week 1, followed by weeks 1 and 2). By doing this, we were able to test how microbiome features at week 1 predicted survival past week 1, how microbiome features at week 2 predicted survival past week 2 while controlling for any microbiota differences at week 1, and so forth.
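A hedged Python sketch of these sequential Cox models, using lifelines instead of R's survival package (data-frame and column names are illustrative, not from the study):

```python
import pandas as pd
from lifelines import CoxPHFitter

# df columns: duration until death or censoring, event indicator, and the
# microbiome features measured at weeks 1 and 2 (placeholder names).
df = pd.DataFrame({"weeks_survived": weeks_survived,
                   "died": died,                   # 1 = died, 0 = censored
                   "pepto_w1": pepto_w1,           # Peptostreptococcaceae, week 1
                   "pepto_w2": pepto_w2})

cph = CoxPHFitter()
# Enter predictors chronologically: week 1 alone, then weeks 1 and 2 together,
# mirroring the sequential model-building described above.
cph.fit(df[["weeks_survived", "died", "pepto_w1"]],
        duration_col="weeks_survived", event_col="died")
cph.print_summary()    # exp(coef) > 1 indicates a higher mortality hazard
```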
"Biology",
"Environmental Science"
] |
Patchwork Conditions for Holographic Nonlinear Responses: A Computational Method for Electric Conductivity and Friction Coefficient
We propose a new method to compute nonlinear transport coefficients in holography, such as nonlinear DC conductivity and nonlinear friction coefficient. The conventional method can be applied only to the models whose action in the gravity dual has the "square-root structure," i.e., the Dirac-Born-Infeld action of the probe D-branes or the Nambu-Goto action of the probe strings. Our method is applicable to a wider range of holographic models whose action does not have such a square-root structure. We propose a condition to obtain regular physical configurations in the gravity dual in the form of two simultaneous equations, which we call the patchwork condition. Our method also enables us to estimate the effective temperature of the nonequilibrium steady states in a wider range of holographic models. We show that a general model exhibits different effective temperatures for different fluctuation modes.
Introduction
The AdS/CFT correspondence (the gauge/gravity duality) [1][2][3], also called the holographic approach, is a promising framework for computing physical quantities in strongly coupled field theories. In this approach, the expectation values of the operators of the quantum field theory are computed by a classical theory of fields on a higher-dimensional curved spacetime, which is often called the "bulk." Usually, the GKP-Witten prescription [2,3] is employed to calculate the expectation value of operators in the AdS/CFT correspondence. In this prescription, the bulk field is divided into a normalizable mode and a non-normalizable mode, with the non-normalizable mode corresponding to the external source and the normalizable mode corresponding to the expectation value of the operator conjugate to the source. On the other hand, in terms of the second-order differential equation that the bulk field follows, the normalizable mode and the non-normalizable mode are independent solutions. Therefore, the magnitude of the normalizable mode can be set freely. This means that the expectation value is not determined and there is no predictive power. This problem is usually resolved by imposing regularity on the bulk field. If the magnitude of the normalizable mode, i.e., the expectation value of the operator, is arbitrarily assigned for a given value of the source, the bulk field will usually be singular. However, if the magnitude of the normalizable mode is set to the "right" value, the bulk field will be regular. Thus, when regularity is achieved for the bulk field, the magnitude of the normalizable mode, i.e., the expectation value of the operator, is determined.
The above is a general guiding principle for calculating the expectation value of an operator in the AdS/CFT correspondence. However, for practical use, it is important to find an efficient way to determine the value of the normalizable mode that makes the bulk field regular. In general terms, it is necessary to request that the bulk field be regular at all points in the bulk spacetime, and to achieve this, the regularity of the bulk field must be examined at all points in the bulk. However, this is quite a tedious and inefficient task.
So far, several efficient ways of finding regular solutions have been developed. In all of the formulations proposed so far, the regularity of the bulk field in the entire region of the bulk spacetime is achieved just by realizing the regularity at a specific point in the bulk spacetime. The location of this point in the bulk where regularity should be imposed is known in advance in the models studied so far. For example, in the computation of the retarded Green's function, the normalizable mode is determined by imposing an ingoing-wave boundary condition on the bulk field at the event horizon of the bulk black hole [4]. This is equivalent to imposing regularity on the bulk field on the event horizon in the ingoing Eddington-Finkelstein coordinates [5]. Also, for example, in the calculation of nonlinear responses, such as the calculation of nonlinear frictional forces [6,7] or nonlinear electric conductivity [8], the magnitude of the normalizable modes has been determined by imposing a reality condition on the bulk field (or on the on-shell action). The bulk theory for these cases is given by the Nambu-Goto (NG) action or the Dirac-Born-Infeld (DBI) action. In these models, the location of the special point is first determined, and the expectation values of the transport coefficients are given by imposing a condition that ensures reality of the bulk field at this point. This special point is known as an effective horizon for the worldvolume theory.
In terms of regularity, the correct expectation value is obtained by imposing regularity of the bulk field at the predetermined effective horizon [9].
On the other hand, there are cases where no formulation has been established for efficiently finding the regular solutions. For example, as described in this paper, such a case arises when the action of the bulk field is of non-NG or non-DBI type, i.e., without the square-root structure, and contains a nonlinear term. In this type of model, as we shall see later, the location where regularity should be imposed on the bulk is not predetermined. Therefore, we have to examine the regularity of the bulk field at all positions in the bulk. In this paper, we propose a new prescription that efficiently imposes regularity on bulk fields even in such cases.
Such an efficient method of imposing regularity is not only beneficial in practical computations but also brings new physical insights. In general, the calculation of transport coefficients in nonlinear regions requires calculations on a non-equilibrium steady-state background where dissipation exists. This is a great challenge from the viewpoint of non-equilibrium statistical physics. The effective horizon is the horizon of the analog black hole on which the fluctuations of the bulk field reside. Using this effective horizon, the effective temperature characterizing fluctuations in the non-equilibrium steady state is naturally defined within the framework of the bulk theory [10,11]. However, the definition of the concept of effective temperature in a non-equilibrium steady state is itself a non-trivial problem in general. Thus, in more general models and settings, there may be instances where the effective horizon itself does not exist or cannot be defined. Therefore, it is a non-trivial question whether the prescription of assuming the existence of an effective horizon in advance and imposing regularity there is always valid.
We provide an answer to this question for a range of models in this paper. We find that, for the models we deal with in this paper, the regular solution always has an effective horizon and admits a notion of effective temperature. However, unlike the models with the NG-type or the DBI-type action, we also find that some models can have several different effective temperatures depending on the mode we consider.
In addition to the motivation from the non-equilibrium statistical physics perspective described above, the motivations for conducting this study are as follows.
i) Even in the analysis of the models with the DBI action or the NG action, it is instructive to decompose the action into a sum of basic terms and to examine the contribution of each term. Understanding the role of each term tells us about the physical mechanism behind the phenomenon we study. Since the square-root structure of the action is broken after the decomposition, we need a method applicable to the models without the square-root structure.
ii) Many phenomenological models of holography, such as the holographic superconductors proposed in [16][17][18][19][20], do not have an action with the square-root structure. Even though some phenomenological models exhibit only linear responses, it is worthwhile developing a computational method of nonlinear responses for their possible nonlinear extensions.
iii) We also have a technical reason. For example, let us consider the models with non-Abelian global symmetries. We may construct a model with a non-Abelian global symmetry by considering multiple probe D-branes overlapped on top of each other in the dual geometry (see, for example, Refs. [21,22]). In this case, we need to employ the non-Abelian DBI action [23]. However, in order to apply the symmetrized trace in the non-Abelian DBI action, we need to expand the action in powers of the worldvolume fields. The symmetrized trace must be treated for each term of the expanded action. Furthermore, the validity of the non-Abelian DBI action has been confirmed to finite orders of the derivative expansion [23,24]. In order to study the models with the non-Abelian DBI action, we need a method that is applicable to the models without the square-root structure.
With the above motivations in mind, we present a new method to compute the nonlinear responses for the models without the square-root structure in the action. In this paper, we focus on the models where the action contains only derivatives of the fields, such as the field strength of the bulk gauge field, but not the fields without derivatives.
We find that a characteristic behavior of the solutions to the equations of motion signals the physical regular solutions. We call this behavior a "patchwork phenomenon." We propose two simultaneous equations associated with the patchwork phenomenon as a condition for the physical regular solution, from which we obtain the nonlinear responses.
We believe that our approach provides a general treatment applicable to a broader range of systems.
Before closing this section, we would like to emphasize the importance of studying nonlinear responses from the perspective of applications of holography not only to statistical physics but also to condensed matter physics. Nonlinear responses have been widely used as essential tools to obtain information about electrons and ions in solids. For example, second harmonic generation (SHG), which is the emission of 2ω-frequency light under the injection of ω-frequency light, is a commonly used tool to probe the breaking of the inversion symmetry of electronic states or lattice structures [25]. Nonlinear responses are also important for technological applications: photovoltaic devices, telecommunications, laser technologies, quantum computers, and so on. However, there still remain theoretical problems to be resolved in the theory of nonlinear responses. (i) One is the systematic treatment of non-perturbative effects.
The standard knowledge of nonlinear optical responses has been established on the basis of perturbation theory with respect to the external field strength [25]. Thus, the understanding of the non-perturbative effects is relatively poor and still under intensive investigation [26]. (ii) Another problem is the many-body effects. Previous theories are mainly based on free-fermion models, which describe the electron dynamics in many solid-state materials well.
Because of this, the quantum many-body theory of nonlinear responses is less established even in the perturbative regime. Indeed, recent studies have started to examine general properties of nonlinear responses in quantum many-body systems [27][28][29]. Thanks to the power of holographic techniques, it is possible that both problems can be resolved by the holographic approach. For example, negative differential resistance (NDR), which may also be called negative differential conductivity (NDC), has been obtained theoretically [30], and a possible origin of the NDR was also revealed through this approach [31]. This approach has also been utilized in the study of nonequilibrium phase transitions [32][33][34][35][36][37] associated with nonperturbative responses to external fields. In recent condensed matter experiments, nonperturbative effects in high harmonic generation (HHG) and nonlinear responses in strongly correlated materials [39] are attracting great attention, and the importance of the above problems is expected to increase. HHG, the emission of nω-frequency light under the injection of light with frequency ω, is a nonlinear process in the interaction between electrons and the external electromagnetic field; the n = 2 case of HHG is nothing but the SHG. In recent experiments, the large-n regime (e.g., n ≈ 10 or larger) has been extensively studied and the observation of non-perturbative effects (plateau and cut-off behavior) has been reported [38]. We believe that the development of flexible and useful holographic techniques for nonlinear responses will help to solve these problems, and this study can be considered one of the significant steps in this direction.
The organization of this paper is as follows. The overview of our proposal is described in Sec. 2. We revisit the conventional computational method in Sec. 3. We show how the conventional method breaks down when the action in the gravity dual is neither the DBI type nor the NG type. In Sec. 4, we propose a new method to compute the nonlinear transport coefficients efficiently. Section 5 is devoted to demonstrating how our proposal works for the calculations of nonlinear conductivity in several examples. We show that our method also works for the models with the DBI action in Sec. 6. We describe how our method is applicable to the computation of nonlinear friction coefficients in Sec. 7. Computations of the effective temperatures based on our method are given in Sec. 8. We find models in which the effective temperature differs from one mode to another. In Appendix A, we overview the method proposed in [9] with comments. We discuss the result at T = 0 in our model in Appendix B. We employ the ingoing Eddington-Finkelstein coordinates for the bulk spacetime in most parts of the main text. We exhibit the computations in the Schwarzschild coordinates in Appendix C for the convenience of the readers.
Overview of our proposal
Before moving into detailed discussions, let us provide an overview of our proposal in this section to clarify our basic idea. We consider the nonlinear DC electric conductivity as a prototypical example of nonlinear responses.
A computational method of nonlinear DC electric conductivity in holography was first proposed in [8]. The basic idea presented in [8] is that the conductivity is determined by requesting the reality condition: the gauge field, hence the on-shell action, has to be real in the entire region of the dual bulk spacetime. The same idea was employed in the holographic computation of the nonlinear frictional force acting on a particle in a medium [6,7]. We call these methods the "real-action method" in this paper.
The real-action method relies strongly on the special circumstance that the action in the gravity dual is of the DBI type or the NG type. The DBI action of a Dp-brane has the following form:

S_{Dp} = -\tau_p \int d^{p+1}\xi \, e^{-\Phi} \sqrt{-\det\left(h_{ab} + F_{ab}\right)},   (1)

where τ_p is the tension of the Dp-brane, ξ^a are the worldvolume coordinates, Φ is the dilaton field, and F_ab = ∂_a A_b − ∂_b A_a is the field strength of the worldvolume U(1) gauge field A_a (we set 2πα′ = 1). h_ab is the induced metric

h_{ab} = g_{MN} \partial_a X^M \partial_b X^N,   (2)

where g_MN is the metric of the target spacetime and X^M(ξ^a) represents the configuration of the Dp-brane. We employ the probe approximation. The real-action method works by virtue of the "square-root structure" of the integrand of (1). The Nambu-Goto action for fundamental strings, which we employ in the computation of frictional force, has the same structure. We shall review how to compute the nonlinear conductivity with the conventional methods in Sec. 3.
In the absence of the square-root structure, the real-action method does not work, as we shall show in detail in Sec. 3.2. This is explicitly shown in the following example. Let us consider an action, in the gravity dual, which is obtained by truncation of the derivative expansion of the DBI action:

S = -\tau_p \int d^{p+1}\xi \, e^{-\Phi} \sqrt{-\det h}\left(1 + \frac{1}{4} F_{ab}F^{ab}\right) + \cdots,   (3)

where ⋯ denotes higher-order contributions of F_ab which we ignore. The field strength appears in the form of a power series, and the "square-root structure" of the original DBI action has disappeared in this action. Since we keep the term of second order in F_ab, we expect that the expectation value of the current agrees between the models (3) and (1) to linear order in E, where E is the electric field acting on the system. However, one finds that the on-shell action of the model (3) has no chance to be complex, as we shall show in Sec. 3.2, even if we employ an unphysical solution for A_a. Thus the reality condition for the model (3) does not fix the conductivity at all. When we have higher-order terms of F_ab in the action, we shall see in Sec. 5 that the on-shell action can be complex. However, the reality condition for the action gives only a constraint on the range of possible values of the conductivity, and it does not fix the conductivity uniquely.
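As a quick consistency check of the truncation in Eq. (3), the following sympy sketch (our own illustration; flat target-space metric, 2πα′ = 1) expands the square root of the DBI determinant to quadratic order in a small antisymmetric F_ab and compares it with the Maxwell-like term:

```python
import sympy as sp

eps = sp.symbols("epsilon")
eta = sp.diag(-1, 1, 1, 1)
F = sp.Matrix(4, 4, lambda a, b: sp.Symbol(f"F{min(a, b)}{max(a, b)}")
              * (1 if a < b else -1 if a > b else 0))   # antisymmetric F_ab

lhs = sp.sqrt(-((eta + eps * F).det()))                 # sqrt(-det(eta + eps F))
series = sp.series(lhs, eps, 0, 3).removeO()

inv = eta.inv()
FF = sum(F[a, b] * inv[a, c] * inv[b, d] * F[c, d]      # F_ab F^ab
         for a in range(4) for b in range(4)
         for c in range(4) for d in range(4))

print(sp.expand(series - (1 + eps**2 * FF / 4)) == 0)   # expect True
```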
In the present paper, we propose an alternative method to compute the nonlinear DC conductivity and the nonlinear frictional constant that is applicable to the models without the square-root structure. Our argument consists of two parts: the basic idea of requiring regularity of the solution, and the proposal of a simplified method to achieve the regularity.
Our basic idea is as follows. We require that gauge-invariant scalar quantities be regular over the bulk spacetime to obtain the correct electric conductivity (or the correct frictional constant). For example, we impose regularity of F_{ab}F^{ab} in the computation of conductivity: F_{ab}F^{ab} has to be regular in the entire region of the bulk spacetime. (In general, we need to impose regularity of other gauge-invariant scalar quantities, such as ∇_c(F_{ab}F^{ab}) ∇^c(F_{ab}F^{ab}), as well. The precise meaning of the "entire region of the bulk spacetime" should also be clarified; the details will be given in Sec. 5.) We call this condition the regularity condition. The details will be discussed in Sec. 4.1.
In order to check the regularity of a given solution, we need to survey the entire region of the bulk spacetime. However, surveying the entire region of the bulk spacetime for all possible solutions is awkward. We propose a short-cut method to realize the regularity condition. Our proposal is that the correct conductivity is obtained by just solving the following simultaneous equations:

\left.\frac{\partial}{\partial u}\frac{\partial \mathcal{L}}{\partial F_{ui}}\right|_{u=u_*} = 0,   (4)

\left.\frac{\partial}{\partial F_{ui}}\frac{\partial \mathcal{L}}{\partial F_{ui}}\right|_{u=u_*} = 0,   (5)

where L is the Lagrangian density of the bulk theory, u is the radial coordinate in the bulk spacetime, and i is a spatial direction in which we have the current. (The corresponding equations for the computation of the nonlinear friction coefficient are proposed in Sec. 7.) We call (4) and (5) the "patchwork condition." The patchwork condition gives us possible combinations of (u_*, F_ui(u_*)) that satisfy (4) and (5). The expectation value of the current density J(u, F_ui) is given in terms of F_ui(u) and u, but J(u, F_ui) is independent of u by virtue of the equation of motion dJ(u, F_ui)/du = 0. Therefore, we obtain the expectation value of the current density by substituting u = u_* and F_ui = F_ui(u_*), which we obtain from the patchwork condition, into J(u, F_ui). See Sec. 4.2 for the details.
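As a numerical illustration of how (4) and (5) are used in practice, the sketch below locates the saddle point of J(u, F_ui) with scipy for the simple Maxwell example analyzed later in this paper, assuming the current density J = (E + f(u)h')/u reconstructed in Sec. 4; there the saddle should sit at u_* = u_h with h'(u_*) = -E/4.

```python
import numpy as np
from scipy.optimize import fsolve

E, uh = 1.0, 1.0
f = lambda u: 1.0 - (u / uh) ** 4
J = lambda u, hp: (E + f(u) * hp) / u           # current density J(u, h')

def patchwork(x, d=1e-6):
    u, hp = x
    dJ_du = (J(u + d, hp) - J(u - d, hp)) / (2 * d)    # Eq. (4)
    dJ_dhp = (J(u, hp + d) - J(u, hp - d)) / (2 * d)   # Eq. (5)
    return [dJ_du, dJ_dhp]

u_star, hp_star = fsolve(patchwork, x0=[0.9, -0.3])
print(u_star, hp_star, J(u_star, hp_star))      # ~ (1.0, -0.25, J_phys = E/uh)
```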
The origin of the term "patchwork condition" is as follows. When the gauge-invariant scalar quantity becomes regular in the entire region of the bulk spacetime, we find that a phenomenon we call "patchwork" occurs: two singular (unphysical) solutions recombine at a point in the bulk (which we define as u = u_*) to form a smooth regular physical solution.
For the models we studied, the regularity condition is equivalent to the condition that this "patchwork" occurs. We find that the condition for this patchwork to occur is that the simultaneous equations (4) and (5) hold at some point (u = u_*) in the bulk. We conjecture that the patchwork condition is equivalent to the regularity condition.
As discussed in Secs. 4 and 8, we find that u = u_* is the location of the effective horizon for the fluctuations of the bulk field around the background on-shell configuration. It has been proposed that the Hawking temperature associated with the effective horizon for the fluctuations gives an effective temperature that characterizes the nonequilibrium steady states we consider [10][11][12][13][14][15]. Our proposal enables not only the computation of the nonlinear conductivity but also that of this effective temperature for a wide range of models that do not have the square-root structure in the action. We find, in Sec. 8, that different modes of fluctuation can have different effective temperatures when we consider general models without the square-root structure, while the models with the DBI action (or the NG action) have a unique common effective temperature regardless of the modes we consider.
We have two remarks on the relation between our proposal and the methods proposed in earlier works. The first is the relation between the real-action method and our method.
When we employ the real-action method for the models with the square-root structure, we obtain u_* as a function of E by solving Eq. (15) in Sec. 3. On the other hand, our method argues that u_* is determined from the simultaneous equations (4) and (5). When we employ our method for the models with the square-root structure, Eq. (4) decouples from Eq. (5), and it reduces to Eq. (15). Then u_* is determined solely by Eq. (4) without knowing F_ui(u_*), which is given by Eq. (5). This is, however, special to the models with the square-root structure. For the models without the square-root structure, u_* depends on F_ui(u_*) in general: we need to determine u_* and F_ui(u_*) simultaneously. This is the reason why we need the two equations for determining them. The second comment is about how the regularity condition has been applied to holographic calculations in previous studies. Requiring regularity of solutions is quite common in the gauge/gravity duality, and it is not a new idea. However, finding an efficient way to realize regular solutions in practice is significant for calculations in the gauge/gravity duality. Therefore, in past studies, conditions to realize regular solutions have been investigated empirically. For example, it was found in Ref. [40] that regular D-brane configurations can be achieved by requesting that the D-brane reach the event horizon of the bulk geometry or the point where the compact manifold on which the D-brane wraps collapses. One finds that the equation of motion for the D-brane configuration, which is a second-order differential equation, degenerates to a first-order differential equation at these points. Therefore, the first derivative of the worldvolume field is given as a function of the worldvolume field itself there: this provides a boundary condition. The boundary condition given at these points of degeneration was used to efficiently achieve the regular solutions. (See, e.g., Ref. [41].) The method of requesting the regularity of the solutions at the point of degeneration was also used in [43] for systems out of equilibrium. In Ref. [43], the real-action method was not used, and the method of imposing regularity on the solution was adopted. The same idea was employed in [44,45], which is not in the context of the gauge/gravity duality, in the analysis of the energy-extraction mechanism from the black hole.
Conventional method
In this section, we review the conventional computational method of nonlinear conductivity in holography, which we refer to as the real-action method. Another method, proposed in [9], is discussed in Appendix A.
We consider the D3-D7 model as an example of holographic realization of conductors [8].
The DBI action for the probe D7-brane is given as

S_{D7} = -\tau_7 \int d^{8}\xi \sqrt{-\det\left(g_{ab} + F_{ab}\right)},   (6)

where τ_7 is the tension of the D7-brane and ξ^a are the 8-dimensional worldvolume coordinates. g_ab and F_ab are the induced metric and the field strength of the U(1) gauge field on the worldvolume, respectively. The bulk geometry is the 5-dimensional AdS-Schwarzschild black hole times S^5. The line element of the 5-dimensional AdS-Schwarzschild black hole is given by

ds^2 = \frac{1}{u^2}\left(-f(u)\, dt^2 + d\vec{x}^2 + \frac{du^2}{f(u)}\right),   (7)

where f(u) = 1 − u⁴/u_h⁴. u is the radial direction and (t, \vec{x}) are the (3+1)-dimensional coordinates on which the dual field theory is defined. The boundary of the bulk geometry is located at u = 0, and u = u_h is the location of the horizon of the black hole. The Hawking temperature T is given by T = (πu_h)^{-1}. The line element of the unit 5-sphere, which we write dΩ_5, is given by

dΩ_5^2 = dθ^2 + \sin^2θ\, dψ^2 + \cos^2θ\, dΩ_3^2,   (8)

where dΩ_3 denotes the line element of the unit 3-sphere. We employ the static gauge where the worldvolume coordinates are identified with the target-space coordinates (t, \vec{x}, u, Ω_3). The S³ part is covered by the D7-brane. θ(u) and ψ(u) describe the configuration of the D7-brane.
In this study, we consider the configuration θ = ψ = 0. We employ the probe approximation throughout this paper where the background geometry is not modified by the back reaction.
We assume that a constant electric field E is applied in the x¹-direction. Hereafter, we write x¹ as x for simplicity. We employ the following ansatz for A_x:

A_x = -Et + a(u),   (9)

and we set the other components of the gauge field to zero. Let us integrate the S³ part first in (6) to reduce it to the (4+1)-dimensional theory. The Lagrangian density after the integration in our convention is given by

\mathcal{L} = -\frac{N}{u^5}\sqrt{1 - \frac{u^4 E^2}{f(u)} + u^4 f(u)\, a'(u)^2},   (10)

where N = τ_7 (2π²) takes account of the volume of the unit S³.
The expectation value of the current density J in the x-direction is obtained by the GKP-Witten prescription as

J = -\frac{1}{N}\frac{\partial \mathcal{L}}{\partial a'} = \frac{f(u)\, a'(u)}{u\sqrt{1 - u^4 E^2/f(u) + u^4 f(u)\, a'(u)^2}},   (11)

where we include the factor 1/N and the overall sign in the definition of J for simplicity. Note that the equation of motion for a(u) is nothing but dJ/du = 0. In this sense, J is a constant of integration specified by the boundary condition when we solve the equation of motion. Therefore, the problem of obtaining the correct value of J is that of identifying the proper condition we impose on the solution.
Real-action method
The computational method proposed in [8] requests the reality of physical quantities.
We call this method the real-action method. The same computational method has also been employed in the computation of drag force [6,7,46,47].
The real-action method goes as follows. Eq. (11) can be rewritten as

a'(u)^2 = \frac{J^2 u^2}{f(u)^2}\, \frac{\xi(u)}{\chi(u)},   (12)

where we defined

\xi(u) = f(u) - E^2 u^4, \qquad \chi(u) = f(u) - J^2 u^6.   (13)
Then the solution to the equation of motion is given by

a'(u) = \frac{J u}{f(u)} \sqrt{\frac{\xi(u)}{\chi(u)}},   (14)

where we have selected the branch of a′(u) > 0. Since g_tt(u_h) = 0, both ξ(u) and χ(u) are negative in the vicinity of u = u_h, whereas ξ(u) and χ(u) are positive at the boundary u = 0: ξ(u) and χ(u) change sign somewhere between the boundary and the horizon. In order to make a′(u) real in the entire region of the bulk spacetime, we need to request that both ξ(u) and χ(u) go across zero simultaneously at some point, which we define as u = u_*. Therefore, the reality condition for this model is expressed as the following simultaneous equations:

\xi(u_*) = 0, \qquad \chi(u_*) = 0,   (15)

from which we obtain

J = \sqrt{g_{xx}(u_*)}\, E = \frac{E}{u_*},   (16)

where the sign of the right-hand side has been determined so that J·E > 0 in the last equality.
Eq. (15) determines u_* explicitly in terms of E. For the present model, u_* is given by

u_* = \frac{u_h}{\left(1 + E^2 u_h^4\right)^{1/4}},   (17)

and then we obtain

j = e \left(1 + e^2\right)^{1/4},   (18)

where we have defined the dimensionless electric field and current density by

e = E u_h^2, \qquad j = J u_h^3.   (19)

Now we have a comment on the point u = u_*. The point u = u_* corresponds to the effective horizon of an open-string metric [48,49] given by γ_ab = g_ab − F_ac g^cd F_db. In the present case, the nonvanishing components of the open-string metric are given by

γ_{tt} = g_{tt} + E^2 g^{xx}, \quad γ_{xx} = g_{xx} + E^2 g^{tt} + a'^2 g^{uu}, \quad γ_{uu} = g_{uu} + a'^2 g^{xx}, \quad γ_{tu} = γ_{ut} = -E a' g^{xx},

together with γ_yy = g_yy and γ_zz = g_zz. We can evaluate the effective temperature [10,11,15], the Hawking temperature associated with the effective horizon, from the surface gravity of γ_ab at u = u_*.
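The closed forms (17)-(19), as reconstructed here, are easy to check numerically by solving the reality condition (15) directly; the following sketch is our own verification under the conventions adopted above.

```python
import numpy as np
from scipy.optimize import brentq

uh, E = 1.0, 2.0                            # dimensionless field e = E uh^2 = 2
f = lambda u: 1.0 - (u / uh) ** 4

u_star = brentq(lambda u: f(u) - E**2 * u**4, 1e-6, uh)   # xi(u*) = 0
J = np.sqrt(f(u_star)) / u_star**3                        # chi(u*) = 0
e = E * uh**2
print(u_star, uh / (1 + e**2) ** 0.25)      # Eq. (17): both ~ 0.6687
print(J * uh**3, e * (1 + e**2) ** 0.25)    # Eqs. (18)-(19): both ~ 2.9907
```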
Failure of real-action method
The computational method of conductivity presented in the previous subsection is strongly based on the "square-root" structure of the DBI action. We demonstrate that the method of the previous subsection does not work in more general cases where the action does not have the "square-root" structure. Let us consider a toy model whose Lagrangian density in the gravity dual is given by

\mathcal{L} = -\frac{1}{4}\sqrt{-g}\, \mathcal{F} = \frac{1}{2u}\left(\frac{E^2}{f(u)} - f(u)\, a'(u)^2\right),   (25)

where

\mathcal{F} \equiv F_{\mu\nu}F^{\mu\nu}.   (26)

We use the same metric as that given in (7) and the same ansatz as (9) for the gauge field; we have substituted them in the last line. It is known that this model exhibits linear conductivity. This is the simplest model to demonstrate the failure of the real-action method. Now, the current density is given by

J = -\frac{\partial \mathcal{L}}{\partial a'} = \frac{f(u)\, a'(u)}{u}.   (27)

In contrast to (11), the relationship between J and a′ is linear, and a′ has no chance to be complex as far as we choose the constant of integration J to be real: the reality condition for the gauge field (hence for the on-shell action) just states that J has to be real, and it gives no further constraint on J. Therefore, the reality of the action itself cannot fix the physical value of the current density in this simplest case. As we will see in the next section, imposing regularity is more useful than reality in a general model.
4 Regularity condition and patchwork condition
Regularity condition
Let us see how the regularity condition works for the model of (25). Since this model exhibits linear conductivity, the conductivity can be calculated by a method using the Kubo formula without using the method we are about to propose. However, we use this simplest model to demonstrate how our method works. As we have mentioned in Sec. 2, our regularity condition requests that F be regular in the entire region of the bulk spacetime. In the Schwarzschild coordinates, we have

\mathcal{F}(u) = \frac{2u^6}{f(u)}\left(J^2 - E^2 g_{xx}\right).   (28)

F(u) has a chance to be divergent only at the event horizon, where g_tt vanishes. In order to avoid the divergence of F(u), we need to set E²g_xx − J² = 0 at the horizon, which gives

J = \sqrt{g_{xx}(u_h)}\, E,   (29)

provided that we choose the correct sign. When (29) holds, F remains finite at u = u_h. Since F is a scalar, the regularity of F leads us to the same conclusion in the ingoing Eddington-Finkelstein coordinates. However, it is instructive to see what is going on there. The 5-dimensional AdS-Schwarzschild geometry in the ingoing Eddington-Finkelstein coordinates is given by

ds^2 = \frac{1}{u^2}\left(-f(u)\, dτ^2 - 2\, dτ\, du + d\vec{x}^2\right),   (30)

where f(u) = 1 − u⁴/u_h⁴. The u coordinate is common between (7) and (30), and dτ = dt − du/f(u). Let us employ the following ansatz for the gauge field:

A_x = -Eτ + h(u),   (31)

and we set the other components of the gauge field to zero. Note that h(u) and a(u) are different from each other. In the ingoing Eddington-Finkelstein coordinates, F is expressed as

\mathcal{F} = 2u^4\, h'\left(2E + f(u)\, h'\right).   (32)

If (g_ττ/g_τu)h′ is nonzero at the horizon, h′ has to diverge there. This makes F divergent at the horizon. Therefore, regularity of F ensures (g_ττ/g_τu)h′ → 0 at the horizon. Now, the current density is given as

J = \sqrt{g_{xx}}\left(E + \frac{g_{ττ}}{g_{τu}}\, h'\right) = \frac{E + f(u)\, h'}{u}.   (33)

As we have seen in (27), the reality of h′ does not give any constraint on J except for the trivial statement that J is real. However, if we request the regularity of F, the h′-dependence of the right-hand side of (33) vanishes at the horizon. Then, J is given by (29).
One finds that the idea of the past studies mentioned in Sec. 2, namely requiring regularity of the solution (h′ in this case) at the point of degeneration (the event horizon in the present model), also works here.
Patchwork condition
Let us consider the behavior of h′ in more detail in the ingoing Eddington-Finkelstein coordinates. When u ≠ u_h, g_{ττ} is nonzero and we obtain from (33). Since f(u) is a monotonically decreasing function that crosses zero at u = u_h, the behavior of h′ can be categorized into the following three types. (1) One finds that F is divergent at the horizon. (2) We find h′|_{u→u_h} = −E/4, which is finite; h′(u) is a smooth single-valued function of u that crosses the horizon smoothly. One finds that and is regular even at the horizon. F is singular at u = ∞, but this singularity is causally disconnected from an observer outside the horizon. (3) One finds that F is divergent at the horizon. Note that if we employ the Schwarzschild coordinates in the model of (25), the "patchwork" we explain in this section cannot be seen, because the point u = u_* for this model is located at the event horizon. However, we can observe the patchwork in the models with higher-derivative terms given in Sec. 5 and Sec. 6 even in the Schwarzschild coordinates (see Appendix C), because for these models the points u = u_* are located outside the event horizon, in the region covered by the Schwarzschild coordinates.
The behaviour of h′(u) is shown in Fig. 1, and Fig. 2 shows the behaviour of F as well. We have defined h̃′(u) = h′(u)u_h². Suppose that we keep E fixed and change the value of J, which is a constant of integration, from J > J_phys to J < J_phys, where J_phys = √(g_{xx}(u_h)) E is the physical value. When J > J_phys, the plot of h′(u) separates into two curves that diverge at the horizon. These two curves approach each other and merge as J goes to J_phys, and h′(u) separates into two divergent curves again when J goes below J_phys. We call this behaviour of h′(u) at J = J_phys "patchwork." In this model, the regularity condition is equivalent to the condition that the patchwork occurs.

Fig. 1 (a) Contour plot of J(u, h′) for the model given by (25). Each contour represents the solution h′(u) for given J. Here, J and h′ are rescaled as j = u_h³J and h̃′ = u_h²h′, respectively, and ∆j = j − j_phys. The solid and dashed cyan curves are the contours at j = j_phys that go through the patchwork point. The solid cyan curve is the regular physical solution and satisfies the proper boundary condition at u = 0. (b) Surface plot of J(u, h′) for the model given by (25). The orange surface represents ∆j, while the top plane of the translucent blue box represents ∆j = 0. The solid and dashed cyan curves are the sections of these two surfaces. The crossing point of these curves is the saddle point of the orange surface. Here, we set e = 2 in these panels.
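To make the branch merging concrete, one can simply sample h′(u). The explicit relation h′(u) = (Ju − E)/f(u) used below (in units u_h = 1, up to overall normalization) is our reconstruction from the facts stated in the text rather than a quoted formula; it does reproduce h′(u_h) = −E/4 at J = J_phys and the two divergent branches otherwise.

```python
# Hedged numerical sketch: our reconstruction of (34) for the linear model in
# the ingoing EF coordinates (u_h = 1):  h'(u) = (J u - E)/f(u),  f = 1 - u^4.
import numpy as np

E = 2.0
f = lambda u: 1.0 - u**4
hp = lambda u, J: (J * u - E) / f(u)

for J in (E - 0.5, E, E + 0.5):        # below, at, and above J_phys = E
    print(f"J = {J:.1f}:  h'(0.999) = {hp(0.999, J):+9.3f}")
# Only J = J_phys stays finite: there h'(u) = -E/((1+u)(1+u^2)), which tends
# to -E/4 at the horizon, reproducing the value quoted in the text.
```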
The condition that the patchwork occurs can be expressed by two equations. To illustrate the behavior of the solution, we use the analogy of a topographic map. Suppose that J(u, h′) is the "height of land" at a location given by the "two-dimensional coordinates" (u, h′). Then the curves in Fig. 1(a) show the contour map (the curves of equal height J) of the "landform." We find that the location where the patchwork occurs is the saddle point of the landform, and J = J_phys is the height of the saddle point. A point where contours cross is, in general, a saddle point; since the patchwork point is a point where the contours of J = J_phys cross, it is obviously a saddle point. From this observation, we impose the following condition for the presence of a stationary point in the (u, h′) plane: where u = u_* is the location of the saddle point. In the present case u_* = u_h, but u_* can differ from u_h in more general models, as we shall see later. We also emphasize that Eq. (35) is a condition on the partial derivative ∂J/∂u rather than the ordinary derivative dJ/du, because the condition arises from the presence of the saddle point in the parameter space (u, h′). Moreover, dJ/du always vanishes because it is just the equation of motion of A_x. Since h′ = F_{ux} in our convention, the foregoing two equations are rewritten as which we have exhibited as (4) and (5) in Sec. 2. We call the simultaneous equations (35) and (36), or (37) and (38), the "patchwork condition." Note that the regularity condition for the present model has reduced to these simultaneous equations, so the conductivity can be obtained by solving them. Indeed, (36) gives √(g_{xx}) g_{ττ}/g_{τu}|_{u=u_*} = 0, which yields u_* = u_h. (For models with higher powers of F, as we shall see in Sec. 5, we need to solve both (35) and (36) to obtain u_*.) We obtain h′|_{u=u_*=u_h} = −E/4 from (35), and we find that (g_{ττ}/g_{τu})h′ vanishes at u = u_* = u_h. Then we obtain (29) from (33).

Fig. 2 F(u) for the model given by (25) with respect to various values of J. The cyan curves correspond to those in Fig. 1. ∆j is defined as ∆j = u_h³(J − J_phys). Here, we set e = 2.

We can interpret the physical meaning of the patchwork condition as follows. Let us decompose A_µ as A_µ = Ā_µ + a_µ and consider the small fluctuation a_x around the background solution Ā_x. We expand the Lagrangian density to quadratic order in a_x as where Here, γ^{µν} denotes the effective metric that governs the dispersion relation of the small fluctuation a_x, and γ is the determinant of the effective metric. Although the effective metric agrees with the metric of the bulk geometry in the present model, it depends on E in general when the original Lagrangian contains higher powers of F. In the case of the DBI theory, the effective metric is given by the open-string metric [48,49]. We obtain Note that the left-hand side of (40) is ∂²L/∂F_{µx}∂F_{ρx} evaluated on the given background. We assume that √(−γ)γ^{xx} is non-vanishing (Assumption 1). Then (38) is understood as the condition γ^{uu}|_{u=u_*} = 0, which means that u = u_* is the location of the "effective horizon": the small fluctuation a_x cannot propagate from inside the effective horizon (u > u_*) to outside it (u < u_*) across u = u_*.

In the higher-order models such as those we consider in Sec. 5, Eq. (36) can also be understood as the condition that the equation of motion has multiple roots at u = u_*. The equation of motion is in general given as a polynomial in h′, P(h′) ≡ J(u, h′) − J = 0. For example, the equation of motion (44) is a cubic equation in h′(u). The discriminant of the polynomial, disc_{h′}(P), is given by the resultant of the polynomial and its derivative, res_{h′}(P, ∂_{h′}P). The resultant vanishes if and only if the equations P = 0 and ∂_{h′}P = 0 have a common root. P = 0 is the equation of motion that has to be satisfied, and Eq. (36) states that ∂_{h′}P = ∂_{h′}J(u, h′) = 0 has a root at u = u_*. Therefore, the discriminant vanishes at u = u_*, and the equation of motion has multiple roots there. This is consistent, since the patchwork point is the point where multiple roots exist. For the present model, however, the equation of motion (33) is a linear equation in h′, and this comment does not apply.
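The discriminant argument can be checked mechanically. The sympy sketch below (ours, with generic symbolic roots rather than the model's actual cubic (44)) verifies that res(P, ∂_{h′}P) vanishes exactly when P has a multiple root.

```python
# Sketch (ours): the resultant of a cubic and its derivative vanishes
# identically when the cubic has a double root, and not otherwise.
import sympy as sp

hp, r, s = sp.symbols("hp r s")
P_double = sp.expand((hp - r)**2 * (hp - s))            # double root at hp = r
P_simple = sp.expand((hp - r) * (hp - s) * (hp - r - 1))  # generic simple roots

print(sp.resultant(P_double, sp.diff(P_double, hp), hp))   # -> 0 identically
print(sp.factor(sp.resultant(P_simple, sp.diff(P_simple, hp), hp)))  # nonzero
```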
The other condition (37) is understood as follows. The equation of motion of A_x is written as Let us assume that dF_{ux}/du vanishes at u = u_* (Assumption 2). Then (37) is equivalent to the equation of motion (41) at the location of the effective horizon u = u_*, where (38) holds.
Recall that (37) is the condition for the partial derivative with respect to u.
When Assumption 1 and Assumption 2 hold, the patchwork condition (and hence the regularity condition) requests the presence of an effective horizon on the background solution. As far as the authors have checked, all the models we deal with in this paper satisfy these assumptions at u = u_*. This suggests that the notion of an effective temperature in nonequilibrium steady states exists in a wide range of holographic models, not only in those whose action has the square-root structure.
Now we comment on the coordinate system we employ. If we use the Schwarzschild coordinates, the patchwork phenomenon is not observed explicitly for the field a′(u) in the present model, which is linear in F. The reason is as follows. The patchwork phenomenon occurs in such a way that the regular solution penetrates the effective horizon from the outside to the inside. In the present model, the location of the effective horizon coincides with that of the event horizon of the bulk geometry. Since the Schwarzschild coordinates cover only the outside of the event horizon, we cannot observe the recombination of the solutions, in which a solution inside the horizon and one outside the horizon merge into a single regular solution. However, we can work in the Schwarzschild coordinates in the more general models presented in Sec. 5 and Sec. 6, where the effective horizon, the point at which the patchwork occurs, is located outside the event horizon. In this case, the inside of the effective horizon is at least partly covered by the Schwarzschild coordinates.
Even though we work in the ingoing Eddington-Finkelstein coordinates in Sec. 5 and Sec. 6, we also present how the method works in the Schwarzschild coordinates in Appendix C.
Higher-order models
We now demonstrate that the patchwork condition proposed in Sec. 4.2 works in more general models that contain higher powers of F but lack the square-root structure.
The bulk geometry is the AdS-Schwarzschild black hole whose metric is given by (30).
F² model
Now, we consider the following action involving a nonlinear term.
where c is a real constant. F is given by the first equality of Eq. (26). We will refer to this model as the F² model and consider the finite-temperature case here. Let us employ the ingoing Eddington-Finkelstein coordinates, in which the metric is given by (30). The Lagrangian density with our ansatz (31) is explicitly written as The equation of motion for A_x is given by where J is an integration constant. This is a cubic equation for h′(u) that has three roots.
The explicit expressions of these roots are very complicated, and we do not exhibit them here.
Let us see the asymptotic behaviors of the three roots. In the vicinity of the boundary of the bulk geometry, the solutions have the following series expansions: Only the first solution (45) matches our boundary condition h(u) = 0 at u → 0. On the other hand, the solutions in the vicinity of the event horizon are expanded as follows: Note that we have rescaled E and J as e = u_h²E and j = u_h³J. We also obtain respectively. If we choose either of the first two solutions (47) and (48), F(u), given by (50) and (51) respectively, remains finite at u = u_h for any combination of j and e with e ≠ 0: the regularity of F(u) at u = u_h alone does not fix the conductivity at all. Let us survey the behavior of the solutions in the entire region 0 ≤ u ≤ u_h. The numerical results are shown in Fig. 3, where h′(u) as a function of u is given for various values of j, for e = 2 and c = 1. We find that the patchwork occurs when j = j_phys = 2.8424. The solutions with j = j_phys are indicated by the solid and dotted cyan curves in Fig. 3 and Fig. 4. In Fig. 4, we find that F(u) is regular in the entire region of the bulk spacetime for the solid cyan curve. Furthermore, it shows the desired behavior (45) at u = 0, as indicated in Fig. 3. Therefore, the solid cyan curve is the physical solution. We also find that h′(u) for the physical solution is finite at u = u_h, whereas that for the dotted cyan curve goes to −∞ (see Fig. 3(a)). Therefore, we can identify (47), and hence (50), with the physical solution when j is chosen to be j_phys for given e.
The location of the patchwork (we call it the patchwork point) is the saddle point of the "landform" in Fig. 3(b) if we regard j = u_h³ J(u, h′(u)) as the "height." The location of the saddle point (u_*, h′(u_*)) is given by the patchwork condition. Eqs. (35) and (36) for the present case are explicitly given as There are naively six choices for the set of ũ_*⁴ and h̃′(u_*), but actually only five, because h̃′(u_*) is degenerate when we choose (55). We determine the physical choice by imposing that the results reduce to those of the linear theory as e → 0, i.e. lim_{e→0} ũ_* = 1 and lim_{e→0} h̃′(u_*) = 0. In the present case, only the choice of Eq. (57), with the positive sign of the double sign in (58), shows the desired behavior. For this reason, we choose (57); then ũ_* is given by The current density is obtained by substituting these results into Eq. (44) evaluated at u = u_*. Our choice correctly leads to j > 0 for e > 0, which corresponds to choosing the positive sign of the double sign in (58). The location where the patchwork occurs in Fig. 3 and the value of j_phys we found numerically are reproduced by substituting e = 2 and c = 1 into the above equations.
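The same pair of equations can be solved numerically for any model once J(u, h′) is known. A minimal driver is sketched below; for definiteness it uses the reconstructed linear-model current from the sketch in Sec. 4 (our reconstruction, units u_h = 1), for which the saddle is known analytically to sit at (u_*, h′_*) = (u_h, −E/4) with j_phys = e. The cubic or quintic currents of (44) or (68) could be dropped in with no change to the driver.

```python
# Numerical patchwork-condition solver (our sketch): find the saddle of
# J(u, h') by solving dJ/du = 0 and dJ/dh' = 0 simultaneously.
import numpy as np
from scipy.optimize import fsolve

E = 2.0
def J(u, hp):
    # reconstructed linear-model current in EF coordinates, u_h = 1
    return ((1.0 - u**4) * hp + E) / u

def patchwork(x, eps=1e-7):
    u, hp = x
    dJdu = (J(u + eps, hp) - J(u - eps, hp)) / (2 * eps)
    dJdhp = (J(u, hp + eps) - J(u, hp - eps)) / (2 * eps)
    return [dJdu, dJdhp]

u_s, hp_s = fsolve(patchwork, x0=[0.9, -0.3])
print(u_s, hp_s, J(u_s, hp_s))   # -> 1.0, -0.5, j_phys = 2.0 = e
```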
Let us check the validity of our computations by comparing them with the results from the DBI theory (10). The action of the DBI theory can be expanded as If we employ the ansatz (31), each term contributes as respectively. Dropping the constant term, the Lagrangian density of (61) is written as which agrees with Eq. (43) with c = 1. Therefore, our results for c = 1 must agree with those of the DBI theory to sub-leading order in the expansion with respect to the electric field e. Eqs. (59) and (60) are expanded as and respectively. Indeed, they agree with (21) and (22) to sub-leading order of the small-e expansion when c = 1.
F³ model
Let us add an F³ term to the action of the F² model with c = 1: where b is a real constant. We will refer to this model as the F³ model. If we set b = 1, this action equals the DBI action truncated at 6th order in F_{µν} with the ansatz (31) (when we do not consider fluctuations). We employ the ingoing Eddington-Finkelstein coordinates.
The equation of motion for A_x within the ansatz (31) is explicitly given by In this case, the equation of motion is a quintic equation for h′(u).
Let us examine the behavior of the solutions numerically since there is no algebraic formula for the roots of the quintic equation.
Let us consider the case b = 1 and e = 2, for example. The patchwork condition formally gives four different values of u_*⁴, but one is imaginary and two are negative. The only physical value is u_* = 0.68475 u_h. For this value of u_*, the patchwork condition gives two values of h′, namely h′ = −0.389243 u_h^{-2} and h′ = −4.73799 u_h^{-2}. The former yields j = 2.9621 while the latter gives j = −2.9621; we choose j > 0. As a result, we obtain j_phys = 2.9621, with the patchwork located at (u_*, h′) = (0.68475 u_h, −0.389243 u_h^{-2}). Figure 5 shows the solutions for various values of j with b = 1 and e = 2. We find that the patchwork indeed occurs when j = j_phys, at (u_*, h′) = (0.68475 u_h, −0.389243 u_h^{-2}). Although we could not obtain exact analytic expressions for u_* and h′(u_*), we obtain approximate expressions by employing the small-e approximation. We expand u_* and h′(u_*) as We take the physical choice of ũ_* and h̃′(u_*) in the same manner as in the F² model. The current density is obtained as Eq. (71) agrees with Eq. (22) of the D3-D7 model to order e⁵ when b = 1. If we set b = 0, the coefficient of e⁵ in Eq. (71) becomes (−1) × 3/16, which agrees with the coefficient of e⁵ in Eq. (66) of the F² model with c = 1. Interestingly, the order-e⁵ contribution to j vanishes when b = 2.
We have a comment on the regularity of the solutions in this model. Figure 6 shows F as a function of u for various values of j. One finds that there are solutions that satisfy the right boundary condition at u = 0 and whose F is regular for 0 ≤ u ≤ u_h. They start from F = 0 at u = 0 and penetrate the event horizon of the bulk geometry with negative but finite F, without encountering the "patchwork." In this sense, these solutions are regular outside the event horizon. However, we find that ∂_u F of these solutions diverges at a point u > u_h: in terms of a scalar quantity, (∇_µF)(∇^µF) is singular inside the event horizon. We exclude these solutions for the following reason.
Let us consider the effective metric γ_{ab} on these solutions. We define the singular point u = u_sing as the point where (∇_µF)(∇^µF) diverges. Since these solutions reach the singular point without encountering the patchwork phenomenon, γ^{uu} < 0 throughout the region 0 ≤ u < u_sing, including the location of the event horizon. We find that the other components of the effective metric do not change sign and det γ_{ab} > 0 in 0 ≤ u ≤ u_sing. Therefore we regard the singularity at u_sing as not hidden by any (effective) horizon from the viewpoint of the fluctuations governed by the effective metric, and we discard these solutions.

It is interesting to refer to the results for the F² model at T = 0 given in Appendix B. The F² model yields a singularity at u = ∞ in the presence of J and E. There is no event horizon in the bulk since T = 0; however, the singularity is hidden by the effective horizon in this case.

Note that, in general, the speed of propagation of signals can be modified in theories with higher-derivative terms, and it can even be superluminal. When this is the case, the causal structure has to be discussed based on the fastest mode, not on the null hypersurface of the bulk geometry. See, for example, [51] for discussions in Gauss-Bonnet gravity.
DBI model
For comparison, let us consider the model with the DBI action (6). The Lagrangian density after integration over the S³ part, in the ingoing Eddington-Finkelstein coordinates and in our convention, is explicitly given by where N is the same as in (10).
The current density J in the x-direction is obtained as The condition (36) gives Note that u_* can be determined solely from (74) in the present case. h′(u_*) can be determined by solving the condition corresponding to (35) together with (74); however, we do not need to do so in the present model. If we substitute (74) into (73), h′(u) + E g_{τu}/g_{ττ} cancels in (73) at u = u_* up to the sign, and we obtain where we have chosen the sign in such a way that J > 0 for E > 0 in the last equality. This agrees with what we obtained in (17). Of course, if we work in the Schwarzschild coordinates, we obtain (15) instead of (74), and (17) is reproduced.
Let us see explicitly that the patchwork indeed occurs for the regular solution. Figure 7 shows the solutions h′(u) as functions of u for various values of j. The cyan curve corresponds to the solution with j = j_phys, where j_phys is the value of j obtained from Eq. (19). We find that the patchwork occurs when j = j_phys. This is the only solution that is smoothly connected from u = 0 to u = u_h. Figure 8 shows F(u) for various j = j_phys + ∆j. We can see that F(u) is regular only when j = j_phys.
Nonlinear friction coefficient
In this section, we apply our computational method to the calculation of the nonlinear friction coefficient. We employ the model in which a Nambu-Goto string is dragged in the 5-dimensional AdS-Schwarzschild black hole geometry (7) [6,7]. We call this model the NG model for short. The action of the test string is where h_{αβ} = g_{µν}∂_αx^µ∂_βx^ν is the induced metric and α, β denote worldsheet indices. We take the worldsheet coordinates as σ^α = (t, u) and employ the ansatz ⃗x = (vt + ξ(u), 0, 0) for the configuration of the string, where v is the velocity of the endpoint of the string at the boundary of the bulk geometry. The action becomes where the dot denotes differentiation with respect to t. We define the Lagrangian density by S_NG = (1/2πα′)∫dt du L_NG. The system has a constant of motion associated with the u-coordinate, given by Now the patchwork condition is given by the following two simultaneous equations: The condition (80) is explicitly given by Note that (81), which is f(u_*) − v² = 0, solely determines u_* in this case, as we saw for the DBI theory. The locus of the effective horizon is given by u_*⁴/u_h⁴ = 1 − v². We can also obtain ξ′(u_*) by solving (79) together with (80), but we do not need to do so in the present case.
Substituting f(u_*) − v² = 0 into (78), ξ′ cancels up to the sign in (78), and we obtain π_ξ = ±u_*^{-2}v, which is explicitly given as where we have chosen the sign so that π_ξ > 0 for v > 0. This quantity corresponds to the drag force acting on the system moving at velocity v. The result completely agrees with those in Refs. [6,7]. Note that the factor of 2πα′ is absent due to our definition of π_ξ. Now, we consider a truncated action obtained by the derivative expansion of (77) to order (∂_αx)⁴. We call this model the (∂x)⁴-truncated model. The relevant part of the Lagrangian density is given by The constant of motion is given by We obtain the patchwork condition as In this case, we need to solve both of the above equations to obtain u_* and ξ′(u_*). The suitable solution is Evaluating (84) at u = u_* with these results, we obtain The result agrees with (82) to order v³.
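A quick numerical cross-check of the two drag-force results quoted here: in units u_h = 1 (and with the 2πα′ stripped off, as in the convention above), π_ξ = u_*^{-2}v with u_*⁴ = 1 − v² gives π_ξ = v/√(1 − v²), and the truncated result (88) must agree with its Taylor expansion v + v³/2 through order v³. The script below is our sketch of that comparison.

```python
# Sketch (ours): NG drag force vs. its small-v truncation, units u_h = 1.
import numpy as np

def pi_full(v):       # Eq. (82): pi_xi = v / u_*^2 with u_*^2 = sqrt(1 - v^2)
    return v / np.sqrt(1.0 - v**2)

def pi_cubic(v):      # Taylor expansion to O(v^3), which (88) must reproduce
    return v + 0.5 * v**3

for v in (0.1, 0.3, 0.6, 0.9):
    print(v, pi_full(v), pi_cubic(v))
# the two columns agree at small v and depart as v -> 1, where the
# derivative expansion of the NG action breaks down
```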
We expect that the patchwork conditions (79) and (80) realize a regular configuration. Here, π_phys is the value of π_ξ that satisfies the patchwork condition; π_phys is explicitly given by (82) and (88) for the NG model and the (∂x)⁴-truncated model, respectively. The cyan curves that realize the patchwork phenomenon correspond to the solutions with ∆π_ξ = 0. ξ′(u), which is not a gauge-invariant scalar quantity, diverges at u = u_h as an artifact of the coordinate singularity of the Schwarzschild-AdS₅ geometry.
An appropriate quantity with which to see the regularity is the scalar curvature R constructed from the induced metric h_{αβ} [9].
For the (∂x) 4 -truncated model, the behavior of the scalar curvature is more complicated.
From Eq. (84), we obtain three different ξ′(u) as solutions for a given π_ξ, and they give different R from one another. Figures 10(b) and (c) show R as a function of u/u_h for several ∆π_ξ. The red, blue, and green families of curves correspond to the three roots ξ′(u) of Eq. (84). In the region u < u_*, the red curves correspond to the solutions satisfying the desired boundary condition at u = 0. We see that the red curves diverge at u = u_* when ∆π_ξ ≠ 0. When ∆π_ξ = 0, the red curve in the region u < u_* reaches u = u_* with a finite value of R, and it connects smoothly to the blue curve, which belongs to a different branch of solutions. This smooth switching between different roots is the same as what happened in the models studied in the previous sections. Note that the smooth blue curves in Figs. 10(b) and (c) do not satisfy the right boundary condition at u = 0. After all, we find that R is regular everywhere in 0 < u < u_h only when π_ξ = π_phys for the solutions with the appropriate boundary condition at u = 0.
Anisotropy of effective temperature
In this section, we consider the effective horizon and the effective temperature of the F² model. We apply the patchwork condition to find the location of the effective horizon.
We find that the F² model exhibits different effective temperatures for different modes. For simplicity, we use the Schwarzschild coordinates in this section. Let us consider the effective temperature of the F² model (42) as detected by a fluctuation of the vector potential a_M around the background configuration. In general, the Lagrangian density can be expanded as where Using the Lagrangian density of Eq. (42), we obtain
Effective temperature for longitudinal modes
We consider a perturbation a_x that is longitudinal to the background current. We assume that a_x depends only on t and u. In this case, a_x(t, u) decouples from the other fluctuation modes. The Lagrangian density at quadratic order in the perturbation is given by where γ^{MN;xx} denotes the M-N component of the inverse effective metric for a_x. Remarkably, the u-u component can also be written as Since Eq. (38) holds at u = u_*, γ^{uu;xx} vanishes at u = u_*. Therefore, u = u_* is the effective horizon for a_x.
In the vicinity of u = u_*, we obtain where · · · denotes higher-order terms in (u − u_*). In deriving the above expansions, we have used 0 < u_* < u_h. Note that the c-dependence is accommodated in the expression for u_*. We obtain the effective temperature as
Conclusion and Discussions
In this paper, we have proposed a new method to compute nonlinear transport coefficients, such as the nonlinear DC conductivity and the nonlinear friction coefficient, in holography. Our method works even for models without the square-root structure, and thus has wider applicability than the existing conventional method.
Our basic idea is to request regularity, which is a standard philosophy in holography.
The point, however, is that we conjecture the regularity condition to be equivalent to the patchwork condition (with the appropriate boundary conditions at the boundary). We have checked that our conjecture works for several concrete models. Since the patchwork point plays the role of an effective horizon, the notion of an effective temperature is suggested to be quite common and important in the realization of nonequilibrium steady states in holography.
Our method enables us to compute not only the nonlinear responses but also the effective temperatures of nonequilibrium steady states for a wider range of holographic models. We found that the effective temperature for the longitudinal modes can differ from that for the transverse modes in some models, such as the F² model. These analyses may thus shed some light on the mechanism of the isotropization process of the (effective) temperatures.
In the present work, we have analyzed only models whose Lagrangian is given in terms of powers of F = −(1/4)F_{µν}F^{µν} for conductors, and in terms of powers of (∂_αx)² for test objects that undergo a frictional force. It would be interesting to generalize our work to more general models. The above-mentioned points should be investigated in further studies.
where we have employed u_* given in (59). The O(e⁴) contribution differs from the correct conductivity J/E = u_h^{-1} j/e, with j given by (66) from the regularity condition.
B T = 0 limit of the F² model
In the main part of this paper, we have considered the nonlinear conductivity at finite temperature. Here we comment on the zero-temperature case. The D3-D7 model has a nonlinear conductivity even at zero temperature, given by J = E^{3/2}. We obtain this result by taking the e → ∞ limit in (19).
Let us consider the zero-temperature limit of the F² model. In the limit T = 0, Eqs. (59), (60) and (96) become respectively. Here we have chosen the positive sign for J. Although the E-dependencies are ..., one can see some qualitative difference in F(u) between the F² model and the DBI model for E ≠ 0: F(u) for the F² model with E ≠ 0 diverges at u = ∞, as shown in Fig. B1(a). This behavior can also be understood from Eq. (50). At finite temperature, F(u) at u = u_h is written as and F(u_h) diverges in the limit u_h → ∞, where T → 0. However, one should note that this singularity is hidden by the effective horizon located at u = u_* for E ≠ 0.
For the DBI theory, F(u_h) is evaluated by substituting (14) into (26) as and F(u_h) goes to 1/2 in the limit u_h → ∞, where T → 0. We can see this behavior in Fig. B1(b).
C The higher-order models in the Schwarzschild coordinates
We present computations in the Schwarzschild coordinates, where the metric is given by (7).
C.1 F² model
We consider the action given in (42). The Lagrangian density with our ansatz (9) is explicitly written as The equation of motion for A_x is given by ≡ J(u, a′(u)). (C2)
This is a cubic equation for a′(u) that has three roots. The explicit expressions of these roots are very complicated, and we do not exhibit them here.
Let us see the asymptotic behaviors of the three roots. In the vicinity of the AdS boundary, the solutions have the following series expansions: (C4) Only the first solution (C3) matches our boundary condition a(u) = 0 at u → 0. On the other hand, the solutions in the vicinity of the event horizon are expanded as follows: For these solutions, we obtain respectively. If we choose either of the first two solutions (C5), F(u) remains finite at u = u_h for any combination of j and e: the regularity of F(u) at u = u_h alone does not fix the conductivity at all. However, these solutions are not necessarily smoothly connected to (C3), which satisfies the right boundary condition.
Let us survey the behavior of the solutions in the entire region 0 ≤ u ≤ u_h. The numerical results are shown in Fig. C1, where ã′(u) ≡ u_h²a′(u) as a function of u is given for various values of j = u_h³J(u, a′(u)), for e = 2 and c = 1. We find that the patchwork occurs when j = j_phys = 2.8424, and a′(u) satisfies the desired behavior at both u = 0 and u = u_h. Indeed, the location of the patchwork is the saddle point of the landform if we regard j = u_h³J(u, a′(u)) as the "height." Note that F(u) is regular at the horizon although a′(u) diverges there for this solution.
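For a self-contained feel of this Schwarzschild-coordinate behavior, the sketch below uses our reconstruction of the linear model of Sec. 4 (units u_h = 1, overall normalization ours; none of these formulas are quoted from the text): on shell, a′(u) = Ju/f(u) diverges at the horizon for any J, while the scalar F(u) = u⁴(E² − J²u²)/(2f(u)) stays finite there only when E²g_{xx} − J² = 0 at u = u_h, i.e. J = E; the regular value F(u_h) = E²/4 then matches the E²u_h²/4 quoted in Sec. 4. The nonlinear models replace F by (C6)-type expressions, but the regularity scan is the same.

```python
# Hedged sketch (ours): F(u) regularity scan in Schwarzschild coordinates
# for the reconstructed linear model, u_h = 1.
import numpy as np

E = 2.0
f = lambda u: 1.0 - u**4
Fcal = lambda u, J: u**4 * (E**2 - J**2 * u**2) / (2.0 * f(u))

for J in (1.8, 2.0, 2.2):
    print(J, Fcal(0.9999, J))
# only J = 2.0 = E stays O(1), approaching E^2/4 = 1 at the horizon;
# the other values of J make F blow up as u -> 1, so regularity of the
# scalar F picks out the physical current even though a'(u) diverges.
```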
The location of the saddle point (u_*, a′(u_*)) is given by the patchwork condition. Eqs. (35) and (36) for the present case are explicitly given as 0 = ũ_*²/3 − ⋯ The current density is obtained by substituting these results into Eq. (C2): (C12)
The sign should be chosen in such a way that j·e > 0. This agrees with the result (60) obtained in the ingoing Eddington-Finkelstein coordinates.
C.2 F³ model
Let us consider the F³ model given by (67). The equation of motion within the ansatz (9) is explicitly given by In this case, the equation of motion is a quintic equation for a′(u). Let us examine the behavior of the solutions numerically, since there is no algebraic formula for the roots of a quintic equation. We show the solutions for various values of j in Fig. C2. We find that the patchwork occurs at (j_phys, u_*) = (2.9621, 0.68475 u_h) for e = 2. We can employ the small-e approximation as we did in Sec. 5.2: we express ũ_* and ã′(u_*) in powers of e and substitute them into the patchwork condition, obtaining j in powers of e order by order. The results agree with Eq. (71).
Fig. 3 (a) Contour plot of J(u, h′) for the F² model [Eq. (42)]. Each contour represents the solution h′(u) for given J. Here, J and h′ are rescaled as j = u_h³J and h̃′ = u_h²h′, respectively, and ∆j = j − j_phys. The solid and dashed cyan curves are the contours at j = j_phys = 2.8424 that go through the patchwork point. The solid cyan curve is the regular physical solution and satisfies the right boundary condition at u = 0. The vertical dotted and solid lines indicate the locations of the saddle point (effective horizon) u = u_* and the black hole horizon u = u_h, respectively. (b) Surface plot of J(u, h′) for the F² model [Eq. (42)]. The orange surface represents ∆j, while the top plane of the translucent blue box represents ∆j = 0. The solid and dashed cyan curves are the sections of these two surfaces. The crossing point of these curves is the saddle point of the orange surface. Here, we set e = 2 and c = 1 in these panels.
Fig. 5 (a) Contour plot of J(u, h′) for the F³ model [Eq. (67)]. Each contour represents the solution h′(u) for given J. Here, J and h′ are rescaled as j = u_h³J and h̃′ = u_h²h′, respectively, and ∆j = j − j_phys. The solid and dashed cyan curves are the contours at j = j_phys = 2.9621 that go through the patchwork point. The solid cyan curve is the regular physical solution and satisfies the right boundary condition at u = 0. The vertical dotted and solid lines indicate the locations of the saddle point (effective horizon) u = u_* and the black hole horizon u = u_h, respectively. (b) Surface plot of J(u, h′) for the F³ model [Eq. (67)]. The orange surface represents ∆j, while the top plane of the translucent blue box represents ∆j = 0. The solid and dashed cyan curves are the sections of these two surfaces. The crossing point of these curves is the saddle point of the orange surface. Here, we set e = 2 and b = 1 in these panels.
Fig. 6 F(u) for the F³ model [Eq. (67)] with respect to various values of j. The cyan curves correspond to those in Fig. 5. ∆j is defined as ∆j = u_h³(J − J_phys). The vertical dotted and solid lines indicate the locations of the saddle point u = u_* and the black hole horizon u = u_h, respectively. Here, we set e = 2 and b = 1.
Fig. 7 (a) Contour plot of J(u, h′) for the DBI model [Eq. (6)]. Each contour represents the solution h′(u) for given J. Here, J and h′ are rescaled as j = u_h³J and h̃′ = u_h²h′, respectively, and ∆j = j − j_phys. The solid and dashed cyan curves are the contours at j = j_phys = 2.9907 that go through the patchwork point. The solid cyan curve is the regular physical solution and satisfies the right boundary condition at u = 0. The vertical solid line indicates the black hole horizon u = u_h. Note that u = u_* coincides with the cyan dashed line. (b) Surface plot of J(u, h′) for the DBI model [Eq. (6)]. The orange surface represents ∆j, while the top plane of the translucent blue box represents ∆j = 0. The solid and dashed cyan curves are the sections of these two surfaces. The crossing point of these curves is the saddle point of the orange surface. Here, we set e = 2 in these panels.
Fig. 8 F(u) for the DBI model [Eq. (6)] with respect to various values of j. The cyan curves correspond to those in Fig. 7. ∆j is defined as ∆j = u_h³(J − J_phys). The vertical solid line indicates the location of the black hole horizon u = u_h. Here, we set e = 2.
Figure 9(b) shows the scalar curvature as a function of u/u_h for various values of ∆π_ξ for the NG model. One finds that R is regular when ∆π_ξ = 0. R asymptotes to the scalar curvature of AdS₂ with unit radius, R = −2, in the vicinity of u = 0.
Fig. 9 Description of the patchwork phenomenon for the NG model. (a) ξ′(u) vs. u/u_h for various values of ∆π_ξ ≡ u_h²(π_ξ − π_phys). The vertical dashed cyan line is located at u = u_*. The solid cyan curve is the regular physical solution of ξ′(u). (b) Scalar curvature constructed from h_{αβ} as a function of u/u_h for various ∆π_ξ. The vertical dotted line shows the location of u = u_*. The curve is smoothly connected at u = u_* only when ∆π_ξ = 0, i.e. π_ξ = π_phys. In these panels, we set v = 0.9.
Fig. 10 Description of the patchwork phenomenon for the (∂x)⁴-truncated model. (a) ξ′(u) vs. u/u_h for various values of ∆π_ξ ≡ u_h²(π_ξ − π_phys). The vertical dotted line indicates the location of u = u_*. The solid cyan curve is the regular physical solution of ξ′(u). (b) Scalar curvature as a function of u/u_h for various π_ξ. The red, blue, and green families of curves correspond to the three roots ξ′(u) of Eq. (84). The brightness of each curve represents the value of ∆π_ξ, in such a way that brighter curves have lower ∆π_ξ. The range of ∆π_ξ is [−0.5, 0.5]. The vertical dotted line indicates the location of u = u_*. (c) Enlarged view of panel (b) around u = u_*. In all the panels, we set v = 0.9.
Here f_{µν} denotes the fluctuation of the field strength around the background configuration F̄_{MN}. L^{(0)} is the zeroth order in f_{µν}, and M^{µνρλ} ≡ ∂²L(F, u)/∂F_{µν}∂F_{ρλ}. By definition, this tensor satisfies M^{µνρλ} = M^{ρλµν} = −M^{νµρλ} = −M^{µνλρ}. The linearized equation of motion for the perturbations is given by

Fig. B1 (a) F(u) for E = 2 in the F² model with c = 1 at T = 0. The dotted line shows u = u_*. (b) F(u) for E = 2 in the D3-D7 model at T = 0. The dotted line shows u = u_* = 1/√E. Both plots are shown on a log-log scale.
c(11ũ_*⁸ − 14ũ_*⁴) ⋯, ũ_* = [(√(9c²e⁴ + 72ce² + 16) − 3ce² − 8)/(2ce² − 4)]^{1/4},
Fig. C1 (a) Contour plot of J(u, a′) for the F² model in the Schwarzschild coordinates. Each contour represents the solution a′(u) for given J. Here, J and a′ are rescaled as j = u_h³J and ã′ = u_h²a′, respectively, and ∆j = j − j_phys. The solid and dashed cyan curves are the contours at j = j_phys = 2.8424 that go through the patchwork point. The solid cyan curve is the regular physical solution and satisfies the right boundary condition at u = 0. The vertical dotted and solid lines indicate the locations of the saddle point (effective horizon) u = u_* and the black hole horizon u = u_h, respectively. (b) Surface plot of J(u, a′) for the F² model in the Schwarzschild coordinates. The orange surface represents ∆j, while the top plane of the translucent blue box represents ∆j = 0. The solid and dashed cyan curves are the sections of these two surfaces. The crossing point of these curves is the saddle point of the orange surface. Here, we set e = 2 and c = 1 in these panels.
Fig. C2 (a) Contour plot of J(u, a′) for the F³ model in the Schwarzschild coordinates. Each contour represents the solution a′(u) for given J. Here, J and a′ are rescaled as j = u_h³J and ã′ = u_h²a′, respectively, and ∆j = j − j_phys. The solid and dashed cyan curves are the contours at j = j_phys = 2.9621 that go through the patchwork point. The solid cyan curve is the regular physical solution and satisfies the right boundary condition at u = 0. The vertical dotted and solid lines indicate the locations of the saddle point (effective horizon) u = u_* and the black hole horizon u = u_h, respectively. (b) Surface plot of J(u, a′) for the F³ model in the Schwarzschild coordinates. The orange surface represents ∆j, while the top plane of the translucent blue box represents ∆j = 0. The solid and dashed cyan curves are the sections of these two surfaces. The crossing point of these curves is the saddle point of the orange surface. Here, we set e = 2 and b = 1 in these panels.
respectively. For later use, we expand u_* and the current density in powers of e, to O(e⁸),
respectively, and substitute them into the patchwork condition. Then we obtain the coefficients u^{(k)} and h′^{(k)} order by order. The results with arbitrary b are as follows.
"Physics"
] |
Remote creation of coherent emissions in air with two-color ultrafast laser pulses
We experimentally demonstrate the generation of narrow-bandwidth emissions with excellent coherence properties at ∼391 and ∼428 nm from N₂⁺ (B²Σ_u⁺(v′ = 0) → X²Σ_g⁺(v = 0, 1)) inside a femtosecond filament in air by an orthogonally polarized two-color driver field (i.e. an 800 nm laser pulse and its second harmonic). The durations of the coherent emissions at 391 and 428 nm are measured to be ∼2.4 and ∼7.8 ps, respectively, both of which are much longer than the durations of the pump and its second-harmonic pulses. Furthermore, the measured temporal decay characteristics of the excited molecular systems suggest an 'instantaneous' population inversion mechanism that may be achieved in molecular nitrogen ions on an ultrafast time scale comparable to the 800 nm pump pulse.
Introduction
In recent years, lasing actions created remotely in air have attracted increasing interest due to their promising applications in the remote detection of multiple pollutants based on nonlinear spectroscopy [1][2][3][4][5][6][7][8][9][10]. Early experiments demonstrated remote amplified spontaneous emission (ASE)-based lasers, which have enabled operation either at ∼391 and 337 nm using molecular nitrogen [3][4][5] or at ∼845 nm using molecular oxygen [6] as the gain medium. The generation of population inversion was ascribed to the recombination of free electrons with molecular nitrogen ions (N₂⁺) [3][4][5] and resonant two-photon excitation of atomic oxygen fragments [6]. For the backward 845 nm ASE from atomic oxygen and the 337 nm ASE laser from neutral molecular nitrogen, the population inversion mechanisms are well understood [3][4][5][11]. However, the mechanism responsible for the 391 nm ASE from N₂⁺ is not totally clear; that is, the question of how the population inversion behind the 391 nm ASE is established is still open [4].
Remarkably, a series of recent experiments showed that coherent multi-wavelength emissions with perfectly linear polarization (i.e. different from the random polarization of ASE) could be realized in nitrogen (N₂⁺) and carbon dioxide (CO₂⁺) gases using a wavelength-tunable optical parametric amplifier (OPA) laser system with wavelengths in the range of 1.2-2.4 µm, which can produce third and fifth harmonics in air with spectral ranges overlapping the fluorescence lines of N₂⁺ and CO₂⁺ [7][8][9]. These emissions in N₂⁺ (330, 357, 391, 428, 471 nm) and CO₂⁺ (315, 326, 337, 351 nm) are found to be generated on an unexpected femtosecond time scale comparable to that of the pump lasers, indicating that population inversion in N₂⁺ and CO₂⁺ could have been achieved only with intense ultrafast driver pulses. This observation challenges the previous conjecture on the population inversion mechanism based on the recombination of free electrons with the molecular ions, because such a process occurs on a time scale of a few nanoseconds [6]. To shed more light on the mechanisms underlying the ultrafast population inversion, as well as on the coherent emissions themselves, which are both now under hot debate, temporal characterizations of these phenomena based on the concept of a pump-probe measurement are important. The fact that the ultrafast coherent emissions observed in previous experiments employing mid-infrared driver pulses always show a linear polarization parallel to that of the harmonic or supercontinuum indicates that a seeding effect may exist [7][8][9]. However, with mid-infrared pump pulses it is difficult to separate the self-generated harmonics or supercontinua from the driver pulses, making it difficult to vary the delay between the driver pulses and the seeding pulses. In this paper, we address this problem by remote generation of coherent emissions in air with an orthogonally polarized two-color laser field. In this new scheme, the driver pulses are provided by a 40 fs, 800 nm laser amplifier, whereas the 400 nm seed pulses are externally produced by a second harmonic generation process with a nonlinear crystal.
Experimental setup
The pump-probe experiment scheme is illustrated in figure 1. A commercial Ti:sapphire laser system (Legend Elite-Duo, Coherent, Inc.), operated at a repetition rate of 1 kHz, provides ∼40 fs (full-width at half-maximum (FWHM) intensity profile) Fourier-transform-limited laser pulses with a central wavelength at ∼800 nm and a single-pulse energy of ∼6 mJ. The laser beam is first split into two arms using a 1:1 beam splitter with a variable delay: one is used as the pump beam (Pulse 1), and the other passes through a 0.2 mm-thick BBO crystal to produce the second-harmonic probe pulse at 400 nm wavelength (Pulse 2), whose polarization is perpendicular to that of the pump pulses. The pump pulses have a pulse energy of ∼1.9 mJ and a diameter of ∼11 mm, whereas the probe pulses have a pulse energy of ∼3 µJ and a diameter of ∼6 mm, much weaker than the pump pulses. We have confirmed that the narrow-bandwidth emissions at 391 and 428 nm cannot be generated with the probe pulses alone. The pump and probe pulses are combined using a dichroic mirror (DM) with high reflectivity at 400 nm and high transmission at 800 nm, and then are collinearly focused by an f = 40 cm lens into a chamber filled with 180 mbar of nitrogen gas to generate a ∼1 cm-long filament and coherent emission. A small portion of the 800 nm beam split from the output beam of the laser system, with an energy of 440 µJ (indicated as Pulse 3 in figure 1), is used for performing a cross-correlation measurement of the coherent emissions generated in the gas chamber. After passing through the gas cell, the 400 nm probe pulses containing coherent emissions are combined with Pulse 3 by another DM and then are launched into a 2 mm-thick BBO crystal. The sum frequency generation (SFG) signal of the 800 nm pulse and the coherent emission is produced and recorded by a grating spectrometer (Shamrock 303i, Andor) with a 1200 grooves mm⁻¹ grating. The time-resolved SFG signal provides temporal information on the coherent emissions generated in N₂.
Coherent emissions driven by a two-color laser field
Figures 2(a) and (b) show two typical spectra measured in the forward propagation direction, with the narrow-bandwidth emissions generated, respectively, at wavelengths of ∼391 and ∼428 nm in N₂. The emissions at ∼391 and ∼428 nm correspond, respectively, to the transitions (0, 0) and (0, 1) between the vibrational levels of the excited state B²Σ_u⁺ and ground state X²Σ_g⁺ of N₂⁺, as indicated in the inset of figure 1. In these two measurements, the BBO crystal for generating the second-harmonic 400 nm laser light was finely tuned to optimize the ∼391 or ∼428 nm emissions, and the temporal and spatial overlap between the 800 and 400 nm pulses was optimized by maximizing the intensities of the coherent emissions. It is also confirmed that when either the 800 nm pump beam or the 400 nm probe beam is blocked, the line emissions disappear, indicating that both the pump and probe pulses are essential for their creation. Furthermore, by placing a Glan-Taylor polarizer in front of the spectrometer, we examine the polarization of the line emissions at the ∼391 and ∼428 nm wavelengths. As indicated in the insets of figures 2(a) and (b), when the transmitted polarization direction is parallel to that of the 400 nm pulse, which is defined as 0°, both the ∼391 and ∼428 nm emissions are the strongest. In contrast, when the polarizer is rotated by ±90°, the emissions become too weak to be detected. The polarization contrast of the coherent emissions at both ∼391 and ∼428 nm is measured to be ∼10³. Therefore, the line emissions at ∼391 and ∼428 nm are confirmed to have a nearly perfect linear polarization parallel to that of the second-harmonic probe pulses. This important fact indicates that the weak second-harmonic pulses play the role of a seed that activates the coherent emissions. Furthermore, we fitted the experimental curves with theoretical ones calculated using the Malus law, as shown in the insets of figures 2(a) and (b). The deviation between experimental and theoretical results is mainly due to the intensity fluctuation of the measured signals. Specifically, the original spectrum of the 400 nm probe pulses is shown in the left-hand-side inset of figure 2(b). Although the spectral intensities at both ∼391 and ∼428 nm are very low for the probe pulses, these weak signals are necessary for the creation of the laser-like coherent emissions in figures 2(a) and (b). Once the probe pulses are blocked, we observe that the coherent emissions at both ∼391 and ∼428 nm disappear.
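A minimal sketch (ours, with synthetic numbers rather than the measured data) of the Malus-law fit used for the polarization insets: the transmitted intensity is modeled as I(θ) = I₀cos²(θ − θ₀) + C and fitted with scipy, where θ₀ ≈ 0° indicates polarization parallel to the 400 nm seed.

```python
# Hedged sketch: Malus-law fit of polarizer-angle data (synthetic example).
import numpy as np
from scipy.optimize import curve_fit

def malus(theta_deg, I0, theta0_deg, C):
    th = np.deg2rad(theta_deg - theta0_deg)
    return I0 * np.cos(th)**2 + C

theta = np.arange(-90, 91, 15)                  # polarizer angles in degrees
rng = np.random.default_rng(0)
data = malus(theta, 1.0, 0.0, 1e-3) + 0.02 * rng.standard_normal(theta.size)

popt, _ = curve_fit(malus, theta, data, p0=[1.0, 5.0, 0.0])
print("I0, theta0, C =", popt)   # theta0 ~ 0 deg: parallel to the seed
```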
The dependence of coherent emissions on the time delay
To gain deeper insight, we investigate the intensities of the coherent emissions at both ∼391 and ∼428 nm as functions of the time delay between the pump and the probe pulses (τ₁), as shown in figures 3(a) and (b), respectively. Here, the zero time delay is indicated by the green arrows in both figures 3(a) and (b), and a positive delay means that the second-harmonic 400 nm probe pulse is behind the fundamental 800 nm pump pulse. As shown in figure 3(a), the emission at ∼391 nm first increases rapidly on a time scale of ∼400 fs (see the inset of figure 3(a)), which reflects the long pulse duration of the second harmonic (∼700 fs, see later), and then shows a slow exponential decay with a decay constant of ≈46.2 ps, as indicated by the red dashed line. It is noteworthy that when the time delay exceeds ∼1 ps, the 800 nm pump pulses and the second-harmonic probe pulses are essentially temporally separated, because the pulse durations of both the pump and probe pulses are significantly shorter than ∼1 ps. However, even when the pump and probe pulses are temporally separated, the line emission at ∼391 nm can still be generated with perfectly linear polarization parallel to the 400 nm probe light. Not surprisingly, like most strong-field molecular phenomena, which are sensitive to molecular alignment and revival, we observe in this pump-probe experiment a modulation of the line emission at the times (1/2)T_rot, T_rot, and (3/2)T_rot (T_rot is the revival period of nitrogen molecules) [12,13], as indicated in the inset of figure 3(a). The mechanism behind this might be a modulation of the intensity of the probe pulses owing to the periodic focusing and defocusing in the filament caused by the dynamic change of the alignment degree of the N₂ molecules [14,15]. Figure 3(b) shows a similar decay behavior of the line emission at ∼428 nm, but with a much shorter decay time of ∼2 ps.
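The slow decay constants quoted here (≈46.2 ps at 391 nm, ∼2 ps at 428 nm) come from exponential fits to the delay scans; a minimal sketch of such a fit, on synthetic data rather than the measured scan, is given below.

```python
# Hedged sketch: single-exponential fit of the delay scan beyond ~1 ps,
# where pump and probe are already temporally separated.
import numpy as np
from scipy.optimize import curve_fit

tau = np.linspace(2.0, 150.0, 60)               # delay in ps
rng = np.random.default_rng(1)
signal = 0.8 * np.exp(-tau / 46.2) * (1 + 0.03 * rng.standard_normal(tau.size))

decay = lambda t, A, td: A * np.exp(-t / td)
(A, td), _ = curve_fit(decay, tau, signal, p0=[1.0, 30.0])
print(f"decay constant ~ {td:.1f} ps")          # recovers ~46 ps
```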
Temporal structures of coherent emissions
Lastly, by introducing the third laser beam at 800 nm (Pulse 3), a cross-correlation measurement is carried out to obtain temporal information on the coherent line emissions at both ∼391 and ∼428 nm. Figures 4(a)-(c) show the frequency- and time-resolved SFG signals of the 800 nm and 400 nm probe pulses at ∼267 nm, of the 800 nm pulse and the ∼391 nm line emission at ∼263 nm, and of the 800 nm pulse and the ∼428 nm line emission at ∼279 nm, respectively. We confirm that the narrow-bandwidth signals at ∼263 and ∼279 nm are unambiguously from the SFG of the coherent line emissions and the 800 nm pulses on the basis of the following two points. Firstly, in comparison with the SFG signal of the 800 and 400 nm probe pulses shown in figure 4(a), both the SFG signal of the 800 nm pulse and the coherent emission at ∼391 nm and that of the 800 nm pulse and the coherent emission at ∼428 nm, shown in figures 4(b) and (c) respectively, have much narrower spectra, because the coherent emissions at ∼391 and ∼428 nm have narrower bandwidths than the second-harmonic 400 nm pulses. Secondly, the SFG signals at ∼263 and ∼279 nm cannot be observed in vacuum or in argon. Here, the zero point of the time delay τ₂ is defined as the point at which the 800 and 400 nm probe pulses are well overlapped, and a positive delay indicates that the second-harmonic 400 nm probe pulse is behind the fundamental 800 nm pump pulse. It should also be pointed out that, to obtain the three afore-mentioned SFG signals, we carefully adjusted the phase-matching angle φ of the nonlinear crystal to optimize each SFG signal. Figure 4 shows that the SFG signals centered at 263.1 and 278.9 nm, which reflect the temporal profiles of the line emissions at 391 and 428 nm, start to rise gradually after the SFG signal centered at 267.2 nm (i.e. the contribution from the broad-bandwidth 400 nm probe pulses and the 800 nm pulse). From the SFG signal centered at 267.2 nm, the pulse duration of the 400 nm pulse (FWHM) at the crystal is obtained to be ∼700 fs, due to the positive chirp induced by the dispersion in the windows, crystals, etc., and the cross-phase modulation during filamentation. In contrast, the pulse durations of the coherent emissions at ∼391 and ∼428 nm (FWHM) are ∼2.4 and ∼7.8 ps, respectively, which are much longer than that of the 400 nm probe pulses.
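The pulse durations are read off the SFG traces as FWHM values; the sketch below (ours, on a toy Gaussian trace rather than measured data) shows the half-maximum crossing read-off that yields numbers like the ∼2.4 ps quoted for the 391 nm emission.

```python
# Hedged sketch: FWHM read-off of a cross-correlation trace S(tau2).
import numpy as np

tau2 = np.linspace(-10, 10, 2001)                # delay in ps
S = np.exp(-4 * np.log(2) * (tau2 / 2.4)**2)     # toy trace with 2.4 ps FWHM

half = 0.5 * S.max()
above = tau2[S >= half]                          # region above half maximum
print(f"FWHM ~ {above[-1] - above[0]:.2f} ps")   # -> ~2.40 ps
```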
Discussion
The mechanism responsible for the coherent forward emissions is yet to be clarified. Noting that the polarization of the line emissions is determined by the polarization of the 400 nm probe pulses despite their completely different pulse durations, a possible scheme of seed amplification enabled by the generation of population inversion in N₂⁺ is considered. In this situation, the population inversion has to be established within an ultrashort time period to initiate the amplification of the second harmonic, which is resonant with the transitions of electronic states in N₂⁺. This finding suggests an 'instantaneous' population inversion mechanism in molecular nitrogen ions. It is noteworthy that here the word 'instantaneous' is used as a counterpart of the relatively slow pumping processes in the previously demonstrated ASE-based remote lasing experiments [3][4][5][6], in which the build-up of population inversion occurs on the nanosecond time scale. As shown in figures 3(a) and (b), the instantaneous creation of population inversion is evidenced by the pump-probe characterization of the coherent emission generation, i.e. the amplification of the probe pulses is achieved within an ultrashort time scale after the pump and probe pulses begin to temporally overlap.
It is known that the ejection of an inner-valence electron (HOMO−2) of N₂ leaves the ion N₂⁺ in the excited B²Σ_u⁺ state, whereas the ionization of an outer-valence electron (HOMO) leads to N₂⁺ in the ground X²Σ_g⁺ state [16]. Although it has been observed experimentally that lower-lying orbitals such as HOMO−1 and HOMO−2 can indeed participate in the ionization process [17,18], numerical calculations [19][20][21][22] have shown that the ionization probability of HOMO−2 is about one to two orders of magnitude lower than that of HOMO in an intense laser field with parameters similar to those of our experiment. Thus, the population inversion in the nitrogen molecular ion system cannot be achieved merely by photoionization of nitrogen molecules from their neutral ground state. There must be some other mechanism for achieving the population inversion between the upper and lower levels if the seed-amplification scheme works. Because of the high laser intensity inside the filament, a nonlinear absorption process in ground-state N₂⁺ ions, as shown in figure 4(e), could occur, in which the absorption of a few photons depletes the population of N₂⁺ in the lower vibrational levels of the ground state and enhances the upper level of the B state through a Raman-type scheme, thus achieving population inversion between B(0)-X(0) and B(0)-X(1).
With this population inversion scheme, the faster decay of the 428 nm emission relative to that of the 391 nm emission shown in figure 3 is well understood. The vibrational relaxations, indicated by the shortest green arrows in figure 4(e), first lead to an increase of the population in X(1) and then in X(0) [23]. Thus, the cascaded vibrational relaxation process makes the lifetime of the population inversion of B(0)-X(1) significantly shorter than that of B(0)-X(0), giving rise to the faster decay observed in figure 3(b) compared with figure 3(a).
Last but not least, it should be pointed out that other than the temporal characteristics, the intensity dependence of the coherent emissions is also an important aspect for examination as such a dependence will provide valuable information on understanding the physical mechanism of observation reported here. Indeed, our previous work has shown that the signal intensity of coherent emission at 391 nm critically depended on the intensities of the pump and the seed (probe) pulses [9]. However, in the current experiment, to accurately characterize the temporal behaviors of the coherent emissions, first of all, the energy of the pump pulses was finely adjusted and fixed at a value at which a single filament with a good spatial property was obtained. Secondly, the probe pulse at 400 nm remained sufficiently weak to avoid its participation in the process of population inversion. The intensity dependence of the coherent emission will be systematically investigated in the future.
Conclusions
In conclusion, we have observed coherent emissions at ∼391 and ∼428 nm from nitrogen in an orthogonally polarized two-color laser field and measured their temporal profiles with cross-correlation measurements. We found that the pulse durations of the line emissions at both ∼391 and ∼428 nm are much longer than that of the 400 nm seed pulse, which is mainly due to the narrow bandwidths of the two line emissions. The results suggest that the coherent line emissions could originate from seed-injected amplification enabled by the remotely generated population-inverted molecular systems in air.
"Physics"
] |
Lepton flavor violating decays of Standard-Model-like Higgs in 3-3-1 model with neutral lepton
The one-loop contribution to the lepton flavor violating decay $h^0\rightarrow \mu\tau$ of the SM-like neutral Higgs (LFVHD) in the 3-3-1 model with neutral leptons is calculated in the unitary gauge. We have checked in detail that the total contribution is exactly finite, and that the divergence cancellations happen separately in the two parts coming from active neutrinos and from exotic heavy leptons. By numerical investigation, we have shown that the one-loop contribution of the active neutrinos is very suppressed, while that of the exotic leptons is rather large. The branching ratio of the LFVHD strongly depends on the Yukawa couplings between the exotic leptons and the $SU(3)_L$ Higgs triplets. This ratio can reach $10^{-5}$ provided large Yukawa couplings and constructive correlations between the $SU(3)_L$ scale ($v_3$) and the charged Higgs masses. The branching ratio decreases rapidly for small Yukawa couplings and large $v_3$.
I. INTRODUCTION
The observation of a Higgs boson with mass around 125.09 GeV by experiments at the Large Hadron Collider (LHC) [1][2][3][4][5] again confirms the success of the Standard Model (SM) at energies below a few hundred GeV. But the SM must be extended to solve many well-known problems, not least the question of neutrino masses and neutrino oscillations, which have been experimentally confirmed [6]. Neutrino oscillation is clear evidence of lepton flavor violation in the neutral lepton sector, which may give loop contributions to the rare lepton flavor violating (LFV) decays of charged leptons, the Z boson, and the SM-like Higgs boson.
The LFV decays of neutral Higgses (LFVHD) have been investigated widely in well-known models beyond the SM [10][11][12], including the supersymmetric (SUSY) models [13][14][15]. The SUSY versions usually predict large LFVHD branching ratios, which can reach $10^{-4}$ or higher, even up to $10^{-2}$ in a recent investigation [13], provided two requirements are met: new LFV sources from sleptons, and a large ratio $\tan\beta$ of the two vacuum expectation values (vevs) of the two neutral Higgses. At least this is true for the LFVHD $h^0 \to \mu\tau$ under the restriction of the recent upper bound Br$(\tau \to \mu\gamma) < 10^{-8}$ [16]. In non-SUSY $SU(2)_L \times U(1)_Y$ models beyond the SM, such as seesaw models or the general two-Higgs-doublet model (THDM), the LFVHD still depends on the LFV decay of the $\tau$ lepton. The reason is that the LFVHD is strongly affected by the Yukawa couplings of leptons, while the $SU(2)_L \times U(1)_Y$ models contain only the small Yukawa couplings of the normal charged leptons and active neutrinos. Therefore, many non-SUSY versions predict suppressed LFVHD signals.
Based on the extension of the $SU(2)_L \times U(1)_Y$ gauge symmetry of the SM to $SU(3)_L \times U(1)_X$, there is a class of models called 3-3-1 models, which contain new LFV sources. Firstly, the particle spectra include new charged gauge bosons and charged Higgses, normally carrying two units of lepton number. Secondly, the third components of the lepton (anti-)triplets may be normal charged leptons [17,18] or new leptons [19][20][21][22][23] with non-zero lepton numbers. Except in the case of normal charged leptons, these new leptons can mix with one another to create new lepton-flavor-changing currents. The most interesting models for LFVHD are those with new heavy leptons, whose new Yukawa couplings strongly affect the LFVHD through loop contributions. This property distinguishes them from models based on the SM gauge symmetry, including the SUSY versions. In the 3-3-1 models, if the new particles and the $SU(3)_L$ scale are heavier than a few hundred GeV, the one-loop contributions to the LFV decays of $\tau$ always satisfy the recent experimental bound [24], while this region of parameter space, even at TeV values of the $SU(3)_L$ scale, favors large LFVHD branching ratios. The one-loop contributions to LFV processes in SUSY versions of 3-3-1 models were given in [14,25], but the non-SUSY contributions were not considered. The 3-3-1 models were first investigated out of interest in the simplest extension of the $SU(2)_L$ gauge symmetry and the simplest lepton sector [17]. They then became more attractive as a clue to answering the flavor question, via the requirement of anomaly cancellation for the $SU(3)_L \times U(1)_X$ gauge symmetry [18]. The violation of lepton number is a natural property of these models, leading to the natural presence of LFV processes and neutrino oscillations. Many versions of 3-3-1 models have been constructed to explain other questions unsolved within the SM: the strong CP problem [26] via a Peccei-Quinn symmetry [27]; electric charge quantization [28]; and so on. More interestingly, the neutral heavy leptons or neutral Higgses can play the role of dark matter (DM) candidates [23]. Besides, the models with neutral leptons remain interesting for precision tests [19].
For the above reasons, this work focuses on the LFVHD of the 3-3-1 model with left-handed heavy neutral leptons or neutrinos (3-3-1LHN) [23]. It is then easy to predict which specific 3-3-1 models can give large LFVHD signals. As we will see, the 3-3-1 models usually contain new heavy neutral Higgses, both CP-even and CP-odd. But the recent lower bound on the $SU(3)_L$ scale is a few TeV, resulting in Higgs masses of the same order. At the collision energies of current experiments, the opportunity to observe these heavy neutral Higgses seems remote. We therefore concentrate only on the SM-like Higgs.
Our work is arranged as follows. Section II presents the formula for the LFVHD branching ratio, which can also be applied to new CP-even neutral Higgses, and lists the Feynman rules and the form factors needed to calculate the amplitudes in general 3-3-1 models. In section III, the model constructed in [23] is improved by adding new LFV couplings and by imposing a custodial symmetry on the Higgs potential, which cancels large flavor-changing neutral currents in the Higgs sector and simplifies the Higgs self-interactions.
II. FORMULAS FOR DECAY RATES OF NEUTRAL HIGGSES
For studying the LFVHD, namely $h^0 \to \tau^\pm \mu^\mp$, we consider the general form of the corresponding LFV effective Lagrangian, $-\mathcal{L}_{LFV} = h^0\,(\Delta_L\, \overline{\mu} P_L \tau + \Delta_R\, \overline{\mu} P_R \tau) + \mathrm{h.c.}$ (1), where $\Delta_{L,R}$ are scalar factors arising from the loop contributions. In the unitary gauge, the one-loop diagrams contributing to $\Delta_{L,R}$ are listed in figure 1. They can be applied to models beyond the SM whose particle content includes only Higgses, fermions, and gauge bosons. The decay amplitude is [10] $\mathcal{M} = \overline{u}_1 (\Delta_L P_L + \Delta_R P_R) v_2$, where $u_1 \equiv u_1(p_1, s_1)$ and $v_2 \equiv v_2(p_2, s_2)$ are the respective Dirac spinors of the $\mu$ and $\tau$. The partial width of the decay is $\Gamma(h^0 \to \mu\tau) = \frac{\lambda^{1/2}(m_{h^0}^2, m_1^2, m_2^2)}{16\pi\, m_{h^0}^3}\left[(m_{h^0}^2 - m_1^2 - m_2^2)\left(|\Delta_L|^2 + |\Delta_R|^2\right) - 4 m_1 m_2\, \mathrm{Re}(\Delta_L \Delta_R^*)\right]$, where $\lambda(a,b,c) = (a - b - c)^2 - 4bc$ and $m_{h^0}$, $m_1$, and $m_2$ are the masses of the neutral Higgs $h^0$, the muon, and the tauon, respectively.
The external particles satisfy the on-shell conditions $p_i^2 = m_i^2$ ($i = 1, 2$) and $p_{h^0}^2 = m_{h^0}^2$, where $h^0$ is an arbitrary CP-even neutral Higgs in the 3-3-1 models, including the SM-like one.
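As a numerical illustration, the sketch below evaluates the tree-level width formula above for given form factors and converts it into a branching ratio. The magnitudes of $\Delta_{L,R}$ and the SM-like total width used here are illustrative assumptions, not results of the model.

```python
import math

def lfv_width(m_h, m1, m2, dL, dR):
    """Partial width for h0 -> mu tau from the effective Lagrangian (1).

    Standard two-body result for the Delta_L, Delta_R form factors
    (natural units: masses in GeV, width in GeV).
    """
    # Kallen function lambda(m_h^2, m1^2, m2^2)
    lam = (m_h**2 - (m1 + m2)**2) * (m_h**2 - (m1 - m2)**2)
    kin = math.sqrt(lam) / (16.0 * math.pi * m_h**3)
    amp2 = ((m_h**2 - m1**2 - m2**2) * (abs(dL)**2 + abs(dR)**2)
            - 4.0 * m1 * m2 * (dL * dR.conjugate()).real)
    return kin * amp2

# Illustrative loop-induced form factors of order 1e-4, and the SM-like
# total width ~4.1 MeV; both are assumptions for this sketch.
m_h, m_mu, m_tau = 125.09, 0.10566, 1.77686
gamma = lfv_width(m_h, m_mu, m_tau, dL=1e-4 + 0j, dR=5e-5 + 0j)
print(f"Gamma(h -> mu tau) = {gamma:.3e} GeV")
print(f"Br(h -> mu tau)    = {gamma / 4.1e-3:.3e}")
```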
In the unitary gauge, the relevant Feynman rules for the LFV decay $h^0 \to l_1^\pm l_2^\mp$ are represented in figure 2. For each diagram there is a corresponding generic function expressing its contribution to the LFVHD; these functions are defined in equations (4)-(19). The set of form factors (4)-(19) is calculated in detail in appendix B, and we find them consistent with calculations using FORM [29]. These form factors are simpler than those calculated in the appendix because they contain only the terms that contribute to the final LFVHD amplitude. Terms are excluded for two reasons: i) terms that do not contain the neutral leptons in the loop vanish after summing over all virtual leptons, reflecting the GIM mechanism; ii) the divergent terms defined by (A3) are dropped. The second point holds only when the final contribution is finite, which is the case for models having no tree-level LFV couplings of $\mu$-$\tau$. The 3-3-1LHN model considered in this work satisfies this condition, and the divergence cancellation is checked precisely in appendix B.
Another remark is that the divergent term (A3) contains the conventional choice $\ln(\mu^2/m_h^2)$, in which $m_h$ can be replaced by an arbitrary fixed scale. We find that only the contribution of diagram 1d) and the sum of the two diagrams 1g) and 1h) are separately finite. The form factors $\Delta_{L,R}$ can now be written as the sums of all the $E_{L,R}$ functions. The one-loop contributions to the LFV decays, i.e. $\Delta_{L,R}$, are finite without any renormalization procedure to cancel divergences. In addition, $\Delta_{L,R}$ do not depend on the $\mu$ parameter arising from the dimensional regularization used to derive the scalar $E_{L,R}$ functions in this work. In general, however, the contributions of the separate diagrams in figure 1 do contain divergences, and their finite parts $E_{L,R}$ do depend on $\mu$, so quoting separate contributions is meaningless. Another simple analytic expression is given in detail in [15], updated from previous works [30]; it can be applied not only to SUSY models but also to models predicting new heavy scales, including 3-3-1 models. The point is that this treatment uses C-functions in the approximation of zero external momenta for the two charged leptons, i.e. $p_1^2 = p_2^2 = 0$. Unlike the LFV decay $\tau \to \mu\gamma$, the LFVHD involves the large external momentum of the neutral Higgs, which should be kept in the C-functions, as discussed in appendix A. This is consistent with the discussion of C-functions given in [31].
III. 3-3-1 MODEL WITH NEW NEUTRAL LEPTON
In this section we review the particular 3-3-1 model used to investigate the LFVHD, namely the 3-3-1LHN [23]. We keep most of the ingredients shown in ref. [23], while adding two new assumptions: i) in order for LFV effects to appear, we assume that, apart from the oscillation of the active neutrinos, there is also maximal mixing in the new lepton sector; ii) the Higgs potential satisfies the custodial symmetry shown in [22], to avoid large loop contributions of the Higgses to precision observables such as the $\rho$-parameter and to flavor-changing neutral currents. More interestingly, the latter results in a very simple Higgs potential, in the sense that many independent Higgs self-couplings are reduced and the squared mass matrix of the neutral Higgses can be solved exactly at tree level. The following reviews the ingredients needed for calculating the LFV decay $h^0 \to l_i^+ l_j^-$.
A. Particle content • Fermions. In each family, all left-handed leptons are included in $SU(3)_L$ triplets while right-handed ones are always singlets, where the numbers in parentheses denote the respective representations of the $SU(3)_C$, $SU(3)_L$, and $U(1)_X$ gauge groups. The prime denotes leptons in the flavor basis.
Recall that, as one of the assumptions in [23], the active neutrinos have no right-handed components, and their Majorana masses are generated from effective dimension-five operators. There is no mixing between active neutrinos and exotic neutral leptons.
• Gauge bosons. The $SU(3)_L \times U(1)_X$ group includes eight gauge bosons $W^a_\mu$ ($a = 1, \dots, 8$) of $SU(3)_L$ and the $X_\mu$ of $U(1)_X$, corresponding to the eight $SU(3)_L$ generators $T^a$ and the $U(1)_X$ generator $T^9$. The covariant derivative is $D_\mu = \partial_\mu - i g\, T^a W^a_\mu - i g_X\, T^9 X\, X_\mu$. Denoting the Gell-Mann matrices by $\lambda^a$, we have $T^a = \frac{1}{2}\lambda^a$, $-\frac{1}{2}\lambda^{aT}$, or $0$, depending on whether $T^a$ acts on a triplet, antitriplet, or singlet representation of $SU(3)_L$. The generator $T^9$ is defined as $T^9 = \frac{1}{\sqrt{6}}\,\mathrm{diag}(1,1,1)$, and $X$ is the $U(1)_X$ charge of the field it acts on.
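The normalization of these generators can be checked numerically. The short sketch below builds $T^a = \lambda^a/2$ from the Gell-Mann matrices, together with $T^9$, and verifies the common normalization $\mathrm{Tr}(T^a T^a) = 1/2$; the matrix construction is standard textbook material, not code from the paper.

```python
import numpy as np

# Gell-Mann matrices lambda^1 .. lambda^8
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1], lam[1][1, 0] = -1j, 1j
lam[2][0, 0], lam[2][1, 1] = 1, -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2], lam[4][2, 0] = -1j, 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2], lam[6][2, 1] = -1j, 1j
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)

# T^1..T^8 = lambda^a / 2 plus the U(1)_X generator T^9 = I/sqrt(6)
T = [l / 2 for l in lam] + [np.eye(3) / np.sqrt(6)]

for a, Ta in enumerate(T, start=1):
    norm = np.trace(Ta @ Ta).real
    assert abs(norm - 0.5) < 1e-12  # Tr(T^a T^a) = 1/2 for all nine
    print(f"Tr(T^{a} T^{a}) = {norm:.3f}")
```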
• Higgs. The model includes three Higgs triplets, $\eta$, $\rho$, and $\chi$. As usual, the 3-3-1 model has two breaking steps. The fields $\eta_2^0$ and $\chi_1^0$, which carry non-zero $U(1)_G$ charge, have zero vacuum expectation values (vevs): $\langle\eta_2^0\rangle = \langle\chi_1^0\rangle = 0$. The other neutral Higgs components are expanded around their vevs in the usual way. As shown in ref. [22], after the first breaking step the Higgs potential of the 3-3-1 model should keep a custodial symmetry to avoid large FCNCs as well as a large deviation of the $\rho$-parameter from its experimental value. This involves only the $\rho$ and $\eta$ scalars, which acquire non-zero vevs in the second breaking step. Applying the Higgs potential satisfying the custodial symmetry given in [32], we obtain a potential in which the coupling $f$ is assumed to be real. Minimizing this potential leads to $v_1 = v_2$ and two additional conditions. We stress that if the custodial symmetry is kept in this 3-3-1 model, the model automatically satisfies most of the conditions assumed in ref. [23] for the purpose of simplifying or reducing the independent parameters in the Higgs potential. For this work, which concentrates on the neutral Higgses, the most important consequence is that all Higgs mass eigenstates, including the neutral ones, can be found exactly, without reducing the number of Higgs multiplets.
In the following, we pay attention only to the ingredients used directly in this work, i.e. the mass spectra of the leptons, gauge bosons, and Higgses. The other parts are given in [23].
B. Mass spectra
Leptons
We use the Yukawa terms shown in [23] to generate the masses of the charged leptons, active neutrinos, and heavy neutral leptons, where $\Lambda$ is some high energy scale. Recall that $\psi_L = P_L \psi$ and $\psi_R = P_R \psi$, where $P_{R,L} = (1 \pm \gamma_5)/2$ are the right- and left-chiral projection operators. The corresponding mass terms show that the active neutrinos are pure Majorana spinors, with a mass matrix suppressed by $\Lambda$. This matrix can be proved to be symmetric [33] (chapter 4); therefore the mass eigenstates can be found by a single rotation, expressed by a mixing matrix $U$, where $V^L_{ab}$, $U^L_{ab}$, and $V^R_{ab}$ are the transformations between the flavor and mass bases of the leptons. Here unprimed fields denote the mass eigenstates. Note that $\nu'^c_{aR} = (\nu'_{aL})^c = U_{ab}\,\nu^c_{aR}$. The four-spinors representing the active neutrinos are $\nu^c_a = \nu_a \equiv (\nu_{aL}, \nu^c_{aR})^T$, resulting in the equalities $\nu_{aL} = P_L \nu^c_a = P_L \nu_a$ and $\nu^c_{aR} = P_R \nu^c_a = P_R \nu_a$. The upper bounds from recent experiments on LFV processes of the normal charged leptons are very stringent [7], suggesting that the flavor and mass bases of the charged leptons coincide.
The relations between the lepton mass matrices in the flavor and mass bases involve the Yukawa matrices $Y_\nu$ and $Y_N$, defined as $(Y_\nu)_{ab} = y^\nu_{ab}$ and $(Y_N)_{ab} = y^N_{ab}$. The Yukawa interactions between leptons and Higgses can be written in terms of the lepton mass eigenstates, where we have used the Majorana property of the active neutrinos, $\nu^c_a = \nu_a$ with $a = 1, 2, 3$. In addition, using the equality $\overline{e^c_b} P_L \nu_a = \overline{\nu_a} P_L e_b$, the term involving $\eta^\pm$ in the last line of (31) reduces to $\sqrt{2}\,\eta^+ \overline{\nu_a} P_L e_b$.
Gauge bosons
It is simpler to write the charged gauge bosons in the form $W^a T^a$, with $T^a$ being the $SU(3)_L$ generators. The masses of these gauge bosons follow from the vevs, where we have used the relation $v_1 = v_2 = v/\sqrt{2}$ and the condition matching the W boson mass in the 3-3-1 model to that of the SM.
The covariant derivatives of the leptons contain the lepton-lepton-gauge boson couplings.
Higgs bosons
• Singly charged Higgses. There are two Goldstone bosons, $G^\pm_W$ and $G^\pm_V$, of the respective singly charged gauge bosons $W^\pm$ and $V^\pm$; the two remaining singly charged Higgses are massive. Denoting $s_\theta \equiv \sin\theta$ and $c_\theta \equiv \cos\theta$, some useful relations follow, together with the relation between the flavor and mass bases of the singly charged Higgses. • CP-odd neutral Higgses. There are three Goldstone bosons, $G_Z$, $G_{Z'}$, and $G'_{U^0}$, and two massive CP-odd neutral Higgses, $H_{A_1}$ and $H_{A_2}$, with definite squared masses and corresponding relations between the two bases. • CP-even neutral Higgses. Apart from the three massive Higgses given exactly in ref. [22], the model predicts one more Goldstone boson, $G_U$, and another massive Higgs.
The masses and eigenstates of these Higgses follow, with the transformations between the flavor and mass bases parameterized by $s_\alpha = \sin\alpha$ and $c_\alpha = \cos\alpha$. In the limit $t \ll 1$, the lightest CP-even neutral Higgs takes a simple form in which both $\lambda_1$ and $\lambda_2$ must be positive to guarantee the vacuum stability of the potential (25). This Higgs is naturally identified with the SM-like Higgs observed at the LHC.
C. Couplings for LFV decay of the SM-like Higgs and the amplitude
From the detailed discussion of the particle content of the 3-3-1LHN, the couplings of the SM-like Higgs needed for calculating the LFVHD are collected in table I.
Table I lists each vertex and its corresponding coupling. Here we consider only the couplings in the unitary gauge.
Matching the Feynman rules of figure 2, we obtain the specific relations between the vertex parameters and the couplings of the 3-3-1LHN, for both the exotic leptons and the active neutrinos. The expression for $\Delta_L$ separates into two parts, one from the neutral exotic leptons and one from the active neutrinos, and similarly for $\Delta_R$. Before going to the numerical calculation, we recall that the divergence cancellations in the two separate sectors of neutrinos and exotic leptons are presented precisely in the second subsection of appendix B. The Higgs self-couplings in the scalar potential are $\lambda_1$, $\lambda_2$, $\lambda_{12}$, and $f$. The first two free parameters we choose are $v_3$ and the mass of $H_2$ given in (35); the parameter $f$ can then be determined. Another parameter that can be fixed is the mass of the neutral SM-like Higgs [5], whose value in (40) gives a relation among $\lambda_2$, $\lambda_1$, and $\lambda_{12}$. Because $\lambda_1$, $\lambda_2$, and $\lambda_{12}$ are coefficients of quartic terms in the Higgs potential (25), they must satisfy the unbounded-from-below (UFB) conditions that guarantee the stability of the vacua of the model. Following ref. [42], these conditions are easily found: defining $\rho^\dagger\rho + \eta^\dagger\eta = h_1^2$ and $\chi^\dagger\chi = h_2^2$, the quartic part $V_4$ of the Higgs potential (25) corresponds, in the basis $(h_1^2, h_2^2)$, to a $2\times 2$ matrix that must satisfy the copositivity conditions (49). In our calculation, apart from positive $\lambda_1$ and $\lambda_2$, we choose $\lambda_{12} > 0$, so that all conditions in (49) are always satisfied.
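A minimal numerical check of the copositivity (UFB) conditions is sketched below. Since the explicit potential (25) is not reproduced here, we assume the quartic part has the form $V_4 = \lambda_1 h_1^4 + \lambda_2 h_2^4 + \lambda_{12} h_1^2 h_2^2$, an assumption consistent with the stated conditions.

```python
import math

def ufb_ok(lam1, lam2, lam12):
    """Copositivity check for the assumed quartic form
    V4 = lam1*h1^4 + lam2*h2^4 + lam12*h1^2*h2^2, i.e. the 2x2 matrix
    [[lam1, lam12/2], [lam12/2, lam2]] in the (h1^2, h2^2) basis.
    Conditions: diagonal entries non-negative and
    lam12/2 + sqrt(lam1*lam2) >= 0.
    """
    return (lam1 >= 0 and lam2 >= 0
            and lam12 / 2 + math.sqrt(lam1 * lam2) >= 0)

# The choice used in the text, lam1 = lam12 = 1 (with lam2 > 0), passes:
print(ufb_ok(1.0, 1.0, 1.0))    # True
# A negative cross-coupling can violate the condition:
print(ufb_ok(0.1, 0.1, -0.5))   # False: -0.25 + 0.1 < 0
```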
To identify $h_1^0$ with the SM Higgs, $h_1^0$ must satisfy new constraints from the LHC, as discussed in [43]. Namely, the mixing angle $\alpha$ of the neutral Higgses, defined in (42), is constrained by the $h_1^0 W^+ W^-$ coupling. Following [43], we can identify $-c_\alpha \equiv 1 + \epsilon_W$, where $\epsilon_W = -0.15 \pm 0.14$ is the universal fit for the SM Higgs. This results in a constraint on $c_\alpha$, given in (50). Canceling a factor of $t$ in (42) yields a simpler expression, which shows that $c_\alpha < 0$ for sufficiently large $m_{H_2}$. If the lower constraint in (50) is not imposed, $m^2_{H_2}$ can be arbitrarily large as $|c_\alpha| \to 1$. In contrast, the constraint (50) implies an upper bound on $m_{H_2}$ when $\lambda_1$ is large enough. On the other hand, this relation will not hold if the custodial symmetry assumed in the Higgs potential (25) is only approximate. Hence, in the numerical calculation for the general case, we first investigate the LFVHD without the constraint (50); this constraint is discussed at the end.
Regarding the parameters of the active neutrinos, we use recent experimental results.
In particular, the mixing parameters in the active neutrino sector are parameterized in the standard way (52). Because $U^L$ deviates only slightly from the well-known neutrino mixing matrix $U_{PMNS}$, we ignore this deviation [34]. We use the best-fit values of the neutrino oscillation parameters given in [35]: $\Delta m^2_{21} = 7.60 \times 10^{-5}\ \mathrm{eV}^2$, $\Delta m^2_{31} = 2.48 \times 10^{-3}\ \mathrm{eV}^2$, $\sin^2\theta_{12} = 0.323$, $\sin^2\theta_{23} = 0.467$, and $\sin^2\theta_{13} = 0.0234$, and the mass of the lightest neutrino is chosen in the range $10^{-6} \le m_{\nu_1} \le 10^{-1}$ eV, i.e. $10^{-15} \le m_{\nu_1} \le 10^{-10}$ GeV. This range satisfies the condition $\sum_b m_{\nu_b} \le 0.5$ eV obtained from cosmological observations. The remaining two neutrino masses are $m^2_{\nu_b} = m^2_{\nu_1} + \Delta m^2_{b1}$. We note that the above corresponds to the normal hierarchy of active neutrino masses; in the 3-3-1LHN the inverted case gives the same result, so we do not consider it here.
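For concreteness, the following sketch computes the two heavier masses and the cosmological sum from the best-fit splittings quoted above, scanning the lightest mass over the stated range (illustrative scan points only).

```python
import numpy as np

# Normal hierarchy: m_{nu_b}^2 = m_{nu_1}^2 + Delta m^2_{b1}, masses in eV.
dm21_sq = 7.60e-5   # eV^2
dm31_sq = 2.48e-3   # eV^2

for m1 in (1e-6, 1e-3, 1e-1):      # sample points across the scanned range
    m2 = np.sqrt(m1**2 + dm21_sq)
    m3 = np.sqrt(m1**2 + dm31_sq)
    total = m1 + m2 + m3
    # Cosmological bound quoted in the text: sum of masses <= 0.5 eV
    print(f"m1={m1:.1e}  m2={m2:.3e}  m3={m3:.3e}  "
          f"sum={total:.3e} eV  ok={total <= 0.5}")
```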
The mixing matrix of the exotic leptons is also parameterized according to (52). It is unknown and defined as $V^L \equiv U^L(\theta^N_{12}, \theta^N_{13}, \theta^N_{23})$. If all $\theta^N_{ij} = 0$, all contributions from exotic leptons to $\Delta_{L,R}$ vanish exactly. In the numerical computation, we consider only cases of maximal mixing in the exotic lepton sector, i.e. each $\theta^N_{ij}$ takes only the value $\pi/4$ or zero. There are three interesting cases: i) $\theta^N_{12} = \pi/4$ and $\theta^N_{13} = \theta^N_{23} = 0$; ii) $\theta^N_{12} = \theta^N_{13} = \theta^N_{23} = \pi/4$; and iii) $\theta^N_{12} = \theta^N_{13} = \pi/4$ and $\theta^N_{23} = -\pi/4$. The other cases only change signs in the total amplitudes and do not change the final LFVHD branching ratios. A numerical sketch of the mixing matrix for the first case is given below. Our numerical investigation concentrates on this first case, in which the third exotic lepton does not contribute to the LFVHD; the two other cases are easily deduced from it.
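A minimal sketch, assuming the standard three-angle rotation ordering of (52) with CP phases ignored, as in the maximal-mixing cases considered here:

```python
import numpy as np

def rot(i, j, theta, dim=3):
    """Real rotation by theta in the (i, j) plane."""
    R = np.eye(dim)
    R[i, i] = R[j, j] = np.cos(theta)
    R[i, j] = -np.sin(theta)
    R[j, i] = np.sin(theta)
    return R

def mixing(theta12, theta13, theta23):
    # Ordering R23 @ R13 @ R12 mirrors the usual parameterization.
    return rot(1, 2, theta23) @ rot(0, 2, theta13) @ rot(0, 1, theta12)

# Case i): theta12 = pi/4, theta13 = theta23 = 0
VL = mixing(np.pi / 4, 0.0, 0.0)
print(np.round(VL, 3))

# Unitarity implies the GIM factor sum_a V_1a V*_2a = 0, which is what
# removes the mass-independent divergences in appendix B.
gim = np.sum(VL[0, :] * VL[1, :].conj())
print(f"sum_a V_1a V*_2a = {gim:.2e}")
```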
From the above discussion, we choose the following unknown quantities as free parameters: $v_3$, $m_{H_2}$, $\lambda_1$, $\lambda_{12}$, $m_{\nu_1}$, and $m_{N_a}$ ($a = 1, 2, 3$). The vacuum stability of the potential (25) requires $\lambda_{1,2} > 0$. To be consistent with the perturbativity of the theory, we choose $\lambda_1, |\lambda_{12}| < \mathcal{O}(1)$. A numerical check shows that the LFVHD branching ratio depends only weakly on these Higgs self-couplings within this range; we therefore fix $\lambda_1 = \lambda_{12} = 1$ without loss of generality. These values of $\lambda_1$ and $\lambda_{12}$ also satisfy all the UFB conditions (49). In addition, the Yukawa couplings in the Yukawa term (27) should have an upper bound, for example to be consistent with the perturbative unitarity limit [36]. Because the vev $v_3$ generates the exotic lepton masses through the Yukawa interactions (28), following [10] we assume a corresponding upper bound on the lepton masses. After investigating the dependence of the LFVHD on the Yukawa couplings through the ratio $m_{N_a}/v_3$, we fix $m_{N_2}/v_3 = 0.7$ and $2$, corresponding to Yukawa couplings below and above unity, respectively.
Unlike the assumption $f = v_3/2$ in [23], we treat $f$ as a free parameter. Concerning the $Z'$ mass bound [39], addressed directly for 3-3-1 models in [19,40], $m_{Z'}$ must be above 2.5 TeV; it is enough to use an approximate relation between $m_{Z'}$ and $v_3$. Figure 5 shows the dependence of the LFVHD on the mass $m_{H_2}$. The first property we can see is that the LFVHD branching ratio always has an upper bound, which decreases with increasing $v_3$. In other words, it has a maximal value depending strictly on the constructive correlation of $v_3$ and $m_{H_2}$. If the Yukawa couplings are small, this maximum never seems to reach $10^{-6}$. The case of large Yukawa couplings is more interesting, because the maximal LFVHD can approach $10^{-5}$, provided $v_3$ is small enough; see the right panel. These conclusions hold quite generally for many other models beyond the SM with the same classes of particles. Numerically investigating the LFVHD in the case of maximal mixing between the first two exotic neutral leptons, we find that the branching ratio Br$(h_1^0 \to \mu\tau)$ depends mostly on the Yukawa couplings of the neutral exotic leptons and the $SU(3)_L$ scale $v_3$. For small Yukawa couplings $y^N_{ij} \simeq 1$, equivalently $m_{N_2}/v_3 \simeq 0.7$, this branching ratio is always below $10^{-6}$, and even for values of about $10^{-7}$ the allowed parameter space is very narrow. In contrast, with large Yukawa couplings, 3-3-1 models with heavy neutral leptons, such as [20], can predict large LFVHD. So when calculating the LFVHD in SUSY versions, the non-SUSY contributions must be included. In contrast, the 3-3-1 models with light leptons [21] give suppressed LFVHD signals, and the SUSY contributions in [44] are dominant.
where $i = 1, 2$. In addition, $D = 4 - 2\epsilon \le 4$ is the dimension of the loop integral, and $M_0$, $M_1$, $M_2$ denote the masses of the virtual particles in the loops. The momenta satisfy the on-shell conditions, and $B^{(i)}_{0,1}$ and $C_{0,1,2}$ are Passarino-Veltman (PV) functions. It is well known that the $C_i$ are finite while the remaining functions are divergent. We define the divergent constant $\Delta_\epsilon \equiv \frac{1}{\epsilon} - \gamma_E + \ln 4\pi + \ln\frac{\mu^2}{m_h^2}$, where $\gamma_E$ is the Euler constant and $m_h$ is the mass of the neutral Higgs; the divergent parts of the above scalar factors are then all proportional to $\Delta_\epsilon$. We remind the reader that the finite parts of the PV functions, such as the B-functions, depend on the scale $\mu$ with the same coefficients as their divergent parts.
The analytic formulas of the PV functions follow. The function $b^{(1)}_0$ can be found in a very simple form in the limit $p_i^2 \to 0$, where $x_k$ ($k = 1, 2$) are the solutions of equation (A9). The functions $B^i_1$ and $B^{(12)}_i$ are calculated through the $B_0$ and $A_0$ functions, and the $C_i$ functions follow from a reduction equation. The $C_0$ function was calculated in general in [45]; a more explicit explanation was given in [46]. In the limit $p_1^2, p_2^2 \to 0$ we obtain a closed expression, where both $\delta$ and $\delta'$ are positive and extremely small, $x_0$ and $x_3$ are defined in (A14), and $x_1$, $x_2$ are solutions of equation (A9). The limit $p_1^2 = p_2^2 = 0$ is used in our work, even when the loops contain active neutrinos with masses extremely small compared with these quantities, because of the appearance of heavy virtual particles. Our calculation involves the two following cases: only $M_0$ is the mass of the active neutrino, or $M_1 = M_2$ is the mass of the neutrino. We use the result for $R_i$ given in [45], where $i = 1, 2, 3$ and $\mathrm{Li}_2(z)$ is the dilogarithm. We also use the real values of $x_0$, giving $\eta(-x_i, \cdot) = 0$, together with equalities valid for any real $A$, $B$ and positive, extremely small $\delta$, $\delta'$. This results in a very simple expression for the $C_0$ function, where $x_{1,2}$ are the solutions of equation (A9) and $x_{0,3}$ are given in (A14). This result is consistent with the discussion in [31].
For simplicity, we also use other approximations of the PV functions, where $x_k$ denotes the two solutions of equation (A9).
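As a cross-check of the $p_1^2 = p_2^2 = 0$ treatment, $C_0$ can also be evaluated by direct Feynman-parameter quadrature. The sketch below assumes the common sign convention $C_0 = -\int dx\,dz/\Delta$ and masses safely below threshold; it is a numerical sketch, not the analytic formula used in the text.

```python
import numpy as np
from scipy import integrate

def c0(mh_sq, M0, M1, M2, eps=1e-30):
    """Numerical C0(0, 0, mh^2; M0, M1, M2) via Feynman parameters.

    For external lepton momenta p1^2 = p2^2 = 0, the denominator is
    Delta = x*M0^2 + y*M1^2 + z*M2^2 - x*z*mh^2 with y = 1 - x - z.
    Below threshold Delta stays positive, so the small regulator eps
    is irrelevant.
    """
    def integrand(z, x):
        y = 1.0 - x - z
        delta = x * M0**2 + y * M1**2 + z * M2**2 - x * z * mh_sq
        return -1.0 / (delta + eps)

    val, _ = integrate.dblquad(integrand, 0.0, 1.0,
                               lambda x: 0.0, lambda x: 1.0 - x)
    return val

# Heavy virtual masses (GeV, illustrative): the loop scale dominates over
# m_mu and m_tau, which justifies setting p1^2 = p2^2 = 0 as argued above.
print(c0(125.09**2, 500.0, 800.0, 800.0))  # units: GeV^-2
```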
Appendix B: Calculation of the one-loop contributions
In the first part of this appendix we calculate in detail the contributions of the individual diagrams shown in figure 1 that involve the exotic neutral leptons $N_a$, $a = 1, 2, 3$. From these we derive the general functions expressing the contributions of the individual diagrams.
Amplitudes
We recall that the amplitude is expressed in terms of the PV functions, with the loop integral written in $D$ dimensions using a parameter $\mu$ with the dimension of mass. This step is omitted in the calculation below; the final results are simply corrected by the factor $i/(16\pi^2)$. As an example, in the calculation of the contribution from the first diagram we point out a class of divergences that vanishes automatically by the GIM mechanism: any terms that do not depend on the masses of the virtual leptons vanish because of the factor $\sum_a V^L_{1a} V^{L*}_{2a} = 0$. The contribution from diagram 1a) involves three pieces, $P_1$, $P_2$, and $P_3$. The piece $P_1$ does not contain any divergent terms, and in $P_2$ the divergent terms are mass-independent and drop out. In $P_3$, all terms in the first and third lines again do not contribute to the amplitude, but four terms of the type $m_2^2 B^{(2)}_1$ do contain divergences. The first two of these have divergent parts of the form $-m_2^2 \Delta_\epsilon$ and $m_1^2 \Delta_\epsilon$, which do not depend on the masses $m_a$ of the virtual leptons; hence they also vanish by the GIM mechanism, although their finite parts still contribute to the amplitude. The remaining two terms contain the most dangerous divergent parts: they carry factors of $m_a^2$, which cannot be canceled by the GIM mechanism. We mark them in bold and prove later that they vanish after summing all diagrams. From now on we may exclude all terms that do not depend on the masses of the virtual leptons.
The total contribution from diagram 1a) is then simply expressed through $E^{FVV}_{L,R}$, defined in (4). The contributions from diagrams 1b) and 1c) follow analogously.
Their contributions to the total amplitude take the same generic form. The contribution from diagram 1d) to the amplitude is expressed through the form factors (10) and (11). The contribution from diagram 1e) is written in terms of $E^{VFF}_{L,R}$, defined in (12) and (13).
The contribution from diagram 1f) is written in terms of $E^{HFF}_{L,R}$, defined in (14) and (15).
The contributions from diagrams 1g) and 1h) are computed separately, and the divergent part of their sum vanishes. The final result for the total amplitude of the two diagrams is written in terms of $E^{FV}_{L,R}$, defined in (16) and (17).
The contribution from diagram 1i) is written in terms of $E^{FH}_{L,R}$, defined in (18) and (19). After calculating the contributions from all diagrams with virtual neutral leptons $N_a$, we can prove that all divergent parts containing the factor $m_a^2$ cancel in the total contribution; the details are shown below. For the active neutrinos the calculation is the same.
Explicit calculation of the divergence cancellation
In this subsection, for the contribution of the exotic neutral leptons $N_a$, we use relations among the couplings and concentrate on the divergent parts marked in bold in the amplitudes calculated above. With the divergence notation of appendix A, all divergent parts can be collected, and it is easy to see that the sum of all their coefficients is zero. Furthermore, it is interesting that the two parts carrying the factors $c_\alpha$ and $\sqrt{2}\,s_\alpha$ sum to zero independently. From (41), the factor $c_\alpha$ arises from the contributions of the neutral components of $\eta$ and $\rho$, while the $s_\alpha$ factor arises from the contribution of $\chi$.
For the contribution of the active neutrinos, the two diagrams (b) and (c) of figure 1 do not contribute, owing to the absence of the $H_2^- H_2^+ W$ couplings. Using the corresponding properties of the couplings, we see again that the sum of all divergent terms is zero.
"Physics"
] |
Electrospun Nanofibrous Scaffolds: Review of Current Progress in the Properties and Manufacturing Process, and Possible Applications for COVID-19
Over the last twenty years, researchers have focused on the potential applications of electrospinning, especially its scalability and versatility. Specifically, electrospun nanofiber scaffolds are considered an emergent technology and a promising approach that can be applied to biosensing, drug delivery, soft and hard tissue repair and regeneration, and wound healing. Several parameters control the functional scaffolds, such as fiber geometrical characteristics, alignment, and architecture. As an approach grounded in nanotechnology, the concept has evolved strongly in terms of the forms of the materials used (aerogels, microspheres, etc.), the incorporated biological agents used to treat diseases (cells, proteins, nucleic acids, etc.), and the manufacturing processes that control the adhesion, proliferation, and differentiation of the biomimetic nanofibers. However, several difficulties remain major challenges for scientists to overcome, in relation to scaffold design and properties (hydrophilicity, biodegradability, and biocompatibility), but also in relation to transferring biological nanofiber products into practical industrial use through highly efficient bio-solutions. In this article, the authors review current progress in the materials and processes used in the electrospinning technique to develop novel fibrous scaffolds with suitable designs that more closely mimic native structures. Specific attention is given to the use of this approach as an emergent technology for the treatment of bacteria and viruses such as the virus responsible for COVID-19.
State of the Art
Electrospinning is widely attractive to industry and researchers for its scalability, versatility, and potential applications in many fields [1,2]. It is considered one of the most suitable techniques for fabricating nanofibrous scaffolds, which are known for their high physical porosity and great potential to repair defects, such as bone defects [3][4][5]. In terms of geometry, the diameter of each electrospun fiber depends on the polymer's specific properties and the electrospinning processing parameters [3]. The electrospinning set-up consists of a high-voltage source, an infusion syringe pump, and a collector; the collector may be a stationary or movable metal target, or a coagulating bath [4][5][6]. The electrospinning technique produces thinner, smoother, and more folded scaffolds and achieves more uniform drug distribution with less residual liquid than solvent casting [7]. Therefore, electrospinning enables well-controlled drug release profiles. In the next paragraphs, the potential applications of nanofiber scaffolds are further highlighted.
Nowadays, electrospinning is a well-known technology that has been under thorough investigation. One of the earliest studies of the electrified jetting phenomenon was published by Zeleny [8], who studied the role of surface instability in electrical discharges from charged droplets. A series of patents from 1934 to 1944 was filed by Formhals [9][10][11][12]; these patents mainly describe an experimental setup utilizing electrostatic forces to produce fine, dried polymer filaments. Another apparatus, for the production of patterned, ultrathin, lightweight, non-woven fabrics by electrical spinning, was filed by Simons [13] in 1966. The deformation of a charged liquid meniscus was studied by Taylor [14][15][16][17], who described a stable conical geometry at the end of the meniscus, now known as the Taylor cone. In 1971, Baumgarten [18] used an electrospinning apparatus to produce ultra-fine acrylic fibers with diameters from 1.1 µm down to 500 nm. Although the electrospinning process has since been studied extensively for several decades, many parameters are still under investigation and not yet completely understood [19].
A research team at the University of Akron, USA, led by Professor Reneker [20], reintroduced the electrospinning technique to make submicron fibers from different types of synthetic and natural polymers. Yarin's team reported the production of hollow nanotubes by co-electrospinning two polymer solutions (PCL and PMMA nanofibers) [21,22]. Researchers particularly highlighted the desorption-limited mechanism of release from polymer nanofibers [23] and discussed electrospinning jets that solidify into polymer fibers with diameters conveniently stated in nanometers [24].
Polyacrylonitrile (PAN) and its copolymers are considered very important candidates for electrospinning, as they have commercial and technological applications. Among the various precursors of carbon nanofibers (CNFs), PAN is considered the best candidate, mainly due to its high carbon yield (up to 56%), its flexibility, and the ease of its heat-treatment stabilization stage, which forms a stable ladder (zigzag) structure during nitrile cyclization [25][26][27][28][29][30][31][32][33]. PAN also has excellent characteristics such as spinnability, an environmentally friendly nature, and a variety of commercial applications.
Inagaki et al. [34] describe the chemistry and applications of CNFs. Barhate and Ramakrishna [35] published a review on nanofibers as filtering media for fine particulates. Li and Xia [36] discussed trends in nanofibers, with emphasis on electrospinning techniques to produce them. PAN nanofibers and carbon nanotube (CNT)-reinforced PAN nanofibers were successfully electrospun [37]. In our research team, Ali et al. [38][39][40][41][42][43][44][45][46][47][48][49] published a series of studies of the characteristics of electrospun PAN/N,N-dimethylformamide (DMF) polymer solutions using different types of collectors. A hot-pressing technique was also introduced for electrospun PAN nanofibers, with and without nano-reinforcements, to produce carbon nanofibers. In that work, optimization of the PAN nanofiber process was introduced via response surface methodology (RSM).
RSM has been used successfully to optimize processes in both polymer electrospinning and polymer hydrogels [50][51][52]. Process optimization of nanofibers has been investigated with RSM in order to predict the electrospinning parameters affecting the produced nanofiber diameter, with the aim of achieving minimum fiber diameter at the nanoscale. A quantitative relationship between the electrospinning parameters and the responses (mean diameter and standard deviation) was established, and a final multi-layer structure of nanofibers and nanoparticles was then achieved in a controlled and robust process [53][54][55]. A minimal sketch of such a quadratic RSM fit is given below.
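This sketch fits a second-order polynomial linking two electrospinning factors to the mean fiber diameter, the core idea behind RSM. The factor names, levels, and response values below are made-up placeholders, not data from the cited studies.

```python
import numpy as np

# Factors: x1 = voltage (kV), x2 = flow rate (mL/h); response: d (nm).
X_raw = np.array([[10, 0.5], [10, 1.0], [15, 0.5], [15, 1.0],
                  [12.5, 0.75], [20, 0.5], [20, 1.0], [12.5, 1.2]])
d = np.array([310, 360, 250, 300, 265, 290, 340, 330])  # hypothetical

x1, x2 = X_raw[:, 0], X_raw[:, 1]
# Full quadratic model:
# d = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(A, d, rcond=None)

def predict(v, q):
    return coef @ np.array([1.0, v, q, v**2, q**2, v * q])

print("coefficients:", np.round(coef, 3))
print("predicted diameter at 14 kV, 0.6 mL/h:",
      round(float(predict(14, 0.6)), 1), "nm")
```

In practice the fitted surface is then minimized (analytically or by a grid search) to locate the parameter combination giving the smallest predicted diameter.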
In the last decade, nanofibers have also been generated by electrospinning under pressurized CO2. An evolution of the traditional technique was demonstrated by several researchers by adding CO2 to the liquid polymeric solution [56][57][58]. The presence of CO2 in the solution reduces the surface tension of the liquid to be processed, as well as its viscosity. Wahyudiono et al. [59] demonstrated that CO2 dissolved in the starting liquid polymeric solution introduced greater flexibility into the process, relative to the classical technique, through the production of micro- and nanofibers of poly(vinylpyrrolidone) (PVP) with a more advanced process configuration. Baldino et al. [60] reported a further development called supercritical-assisted electrospraying (SA-ESPR), in which the addition of supercritical CO2 (sc-CO2) to a starting polymeric liquid solution produced micro- or nanoparticles of controlled size. Attempts to produce nano-composites (polymer + drug) using this electrohydrodynamic process have shown encouraging results in recent years [61].
Electrospinning Parameters
Electrospinning parameters are essential for understanding not only the nature of electrospinning but also the conversion of polymer solutions into nanofibers. The parameters affecting the electrospinning process and the electrospun fiber diameters can be classified into three main categories: polymer solution, processing, and environmental parameters.
These parameters can affect the morphological characteristics of electrospun fibers as well as their size. A summary discussion of these parameters and their effects on fiber characteristics follows:
Solution Parameters
Concentration and/or Berry's Number
The concentration, or Berry's number, of a polymer solution or melt plays a crucial role in fiber formation during the electrospinning process. Four critical concentration regimes, from low to high, should be noted:
1. When the concentration is very low, micro- to nano-scale beads are obtained. Here, electrospraying occurs instead of electrospinning, owing to the low viscosity and high surface tension of the solution [62].
3. At a specific Berry's number, when the concentration is adequate, smooth nanofibers are obtained.
In general, as the concentration or Berry's number of the solution or melt increases, the fiber diameter increases within the spinnable range.
Molecular Weight
The molecular weight of the polymer reflects the degree of polymer chain entanglement in solution and accordingly indicates the solution viscosity. If the concentration is kept fixed, decreasing the polymer molecular weight makes bead formation, rather than smooth fibers, more probable. Increasing the molecular weight yields smooth fibers, and increasing it further results in micro-ribbon formation [67].
It is also important to note that, at sufficiently high molecular weight, micro-ribbons can form even at low concentration [68,69].
Intrinsic Viscosity
The viscosity of the polymer solution is considered one of the most important parameters affecting electrospun fiber diameter and morphology; there is a suitable viscosity range within which electrospinning forms fibers [70,71]. A group of publications on the correlation between polymer solution viscosity and electrospun fiber formation has appeared [67,72,73]. Concentration, viscosity, and polymer molecular weight are all correlated with one another, and a single parameter, Berry's number, has been used to describe them all: it measures the degree of chain entanglement in the solvent and is calculated as the product of the concentration and the intrinsic viscosity. Ali et al. [38][39][40][41][42][43][44][45][46][47][48] correlated this parameter with the optimization of electrospun PAN fibers.
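Since Berry's number is simply the product of concentration and intrinsic viscosity, it can be computed directly; the numerical values below are illustrative, not taken from the cited PAN studies.

```python
def berry_number(concentration_g_per_mL, intrinsic_viscosity_mL_per_g):
    """Berry's number Be = c * [eta]: the dimensionless product of the
    polymer concentration and the intrinsic viscosity, as defined above."""
    return concentration_g_per_mL * intrinsic_viscosity_mL_per_g

# Example: an ~8 wt% solution (~0.08 g/mL) of a polymer with [eta] ~ 50 mL/g
be = berry_number(0.08, 50.0)
print(f"Be = {be:.1f}")  # larger Be means stronger chain entanglement
```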
Surface Tension
Surface tension is an important factor in electrospinning. In 2004, Yang and Wang investigated the influence of surface tension on electrospun PVP fibers using three different solvents, namely ethanol, DMF, and MC [66].
As the surface tension of the solution decreases, beaded fibers can be converted into fine fibers [74][75][76][77][78][79][80]. In that study, the surface tension and solution viscosity were adjusted by changing the mass ratio of the solvent mixture, thereby tuning the fiber morphologies.
If all other conditions are held constant, surface tension defines the upper and lower boundaries of the electrospinning window [77,79].
Conductivity/Surface Charge Density
Solution conductivity is determined by the polymer bond type, the solvent's solubility parameter, and any additives, such as salts with ionic bonds. Natural polymers are usually polyelectrolytic in nature; the ions increase the charge-carrying ability of the polymer jet, subjecting it to higher tension under the electric field and resulting in poor fiber formation. Synthetic polymers, on the other hand, tend to form good fibers [80]. With the aid of ionic salts, nanofibers with small diameters can be obtained [81].
Sometimes high solution conductivity can also be achieved by using an organic acid instead of a regular solvent. Hou et al. [81] used formic acid as the solvent to dissolve nylon 6,6 and obtained nanofibers of 3 nm diameter with beads. In their study, a small amount of pyridine was added to the solution to enhance its charge-carrying capacity, aiming to eliminate the beads by increasing the conductivity. In general, the higher the solution conductivity, the greater the possibility of forming thinner fibers, especially for synthetic polymers.
Processing Parameters
Voltage
One of the important parameters in the electrospinning process is the applied voltage. A charged jet is ejected from the Taylor cone only when the applied voltage overcomes the surface tension of the polymer meniscus. However, increasing the applied voltage does not necessarily improve the electrospun fiber diameter and/or morphology. Reneker and Chun [82] showed that the electric field has little effect on the diameter of electrospun polyethylene oxide (PEO) nanofibers. Several groups have suggested that voltages higher than required to form the ejected charged jet lead to larger fiber diameters. Zhang et al. [79] studied the effect of voltage on morphologies and fiber diameter distributions for poly(vinyl alcohol) (PVA)/water solutions using a theoretical modeling approach.
Other research groups have shown that higher applied voltages may increase the electrostatic repulsive force on the charged jet, so that smaller fiber diameters can be formed. Yuan et al. [83] investigated the effect of applied voltage on morphologies and fiber alignment using polysulfone (PSF)/DMAC/acetone as a model system. In addition, some groups have demonstrated that higher voltage brings a greater probability of bead formation [62,84,85].
In summary, voltage does influence fiber diameter; however, the significance of this effect differs according to the polymer type, the solution concentration, and the distance between the tip of the spinneret and the nearest point on the collector [86].
Flow Rate
Another important processing parameter is the polymer solution flow rate. Generally, a lower flow rate is recommended, as the polymer solution then has enough time for polarization. If the flow rate is very high, beaded fibers with thick diameters form rather than fine fibers, which is easily explained by the short drying time available to the travelling jet before it reaches the collector. Yuan et al. [83] studied the effect of the flow rate on the morphologies of PSF fibers from a 20% PSF/DMAC solution at 10 kV; in their study, beaded fibers with thicker diameters were obtained at a flow rate of 0.66 mL/h.
Collectors
The metal collector acts as the conductive, grounded substrate that collects the charged fibers. Aluminum foil is commonly attached as a cover sheet to the stationary grounded metal screen, but it is then difficult to transfer the collected nanofibers to other substrates for various applications. To address the need for fiber transfer, diverse collectors have been developed, including cones [87], pins [88], metal grids [89], parallel or gridded bars [46], rotating rods, cylinders or wheels [90], and wet coagulating baths [91].
Distance or Spinning Height (H) between the Collector and the Tip of the Spinneret
The travelling distance between the tip of the charged spinneret and the nearest point of the grounded collector certainly affects the fiber diameter and morphology [73]. Conceptually, as the travelling path increases, the jet spends more time in flight, giving it a greater chance to shed the solvent; consequently, the jet is more likely to dry into a fine fiber on the collector surface [83].
Environmental Parameters
Temperature and humidity are two important environmental parameters that can also affect fiber diameters and morphologies. Mituppatham et al. [92] showed that as the temperature increases, smaller and thinner fiber diameters are collected.
Low humidity tends to dry the solvent completely and increases the velocity of solvent evaporation. On the contrary, high humidity leads to thicker fiber diameters. Casper et al. [93] demonstrated that varying the humidity can also affect the surface morphologies of electrospun polystyrene (PS) fibers.
Tissue Engineering Applications
Tissue engineering integrates the sciences of biology and medicine to design artificial organs for the regeneration of tissue function [94,95]. In order to mimic the extracellular matrix and provide tissue with oxygen and nutrient circulation, functional tissues have been fabricated from several materials and specific structures, particularly nanofiber scaffolds [96,97]. In fact, nanofiber scaffolds are widely used in tissue repair, for both soft and hard tissue regeneration [98]. Faced with therapeutic problems, damaged ligaments, fractured bone, cartilage, and blood vessels have been restored by taking advantage of ENS to repair orthopedic tissues and/or develop organs with high similarity in terms of characteristics, properties, and design [99][100][101]. Exploiting these characteristics, ENS are also used for wound healing; they are considered a solution for the loss of skin integrity caused by injury or illness, providing additional biological stimuli to support cell and tissue function. As a consequence, a substantial psychological benefit is noticed in tissue-engineered skin patients [102][103][104].
Drug or Protein Delivery
Exploiting the ability to control biomaterial properties (geometry, fiber alignment, fiber diameter, composition, etc.), ENS are used to incorporate drugs/proteins into scaffolds [104]. Known for their large surface-to-volume ratio, ENS have been demonstrated by several researchers to be an excellent vehicle for drug delivery, either by dissolving the drug in the electrospinning solution or, when the drug is not soluble, by mixing the drug with this solution [105][106][107]. As a consequence, the drug contained inside the fiber is released. Several strategies are used to master drug adsorption, particularly when the drug's solubility is limited. Pillay et al. [108] proposed immersing nanofibers in a drug solution to let the drug molecules chemically and physically bond or attach to the ENS. Yoo et al. [109] demonstrated that a nanofiber can biomimetically bind a drug after electrospinning through controlled physical adsorption (whether simple physical adsorption, nanoparticle assembly on the surface, or layer-by-layer multilayer assembly). Chemical adsorption methods can be exploited through surface activation (such as plasma or ultraviolet treatments) or bioactive molecule immobilization [110,111].
The reliability of a drug delivery application is conditioned by the drug-polymer compatibility, as well as by the foreign body's interaction with the natural organ or tissue [112,113]. Varied results have been reported for drugs with different bioactivities, such as antimicrobial [114,115], anticancer [116,117], anti-inflammatory [118,119], cardiovascular [120,121], and antihistamine [122,123] drugs. Other factors that can influence the drug-release kinetics are the type of polymer used as the matrix for the nanofibers and its architecture (sandwiched with microparticles, sandwiched with microfibers, etc.) [124].
For protein delivery, surface immobilization of bioactive molecules has been used to load proteins onto ENS fibers, protecting the molecules from the effects of high voltage [125]. The method consists of fixing the proteins to the scaffold surface using suitable chemical conjugation with corresponding functional groups, such as carboxylates [126]. Tigli et al. [127] reported that cell fate driven by peptides is conditioned by control of the drug feeding ratio.
Cancer has become a leading cause of human death worldwide, responsible for about 15% of deaths, with 1.5 million new cases expected annually [128]. Chemotherapy, hormonal therapy, radiation therapy, immunotherapy, and surgery are the current standard methods of treatment. Chemotherapeutic drugs in clinical use, such as 5-fluorouracil (5-FU), cisplatin, carboplatin, paclitaxel, and gemcitabine (Gemzar), are characterized by poor bioavailability, poor selectivity and specificity, or liver accumulation and fast renal clearance [129]. The evolution of nanotechnology offers promise for tackling these problems. For instance, anticancer drug-loaded polymeric nanoparticles (NPs) are advantageous compared with traditional anticancer drug formulations, diminishing the adverse effects of drugs and enhancing their therapeutic efficacy. Generally, through the enhanced permeability and retention effect, some NPs accumulate preferentially in tumors. Additionally, electrospun nanofiber (NF) formulations have become a promising technique in drug delivery [130] and wound dressing [131]. Due to their high porosity and large surface-area-to-volume ratio, which mimic the extracellular matrix, they have gained importance in biomedical engineering [132]. Examples of these formulations include the encapsulation of anticancer drugs into biodegradable polymeric nanofibers: 5-FU and salinomycin on poly(lactic-co-glycolic acid) (PLGA) [133]; paclitaxel on PCL [134]; doxorubicin HCl on poly(ethylene glycol)-block-poly(L-lactide) (PEG-PLLA) [135]; and hydroxycamptothecin on poly(lactic acid)-poly(ethylene oxide) (PLA-PEG) [136].
Pawłowska et al. [137] highlighted the development of smart drug delivery systems using stimuli-responsive electrospun nanofibers. Their nanostructured pillows exhibit fast photothermal responsiveness for near-infrared (NIR) light-controlled, on-demand drug delivery. The innovative platform consists of electrospun PLLA loaded with a rhodamine B drug model that encapsulates a plasmonic hydrogel, P(NIPAAm-co-NIPMAAm)/AuNR [138]. The researchers demonstrated that this emergent nanotechnology is an excellent candidate for achieving on-demand drug release in synergy with photothermal treatment.
Agriculture, Food Industry, and Environment
As reported in many publications, technical reports, and communications, ENS are used in biosensing [137,138], agricultural protection [139,140], the fermented-food industry [141,142], and the biocatalytic remediation of the environment and energy applications [143,144]. To monitor food and agriculture using simpler, faster, and less expensive sensitive methods for detecting foodborne illness, researchers and developers have exploited the potential of the electrospinning technique to create innovative 1D micro- and nanobiosensors [145]. Taking into account their surface properties, high porosity, and capability to interact with green elements, nanofibers have been functionalized with several types of nanomaterials, such as graphene and carbon nanotubes, to obtain multifunctional hybrid electrospun nanofibers. These features enhance the reactivity of the materials, improve adsorption, and increase the number of sites for catalyst loading and interaction. Zhang et al. [146] showed that the quality of such biochemical sensors is attributable to the strong stretching forces associated with electrospinning, which induce polymer chain orientation along the long axis of the fiber; therefore, high charge-carrier mobility or polarized photoluminescence can be created [147]. Recently, laboratories have incorporated new nanomaterials to create so-called ESN-based chemical and hybrid biosensors, which have been applied in the agri-food sector to address food quality and safety [148][149][150]. The combined use of polymers, ceramics, and other inorganic materials is reviewed by Mercante et al. [151]. In another branch of this very fertile sector, researchers have used biocompatible ESN to entrap bioactive food ingredients, of both hydrophilic and hydrophobic types [152]. In addition, they developed an ESN-based food encapsulation method to safeguard food from oxidation and damage [153,154]. Other researchers have produced ultra-fine poly(acrylamide) (PAM) fibers for use as water super-absorbers and soil-erosion-resistance agents in irrigation systems [155]. For electrospun fibers of 290 nm diameter, the optimization of Berry's number, spinning height, and spinning angle for PAM fibers was investigated using empirical and experimental approaches.
The applications of ESN to environmental topics and issues are varied, ranging from energy harvesting/conversion/storage to filtration membranes and catalytic supports [156][157][158]. Specific devices made of ENS have been developed and incorporated into solar cells, rechargeable batteries, and fuel cells [159]. Because metal oxides and/or carbon nanofibers can be electrospun, they can serve as electrode materials thanks to properties such as a high surface-to-volume ratio, short diffusion distances, and a large specific surface area [160]. As a consequence, they are able to transfer electrons and ions rapidly, with a large electrode/electrolyte contact area [161].
As reported in [162][163][164], the mass transport of reactants is feasible using a nonwoven mat of nanofibers. Thanks to their high porosity and good interconnection, extensive contact between reactants and active electrocatalytic sites is provided for the fabrication of fuel cells, particularly from polymer, ceramic, and metal electrospun nanofibers. Their durability and efficiency have been confirmed by several researchers and industries [165,166].
Innovation in Biomimetic Design, Materials Properties, and Structure Architecture
Nowadays, great efforts are focused on innovation in the design of biomimetic electrospun scaffolds for biotechnological applications. Natural and synthetic polymers as well as ceramic and metallic materials have been tested to improve properties and structure architecture.
Biomimetic Design
Biomimicry is a technologically oriented approach focused on creating innovative solutions inspired by nature's wealth [167,168]. In relation to our topic, researchers have concentrated on a few key sources of inspiration, in terms of shape, function, materials, or ecosystems [169,170], benefiting from the easily tunable compositions and structures of electrospun fibers to achieve successful biomimicry via electrospinning.
Wei et al. [171] reported the design of nacre-inspired porous scaffolds for bone repair. Their work resulted from a product design strategy that included electrospinning, phase separation, and 3D printing to build up, layer by layer, a composite film with a nacre-like structure from nano-platelets and polyamide. Nacre has also inspired newly developed coatings and implants (from simple to complex geometries) that functionalize biomaterial surfaces in order to induce desirable biological responses [172][173][174].
Wang et al. developed engineered biomimetic superhydrophobic surfaces of electrospun nanomaterials inspired by the lotus leaf [175]. The same property was explored by investigating the silver ragwort leaf and the hillock bush leaf [101,176]. Other researchers have focused on biomimicking the structures and functions of honeycombs, polar bear fur, and spider webs to inspire tissue structures and organ architectures (membranes, bone marrow, etc.) [177,178].
Materials Properties
To electrospin submicrometric fibers, a panel of materials can be used, from natural sources such as gelatin or collagen, to synthetic materials such as polycaprolactone (PCL) and polylactide (PLA), as well as hybrid types (e.g., PCL blended with collagen) [179][180][181][182][183].
Cell adhesion, migration, spreading, and differentiation are essential characteristics related to the stiffness of the nanofiber surface. In fact, after implantation, a successfully integrated scaffold should provide good mechanical properties in terms of rigidity and flexibility, with an optimal pore size and calibration [184]. Pennel et al. [185] directly related the infiltration and vascularization ability of a new material to its long-term stability. Liu et al. [186] demonstrated that polylactide-co-trimethylene carbonate nanofibers showed enhanced efficiency as scaffold materials for tissue regeneration. Bao et al. [187] developed multifunctional fibrous scaffolds with shape memory, which were especially useful for tissue repair. Many other polymers, such as chitosan, alginate, and silk fibroin, have been explored for the production of healing systems due to their biodegradability, drug release ability, acceptable hydrophilicity, and non-toxicity [188][189][190][191][192][193]. In a typical workflow, a polymer solution is prepared from an animal resource, membranes with or without active agents are obtained by electrospinning, and cultivation and separation steps are then carried out to create new cells with antibacterial properties.
Architecture
A suitable scaffold architecture is required to provide an appropriate cellular environment. This architecture is the result of various spatial fiber arrangements (aligned, random, or cross-aligned) [194]. These arrangements should ensure neo-tissue elaboration and proliferation, vascularization, and integration without risk to the original tissue [195]. Lutzweiler et al. [196] considered that the most sensitive property of a scaffold's structure is its ability to diffuse nutrients and metabolites thanks to an optimal pore size and a stable architecture. This requires a design suited to cell migration into the scaffold and a sufficient ligand density on the scaffold surface [197]. According to the principle of regeneration, the scaffold should be gradually replaced with extracellular matrix, taking into account the biodegradability of the developed tissue, without any risk of toxicity or disturbance of surrounding organs [198,199].
Other progress from the architectural point of view consists of the development of gradient structures to tailor cell orientation and extracellular matrix deposition [200]. One advance is the fabrication of scaffolds with random and aligned fibers in different sections of the same construct. As a consequence, graded mechanical properties throughout the tissue construct become possible [201].
Current Progress on Elaboration Process, Implementation, and Manufacturing
The properties of electrospun nanofibers, such as morphology and diameter, are influenced by intrinsic (solution properties) and extrinsic (process and environment) factors [202][203][204]. Dorati et al. [205] reported that solvent concentration, viscosity, electrical conductivity, and elasticity are the parameters with the greatest influence on the geometry and morphology of electrospun nanofibers. In fact, to run the electrospinning components, a researcher needs a small amount of solution at a suitable concentration to achieve smooth and uniform nanofibers. Either too low or too high a concentration can induce morphological problems or diameter variations, depending on how the solvent concentration interacts with viscosity and surface tension effects [206,207]. As a consequence, nanofibers with non-uniform shapes and poorly controlled diameters are likely to result. Datta and Dhara [208] combined microfabrication and a rolling process to design 3D bone grafts based on 2D ESN sheets of synthetic polymer. This approach required a graphical macro-pore design, created with a laser-engraving machine on ESN sheets, to facilitate cellular infiltration into the 3D scaffold. Finally, the multi-scalar porous sheet was rolled up to obtain a 3D scaffold, which was combined with the microfabricated nanofiber sheets for a final bio-engineered organ.
As a more sophisticated method, a robocasting 3D printing technique was used to develop an electrospun organ with a complex structure, paving a new way to developing unprecedented scaffold microstructures [209,210]. Considerable software support is needed to control geometry, architectural structure, and manufacturing parameters in order to obtain more closely biomimetic materials and highly efficient biomedical scaffolds [211]. Table 1 summarizes the most important applications of electrospun nanofibrous scaffolds.
| Application | Polymer and Solvent Used | Product Characteristics | Ref. |
|---|---|---|---|
| | | A nanofiber composite (PLA/SSS) of 50-450 nm with enhanced thermal and mechanical properties; a slight enhancement in human foreskin fibroblast cell proliferation; decent cytocompatibility; and antibacterial activity | [212] |
| | A gelatin solution prepared in ethanol, extracted from crude Carissa carandas fruits (CCE) and incorporating acetic acid | Smooth and continuous gelatin fiber mats (GFM) with an average diameter of 235.69 ± 10.45 nm, obtained under the optimal conditions of 30% (w/v) gelatin solution, 25% (v/v) ethanol solution, 30% (v/v) acetic acid, a fixed electrostatic field strength of 20 kV, and a 15 cm distance between spinneret tip and collector; with 15% (w/w) CCE, the CCE-GFM shows high DPPH radical scavenging and tyrosinase inhibitory activity | [213] |
| | Anionic surfactants added to a natural biopolymer of galacturonic acid (PGuA) to enable its electrospinning into nanofibers | Small spindled fibers of 2 to 10 µm length and 287 to 997 nm diameter; large continuous fibers could be produced when 10 to 30% of high-molecular-weight PVA is used | [214] |
| Drug delivery | A poly(vinylpyrrolidone) (PVP) electrospun to encapsulate β-carotene dissolved in ethanol | | |
| Drug delivery | | A nanofiber material with a slow curcumin release rate and high cytotoxicity against a breast cancer cell line | [216] |
| Drug delivery | Chitosan/pullulan carried by a shell of polylactic acid (PLA) | A nanofiber with improved thermal properties and rapid dissolution in water | [217] |
| Tissue engineering | Platelet-derived growth factor (PDGF-BB) contained within a shell of polylactic acid (PLA) and encapsulated within nanofibers | 3D scaffolding nanofibers with a microporous structure, acceptable mechanical properties, and high cell compatibility | [218] |
| Tissue engineering | Crystalline cellulose (NCC) in a matrix of cellulose acetate (CA) polymer | | [220] |
| Cancer therapy | A poly(ε-caprolactone) (PCL) | A scaffolding system of long nanofibers to carry breast cancer therapy | [221,222] |
| Wound dressing | A zein/graphene oxide (GO) blend; the GO is loaded with tetracycline hydrochloride (TCH) | A homogeneous and cohesive composite with structural characteristics, swelling, and degradation behavior dependent on the size and amount of the included inorganic particles | [224] |
| Filter media | A poly(ε-caprolactone) (PCL) | Nanofibers (NF) with average diameters of 180 and 234 nm with improved bioprotective activity and filtration efficiency | [225,226] |
Electrospun Nanofiber Applications for Medical Care in the Coronavirus COVID-19 Pandemic Crisis
COVID-19 is a disease caused by a new kind of coronavirus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which emerged in Wuhan, Hubei, China at the end of 2019 [227]. This novel coronavirus can cause fever, respiratory failure, septic shock, and even death. Researchers believe that SARS-CoV-2 has a zoonotic origin, like bat coronaviruses, pangolin coronaviruses, and the previously discovered SARS-CoV [228]. COVID-19 is expected to greatly impact society and the economy and to widely change our daily lifestyle [229]. Despite the large number of research studies, the high rate of publication, and the global strategies addressing this pandemic [230][231][232], the development of protection methods and the search for efficient, well-controlled, and accurate drug delivery, especially to the respiratory system of newly infected COVID-19 patients using biodegradable electrospun polymers, remained open issues until an effective vaccine could be developed and made widely available [233].
As soon as the pandemic appeared, Tebyetekerwa et al. [234] proposed unique electrospun nanofibers for filtration membranes in face masks. They suggested the use of durable yet reliable electrospun nonwoven filters with a 10 nm fiber diameter, as the filtration efficiency of polymeric nanofiber mats is excellent and they can capture submicron- and nanosized organisms [235].
In this context, Zussman's research group developed a sticker called "Maya" to upgrade surgical masks [236]. This innovative product showed successful results in improving the protection provided by respirators, trapping nanometric particles and, thanks to the new functionalities given by the electrospun nanofibers, efficiently neutralizing virus in droplets that might reach the mask surface. Bin Ding's team developed other types of masks using nanofibrous membrane-based desiccants for energy-efficient humidity control and atmospheric water harvesting [237]. This new technology, a wood-inspired moisture pump, was adapted from an electrospun nanofibrous membrane for solar-driven continuous indoor dehumidification.
Other smart masks were developed by Doo Kim's group, which explored the performance of electrospun nanostructures [238]. The researchers designed a membrane to be used as an additional filter for fabric-based face masks. Thanks to its nanostructured arrangement of 100-500 nm diameter fibers, this electrospun membrane showed excellent filtering efficiency even after being hand washed more than 20 times.
Another new generation of masks using electrospun nanostructures was proposed by Sio et al. [236]. They developed smart self-disinfecting face masks based on a multilayer electrospun membrane, introducing nanoclusters and plasmonic nanoparticles in a hierarchical arrangement to realize chemically driven and on-demand anti-pathogen activities.
Suitable disinfection methods and protocols should be developed so that filters can be reused without compromising filtration efficiency. Khanzada et al. [239] developed aloe vera and polyvinyl alcohol electrospun nanofibers for protective clothing. The efficiency of the innovative product was confirmed using antimicrobial activity tests against Gram-positive and Gram-negative bacteria, and an optimum composition with high antimicrobial activity against S. aureus, compared with E. coli, was patented. For possible use as a fast-absorbing carrier for anti-COVID-19 drug delivery, some researchers are trying to prepare scaffolds able to carry such drugs while controlling the degree of drug absorption by the human body, something that would be indispensable [240,241]. However, this new applied nanotechnology is not yet mastered for use with newly discovered drugs.
Challenge of the Electrospun Nanofiber Scaffolds
Based on our review study, it is clear that electrospun nanofiber scaffolds still face many challenges in relation to the choice of materials (properties, performance, etc.), production policy (complexity, process, rate, etc.), and manipulation (scaffold storage, cost, etc.). In some applications, such as drug delivery, improving drug-polymer compatibility is a primary focus for biologists, with the need to improve the rate of matrix hydration and drug diffusion of the fiber-forming polymer. Besides, ESN encounter other practical limitations, such as scarce cell infiltration and inadequate mechanical strength for load-bearing applications. Therefore, researchers remain focused on innovation in design, materials, and architecture, as well as innovation in the manufacturing process.
"Medicine",
"Engineering",
"Materials Science"
] |
Change of Scale Formulas for Wiener Integrals Related to Fourier-Feynman Transform and Convolution
Introduction
It has long been known that Wiener measure and Wiener measurability behave badly under the change of scale transformation [1] and under translations [2]. Cameron and Storvick [3] expressed the analytic Feynman integral on classical Wiener space as a limit of Wiener integrals. In doing so, they discovered nice change of scale formulas for Wiener integrals on classical Wiener space (C_0[0,1], m) [4]. In [5,6], Yoo and Skoug extended these results to an abstract Wiener space (H, B, ν). Moreover, Yoo et al. [7,8] established a change of scale formula for Wiener integrals of some unbounded functionals on (a product) abstract Wiener space. Recently, Yoo et al. [9] obtained a change of scale formula for a function space integral on a generalized Wiener space C_{a,b}[0,T].
On the other hand, in [10], Cameron and Storvick introduced an L_2 analytic Fourier-Feynman transform. In [11], Johnson and Skoug developed an L_p analytic Fourier-Feynman transform for 1 ≤ p ≤ 2 that extended the results in [10]. In [12], Huffman et al. defined a convolution product for functionals on Wiener space and, for a cylinder type functional, showed that the Fourier-Feynman transform of the convolution product is a product of Fourier-Feynman transforms. For a detailed survey of the previous work on the Fourier-Feynman transform and related topics, see [13].
In this paper, we express the Fourier-Feynman transform and convolution product of functionals in the Banach algebra S as limits of Wiener integrals on C_0[0,T]. Moreover, we obtain change of scale formulas for Wiener integrals related to the Fourier-Feynman transform and convolution product of these functionals. Some preliminary results of this paper were presented orally at the 2013 Annual Meeting of the Korean Mathematical Society [14].
Let C_0[0,T] denote the Wiener space, that is, the space of real-valued continuous functions x on [0,T] with x(0) = 0. Let M denote the class of all Wiener measurable subsets of C_0[0,T] and let m denote Wiener measure. Then (C_0[0,T], M, m) is a complete measure space, and we denote the Wiener integral of a functional F by ∫_{C_0[0,T]} F(x) dm(x). A subset E of C_0[0,T] is said to be scale-invariant measurable [15] provided ρE is measurable for each ρ > 0, and a scale-invariant measurable set N is said to be scale-invariant null provided m(ρN) = 0 for each ρ > 0. A property that holds except on a scale-invariant null set is said to hold scale-invariant almost everywhere (s-a.e.).
Let C_+ and C̃_+ denote the set of complex numbers with positive real part and the set of complex numbers with nonnegative real part, respectively. Let F be a complex-valued measurable functional on C_0[0,T] such that the Wiener integral J(λ) = ∫_{C_0[0,T]} F(λ^{-1/2}x) dm(x) exists as a finite number for all λ > 0. If there exists a function J*(λ), analytic in C_+, such that J*(λ) = J(λ) for all λ > 0, then J*(λ) is defined to be the analytic Wiener integral of F over C_0[0,T] with parameter λ, and for λ ∈ C_+ we write ∫^{anw_λ} F(x) dm(x) = J*(λ). If the following limit exists for nonzero real q, then we call it the analytic Feynman integral of F over C_0[0,T] with parameter q, and we write ∫^{anf_q} F(x) dm(x) = lim_{λ→-iq} ∫^{anw_λ} F(x) dm(x), where λ approaches -iq through C_+. Now we will introduce the class of functionals that we work with in this paper. The Banach algebra S was introduced in [16] by Cameron and Storvick. It consists of functionals expressible in the form F(x) = ∫_{L_2[0,T]} exp{i⟨v, x⟩} dσ(v) for s-a.e. x in C_0[0,T], where the associated measure σ is a complex Borel measure on L_2[0,T] and ⟨v, x⟩ denotes the Paley-Wiener-Zygmund stochastic integral ∫_0^T v(t) dx(t).
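Collected in display form, these definitions read as follows; the symbol choices λ, q, and σ are the conventional ones from the Cameron-Storvick literature rather than notation quoted verbatim from this paper:

```latex
% Scaled Wiener integral and its analytic continuation (analytic Wiener integral)
J(\lambda) = \int_{C_0[0,T]} F(\lambda^{-1/2}x)\,dm(x) \quad (\lambda > 0), \qquad
\int^{\mathrm{anw}_\lambda} F(x)\,dm(x) = J^{*}(\lambda) \quad (\lambda \in \mathbb{C}_+).

% Analytic Feynman integral: boundary value as \lambda \to -iq through \mathbb{C}_+
\int^{\mathrm{anf}_q} F(x)\,dm(x)
  = \lim_{\lambda \to -iq} \int^{\mathrm{anw}_\lambda} F(x)\,dm(x).

% Elements of the Banach algebra S and the Paley-Wiener-Zygmund (PWZ) integral
F(x) = \int_{L_2[0,T]} \exp\{i\langle v, x\rangle\}\, d\sigma(v), \qquad
\langle v, x\rangle = \int_0^T v(t)\, dx(t).
```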
Fourier-Feynman Transform and a Change of Scale Formula
In this section we give a relationship between the Wiener integral and the Fourier-Feynman transform on C_0[0,T] for functionals in the Banach algebra S; that is, we express the Fourier-Feynman transform of functionals in S as a limit of Wiener integrals on C_0[0,T]. We begin this section by introducing the definition of the analytic Fourier-Feynman transform for functionals defined on C_0[0,T]. Let 1 ≤ p < ∞ and let q be a nonzero real number.
By the definition (4) of the analytic Feynman integral and the definition (9) of the L_1 analytic Fourier-Feynman transform, it is easy to see that for a nonzero real number q the transform evaluated at zero coincides with the analytic Feynman integral. In particular, if F ∈ S, then F is analytic Feynman integrable. Huffman et al. established the existence of the Fourier-Feynman transform on C_0[0,T] for functionals in S.

Theorem 2 (Theorem 3.1 of [17]). Let F ∈ S be given by (5).
We next introduce an integration formula which is useful in this paper. The proof of this lemma is essentially the same as that of Lemma 3 of [3], and hence we state it without proof.
Now we give a relationship between the analytic Fourier-Feynman transform and the Wiener integral on C_0[0,T] for functionals in S. In this theorem we express the Fourier-Feynman transform of functionals in S as a limit of Wiener integrals.
Theorem 4. Let F ∈ S be given by (5). Let {α_n} be a complete orthonormal set of functions in L_2[0,T]. Let q be a nonzero real number and let {λ_n} be a sequence of complex numbers in C_+ such that λ_n → -iq. Then the representation (15) holds for s-a.e. y ∈ C_0[0,T].
Proof. Let Γ_n(y) denote the Wiener integral on the right-hand side of (15). By (5) and the Fubini theorem, Γ_n(y) can be rewritten as an integral with respect to the associated measure σ, and by Lemma 3 the inner Wiener integral can be evaluated in closed form. By Parseval's theorem the resulting exponent converges as n → ∞, and so by the bounded convergence theorem the limit of Γ_n(y) exists. Finally, by (13) in Theorem 2, the proof is completed.
As we have seen in (10) and (11) above, if p = 1, then the Fourier-Feynman transform T_q^{(1)}(F)(0) is equal to the analytic Feynman integral of F. Hence we have the following corollary.
Corollary 5 (Theorem 2 of [3]). Let F ∈ S be given by (5). Let {α_n} be a complete orthonormal set of functions in L_2[0,T]. Let q be a nonzero real number and let {λ_n} be a sequence of complex numbers in C_+ such that λ_n → -iq. Then the analytic Feynman integral of F is given as the corresponding limit of Wiener integrals.

The following is a relationship between T_λ(F) and the Wiener integral for functionals in S.

Theorem 6. Let F ∈ S be given by (5). Let {α_n} be a complete orthonormal set of functions in L_2[0,T]. Then for each λ ∈ C_+ the representation (22) holds for s-a.e. y ∈ C_0[0,T].
Proof. To prove this theorem, we modify the proof of Theorem 4 by replacing λ_n by λ wherever it occurs. The limit then exists, and we apply the dominated convergence theorem to identify it. By (12) in Theorem 2, the proof is completed.
Our main result in this section, namely a change of scale formula for Wiener integrals related to the Fourier-Feynman transform of functionals in S, now follows from Theorem 6.

Theorem 7. Let F ∈ S be given by (5). Let {α_n} be a complete orthonormal set of functions in L_2[0,T]. Then for each ρ > 0 the representation (25) holds for s-a.e. y ∈ C_0[0,T].
Proof. First note the integration identity that holds for each ρ > 0. Letting λ = ρ^{-2} in (22), we obtain (25), and this completes the proof.
Setting y = 0 in (25), we obtain the following change of scale formula for Wiener integrals on classical Wiener space.
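For orientation, the classical change of scale formula established by Cameron and Storvick [3], to which formulas of the above type specialize, has the following shape, where {α_j} is a complete orthonormal set in L_2[0,T]; this is a sketch quoted from the general literature, not a verbatim statement of the corollary referred to above:

```latex
\int_{C_0[0,T]} F(\rho x)\, dm(x)
  = \lim_{n\to\infty} \rho^{-n} \int_{C_0[0,T]}
    \exp\Biggl\{ \frac{\rho^{2}-1}{2\rho^{2}}
    \sum_{j=1}^{n} \langle \alpha_j, x\rangle^{2} \Biggr\} F(x)\, dm(x).
```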
In our next example, we explicitly compute a Wiener integral of a functional under a change of scale transformation.
Example 9. Let F(x) = exp{c⟨α_1, x⟩} for x ∈ C_0[0,T], where c is a real or complex number. We evaluate the Wiener integrals on each side of (25). The left-hand side of (25) is evaluated using the Paley-Wiener-Zygmund theorem (see [18]). Next, we evaluate the Wiener integral on the right-hand side of (25), again by the Paley-Wiener-Zygmund theorem. Thus we have established that (25) is valid for F(x) = exp{c⟨α_1, x⟩}.
Note that in Example 9 above, c is a real or complex number. If c is pure imaginary, then F ∈ S, and F is an example of a functional to which Theorem 7 applies. On the other hand, if the real part of c is not equal to 0, then F can be unbounded. Thus this example shows that the class of functionals for which (25) holds is more extensive than S.
Convolution and a Change of Scale Formula
In this section we give a relationship between the Wiener integral and the convolution product on C_0[0,T] for functionals in the Banach algebra S; that is, we express the convolution product of functionals in S as a limit of Wiener integrals on C_0[0,T]. We start this section by introducing the definition of the convolution product for functionals on C_0[0,T]: for λ ∈ C_+, the convolution product (F * G)_λ is defined by an analytic Wiener integral with parameter λ, if it exists. Moreover, for a nonzero real number q, the convolution product (F * G)_q is defined by the corresponding analytic Feynman integral, if it exists [12,17,19,20].
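A sketch of the standard form of this definition, as introduced by Huffman, Park, and Skoug [12], written in conventional notation rather than quoted verbatim from this paper:

```latex
(F * G)_{\lambda}(y)
  = \int^{\mathrm{anw}_{\lambda}}
    F\!\left(\tfrac{y + x}{\sqrt{2}}\right)
    G\!\left(\tfrac{y - x}{\sqrt{2}}\right) dm(x),
  \qquad \lambda \in \mathbb{C}_+,

% and, for nonzero real q, (F * G)_q is defined analogously with
% \mathrm{anf}_q in place of \mathrm{anw}_\lambda.
```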
The following is the existence theorem for the convolution product of functionals in S on C_0[0,T].
Now we give a relationship between the convolution product and the Wiener integral on C_0[0,T] for functionals in S. In this theorem we express the convolution product of functionals in S as a limit of Wiener integrals.

Theorem 12. Let F and G be elements of S with associated complex Borel measures σ and τ, respectively. Let {α_n} be a complete orthonormal set of functions in L_2[0,T]. Let q be a nonzero real number and let {λ_n} be a sequence of complex numbers in C_+ such that λ_n → -iq. Then the representation (37) holds for s-a.e. y ∈ C_0[0,T].
Proof. Let Γ*_n(y) denote the Wiener integral on the right-hand side of (37). By (5) and the Fubini theorem, Γ*_n(y) can be rewritten as an integral with respect to the product measure σ × τ, and by Lemma 3 the inner Wiener integral can be evaluated in closed form. By Parseval's theorem the resulting exponent converges as n → ∞, and so by the bounded convergence theorem the limit of Γ*_n(y) exists. Finally, by (36) in Theorem 11, the proof is completed.
The following is a relationship between the convolution product (F * G)_λ and the Wiener integral for functionals in S.
Theorem 13. Let F and G be elements of S with associated complex Borel measures σ and τ, respectively. Let {α_n} be a complete orthonormal set of functions in L_2[0,T]. Then for each λ ∈ C_+ the representation (43) holds for s-a.e. y ∈ C_0[0,T].
Proof. To prove this theorem, we modify the proof of Theorem 12 by replacing λ_n by λ wherever it occurs. The limit then exists, and we apply the dominated convergence theorem to identify it. By (35) in Theorem 11, the proof is completed.
Our main result in this section, namely a change of scale formula for Wiener integrals related to the convolution product of functionals in S, now follows from Theorem 13.

Theorem 14. Let F and G be elements of S with associated complex Borel measures σ and τ, respectively. Let {α_n} be a complete orthonormal set of functions in L_2[0,T]. Then for each ρ > 0 the representation (46) holds for s-a.e. y ∈ C_0[0,T].
Proof. First note the integration identity that holds for each ρ > 0. Letting λ = ρ^{-2} in (43), we obtain (46), and this completes the proof.
In our final example, we explicitly compute a Wiener integral related to the convolution product under a change of scale transformation.
Note that in Example 15 above, c is a real or complex number. If c is pure imaginary, then F and G belong to S, so F and G are examples of functionals to which Theorem 14 applies. On the other hand, if the real part of c is not equal to 0, then F and G can be unbounded. Thus this example shows that the class of functionals for which (46) holds is more extensive than S.
"Mathematics"
] |
xProtCAS: A Toolkit for Extracting Conserved Accessible Surfaces from Protein Structures
The identification of protein surfaces required for interaction with other biomolecules broadens our understanding of protein function, their regulation by post-translational modification, and the deleterious effect of disease mutations. Protein interaction interfaces are often identifiable as patches of conserved residues on a protein’s surface. However, finding conserved accessible surfaces on folded regions requires an understanding of the protein structure to discriminate between functional and structural constraints on residue conservation. With the emergence of deep learning methods for protein structure prediction, high-quality structural models are now available for any protein. In this study, we introduce tools to identify conserved surfaces on AlphaFold2 structural models. We define autonomous structural modules from the structural models and convert these modules to a graph encoding residue topology, accessibility, and conservation. Conserved surfaces are then extracted using a novel eigenvector centrality-based approach. We apply the tool to the human proteome identifying hundreds of uncharacterised yet highly conserved surfaces, many of which contain clinically significant mutations. The xProtCAS tool is available as open-source Python software and an interactive web server.
Introduction
The characterisation of the human interactome has been fundamental for our understanding of cellular processes. Tens of thousands of human protein-protein interactions (PPIs) have been detected using a range of experimental techniques [1,2]. To date, most PPI data describe the binary interaction between two full-length proteins. Recent experimental and computational approaches have defined stable complexes, thereby building higher-order PPI networks more closely matching the protein organisation in the cell [3,4]. The interfaces of these interactions have been characterised at the amino acid resolution for only a minority of PPIs. The molecular detail of a PPI interface is extremely valuable for a detailed characterisation of protein function or, in some cases, as potential therapeutic targets. Consequently, there is a strong need for an amino acid resolution interactome. Numerous experimental approaches have been applied to characterise PPI interfaces at this level of detail, including structural, mutagenesis, or biophysical assays. However, they are generally expensive, time-consuming, and low throughput. As a result, interface identification is often supported by computational approaches to pinpoint residues or regions likely to drive PPIs [5].
Various sequence-based and structure-based features have been analysed to discover protein interfaces [6][7][8][9]. Sequence conservation tends to be the strongest discriminator of residue functionality [10,11]. A common observation is that protein interfaces are often under functional constraints and less likely to accumulate mutations. Consequently, as surface residues often lack the strong structural constraints of the hydrophobic core residues supporting a protein fold, functionally constrained interaction surfaces can often be observed as accessible surfaces with relatively high conservation compared to the remaining protein surface [12]. Many computational studies have focused on the discovery of such conserved accessible surfaces.
Methods
The xProtCAS framework is a pipeline for the discovery of clusters of conserved residues on the surface of folded structural modules as a proxy for functional interfaces. The xProtCAS pipeline integrates information on residue solvent accessibility and conservation with topological data on residue proximity in three-dimensional space using graph-based methods to determine proximal clusters of relatively conserved residues on a protein's surface. The key use case of the xProtCAS framework is the analysis of human AlphaFold2 [33,34] models taking a UniProt identifier as input. However, the standalone version can use either AlphaFold2 models or PDB structures. The output is a set of conservation and accessibility metrics for the protein and an annotated and scored conserved surface on each autonomous structural module of the protein.
Framework for the Discovery of Conserved Protein Surface
As shown in Figure 1A, the workflow of the xProtCAS framework includes eight major steps: (i) definition of the autonomous structural modules of a protein; (ii) calculation of the residue-centric accessibility and topology metrics for the structural module; (iii) calculation of the per residue conservation scores; (iv) creation of an edge-weighted directed graph encoding the structural and evolutionary properties for the structural module; (v) calculation of eigenvector centrality scores; (vi) definition of the conserved accessible surfaces using hierarchical clustering; and (vii) scoring and (viii) annotation of the conserved accessible surfaces.
Figure 1. The graph encodes the proximity between residues, accessibility, and conservation of the structural module. Nodes are accessible residues, and adjacent residues are connected by edges. The blue colour of the surface representation of the structural module reflects the conservation of residues. The conservation scores are encoded on the graph as edge weights, demonstrated by edge thickness in the graph panel. Residues sharing a tetrahedron in the three-dimensional Delaunay triangulation are considered neighbours. Each residue has incoming edges from neighbouring residues. The weight of an edge depends on the conservation of the residue and the number of neighbours. For example, for a residue with a conservation score of 0.5 and five adjacent neighbours, each incoming edge weighs 0.5/5. Each accessible residue is in at least one tetrahedron, resulting in the whole structural module being connected in a single graph.
Definition of the Autonomous Structural Modules of the Structural Model
In the first step of the pipeline, the AlphaFold2 model of a full-length protein of interest is retrieved from the AlphaFold protein structure database [33,34] (https://alphafold.ebi.ac.uk/). The structural model is preprocessed to define autonomous structural modules. This allows intramolecular interface surfaces to be ignored and each autonomous functional unit to be analysed separately. Autonomous structural modules are extracted from the AlphaFold2 structural model by running a graph-based community detection algorithm on the AlphaFold2 predicted aligned error (PAE) matrix [35]. First, a graph is built from residues with AlphaFold2 per-residue confidence (pLDDT) > 70, where residues are nodes and edges are placed between residues with AlphaFold2 PAE less than 5 Å. The edges are weighted based on the inverse of the predicted aligned error, and a greedy modularity maximisation algorithm [36] is used to detect communities. Modularity measures the quality of communities in a graph by calculating the difference between two components. The first represents the edges that fall within the detected communities, and the second accounts for the expected number of those edges occurring randomly (the null model). The best communities are those maximising that difference. The resolution, the weight of the null model in the modularity equation, controls the size of the detected communities. A resolution of zero gives no weight to the null model and counts only the intra-community edges, resulting in one community containing the whole protein in the graph [36]. Higher resolution values favour smaller communities. The resolution of the modularity maximisation algorithm was optimised on a dataset of 832 human E3 ligases manually collected from the literature (Table S1). By iteratively decreasing the resolution from 1 to 0 by 0.01 steps, we defined an optimised resolution that minimised the size of structural modules but ensured that Pfam domains [37] were not split between different structural modules (Figure S1). Finally, small communities with less than 30 residues, the lower bound for a stably folded domain, were excluded. The structural modules derived from this step are processed independently in the remainder of the analysis pipeline.
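A minimal sketch of this module-detection step using networkx is shown below. The function and variable names are illustrative assumptions rather than xProtCAS's actual API, and the PAE matrix is symmetrised by averaging, since AlphaFold2's PAE is not symmetric.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def structural_modules(pae, plddt, pae_cutoff=5.0, plddt_cutoff=70.0,
                       resolution=0.4, min_size=30):
    """Partition confident residues into autonomous structural modules via
    greedy modularity maximisation on a PAE-derived graph (sketch; the
    default parameters follow the values quoted in the text)."""
    confident = [i for i, score in enumerate(plddt) if score > plddt_cutoff]
    g = nx.Graph()
    g.add_nodes_from(confident)
    for a, i in enumerate(confident):
        for j in confident[a + 1:]:
            error = (pae[i][j] + pae[j][i]) / 2.0  # symmetrise the PAE matrix
            if error < pae_cutoff:
                # Edges are weighted by the inverse of the predicted aligned error.
                g.add_edge(i, j, weight=1.0 / max(error, 1e-6))
    communities = greedy_modularity_communities(
        g, weight="weight", resolution=resolution)
    # Discard communities below the lower bound for a stably folded domain.
    return [sorted(c) for c in communities if len(c) >= min_size]
```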
Defining the Residue Accessibility and Topology of a Structural Module
The residue accessibility of each structural module is calculated by Delaunay triangulation, which generates a triangle-based tessellation of a protein surface. This approach has the advantage, relative to classical solvent accessible surface area (SASA) metrics, of discriminating between side chain and backbone accessibility. Delaunay triangulation [38,39] takes the three-dimensional coordinates of each heavy atom in the structural module and produces a set of non-overlapping tetrahedra (triangular pyramids composed of four triangular faces). The centre of each heavy atom is considered a vertex and is present in at least one tetrahedron in the convex hull (the smallest set of vertices enclosing the whole structural module). Atoms on the surface of the structural module can be extracted by finding vertices of faces present in exactly one tetrahedron. The xProtCAS pipeline ignores backbone accessibility and considers an amino acid accessible if at least one of its side chain heavy atoms is accessible (Table S2). The 3D Delaunay triangulation calculation is based on the implementation by Nimrod et al. [16]. Triangulation is also used to identify a residue's neighbours as those residues with surface atoms in a shared tetrahedron.
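The surface-extraction rule described above (a triangular face that belongs to exactly one tetrahedron lies on the boundary) can be sketched with scipy as follows; this is an illustrative reimplementation, not the xProtCAS code itself:

```python
from collections import Counter
from itertools import combinations
import numpy as np
from scipy.spatial import Delaunay

def surface_vertices(coords):
    """Return indices of heavy atoms on the surface: vertices of triangular
    faces that appear in exactly one tetrahedron of the 3D Delaunay
    triangulation."""
    tri = Delaunay(np.asarray(coords))
    face_counts = Counter()
    for tet in tri.simplices:  # each simplex lists 4 vertex indices
        for face in combinations(sorted(tet), 3):
            face_counts[face] += 1
    surface = set()
    for face, count in face_counts.items():
        if count == 1:  # boundary face, shared by no second tetrahedron
            surface.update(face)
    return surface
```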
Calculating per Residue Conservation Scores of a Structural Module
Conservation scores are derived from orthologue alignments created using the GOPHER orthologue discovery software [40] on a database of model organism sequences from the UniProt resource [41]. The orthologous sequences were aligned using the ClustalO multiple sequence alignment software [42], and guide trees were generated using the tree-building function of ClustalW [43]. The xProtCAS pipeline calculates a classical column-based weighted conservation score (WCS) as defined in SLiMPrints [44]. The contribution of each residue at a given position in the alignment is weighted by the Clustal guide tree in the WCS score to increase the contribution of conserved residues in more divergent orthologues.
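The exact SLiMPrints weighting is not reproduced here; the sketch below only illustrates the general shape of a tree-weighted column conservation score, with the identity-match rule and the weights themselves as assumptions.

```python
def weighted_conservation(column, tree_weights, query_residue):
    """Toy weighted conservation score (WCS) for one alignment column.

    column: mapping of orthologue name -> residue at this column
    tree_weights: mapping of orthologue name -> guide-tree-derived weight
        (assumed to be larger for more divergent orthologues)
    query_residue: residue of the query protein at this column
    """
    total = sum(tree_weights.values())
    if total == 0:
        return 0.0
    matched = sum(weight for name, weight in tree_weights.items()
                  if column.get(name) == query_residue)
    return matched / total

# Example: the matching residue in the divergent fly orthologue would
# contribute most to the score.
column = {"mouse": "R", "zebrafish": "R", "fly": "K"}
weights = {"mouse": 0.5, "zebrafish": 1.0, "fly": 2.0}
print(weighted_conservation(column, weights, "R"))  # 1.5 / 3.5 ~= 0.43
```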
Constructing an Edge-Weighted Directed Graph Representation of a Structural Module
A directed graph is constructed encoding the residue accessibility, residue conservation, and residue proximity in the three-dimensional space of the structural module (Figure 1B). The topology, accessibility, conservation (TAC) graph contains only residues with accessible side chain heavy atoms. Directed edges are added between adjacent residues, where adjacency is defined based on surface atoms that share the same tetrahedron in the Delaunay triangulation graph produced during the accessibility calculation [16]. Each incoming edge is weighted by the node's conservation score, divided by the number of incoming edges to normalise for the number of proximal residues (Figure 1B).
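A sketch of the TAC graph construction follows; the names are illustrative, and `neighbours` would come from the shared-tetrahedron test described earlier:

```python
import networkx as nx

def build_tac_graph(accessible, neighbours, conservation):
    """Build the TAC graph: nodes are accessible residues; each residue i
    receives directed edges from its accessible neighbours, each weighted
    by i's conservation divided by the number of incoming edges."""
    g = nx.DiGraph()
    g.add_nodes_from(accessible)
    accessible_set = set(accessible)
    for i in accessible:
        nbrs = [j for j in neighbours[i] if j in accessible_set]
        if not nbrs:
            continue
        weight = conservation[i] / len(nbrs)
        for j in nbrs:
            g.add_edge(j, i, weight=weight)  # incoming edge into i
    return g
```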
Calculating Eigenvector Centrality Scores of a Structural Module
Although filtering residues using a simple conservation score cutoff might seem sufficient for finding functional regions, it yields isolated residues scattered across the surface of the protein. Hence, finding conserved patches instead of single residues is crucial to pinpointing the more likely functional regions. Eigenvector centrality [45] measures a node's importance, in this case encoded as residue conservation, but also integrates the importance of surrounding nodes. Therefore, a node connected to more highly conserved nodes receives a higher centrality score [46]. Eigenvector centrality scores are calculated for each residue in the TAC graph. The definition of a node's transitive influence in the TAC graph allows groups of proximal conserved residues to be discriminated, identifying conserved accessible surfaces on a structural module.
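With the TAC graph in hand, the weighted centrality computation is a single networkx call, continuing from the `build_tac_graph` sketch above:

```python
import networkx as nx

# Weighted eigenvector centrality over the TAC graph built above; max_iter is
# raised because power iteration can converge slowly on sparse directed graphs.
centrality = nx.eigenvector_centrality(tac_graph, weight="weight", max_iter=1000)
```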
Defining Conserved Accessible Surfaces of a Structural Module
Eigenvector centrality scores are processed using a hierarchical clustering approach to extract the most conserved cluster of residues on the protein surface. Initially, each node in the graph is considered a separate cluster on its own. Next, pairs of clusters are successively combined into larger clusters by minimising the variance of centrality scores in the merged clusters. The hierarchical clustering produces two low-variance clusters, one containing the highest centrality-scoring residue nodes and the other containing the remaining surface-accessible residue nodes in the TAC graph. Finally, the connected residues are extracted from the highest centrality-scoring cluster by starting with the node with the highest centrality score and recursively finding connected neighbours in the graph that are also in the high-scoring cluster, defining the most conserved accessible surface.
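The two-cluster variance-minimising split and the connected-patch extraction can be sketched as follows (an illustrative reimplementation continuing from the sketches above; Ward linkage is the standard variance-minimising criterion):

```python
import networkx as nx
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def top_surface(tac_graph, centrality):
    """Ward clustering of centrality scores into two clusters, then the
    connected patch grown from the highest-scoring node of the
    high-centrality cluster."""
    nodes = list(tac_graph.nodes)
    scores = np.array([[centrality[n]] for n in nodes])
    labels = fcluster(linkage(scores, method="ward"), t=2, criterion="maxclust")
    top_label = labels[int(np.argmax(scores))]
    high = {n for n, label in zip(nodes, labels) if label == top_label}
    seed = max(high, key=lambda n: centrality[n])
    patch, stack = {seed}, [seed]
    while stack:  # grow the connected component inside the high cluster
        current = stack.pop()
        for nbr in nx.all_neighbors(tac_graph, current):
            if nbr in high and nbr not in patch:
                patch.add(nbr)
                stack.append(nbr)
    return patch
```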
Evaluation Metrics of the Extracted Patch
Three distinct conservation metrics are calculated to evaluate the conserved accessible surface: (i) absolute patch conservation: the average residue WCS of the patch, representing the absolute conservation of the patch in the orthologous species set; (ii) relative patch conservation: the difference between the average WCS in the patch and the average WCS in the non-patch surface, representing the conservation of the patch relative to the remainder of the structural module surface; and (iii) the relative patch conservation p-value: calculated with a Mann-Whitney U test of the alternative hypothesis that the distribution underlying patch conservation is stochastically greater than the distribution underlying non-patch conservation.
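These three metrics map directly onto a few lines of scipy (sketch; variable names are illustrative):

```python
from scipy.stats import mannwhitneyu

def patch_metrics(patch_wcs, non_patch_wcs):
    """Absolute patch conservation, relative patch conservation, and the
    one-sided Mann-Whitney U p-value for the patch being stochastically
    more conserved than the rest of the surface."""
    absolute = sum(patch_wcs) / len(patch_wcs)
    relative = absolute - sum(non_patch_wcs) / len(non_patch_wcs)
    p_value = mannwhitneyu(patch_wcs, non_patch_wcs,
                           alternative="greater").pvalue
    return absolute, relative, p_value
```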
Annotations of the Pockets with Functional Data
Identified conserved accessible surfaces are cross-referenced with functional annotation from a range of sources. Domain family overlap information is defined using Pfam domain [37] data collected from UniProt [41]. The intersection between defined conserved accessible surfaces and residues in experimentally characterised interfaces is mapped based on the extraction of protein interfaces from PDB structures, where interface residues were determined as residues with heavy atoms within less than 6 Å distance from the bound partner. Overlapping active sites are collected from the UniProt resource [41]. Disease-relevant mutation data in the predicted pockets are collected from the EBI Protein API [47] and UniProt [41] and classified based on clinical significance annotation (Pathogenic, Likely Pathogenic, Disease, Risk factor, Association, Protective, Drug response, Affects) as in PepTools [48]. Post-translational modifications overlapping the defined conserved accessible surfaces are collected from Phospho.ELM [49], PhosphoSitePlus [50], Ochoa et al. [51], and UniProt [41].
Defining Multiple Pockets per Structural Module
As proteins or domains can have multiple interaction interfaces, centrality scoring and hierarchical clustering can be performed iteratively, after the removal of the residues in the initial patch from the graph representation, to define additional interfaces.
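Schematically, the iteration removes each returned patch and rescores the remaining graph; the sketch below reuses `top_surface` from above, and the fixed iteration count is an assumption (the text suggests a relative patch conservation p-value cut-off as the stopping rule):

```python
import networkx as nx

def iterative_pockets(tac_graph, max_iterations=5):
    """Extract successive conserved surfaces by removing each patch and
    re-running centrality scoring on the remaining graph. In practice,
    each patch would also be filtered by its conservation p-value."""
    pockets = []
    g = tac_graph.copy()
    for _ in range(max_iterations):
        if g.number_of_nodes() < 3:
            break
        centrality = nx.eigenvector_centrality(g, weight="weight", max_iter=1000)
        patch = top_surface(g, centrality)
        pockets.append(patch)
        g.remove_nodes_from(patch)
    return pockets
```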
Processing PDB Structures
The xProtCAS pipeline can be applied beyond AlphaFold models to any PDB structures. All pipeline steps except for the "definition of the autonomous structural modules" are performed for PDB structure analyses. When required, PDB structures are mapped to UniProt [41] sequence position-centric conservation scores using SIFTS [52] and the PDBe REST API [53].
PDB Benchmarking Dataset
The xProtCAS pipeline was optimised and benchmarked on three datasets of PDB structures (Table S3): (i) 407 domain-domain interaction interfaces from the MaSIF-site dataset [54], (ii) 522 SLiM-domain interaction interface dataset extracted from PDB and filtered for redundancy (see Supplementary material note 1), and (iii) 100 active sites in structures from the two previous datasets. The MaSIF-site dataset was filtered to remove structures with >30% identity to structures in the unfiltered SLiM-domain interaction interface dataset. Interface residues, defined as pocket residues, were determined in each set as residues with heavy atoms less than 6 Å distant from the bound partner. The remainder of the residues on the protein surface not found in the pocket residues set are defined as non-pocket residues. Residues that are defined by the Delaunay triangulation as accessible are defined as surface residues. Residues that are filtered by the Delaunay triangulation as inaccessible with side chains in the hydrophobic core of the protein are defined as core residues.
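The 6 Å heavy-atom rule used to label pocket residues in these benchmark sets can be sketched as follows (illustrative; `chain_atoms` maps residue identifiers to Nx3 heavy-atom coordinate arrays):

```python
import numpy as np
from scipy.spatial.distance import cdist

def interface_residues(chain_atoms, partner_atoms, cutoff=6.0):
    """Pocket residues: any residue with a heavy atom closer than `cutoff`
    angstroms to any heavy atom of the bound partner."""
    partner = np.asarray(partner_atoms)
    return {res for res, atoms in chain_atoms.items()
            if cdist(np.asarray(atoms), partner).min() < cutoff}
```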
Human Proteome Analysis
The xProtCAS tool was applied to 20,395 proteins of the Human Proteome (UniProt reviewed human proteins with no fragments, release 2021_02) [41] to define structural modules and identify conserved accessible surfaces.
Availability
All the pipeline steps were implemented in Python. Stand-alone open-source software with the core functionality of the pipeline is available at https://github.com/hkotb/xprotcas. The xProtCAS framework is also accessible as a web server at http://slim.icr.ac.uk/projects/xprotcas.
Evaluating Residue Conservation for Identifying Binding Surfaces
The PDB benchmarking dataset represents an evaluation set to benchmark xProtCAS's ability to define conserved accessible surfaces and whether these regions are likely to represent functional sites on proteins.
Weighted Residue-Based Conservation Scoring (WCS) Benchmarking
We evaluated weighted residue-based conservation scoring (WCS) on orthologue alignments from four sets of proteins from proteomes of species with different levels of evolutionary divergence (Mammalia, Vertebrata, Metazoa, Quest for Orthologs (QfO) [55]; see the supplementary material for a list of species in each database). The weighted residue-based conservation scoring can discriminate between the pocket and non-pocket residues in all orthologue alignment sets (Figure 2A). The metazoa-based alignment shows the most discriminatory power between the pocket and non-pocket regions (p-value: 5.66 × 10^−44) compared to the three other orthologue alignment sets (QfO p-value: 1.95 × 10^−40, Vertebrata p-value: 1.31 × 10^−36, Mammalia p-value: 1.45 × 10^−25). Next, we evaluated metazoa-based scoring separately on the domain-domain (DDI), SLiM-domain (SDI), and active site PDB benchmarking datasets. We observed a significant difference between the pocket and non-pocket residues for both the DDI and SDI sets (DDI p-value: 3.44 × 10^−8, SDI p-value: 5.27 × 10^−44) (Figure 2B,C). Interestingly, SDI pockets are clearly more conserved than DDI interfaces in the benchmarking dataset (p-value: 1.94 × 10^−14) (Figure 2B,C). Active sites are the most conserved set in all cases and are significantly more conserved than surface residues in both interface sets (DDI p-value: 5.53 × 10^−26, SDI p-value: 1.21 × 10^−33) (Figure 2B). Finally, we compared the WCS weighted residue-based conservation scoring to the Rate4Site scores [56,57] from ConSurf [19,20]. The WCS scoring scheme's ability to discriminate between binding pocket residues and non-pocket residues is comparable to that of the Rate4Site scores, which use slower Bayesian estimation for scoring residues (SDI AUC: Rate4Site 0.67, WCS 0.66; DDI AUC: Rate4Site 0.57, WCS 0.56) (Figure 2D).
Eigenvector Centrality Score Benchmarking
We benchmarked the discriminatory power of the per-residue eigenvector centrality-based scores against the WCS and ConSurf Rate4Site scores. The eigenvector centrality-based scores show better discrimination between the pocket and non-pocket regions at the residue level compared to the WCS and ConSurf Rate4Site scores (Figure 2D). Centrality scores reveal conserved patches rather than single residues; therefore, we benchmarked their ability to pinpoint the functional regions in the SLiM-domain and domain-domain interfaces in our PDB benchmarking dataset. The eigenvector centrality-based approach was benchmarked by quantifying the proportion of chains where the validated pocket overlaps with the predicted pocket. We observed that eigenvector centrality correctly identified ~70% of the validated pockets in the SDI and DDI PDB benchmarking datasets (Figure 2E). As with previous benchmarks, the active sites performed significantly better than the interface datasets, with 84% of the expected pockets rediscovered. As many structural modules will have multiple conserved interaction surfaces, we benchmarked the centrality approach over multiple iterations showing, as expected, increasing recall with each iteration (Figures 2F and S2). Next, we compared the conservation of the validated pockets in the PDB benchmarking datasets to the quality of the returned xProtCAS conserved accessible surfaces. We observed that when the known interface surface is highly conserved relative to the rest of the structural unit surface, xProtCAS is more likely to pinpoint the correct surface, and the overlap with the known interface is higher (Figure 2G). Finally, we compared the conserved accessible surfaces returned by xProtCAS and PatchFinder [15,16] on the PDB benchmarking datasets. The surfaces extracted by xProtCAS were slightly smaller than the PatchFinder surfaces, but when compared to known pockets we observed that xProtCAS had slightly better precision and recall for both the SDI (PatchFinder precision: 0.34, recall: 0.29; xProtCAS precision: 0.41, recall: 0.32) and DDI (PatchFinder precision: 0.21, recall: 0.13; xProtCAS precision: 0.41, recall: 0.18) datasets (Table S4). Most importantly, there is a significant difference in the running time of the two tools (Table S4), favouring the centrality-based approach.
Potential Novel Interfaces in the Human Proteome
The xProtCAS pipeline was applied to 20,395 UniProt-reviewed human proteins to define a set of potential novel interfaces in the human proteome. The autonomous structural unit definition resulted in 31,702 autonomous structural modules (Table S5). The top-ranked conserved accessible surface on each subunit was extracted and annotated for overlap with relevant functional information. Of the 31,702 conserved accessible surfaces, 1793 (5.6%) overlap active sites, 3215 (10.1%) intersect known interfaces, and 2893 (9.1%) contain clinically significant mutations, of which only 820 are in known active sites or interfaces (Figure 3A). The majority of surfaces (24,797, 78.2%) had no overlapping annotation. We also observed that a large number of the surfaces overlapped with post-translational modifications, with phosphorylation and ubiquitination representing the most common modifications (Figure 3B,C). Next, we filtered the set for high-accuracy structural modules using the mean Predicted Aligned Error (PAE), leaving approximately half of the structural modules (17,477 of 31,702 with PAE less than 5 Å). The high-accuracy structural module set was filtered by relative patch conservation p-value (cut-off of 1.0 × 10^−10; representative examples of structural modules with varying relative patch conservation p-value scores are available in Supplementary Figure S3) to define the set of conserved accessible surfaces with the most significant difference between the pocket and non-pocket surface residues (Figure 3D). The remaining 1406 structural modules showed significant enrichment of functional annotation compared to the complete dataset, with 406 (28.8%) active sites, 275 (19.6%) known interfaces, and 320 (22.8%) clinically significant mutations; however, 621 (44.2%) structural modules still had no overlapping annotation and represent highly conserved and uncharacterised conserved accessible surfaces. Figure 3E-G,I provides a set of representative examples of both characterised and uncharacterised conserved accessible surfaces. The benchmarking results showed that xProtCAS performed strongly at mapping surfaces overlapping the active sites of enzymes. The returned surface in PI-PLC X domain-containing protein 3 (PLCXD3) overlaps the active site residues 37H and 114H with phosphoric diester hydrolase activity (Figure 3E) [41,58]. The xProtCAS tool can also pinpoint functional surfaces that drive protein-protein interactions. The returned surface on L-aminoadipate-semialdehyde dehydrogenase-phosphopantetheinyl transferase (AASDHPPT) represents a known domain-domain interface with the fatty acid synthase (FASN) (Figure 3F) [59]. Similarly, a highly conserved surface on Peroxisomal biogenesis factor 3 (PEX3) represents a SLiM binding pocket which contributes to the assembly of membrane vesicles by acting as a docking surface for Peroxisomal biogenesis factor 19 (PEX19) (Figure 3G) [60]. Many of the returned surfaces overlapped clinically significant mutations linked to a wide variety of diseases (Figure 3H). For example, the most conserved surface on Transport and Golgi organisation protein 2 homolog (TANGO2) has four clinically significant mutations linked to metabolic crises, recurrent, with rhabdomyolysis, cardiac arrhythmias, and neurodegeneration (MECRCN), and it is not characterised as a known interface or active site (Figure 3I).
The xProtCAS Web Server
The xProtCAS pipeline and interactive visualisations have been made available as a web server at http://slim.icr.ac.uk/projects/xprotcas. Proteins can be searched using protein name, gene name, or UniProt accession. The analysis page of the query protein (Figure 4A) provides an interactive viewer to display the defined structural units and conserved accessible surfaces. The sidebar provides a list of structural modules and metrics related to their conserved accessible surface. The server can display a selected structural module separately or in the context of the full-length protein (Figure 4B), with structure (Figure 4A), graph (Figure 4C), and multiple sequence alignment (Figure 4D) representations. The structure and graph representations allow various colouring schemes to colour residues based on centrality, conservation, or accessibility scores. The multiple sequence alignments used in the conservation score calculation are shown in the Module Alignments section of the web interface. All data can be downloaded in the downloads section in JavaScript Object Notation (JSON) format.
Conclusions
Residue conservation can indicate a function that has been maintained across divergent sequences. Consequently, the variability of conservation across a primary sequence or surface can be leveraged to identify functionally important residues. In this work, we have designed an approach for conserved protein surface annotation that encodes the accessibility, topology, and conservation of a protein as a graph. The nodes of the graph are accessible residues in a structural module, the three-dimensional topology of the structural module is encoded in the edges of the graph, and edge weights are used to encode residue conservation scores of the connected residue nodes. Eigenvector centrality gives higher scores to nodes with influential neighbours; as a result, subgraphs with high eigenvector centrality scoring residues reflect surfaces with a high concentration of relatively strongly conserved residues. We have shown that by applying eigenvector centrality to integrate the topological, accessibility, and evolutionary information encoded in the graph we can pinpoint conserved accessible surfaces, and these surfaces strongly correlate with functional surfaces on a protein. We introduced evaluation scores to rank and quantify confidence in a given surface, and we demonstrated these scores are strong discriminators for conserved accessible surfaces that overlap a known interface. In the future, integrating data from non-conservation approaches for pocket discovery with the conservation-based eigenvector centrality approach, for example, using machine learning, could significantly improve the quality of the predictions of either approach alone.
The rapid advances in deep learning methods for protein structure prediction have resulted in an explosion of high-quality structural models of proteins. In this study, we take advantage of AlphaFold2 structural models to perform evolutionary analyses on a huge set of proteins previously inaccessible for structure-conservation exploration. The direct integration of AlphaFold2 structural models into the pipeline simplifies access to the complete protein search space. The speed of the pipeline, in the range of seconds when the structure and alignment are locally available, boosts its scalability and allows proteome-wide analysis to be performed with ease. As a result, we have explored the evolutionary landscape of human protein surfaces, finding thousands of putative binding pockets without a known function in need of further experimental exploration. However, caution should be taken for surface evolution analyses of AlphaFold2 data. Models have variable levels of quality, and some modules may have partial local misfolding or buried residue side chains incorrectly appearing on the protein surface. Given the higher level of conservation of these residues, it is important to be aware of the AlphaFold2 confidence metrics and consider them when analysing returned surfaces. We utilise two scores to quantify the structure quality of the patch represented in the mean patch predicted Local Distance Difference Test (pLDDT) and Predicted Aligned Error (PAE). Both scores are AlphaFold metrics for scoring structure prediction confidence and accuracy.
The xProtCAS web server represents a fast, simple, and intuitive tool to analyse protein surface conservation. The two comparable web-based tools for conserved accessible surface discovery, the PatchFinder and FuncPatch web servers, were no longer functional at the time of publication. There are overlaps with the functionality of the ConSurf server; however, the definition of the most conserved accessible surface and the integration with AlphaFold2 models add key functionality to the xProtCAS server that is not available with the ConSurf server. The xProtCAS server uses AlphaFold2 models as input. In our experience, the full-length AlphaFold2 structures reduce noise from intramolecular interaction surfaces that are uncomplexed when domains are characterised independently. Furthermore, when an experimental structure is available and has been used in training, the AlphaFold2 model rarely diverges significantly from the experimental structure. However, if a non-AlphaFold2-derived structure is required, the standalone software is freely available. An additional use case of the standalone software is to define multiple surfaces. Structural modules with multiple functional surfaces are an issue for several reasons. Firstly, the xProtCAS pipeline can find functional protein interaction surfaces yet incur a penalty in benchmarking if the surface is not part of the testing set.
This also makes the definition of a negative set difficult. Secondly, as the centrality-based approach returns only the most conserved surface in the graph, other highly conserved surfaces can be discarded. The xProtCAS pipeline can be applied iteratively to remedy this issue, removing the most conserved surface at each iteration; a sketch of this loop is given below. The number of iterations can be constrained by applying a relative patch conservation p-value cut-off so that only significantly conserved accessible surfaces are returned.
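A hedged sketch of that iterative extraction, with top_patch and patch_pvalue as hypothetical stand-ins for the pipeline's patch definition and significance test:

```python
def iterative_patches(graph, top_patch, patch_pvalue, alpha=0.05):
    """top_patch(g) -> set of residue nodes forming the most conserved
    accessible surface of g; patch_pvalue(g, patch) -> its significance."""
    patches = []
    g = graph.copy()
    while g.number_of_nodes() > 0:
        patch = top_patch(g)
        if patch_pvalue(g, patch) > alpha:  # stop once patches are not significant
            break
        patches.append(patch)
        g.remove_nodes_from(patch)          # unmask the next most conserved surface
    return patches
```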
In summary, we have developed xProtCAS, a graph-based pipeline to define conserved accessible surfaces in protein structures. The xProtCAS pipeline provides a novel tool to the biological community that allows rapid analysis of the surface properties of a protein to define putative functional pockets, pinpoint potential interaction interfaces, aid in experimental design, and prioritise proteins for functional characterisation.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/biom13060906/s1. Table S1: List of human E3 ligases manually collected from the literature and used in tuning the resolution parameter of the community detection algorithm; Table S2: Benchmarking of accessible residue settings; Table S3: PDB dataset used in evaluating xProtCAS's ability to find functional regions; the dataset consists of SLiM-domain interactors (SDI), domain-domain interactors (DDI), and their active sites; Table S4: Comparison between functional patches extracted by two different approaches, PatchFinder and xProtCAS, tested on finding residues in the SLiM-domain and domain-domain interfaces; Table S5: Dataset of 31,702 structural subunits of human proteins sorted by the p-values of the top-ranked patches, representing the results of running xProtCAS on 20,395 proteins of the human proteome. References [61,62] are cited in the Supplementary Materials. Figure S1: The number of proteins with Pfam domains distributed amongst multiple predicted communities when using resolutions from 1 down to 0 in steps of 0.01. We chose 0.4 as the default resolution, at which 90% of proteins do not have Pfam domains distributed amongst multiple communities. Figure S2: Iterations of xProtCAS on five examples of E3 ligases until the correct degron-binding pocket is found, validated manually from the literature and automatically using complex structures based on the closeness between heavy atoms (when the degron chain is present in the PDB structure file). Residues coloured white have low centrality scores, yellow average scores, and red high scores; the degron residues of the interacting partner are also highlighted. Figure S3: (A) Procollagen galactosyltransferase 1 (Q8NBJ5) structure displaying conservation scores in the top figure (grey indicates low scores, blue average scores, and red high scores) and a highly significant top-ranked predicted patch in the bottom figure (red represents the defined patch, grey the rest of the surface) (p-value: 1.29 × 10⁻¹⁸), ranking it near the head of the list of all patches on human proteins (rank: 343). (B) Alanine-glyoxylate aminotransferase 2, mitochondrial (Q9BYV1) structure with the top-ranked patch (bottom figure) having an average ranking (rank: 14,875) in the list of patches on all human proteins, as the region surrounding the predicted patch is also conserved (top figure), which affects the patch conservation significance (p-value: 9.01 × 10⁻⁶). (C) SLIT and NTRK-like protein 5 (O94991) structure with an equally conserved surface (top figure), leading to the top-ranked patch being barely significant (p-value: 0.002) and ranking in the tail of the list of all human protein patches (rank: 27,898). | 7,511 | 2023-05-30T00:00:00.000 | [
"Computer Science",
"Biology"
] |
Light quark Yukawa couplings from Higgs kinematics
We show that the normalized Higgs production $p_T$ and $y_h$ distributions are sensitive probes of Higgs couplings to light quarks. For up and/or down quark Yukawa couplings comparable to the SM $b$ quark Yukawa, the $\bar u u$ or $\bar d d$ fusion production of the Higgs could lead to an appreciably softer $p_T$ distribution than in the SM. The rapidity distribution, on the other hand, becomes more forward. We find that, owing partially to a downward fluctuation, one can derive competitive bounds on the two couplings using ATLAS measurements of the normalized $p_T$ distribution at 8\,TeV. With 300 fb${}^{-1}$ at the 13\,TeV LHC one could establish flavor non-universality of the Yukawa couplings in the down sector.
I. INTRODUCTION
The Higgs mechanism in the Standard Model (SM) has a dual role -- it breaks the electroweak gauge symmetry and endows the SM charged fermions with nonzero masses. Measurements of the Higgs production and decays by ATLAS and CMS show that the Higgs is the dominant source of electroweak symmetry breaking (EWSB) [1]. The Higgs mechanism also predicts that the Higgs couplings to the SM charged fermions, $y^{\rm SM}_f$, are proportional to their masses, $m_f$,

$$y^{\rm SM}_f = \frac{\sqrt{2}\, m_f}{v}, \qquad (1)$$

with $v = 246$ GeV. In the SM the Yukawa couplings are thus predicted to be very hierarchical. The prediction (1) can be distilled into four distinct questions [2,3]: i) are the Yukawa couplings flavor diagonal, ii) are the Yukawa couplings real, iii) are the diagonal Yukawa couplings proportional to the corresponding fermion masses, $y_f \propto m_f$, iv) is the proportionality constant $\sqrt{2}/v$? Given the current experimental bounds, see below, it is still possible that the light fermion Yukawa couplings are larger than the SM predictions [4][5][6][7][8] or much smaller, due to a non-SM mass generation mechanism for the light fermions [9].
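For orientation, plugging rough quark masses into Eq. (1) makes the hierarchy concrete (illustrative numbers only; the precise values depend on the quark mass scheme and renormalisation scale):

$$y^{\rm SM}_b \approx \frac{\sqrt{2}\times 4.2\ {\rm GeV}}{246\ {\rm GeV}} \approx 2.4\times 10^{-2}, \qquad y^{\rm SM}_u \approx \frac{\sqrt{2}\times 2.2\ {\rm MeV}}{246\ {\rm GeV}} \approx 1.3\times 10^{-5},$$

so an up Yukawa comparable to $y^{\rm SM}_b$ corresponds to an ${\cal O}(10^3)$ enhancement over its SM value.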
Experimentally, we only have evidence that the Higgs couples to the 3rd generation charged fermions [1]. This means that the couplings to the 3rd generation charged fermions follow the hierarchical pattern (1) within the errors from the global fits, which are about O(20%) (though with some preference for an increased top Yukawa and a decreased bottom Yukawa). A related question is whether the Higgs couplings to the 1st and 2nd generations are smaller than the couplings to the 3rd generation. This is already established for charged leptons [10,11] and up-type quarks [12], while flavor universal Yukawa couplings are still allowed for down quarks (for future projections see [13,14]). The bounds on lepton Yukawa couplings come from direct searches, while the bounds on light quark Yukawa couplings come from a global fit (including electroweak precision data) varying all the Higgs couplings. Significantly looser but model independent bounds on $y_c$, from a recast of $h \to b\bar b$ searches [12] or from measurements of the total decay width [12,15], also show $y_c < y_t$.
In this manuscript we show that the indirect sensitivity to the light quark Yukawa couplings can be improved by considering the normalized $d\sigma_h/dp_T$ or $d\sigma_h/dy_h$ distributions of the Higgs production. Higgs $p_T$ distributions have been considered before as a way to constrain new particles running in the $ggh$ loop [16][17][18][19][20][21][22][23][24][25][26][27]. In the case of enhanced light quark Yukawa couplings, the $h + j$ diagrams are due to the $q\bar q$, $qg$, and $\bar q g$ initial partonic states (the effects due to the inclusion of $u$, $d$, $s$ quarks in the $ggh$ loops are logarithmically enhanced, but still small [34]). Since these give different $d\sigma_h/dp_T$ and $d\sigma_h/dy_h$ distributions than the gluon fusion initiated Higgs production, the two production mechanisms can be experimentally distinguished.
The paper is organized as follows. In Section II we discuss the state of the art theoretical predictions of the normalized p T and y h distributions, and the sensitivity to light quark Yukawas. The present constraints and future projections are given in Section III, while Conclusions are collected in Section IV.
II. LIGHT YUKAWA COUPLINGS FROM HIGGS DISTRIBUTIONS
In the rest of the paper we normalize the light quark Yukawa couplings to the SM $b$-quark one, and introduce $\bar\kappa_q \equiv y_q / y^{\rm SM}_b$ [42]. The sensitivity can be improved if one uses the inclusive cross section at different collision energies [46].
In order to improve these bounds we exploit the fact that the Higgs rapidity and $p_T$ distributions provide higher sensitivity to the light quark Yukawa couplings than merely measuring the inclusive Higgs cross section. In the SM, the leading order (LO) inclusive Higgs production is through gluon fusion. This is dominated by threshold production, where both gluons carry roughly equal partonic $x$. The resulting $d\sigma_h/dy_h$ thus peaks at $y_h = 0$. The distribution would change, however, if one were to increase the Yukawa coupling to $u$-quarks such that the LO Higgs production would be due to $u\bar u$ fusion. Since $u$ is a valence quark, $u\bar u$ fusion is asymmetric, with the $u$ quark on average carrying larger partonic $x$ than the $\bar u$ sea quark. The Higgs production would therefore peak in the forward direction. This is illustrated in Fig. 1 (left). Somewhat different considerations apply to the Higgs $p_T$ distribution, shown in Fig. 1 (right). Due to initial state radiation, the Higgs $p_T$ distribution exhibits a Sudakov peak at small $p_T$ [47] (for recent works on the resummations in this region see [48][49][50][51]). The location of the peak is sensitive to the nature of the incoming partons. For $gg$ fusion, the $p_T$ distribution peaks at about 10 GeV, while for $u\bar u$ scattering it peaks at smaller values, at about 5 GeV. This is because the effective radiation strength of a gluon, $\alpha_s N_c$, is a few times larger than the effective radiation strength of a quark, $\alpha_s (N_c^2 - 1)/(2N_c)$, where $N_c = 3$. The larger effective radiation strength of gluons also leads to a harder $p_T$ spectrum. In terms of the normalized $p_T$ distribution, therefore, $u\bar u$ scattering leads to a much sharper peak at lower $p_T$ compared with $gg$ scattering.
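The quoted difference in radiation strength is just the ratio of the colour factors; written out,

$$C_A\,\alpha_s = N_c\,\alpha_s = 3\,\alpha_s, \qquad C_F\,\alpha_s = \frac{N_c^2-1}{2N_c}\,\alpha_s = \frac{4}{3}\,\alpha_s, \qquad \frac{C_A}{C_F} = \frac{9}{4},$$

so initial-state gluons radiate roughly twice as strongly as quarks, pushing the Sudakov peak of the $gg$ channel to larger $p_T$.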
Many of the theoretical errors cancel in the normalized distributions, so that the shape is under much better control than the absolute value of the cross section [52]. This is illustrated in the top panels of Fig. 2, where we compare LO, NLO and NNLO theoretical predictions for the normalized and unnormalized $y_h$ distributions at $\sqrt{s} = 13$ TeV collision energy [53]. A similar cancellation of theoretical uncertainties is observed for the normalized $p_T$ distribution, illustrated in the bottom panels of Fig. 2, although the reduction of theoretical uncertainties is not as dramatic as in the rapidity distribution. Normalized distributions also help reduce many of the experimental uncertainties. For the un-normalized distribution, the total systematic uncertainties due to, e.g., luminosity and background estimates range from 4% to 12% [37]. However, most of the systematic uncertainties cancel in the normalized shape distribution. The dominant experimental uncertainties for the shape of the distribution are statistical, ranging from 23% to 75% [37], and can be improved with more data.
In this work we perform an initial study using the rapidity and $p_T$ distributions to constrain the light-quark Yukawa couplings. In the study we use Monte Carlo samples of events on which we impose the experimental cuts of Section III. We generate the parton-level process $pp \to h + n$ jets, including the SM gluon fusion (the background) and $q\bar q$ and $qg$, $\bar q g$ fusion (the signal), using MadGraph 5 [56] with the LO CT14 parton distribution function (PDF) [57] and Pythia 6.4 [58] for the showering, where $q = u, d, s, c$ and $n = 0, 1, 2$. Events of different multiplicities are matched using the MLM scheme [59]. Further re-weighting of the generated tree-level event samples is necessary because of the large k-factor due to QCD corrections to the Higgs production [60]. We re-weight the LO cross sections of the different jet multiplicities, merged in the MLM matching scheme, to the best available theoretical predictions.

Figure 2: The upper panels show the rapidity distribution $d\sigma_h/dy_h$ (left), where the Higgs decays to $\gamma\gamma$, and the normalized rapidity distribution $1/\sigma_h \cdot d\sigma_h/dy_h$ (right) calculated at LO, NLO and NNLO (red, black, blue lines respectively) using HNNLO [53], see text for details. The lower panels show NLL (black) and NNLL (blue) predictions for $d\sigma_h/dp_T$ (left) and $1/\sigma_h \cdot d\sigma_h/dp_T$ (right), obtained using HqT2.0 [54,55]. Blue bands denote the scale dependence when varying $m_h/4 < \mu < m_h$.
For contributions proportional to the top Yukawa coupling, which start at $gg \to h$, we use N3LO predictions [61,62], while for contributions proportional to the light quark Yukawa couplings, which start at $q\bar q \to h$, we use NNLO predictions [63][64][65][66]. We combine the two re-weighted event samples to compute the normalized differential distributions $1/\sigma_h \cdot d\sigma_h/dy_h$ and $1/\sigma_h \cdot d\sigma_h/dp_T$.
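Schematically, this per-multiplicity re-weighting amounts to scaling each event weight by the ratio of the best available cross section to the LO one for its jet multiplicity. A minimal sketch, with placeholder k-factors (not the paper's values):

```python
# sigma_best(n) / sigma_LO(n) per jet multiplicity; illustrative numbers only.
K_FACTORS = {0: 2.3, 1: 1.8, 2: 1.5}

def reweight(events):
    """events: list of dicts with 'njets' and 'weight' keys (hypothetical schema)."""
    for ev in events:
        ev["weight"] *= K_FACTORS[min(ev["njets"], 2)]
    return events
```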
Our calculation is performed in the large top quark mass limit and we ignore light-quark loops in the $gg$ fusion channel. The same procedure is applied throughout this work.
In Fig. 3, we compare our tree-level MadGraph 5+Pythia prediction for the normalized rapidity and p T distribution against the available precise QCD prediction based on NNLO [53] and NNLO+NNLL calculations [55]. We find that for the rapidity distribution the MadGraph 5+Pythia calculation describes well the shape of the normalized distribution.
Small differences at the level of O(10%) are observed for the p T distribution. In the future, when experimental data become more precise, it will be useful to redo our phenomenological analysis, presented below, using more precise resummed predictions for both the signal and background.
The difference between Higgs production kinematics with and without significant light quark Yukawas becomes smaller when going from $u\bar u$ fusion to $d\bar d$ fusion and to $s\bar s$ fusion (for the same value of the Yukawa coupling in each case). In Fig. 4, we set $y_u = y_d = y_s = 2.0 \times y^{\rm SM}_b$ to illustrate this point. Since $s$ is a sea quark, its PDF is much closer to the gluon PDF, leading to similar Higgs $p_T$ and $y_h$ distributions in the case of pure gluon fusion and in the case of an enhanced strange Yukawa. We therefore do not expect large improvements in the sensitivity to the strange Yukawa from considering Higgs cross section distributions compared to just using the total rates. The charm quark, on the other hand, has a large enough mass that the log-enhanced contributions from the charm loop in $gg \to hj$ production can have a visible effect on the Higgs kinematical distributions [67].
We note that a potential direct handle on the charm Yukawa can be obtained from the $h \to c\bar c$ inclusive rate by using charm tagging [12,14,68,69] or from a Higgs produced in association with a $c$-jet [70]. The sensitivity of the latter may potentially be improved by considering the Higgs $p_T$ and $y_h$ distributions, or by considering a Higgs produced in association with two $c$-jets.

Figure 4: The $1/\sigma_h \cdot d\sigma_h/dy_h$ (left) and $1/\sigma_h \cdot d\sigma_h/dp_T$ (right) distributions when switching on the up (orange), down (green) and strange (red) Yukawa couplings.
III. CURRENT CONSTRAINTS AND FUTURE PROSPECTS
In this section we perform a sensitivity study of the Higgs kinematical distributions as probes of the 1st generation quark Yukawas. We use normalized differential distributions which, as argued above, have small theoretical uncertainties. In addition, the dependence on the Higgs decay properties, such as the branching ratios and total decay width, cancels in the measurements of $1/\sigma_h \cdot d\sigma_h/dp_T$ and $1/\sigma_h \cdot d\sigma_h/dy_h$. In other words, the normalized distributions are sensitive only to the production mechanism.
The Higgs production can differ from the SM one either through a modified $ggh$ coupling or through modified light quark Yukawas. The modification of the Higgs coupling to gluons can arise, for instance, from a modified top Yukawa coupling or from new particles running in the loop. New physics in the gluon fusion loop affects the total rate, which cancels in the normalization; it can still be searched for in normalized distributions such as $1/\sigma_h \cdot d\sigma_h/dp_T$ at very hard $p_T$, larger than about 300 GeV [20][21][22][23][24][25][26][27]. In contrast, nonzero light quark Yukawa couplings modify the Higgs kinematics in the softer part of the $p_T$ spectrum. In our analysis we assume for simplicity that the gluon fusion contribution to the Higgs production is the SM one.
We use the normalized Higgs $p_T$ distribution measured by ATLAS in the $h \to \gamma\gamma$ and $h \to ZZ$ channels [37] to extract the bounds on the up and down Yukawa couplings. We reconstruct the $\chi^2$ function, including the covariance matrix, from the information given in [37]. The theoretical errors on the normalized distributions are smaller than the experimental ones. We use the Higgs $p_T$ distribution to derive the bounds, but not the $y_h$ distribution, which is less sensitive. For each of the bounds above we marginalized over the remaining Yukawa coupling, with the most conservative bound obtained when it is set to zero. Note that the inclusion of correlations is important: the bins are highly correlated because the distribution is normalized. The corresponding 2D contours are given in Fig. 5 (right). These bounds are stronger than the corresponding ones coming from the fits to the inclusive Higgs production cross sections, see the discussion following Eq. (3). In Fig. 5 (left) we also show the comparison between the ATLAS data [37] (black) and the theoretical predictions for zero light quark Yukawas, $\bar\kappa_{u,d} = 0$ (blue), and when switching on one of them, $\bar\kappa_u = 2$ (red) or $\bar\kappa_d = 2$ (orange). The constraints from the Higgs rapidity distributions are at present significantly weaker.
To estimate the future sensitivity reach of the measurements of the $p_T$ and $y_h$ distributions at the 13 TeV LHC, we use the same binnings and covariance matrix as in the 8 TeV ATLAS measurements, but assume perfect agreement between the central values of the experimental points and the theoretical predictions. Rescaling the relative errors in each of the bins by the effective luminosity gain relative to $L_{8\,{\rm TeV}} = 20.3$ fb$^{-1}$, we get the expected sensitivity from the $p_T$ distribution at 13 TeV for a luminosity of $L_{13\,{\rm TeV}} = 300$ fb$^{-1}$ to be $\bar\kappa_u < 0.36$ and $\bar\kappa_d < 0.41$ at 95% CL. This should be compared with the expected sensitivity at 8 TeV, $\bar\kappa_u < 1.0$ and $\bar\kappa_d < 1.2$. (Note that due to a downward fluctuation in the first bin of the ATLAS data [37], cf. Fig. 5 (left), the expected sensitivity is significantly worse than the presently extracted bounds in (4).) The expected sensitivities from the normalized rapidity distributions are looser, $\bar\kappa_u < 0.84$ (2.0) and $\bar\kappa_d < 1.1$ (3.7) for 13 TeV 300 fb$^{-1}$ (8 TeV 20.3 fb$^{-1}$). Note that in this rescaling we assumed that the systematic errors will be subdominant or, equivalently, that they will scale as the statistical errors. Assuming a relative error of 5% in each bin and $p_T$ bins of 10 GeV, we get $\bar\kappa_u < 0.27$ and $\bar\kappa_d < 0.31$. This error includes both the systematic and statistical errors.
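The rescaling logic can be sketched as follows, assuming purely statistical scaling of the per-bin relative errors; the 13-to-8 TeV cross-section ratio sigma_ratio is an illustrative assumption, not a number taken from the paper.

```python
import math

def project_rel_errors(rel_errors_8tev, l8=20.3, l13=300.0, sigma_ratio=2.3):
    """Rescale per-bin relative errors from 8 TeV data to a 13 TeV projection.
    l8, l13: integrated luminosities in fb^-1; sigma_ratio: per-bin ratio
    sigma(13 TeV)/sigma(8 TeV). Statistical errors scale as 1/sqrt(N)."""
    gain = (l13 * sigma_ratio) / l8  # effective luminosity gain
    return [err / math.sqrt(gain) for err in rel_errors_8tev]

print(project_rel_errors([0.23, 0.40, 0.75]))  # per-bin errors shrink ~6x
```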
The theoretical error is presently at the level of ∼15% [71] (see e.g. Ref. [37]), so that a significant improvement in the theoretical errors was assumed in the above projection, which we find reasonable in light of recent progress in theory [34,72,73]. Reaching the quoted bounds will require a significant amount of data: for instance, the statistical error of 5% in the lowest $p_T$ bins would be reached with O(2 ab$^{-1}$) of 13 TeV data. From the rapidity distribution, with a bin size of 0.1, we get $\bar\kappa_u < 0.36$ and $\bar\kappa_d < 0.47$, see Fig. 6. In Fig. 7 we also show projections for how well one can probe the strange Yukawa at the 13 TeV LHC from the $p_T$ (Fig. 7 left) and $y_h$ distributions (Fig. 7 right). We show the reach as a function of the relative error in each bin, with 1σ (2σ) exclusions as a dark (light) grey region. We assume $p_T$ bin sizes of 10 GeV and rapidity bin sizes of 0.01, respectively.
Flavor non-universality in the down sector is established if conclusively $\bar\kappa_d < \bar\kappa_b$. ATLAS is projected to be able to put a lower bound on the bottom Yukawa of $\bar\kappa_b > 0.7$ [14,74] with $L_{14\,{\rm TeV}} = 300$ fb$^{-1}$. Therefore, given the above prospects for probing the down Yukawa from the normalized $p_T$ distribution, we expect that there will be indirect evidence for non-universality of the Higgs couplings also in the down quark sector.
IV. CONCLUSIONS
Light quark Yukawa couplings can be bounded from the normalized $p_T$ and rapidity distributions, $1/\sigma_h \cdot d\sigma_h/dp_T$ and $1/\sigma_h \cdot d\sigma_h/dy_h$, respectively, in which many of the theoretical and experimental uncertainties cancel. The study performed in this paper is based on a LO event generator, which can be improved by using more advanced theoretical tools. For example, it would be useful to compute the rapidity and $p_T$ distributions for $u\bar u \to h$ and $d\bar d \to h$ to higher orders in QCD [75]. Also, the SM $gg \to h$ inclusive cross section is now known at the N3LO level [61,62]. It would be very interesting to push the calculations of the rapidity and $p_T$ distributions to N3LO and N3LL, and to include the full mass dependence of the massive quark loops in the $gg \to hj$ process at NLO (for recent progress, see e.g. Refs. [34,72,73]). | 4,167.8 | 2016-06-30T00:00:00.000 | [
"Physics"
] |
Probing the CP-Violation effects in the $h\tau\tau$ coupling at the LHC
A new method to reconstruct the neutrino, event by event, for all major tau hadronic decay modes at the LHC is presented. This is made possible by the improved detector descriptions now available. With the neutrino fully reconstructed, the matrix element for each event can be calculated, and the mass of the Higgs particle can also be calculated event by event with high precision. Based on these, the prospect of measuring the Higgs CP mixing angle with $h\to\tau\tau$ decays at the LHC is analyzed. It is predicted, with a detailed detector simulation, that with 3 ab$^{-1}$ of data at $\sqrt{s}=13$ TeV a significant improvement of the measurement of the CP mixing angle to a precision of $5.2^\circ$ can be achieved at the LHC, which outperforms the sensitivity from lepton EDM searches to date in the $h\tau\tau$ coupling.
The prospects of measuring the Higgs CP mixing angle with h → ττ decays at the LHC are presented. The analysis is based on a new method to reconstruct the neutrinos from tau decays with high precision, and a matrix-element-based method to extract the best sensitivity to the Higgs CP. All major hadronic tau decay modes are included. It is predicted, based on a detailed detector simulation, that with 3 ab$^{-1}$ of data at $\sqrt{s} = 13$ TeV, a significant improvement of the measurement of the CP mixing angle to a precision of 6.9° can be achieved at the LHC.
To account for the large asymmetry between matter and anti-matter in our Universe, sufficient CP violation should be present in the theory. However, in the Standard Model, the CP phase in the CKM matrix is not sufficient for this purpose. New physics that provides new sources of CP violation is therefore needed. Possible candidates are Supersymmetry, the Left-Right Symmetric model, etc.
On the other hand, the newly discovered Higgs boson also opens a window towards new physics. The precision measurement of Higgs properties will be one of the most important targets of the LHC in the next running periods. Among them, the CP property is an important topic. The pure CP eigenstate assumption has already been investigated at the LHC experiments [1][2][3] in the diboson decays, and the pure CP-odd hypothesis is excluded at better than 99.9% CL. The h → ZZ* → 4l channel is the golden channel for this measurement, but it is subject to the scale suppression of dim-6 operators, whereas for the Yukawa coupling, h → ττ is the best channel we can use and is widely investigated in the literature [4][5][6][7][8][9][10][11][12][13][14][15][16][17]. However, the missing neutrino from the decay of each tau makes it difficult to achieve a better precision on measuring the CP mixing angle (φ) of the hττ interaction, which we assume to have the following effective form:

$$\mathcal{L}_{h\tau\tau} = -\frac{m_\tau}{v}\, h\, \bar{\tau}\left(\cos\phi + i\gamma_5 \sin\phi\right)\tau. \qquad (1)$$

In this work, utilizing the mass constraints and impact parameters, a new method is proposed that calculates the missing neutrino per event with the best precision one can achieve; this has not been tried for the LHC before. With the momentum of the neutrino from the tau decay reconstructed, an observable based on the matrix element is calculated for the first time to retrieve the CP information in the hττ interaction. Unlike in previous work, where only specific tau decay modes were studied, all major tau hadronic decay modes are included and combined here, which gives an important prediction of the best we can do with the Higgs CP in its fermionic coupling at the LHC. A detailed simulation study shows that a significant improvement of the measurement of the CP mixing angle can be achieved. We stress that the detector response effect is important, because if the signal yield is overestimated or the background is underestimated, it can lead to unrealistic CP measurement accuracy.
As already investigated in [4], the most promising Higgs production channel for our purpose is the VBF channel. Although gluon-gluon fusion is the dominant production channel for the Higgs at the LHC, it has a larger background and lower signal purity, which significantly impacts the measurement of the CP property [18]. Thus, we use the VBF channel for the Higgs production, which possesses a particular topology. Only hadronic decay modes of the two taus from the Higgs decay are used, since the leptonic decay modes contain two missing neutrinos: not only does the neutrino pair mass introduce a new unknown parameter that is hard to reconstruct, but it also dilutes the CP sensitivity by requiring the relative degrees of freedom between the two neutrinos to be integrated out. The hadronic modes, on the other hand, do not have this extra parameter, and also have the best statistics among all the modes [19]. The main backgrounds for this signal come from Z production associated with additional jets. The processes we consider are listed in the following: • Signal: p p → h j j, h → τ τ.
• Background: p p → Z + jets, Z → τ τ (QCD and EW production). • Tau Decay Modes Used: τ → πν, ρν, a₁ν; all six combinations of these tau decay modes are used in our simulation. The other backgrounds, dominated by QCD fakes, are also important [19]. However, the QCD fake background is beyond the scope of this work. It is usually estimated from real data by reversing the tau charge or identification requirements. We simply assume the same cross section after all selection cuts as for the QCD Z background, which in our opinion is a conservative choice. The VBF h → ττ signal is generated with Powheg [20] at NLO accuracy in QCD with the PDF set NNPDF30NLO [21], and interfaced to Pythia8 [22] for resonance decays, parton shower and hadronization. The QCD and EW Z+jets backgrounds, with Z → ττ, are generated at LO with MadGraph5 [23] and the PDF set NNPDF23LO [24], with up to three extra partons. Samples with different parton multiplicities are merged according to the CKKW-L method [25] and showered by Pythia8. A k-factor of 1.23 is applied to the QCD Z+jets cross section to match the NNLO prediction [26]. The spin correlation between the two taus is retained in the tau decays by Pythia. The events are then passed through DELPHES [27], simulating the detector response of ATLAS at the LHC.
The jets are formed from the clustered energy depositions of particles in the calorimeter using the anti-$k_t$ algorithm [28] with a cone parameter of 0.4. The hadronic tau tagging is performed on these jets with an efficiency of 70% (60%) and a fake rate of 2% (1%) for 1-prong (3-prong) real and fake tau objects, respectively. To measure the CP with h → ττ, it is essential to identify the different tau decay modes efficiently. The development of tau substructure algorithms in the ATLAS and CMS experiments has recently made this possible using a particle flow method [29,30]. In this work, it is assumed that the different tau decay modes can be efficiently classified with no crosstalk, and that the neutral pion energy can be resolved with a 15% uncertainty. The impact parameters of the tracks are used to better constrain the neutrino momenta from the tau decays, as in [5]. A simple resolution of the form $a \oplus b/p_T$ ($p_T$ in GeV) is applied to the impact parameters, where a = 8.5 (13.5) µm and b = 110 (200) µm for $d_0$ ($z_0$), based on [31]. Although at the HL-LHC the tracking range is expected to be extended to |η| < 4.0, in this work the tracking is kept within |η| < 2.5, consistent with the current ATLAS detector. The χ² for a single track from the tau decay is given by Eq. 7 or 8 of [5]. For the 3-prong tau decay, it is assumed that the decay proceeds through the $a_1$ channel [32], and the combined $\chi^2_{a_1} = \sum_i \chi^2_i$, where i is the track index. The tau flight direction is obtained by minimizing $\chi^2_{a_1}$. Fig. 1 shows the difference in η and φ between the fitted (by minimizing $\chi^2_{a_1}$) and true values for taus which decay via $a_1\nu$. The momenta of the neutrinos from the hadronic tau decays can be obtained by minimizing a χ² of the form

$$\chi^2 = \frac{(m_{\tau\tau}^{\rm fit} - m_h)^2}{\sigma_h^2} + \sum_{i=1,2}\frac{(m_{\tau_i}^{\rm fit} - m_\tau)^2}{\sigma_\tau^2} + \sum_{j=x,y}\frac{(\slashed{E}_j - p_{j,\nu_1+\nu_2}^{\rm fit})^2}{\sigma_{\rm mis}^2} + \chi^2_{\rm IP}, \qquad (2)$$

where $m_h$ = 125 GeV, $m_\tau$ = 1.777 GeV, $\sigma_h$ = 10 GeV, $\sigma_\tau$ = 0.1 (0.2) GeV for taus decaying to $a_1\nu$ or $\pi\nu$ ($\rho\nu$), $\slashed{E}_{x,y}$ is the missing transverse energy, and $\sigma_{\rm mis} = 0.67\sqrt{\Sigma E_T}$ ($\Sigma E_T$ in GeV) is its resolution. The $\chi^2_{\rm IP}$ term is the impact parameter contribution. For 1-prong taus, it is simply the sum of the contributions from each tau, whereas for 3-prong taus it is $(\eta^{\rm fit}_\tau - \eta_\tau)^2/0.007^2 + (\phi^{\rm fit}_\tau - \phi_\tau)^2/0.007^2$, where $\eta_\tau$ and $\phi_\tau$ are the tau direction obtained by minimizing $\chi^2_{a_1}$ in the previous step. For the states with an intermediate ρ meson, extra terms $(m_\rho - 0.775)^2/0.2^2 + (f_{\pi^0} - 1)^2/0.15^2$ are added to Eq. 2, where $f_{\pi^0}$ is the energy scale factor multiplied to the $\pi^0$.
In the per-event minimization of Eq. 2, the η and φ of one neutrino are first scanned over; from these, the magnitudes of the neutrinos' momenta and the direction of the other neutrino can be obtained via the tau mass and $\slashed{E}_{x,y}$ constraints in Eq. 2. The scan is then repeated starting from the parameters of the other neutrino. Finally, a fit using MINUIT [33] is performed around the minimal point found by the scans for a better estimate. The ∆R and momentum differences between the fitted and true neutrinos in the $a_1 + \pi$ channel are shown in Fig. 2.
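A heavily simplified sketch of this scan-then-refine strategy is given below; chi2 stands in for the full per-event objective of Eq. (2), with the constraint solving hidden inside it, and scipy's Nelder-Mead replaces MINUIT purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def fit_neutrino(chi2, eta_range=(-2.5, 2.5), n_eta=50, n_phi=64):
    """chi2: callable taking (eta, phi) of one neutrino and returning Eq. (2),
    with the remaining momenta solved internally from the constraints."""
    best_val, best_x = np.inf, None
    for eta in np.linspace(*eta_range, n_eta):          # coarse grid scan
        for phi in np.linspace(-np.pi, np.pi, n_phi):
            val = chi2((eta, phi))
            if val < best_val:
                best_val, best_x = val, (eta, phi)
    res = minimize(chi2, best_x, method="Nelder-Mead")  # local refinement
    return res.x, res.fun
```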
With the Higgs mass constraint term in Eq. 2, the background ditau mass is also biased towards the nominal Higgs mass of 125 GeV. Thus, in a first step, the fit is done without the mass constraint term in Eq. 2. The distribution of the unconstrained $m_{\tau\tau}$ is shown in Fig. 3. In a second step, the mass constraint term is put back into Eq. 2, and the fit is done to extract the CP information.
In order to measure the CP effects, which are retrieved from a differential distribution, we also need to improve S/B as much as possible. For this purpose, the following cuts are used to select reconstructed events. Tau cuts: the tau candidate should have one or three tracks with unit total charge. The leading track has $p_T$ > 5 GeV; for 3-prong taus, $p_T$ > 2 GeV is required for the other tracks. The two taus must have opposite charge and be within |η| < 2.5. To account for the trigger, $p_T$ > 40, 30 GeV is required for the two taus. They should also have |∆φ| < 2.9 to avoid the back-to-back topology.
VBF cuts: $p_T^{j1}$ > 50 GeV, $p_T^{j2}$ > 40 GeV, $|\Delta\eta_{jj}|$ > 3.8, $m_{jj}$ > 500 GeV, $\eta_{j1} \times \eta_{j2}$ < 0.
Tau centrality: min{$\eta_{j1}$, $\eta_{j2}$} < $\eta_{\tau_{1,2}}$ < max{$\eta_{j1}$, $\eta_{j2}$}.
Higgs mass: 115 GeV < $m_{\tau\tau}$ < 150 GeV.
Missing energy: $\slashed{E}_{\rm proj} - p^{\rm fit}_{T,\nu_1+\nu_2}$ > −6 GeV.
Here j1 and j2 are the leading and subleading jets, $m_{\tau\tau}$ is the unconstrained mass, and $\slashed{E}_{\rm proj}$ is the projection of $\slashed{E}_T$ onto the transverse direction of the vectorial sum of the two neutrinos' fitted momenta, $p^{\rm fit}_{T,\nu_1+\nu_2}$. This variable is useful because for Z → ττ events the fitted neutrino momenta are stretched to comply with the Higgs mass constraint, resulting in a larger $p^{\rm fit}_{T,\nu_1+\nu_2}$ than $\slashed{E}_{\rm proj}$.
The distribution of φ ME for signal will be shifted according to different value of CP mixing angle φ, which can be seen from Fig. 5, and the backgrounds have flat distributions (subject to statistical fluctuation). Note that as we have used all final states information (especially the neutrino momentum) incorporated into the matrix element to reconstruct an "angle" and retrieve the CP mixing information, compared with usual construction methods (one example is that used in [4] for Higgs CP study in VBF production channel and will be called as φ 4π ), this can achieve higher sensitivity if the neutrino can be reconstructed with high precision. This can be seen from Fig. 6(a) for ρ + ρ mode using φ 4π as an example. In this comparison, the truth-level momenta are used to reconstruct all variables for simplicity, and we stress here that the neutrino reconstruction precision will influence the final sensitivity.
After folding in the detector efficiency and resolution effects, and imposing the selection cuts described previously, all the decay channels listed above are used to estimate the sensitivity. The result is presented in Fig. 6(b) for 300 fb⁻¹ (solid line) and 3 ab⁻¹ (dashed line) luminosity. The 1σ precision at 300 fb⁻¹ can reach 27° (0.47 rad) and can be pushed down to 6.9° (0.12 rad) at 3 ab⁻¹.
In conclusion, a new method has been described in this paper to measure the CP-violation effect in the hττ interaction, based on neutrino momentum reconstruction and the matrix element. All major hadronic tau decay modes are included simultaneously. With a detailed detector simulation, it is predicted that at the 13 TeV LHC with 300 fb⁻¹ (3 ab⁻¹) of integrated luminosity, a precision of up to 27° (6.9°) can be achieved for the CP mixing angle (φ) measurement, significantly improving on previous predictions that used only particular tau decay modes or partial event reconstructions. Discussion: (1) It is expected that if an MVA method is used [35], the signal purity can be further improved; this is not attempted in this work. (2) Although the gluon-gluon fusion Higgs signal is not included in our calculation, its relative contribution, of the order of 25% in the VBF signal region, can further improve the result to some extent.
The authors would like to thank Tao Han for helpful discussions. XC is thankful for the support from the National Thousand Young Talents program and the NSFC of China.

Figure 6: The ∆NLL as a function of the CP mixing angle φ for 300 fb⁻¹ (solid line) and 3000 fb⁻¹ (dashed line). In panel (a) only the ρ + ρ mode is used and the LL is calculated from the expected zero-CP events in the $\phi_{\rm ME}$ distribution (black line) and in the $\phi_{4\pi}$ distribution (red line), both constructed from truth-level information, while in panel (b) all tau decay channels are used and the LL is calculated from the expected zero-CP events in the $\phi_{\rm ME}$ distribution constructed from reconstructed information. | 3,805.2 | 2017-08-09T00:00:00.000 | [
"Physics"
] |
Neither a Nitric Oxide Donor Nor Potassium Channel Blockage Inhibit RBC Mechanical Damage Induced by a Roller Pump
Red blood cells (RBC) are exposed to various levels of shear stress in artificial flow environments, such as extracorporeal flow circuits and hemodialysis equipment. This mechanical trauma affects RBC, and the resulting effect is determined by the magnitude of the shear forces and the exposure time. It has previously been demonstrated that nitric oxide (NO) donors and potassium channel blockers can prevent sub-hemolytic damage to RBC exposed to 120 Pa shear stress in a Couette shearing system. This study aimed at testing the effectiveness of the NO donor sodium nitroprusside (SNP, 10⁻⁴ M) and the non-specific potassium channel blocker tetraethylammonium (TEA, 10⁻⁷ M) in preventing mechanical damage to RBC in a simple flow system comprising a roller pump and a glass capillary of 0.12 cm diameter. RBC suspensions were pumped through the capillary by the roller pump at a flow rate maintaining 200 mmHg hydrostatic pressure at the entrance of the capillary. An aliquot of 10 ml of RBC suspension at 0.4 l/l hematocrit was re-circulated through the capillary for 30 minutes. Plasma hemoglobin concentrations were found to be significantly increased (~7-fold compared to a control aliquot that was not pumped through the system), and neither SNP nor TEA prevented this hemolysis. In contrast, RBC deformability assessed by laser diffraction ektacytometry was not altered after 30 min of pumping, and both SNP and TEA had no effect on this parameter. The results of this study indicate that, in contrast with the findings in RBC exposed to a well-defined magnitude of shear stress in a Couette shearing system, the mechanical damage induced by a roller pump could not be prevented by an NO donor or a potassium channel blocker.
INTRODUCTION
Cellular components of blood can be exposed to high shear stresses when subjected to flow in artificial environments such as hemodialysis equipment, cardiopulmonary bypass circuits and artificial organs [1,2], and mechanical trauma may result from the stress levels found in at least some areas of these artificial flow environments [1][2][3][4]. Red blood cells (RBC) are affected by this mechanical trauma, with the damage ranging from slight changes in ion transport through the cell membrane to total destruction of the RBC (i.e., hemolysis). In general, shear stresses higher than 300 Pa result in hemolysis [5], while lower levels of shear stress may induce structural and functional alterations in RBC, including mechanical impairment [1,6,7].
The improved design of medical equipment utilizing extracorporeal flow circuits has helped to limit mechanical trauma to blood and hence has significantly reduced the extent of hemolysis [8]. However, even slight degrees of hemolysis may result in clinical problems, such as adverse effects due to the nitric oxide (NO) scavenging effect of free hemoglobin [9]. Arginase released from hemolyzed RBC may also contribute to disturbed NO metabolism [10]. Decreased NO availability not only influences vascular tone, but may also affect RBC rheological properties [11] and platelet agglutination and coagulation mechanisms [12]. Therefore, any measures that reduce mechanical damage to blood would be expected to contribute to the development of improved medical procedures and devices that involve artificial flow environments.
It has previously been demonstrated that sub-hemolytic mechanical damage to RBC could be prevented, in part, by including NO donors or a non-specific potassium channel blocker in RBC suspensions exposed to 120 Pa shear stress [6]. The present study was designed to extend those observations by testing the effectiveness of such agents in preventing mechanical damage in an artificial flow circuit that includes a roller pump. Sodium nitroprusside (SNP), an NO donor, and the non-specific potassium channel blocker tetraethylammonium (TEA) were used at the concentrations previously demonstrated to be most effective for preventing sub-hemolytic mechanical damage [6].
Blood Samples and Preparation
Venous blood samples were obtained from 10 healthy human volunteers of both sexes, aged between 28 and 54 years, and anticoagulated with ethylenediaminetetraacetic acid (EDTA; 1.5 mg/ml). The hematocrit of each sample was measured using the microhematocrit method (12,000 g, 5 min) and adjusted to 0.4 l/l by adding or removing an appropriate volume of autologous plasma; the samples were centrifuged at 900 g for 5 minutes at room temperature (20 ± 2 °C) if plasma removal was necessary. The viscosity of the RBC suspensions was 3.9 ± 0.4 cP, measured at a shear rate of 750 s⁻¹ using a Wells-Brookfield cone-plate viscometer (DV II + Pro, Brookfield Engineering Labs, Middleboro, MA, USA). Each sample was then divided into four aliquots of 10 ml and treated as detailed below.
Flow System
The flow system (Fig. 1) included a roller pump (Masterflex Model 7521-01, Cole Parmer Instrument Co., Vernon Hills, IL, USA) and a glass capillary tube (diameter = 0.12 cm, length = 33 cm), with the roller pump connected to the capillary entrance through an in-line pressure transducer. The blood sample in the reservoir was recirculated through the capillary by the roller pump at the flow rate required to generate 200 mmHg pressure at the entrance of the capillary. The reservoir was covered with a plastic cap to prevent evaporation, and no drying of blood was observed during the experimental period. The temperature of the sample and capillary was maintained at 37 °C by immersing the reservoir and capillary in a water bath. The wall shear stress τ in the capillary was calculated to be 24.2 Pa using the standard equation [13]

τ = ∆P · d / (4L),

where ∆P is the perfusion pressure (200 mmHg, or 2.67 × 10⁴ Pa), d is the diameter of the capillary (0.12 cm) and L is the length of the capillary (33 cm).
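As a quick cross-check of the quoted value (a verification sketch, not part of the original analysis), plugging the stated numbers into the equation reproduces the 24.2 Pa figure:

```python
dP = 200 * 133.322      # 200 mmHg converted to Pa (~2.67e4 Pa)
d = 0.12e-2             # capillary diameter, m
L = 33e-2               # capillary length, m
tau = dP * d / (4 * L)  # Poiseuille wall shear stress
print(f"{tau:.1f} Pa")  # -> 24.2 Pa, as reported
```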
Experimental Protocol
The four aliquots of 0.4 l/l hematocrit blood from each donor were treated as follows:
Control
This aliquot was kept at room temperature (20 ± 2 °C) for one hour, then transferred to the reservoir of the flow system described above, and the pump was run for 30 minutes at 37 °C. The aliquot was removed from the system at the end of this period and used to determine plasma hemoglobin and RBC deformability;
SNP
Sodium nitroprusside (SNP, Sigma Chemical Co., item S0501) dissolved in phosphate buffered saline (PBS; pH 7.4) at a concentration of 10⁻¹ M was added to this aliquot at a final concentration of 10⁻⁴ M, and the aliquot was then held at room temperature for one hour. The aliquot was run in the system for 30 minutes under conditions identical to Control, then assayed for hemolysis and RBC deformability;

TEA

Tetraethylammonium (TEA, Sigma T2265) dissolved in PBS at a concentration of 10⁻⁴ M was added to obtain a final concentration of 10⁻⁷ M, then treated exactly like the SNP sample;
Control -No Flow
This aliquot was held for 60 minutes at room temperature and 30 minutes at 37°C, but was not subjected to flow as done for the Control sample.
Determination of Plasma Hemoglobin
Aliquots of blood were centrifuged at 900 g for five minutes at room temperature (20 ± 2 °C), and the plasma was harvested. Two hundred µl of plasma was mixed with 800 µl of Drabkin's solution (1.13 mM KH₂PO₄, 0.6 mM K₃[Fe(CN)₆], 0.8 mM KCN). Absorbance was measured at 540 nm and the hemoglobin concentration was calculated using a calibration curve.
Assessment of RBC Deformability
RBC deformability was determined at various fluid shear stresses by laser diffraction analysis using an ektacytometer (LORCA, RR Mechatronics, Hoorn, The Netherlands). The system has been described elsewhere in detail [14]. Briefly, a low hematocrit suspension of RBC in an isotonic viscous medium (4% polyvinylpyrrolidone 360 solution, MW= 360 kDa) is sheared in a Couette system composed of a glass cup and a precisely fitting bob, with a gap of 0.36 mm between the cylinders. A laser beam is directed through the sheared sample and the diffraction pattern produced by the deformed cells is analyzed by a desktop personal computer, which also controls the stepper motor that generates the pre-determined shear stresses. All measurements were done at 37 °C. Based upon the geometry of the elliptical diffraction pattern, an elongation index (EI) is calculated as: EI = (L-W)/(L+W), where L and W are the length and width of the diffraction pattern.
EI values, determined at nine shear stresses between 0.3 and 50 Pa, were used to calculate the shear stress required for half-maximal deformation (SS₁/₂) by applying a Lineweaver-Burk analysis procedure [15]. Impaired RBC deformability leads to increased SS₁/₂ values; SS₁/₂ values are used herein since presenting and comparing data in this way is more convenient than merely displaying shear stress–EI curves.
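The Lineweaver-Burk step can be sketched as a simple double-reciprocal linear fit, assuming the underlying hyperbolic model EI(SS) = EI_max · SS/(SS₁/₂ + SS); the EI values below are illustrative, not measured data.

```python
import numpy as np

def ss_half(shear_stresses, ei_values):
    """SS_1/2 from a Lineweaver-Burk fit: 1/EI is linear in 1/SS with
    slope SS_1/2 / EI_max and intercept 1 / EI_max, so SS_1/2 = slope/intercept."""
    slope, intercept = np.polyfit(1.0 / np.asarray(shear_stresses),
                                  1.0 / np.asarray(ei_values), 1)
    return slope / intercept

print(ss_half([0.3, 0.6, 1.2, 3, 6, 12, 25, 38, 50],
              [0.05, 0.09, 0.16, 0.29, 0.39, 0.48, 0.53, 0.55, 0.56]))  # ~3 Pa
```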
Statistics
Values are expressed as mean ± standard error. One-way ANOVA followed by Dunnett's post-test was used for comparisons between aliquots, using GraphPad Prism 4.0 software.
RESULTS
Plasma hemoglobin values for the four aliquots are presented in Fig. (2). The level in the Control-No Flow aliquot was 53.7 ± 3.6 mg/dl and increased to 364.4 ± 30.8 mg/dl in the aliquots pumped for 30 minutes (p<0.001). Inclusion of SNP or TEA at the concentrations given in the Materials and Methods section did not prevent this increment in plasma hemoglobin after 30 minutes of pumping. No statistically significant differences were found between the SNP or TEA and Control aliquots.
DISCUSSION
The present results demonstrate that the flow system used in this study induces significant RBC hemolysis, with a nearly seven-fold increase of the plasma hemoglobin level after 30 minutes of pumping. Note that preliminary experiments indicated that the level of hemolysis during pumping increased linearly with time (data not shown), with no meaningful differences from the relations shown in Figs. (2 and 3); a 30 minute pumping period was selected for experimental efficiency. It should also be noted that the calculated wall shear stress used herein was 24.2 Pa (i.e., with a 200 mmHg pressure gradient across the capillary) and thus markedly below the previously reported hemolytic threshold of 300 Pa [5].
The effects of RBC mechanical damage, as indicated by the significantly increased plasma hemoglobin concentration, could not be prevented by the NO donor SNP or the potassium channel blocker TEA. NO has previously been shown to play a significant role in the maintenance of normal RBC mechanical properties [11,16], and it has been proposed that this effect may be partly mediated by the potassium permeability of the RBC membrane [11]. Further, both NO donors and non-specific potassium channel blockage were previously found to be effective in preventing the reduction of RBC deformability following exposure to sub-hemolytic shear stress [6]. However, several experimental conditions differed between that study [6] and the current work: 1) In the prior study RBC were exposed to 120 Pa [6], a fivefold higher level than used herein; 2) RBC were constantly subjected to shear for 120 sec in a Couette geometry system [6], whereas here RBC were only exposed to the calculated shear stress intermittently while passing through the capillary; 3) The flow rate in the capillary tube can be calculated to be about 1 ml/sec using the Poiseuille equation. Therefore, the exposure time of RBC to the calculated 24.2 Pa stress during each passage through the capillary is estimated to be about 0.35 sec, giving a total exposure time during the 30 minutes of pumping of about 67 sec, and hence about 50% less than the 120 sec of constant shear in the Couette geometry.
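The exposure-time estimate follows from simple geometry; a back-of-envelope check (a verification sketch, not the authors' calculation):

```python
import math

r = 0.06                        # capillary radius, cm
L = 33.0                        # capillary length, cm
Q = 1.0                         # flow rate, ml/s (Poiseuille estimate from the text)
v_cap = math.pi * r**2 * L      # capillary volume, ~0.37 ml
t_pass = v_cap / Q              # ~0.37 s per transit (~0.35 s quoted)
passes = Q * 30 * 60 / 10.0     # 30 min of recirculating a 10 ml aliquot -> 180 passes
print(t_pass, passes * t_pass)  # total exposure ~67 s, as stated above
```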
Although the abovementioned experimental differences make exact comparisons with other work difficult, our use of a mechanical roller pump to generate flow represents a major departure from the prior experimental protocol [6]. It is not possible to determine exactly the magnitude of the mechanical forces due to the roller pump, but it can readily be shown that the roller pump itself was the main source of hemolytic mechanical trauma in the present flow system. Fig. (4) demonstrates the hemolytic effects of the flow system with and without the capillary: when operated with the capillary, the inlet pressure was 200 mmHg, whereas with the pump operating at the same flow rate without the capillary, the perfusion pressure was only a few mmHg. As can be seen in Fig. (4), the degree of hemolysis was only about 30% higher when the capillary was present in the flow circuit. Therefore, the majority of the mechanical damage in the current flow system is due to events occurring in or near the roller pump rather than to simple shear in the capillary, suggesting the possibility of different mechanisms of cell damage between the current study and brief steady shear in a Couette system [6].

Fig. (4). Plasma hemoglobin concentration in 0.4 l/l hematocrit blood following 30 minutes of flow with ("Pump+Capillary") and without ("Pump only") the capillary inserted into the flow system; the Control-No Flow value represents no flow through the system. The flow rate was the same for both "Pump only" and "Pump+Capillary" experiments, but the pressure was close to zero if the capillary was absent and 200 mmHg with the capillary inserted. Values are mean ± standard error. There was no significant difference between the "Pump only" and "Pump+Capillary" experiments.
Given the very large increase of plasma hemoglobin levels following 30 minutes of pumping (Fig. 2), it is interesting to speculate as to why this level of hemolytic damage did not affect RBC deformability (Fig. 3), whereas sub-hemolytic stress levels do increase cell rigidity [6]. One possible explanation relates to the type of mechanical damage in the two systems: 1) cell damage without hemolysis leaves cells intact and thus detectable in the LORCA laser diffraction system [14]; 2) mechanical damage resulting in hemolysis and cell fragmentation would not be detected, since only intact RBC are sensed by the LORCA. That is, decreased deformability would be observed for mechanically impaired but intact RBC, whereas the measured deformability of normal, intact cells would not be affected by the presence of small cell fragments in the suspension. Note that this explanation "begs the question" regarding the mechanical behavior of the non-hemolyzed RBC in the present study: by definition, these cells were exposed to sub-hemolytic stress levels, yet unlike in our prior findings [6], they have unaltered deformability (Fig. 3). Possible answers include partial hemolysis rather than an all-or-none mode of hemoglobin loss, or that only a small, non-detectable sub-population of intact RBC had reduced deformability. Of course, differences in the magnitude, detailed nature and duration of the applied mechanical forces must also play a role.
In overview, our results indicate that: 1) neither the NO donor SNP nor the potassium channel blocker TEA is effective for preventing hemolytic mechanical trauma in a flow system that includes a roller pump; 2) the damage to RBC is determined by the stress magnitude, exposure time and flow details of the system, so that knowledge of the stress levels in all parts of a flow system must be considered; 3) additional studies are needed in order to fully define the mechanical factors that induce hemolysis and/or cell rigidity. | 3,529.4 | 2008-04-01T00:00:00.000 | [
"Engineering",
"Medicine",
"Biology"
] |
Enhancing the selection of a model-based clustering with external categorical variables
In cluster analysis, it can be useful to interpret the partition built from the data in the light of external categorical variables which are not directly involved in clustering the data. An approach is proposed in the model-based clustering context to select a number of clusters which both fits the data well and takes advantage of the potential illustrative ability of the external variables. This approach makes use of the integrated joint likelihood of the data and the partitions at hand, namely the model-based partition and the partitions associated with the external variables. It is noteworthy that each mixture model is fitted to the data by the maximum likelihood methodology, excluding the external variables, which are used only to select a relevant mixture model. Numerical experiments illustrate the promising behaviour of the derived criterion.
Introduction
In model selection, assuming that the data arose from one of the models in competition is often somewhat unrealistic and could be misleading. However this assumption is implicitly made when using standard model selection criteria such as AIC or BIC. This "true model" assumption could lead to overestimating the model complexity in practical situations. On the other hand, a common feature of standard penalized likelihood criteria such as AIC and BIC is that they do not take into account the modelling purpose. Our opinion is that it is worthwhile taking it into account to select a model, which leads to more flexible criteria favoring useful and parsimonious models. This point of view could be exploited in many statistical learning situations.
Whereas cluster analysis is an exploratory data analysis tool, any information on the objects to be clustered that is available in addition to the clustering variables could be very useful for getting a meaningful interpretation of the clusters. Here we address the case where this additional information is provided by external categorical illustrative variables. The purpose of this paper is to introduce a model selection criterion in the model-based clustering context that takes advantage of these illustrative variables. This criterion aims to select a classification of the data which achieves a good compromise: it is expected to provide a parsimonious and sensible clustering with a relevant interpretation with respect to the illustrative categorical variables. It is important to stress that we do not want the external variables to affect the classifications derived from the clustering variables: they are merely used to highlight some of them.
The paper is organised as follows. In Sect. 2, the framework of model-based clustering is described. Our new penalised likelihood criterion is presented in Sect. 3. Numerical experiments on simulated and real data sets are presented in Sect. 4 to illustrate the behavior of this criterion and highlight its possible interest. A short discussion section concludes the paper.
Model-based clustering
Model-based clustering consists of modelling the data to be classified by a mixture distribution and associating a class with each of the mixture components. Embedding cluster analysis in this precise framework is useful in many respects. In particular, it allows the number K of clusters (i.e. the number of mixture components) to be chosen rigorously.
Finite mixture models
Please refer to McLachlan and Peel (2000) for a comprehensive introduction to finite mixture models.
The data to be clustered, y = (y_1, ..., y_n) with y_i ∈ R^d, are modelled as observations of iid random variables with a mixture distribution

$$f(y_i \mid \theta) = \sum_{k=1}^{K} p_k\, \phi(y_i; a_k), \qquad (1)$$

where the p_k's are the mixing proportions, φ(·; a_k) denotes the component probability density function (typically the d-dimensional Gaussian density) with parameter a_k, and θ = (p_1, ..., p_{K−1}, a_1, ..., a_K). A mixture model can be regarded as a latent structure model involving unknown label data z = (z_1, ..., z_n), which are binary vectors with z_ik = 1 if and only if y_i arises from component k. These indicator vectors define a partition C = (C_1, ..., C_K) of the data y with C_k = {i : z_ik = 1}. However, these indicator vectors are not observed in a clustering problem: the model is usually fitted through the maximization of the likelihood of model (1), and an estimated partition is deduced from it by the MAP rule recalled below in (2). The parameter estimator, denoted from now on by θ̂, is generally derived from the EM algorithm (Dempster et al. 1977; McLachlan and Krishnan 2008).
There are usually several models to choose among (typically, when the number of components is unknown). Note that a mixture model $m$ is characterized not only by the number of components $K$, but also by assumptions on the proportions and the component variance matrices (see for instance Celeux and Govaert 1995). The corresponding parameter space is denoted by $\Theta_m$. From a density estimation perspective, a classical way of choosing a mixture model is to select the model maximising the integrated likelihood
$$f(y; m) = \int f(y; m, \theta_m)\,\pi(\theta_m)\,d\theta_m,$$
$\pi(\theta_m)$ being a weakly informative prior distribution on $\theta_m$. For $n$ large enough, it can be approximated with the BIC criterion (Schwarz 1978)
$$\mathrm{BIC}(m) = \log f(y; m, \hat\theta_m) - \frac{\nu_m}{2}\log n,$$
with $\nu_m$ the number of free parameters in the mixture model $m$. Numerical experiments (see for instance Roeder and Wasserman 1997) and theoretical results (see Keribin 2000) show that BIC works well at selecting the true number of components when the data actually arise from one of the mixture models in competition.
Choosing K from the clustering view point
In the model-based clustering context, an alternative to the BIC criterion is the ICL criterion (Biernacki et al. 2000), which aims at maximising the integrated likelihood of the complete data $(y, z)$,
$$f(y, z; m) = \int f(y, z; m, \theta_m)\,\pi(\theta_m)\,d\theta_m.$$
Indeed, the latter can be approximated with a BIC-like approximation:
$$\log f(y, z; m) \approx \log f(y, z; m, \hat\theta^*_m) - \frac{\nu_m}{2}\log n, \qquad \hat\theta^*_m = \arg\max_{\theta_m} f(y, z; m, \theta_m).$$
But $z$ and $\hat\theta^*_m$ are unknown. Arguing that $\hat\theta_m \approx \hat\theta^*_m$ if the mixture components are well separated for $n$ large enough, Biernacki et al. (2000) replace $\hat\theta^*_m$ by $\hat\theta_m$ and the missing data $z$ with $\hat z = \mathrm{MAP}(\hat\theta_m)$ defined by
$$\hat z_{ik} = \begin{cases} 1 & \text{if } k = \arg\max_{l}\,\tau_{il}(\hat\theta_m), \\ 0 & \text{otherwise}, \end{cases} \qquad (2)$$
where $\tau_{ik}(\hat\theta_m)$ denotes the conditional probability of the $k$th mixture component given $y_i$:
$$\tau_{ik}(\hat\theta_m) = \frac{\hat p_k\,\phi(y_i; \hat a_k)}{\sum_{l=1}^{K} \hat p_l\,\phi(y_i; \hat a_l)}. \qquad (3)$$
The ICL criterion (McLachlan and Peel 2000) is then obtained by replacing the estimated labels $\hat z_{ik}$ by their respective conditional expectations $\tau_{ik}(\hat\theta_m)$. If we denote by $\mathrm{Ent}(m)$ the estimated mean entropy
$$\mathrm{Ent}(m) = -\sum_{i=1}^{n}\sum_{k=1}^{K} \tau_{ik}(\hat\theta_m)\log\tau_{ik}(\hat\theta_m),$$
we get that ICL is the criterion BIC decreased by $\mathrm{Ent}(m)$:
$$\mathrm{ICL}(m) = \mathrm{BIC}(m) - \mathrm{Ent}(m). \qquad (4)$$
Because of this additional entropy term, ICL favours models which lead to partitioning the data with the greatest evidence. The derivation and approximations leading to ICL are questioned in Baudry (2009, Chapter 4) and in Baudry (2012). In practice, ICL appears to provide a stable and reliable estimate of the number of mixture components, from the clustering viewpoint, for real data sets as well as for simulated ones. ICL, which does not aim at discovering the true number of mixture components, can underestimate the number of components for simulated data arising from mixtures with poorly separated components (Biernacki et al. 2000). It focuses on selecting a relevant number of clusters, for which a clustering with low uncertainty can be provided.
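To make the computation concrete, here is a minimal sketch assuming a scikit-learn GaussianMixture fitted as above; icl_score is our hypothetical helper, which calls the private _n_parameters method to count free parameters and recomputes BIC in the paper's higher-is-better convention rather than scikit-learn's:

```python
import numpy as np

def icl_score(gmm, y):
    """ICL(m) = BIC(m) - Ent(m), using the paper's 'higher is better' BIC convention."""
    n = y.shape[0]
    log_lik = gmm.score(y) * n                  # maximised log-likelihood
    nu = gmm._n_parameters()                    # free parameters nu_m (private sklearn helper)
    bic = log_lik - 0.5 * nu * np.log(n)        # BIC(m) as defined above
    tau = gmm.predict_proba(y)                  # tau_ik(theta_hat)
    ent = -np.sum(tau * np.log(np.clip(tau, 1e-300, None)))   # Ent(m)
    return bic - ent

# Model selection: fit mixtures for K = 1, ..., K_max and keep the K maximising icl_score.
```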
Selecting a clustering with the support of illustrative variables
Now suppose that, besides y, known classifications u 1 , . . . , u r are available (e.g. associated with extra illustrative categorical variables with U 1 , . . . , U r levels respectively).
Let us introduce an example of such a situation, which is fully presented and analysed in Sect. 4.3. The Wholesale data set (Bache and Lichman 2013) refers to customers of a wholesale distributor in Portugal. The quantitative data y consist of the annual spending of each customer on different product categories. In addition, each customer's Channel (either Retail or Horeca: Hotel/Restaurant/Café) and Region (Lisbon, Oporto or Other) are available and are regarded as the illustrative categorical variables. The objective of the study is to segment the customers and define profiles based on their spending, and it is of interest to be able to relate these profiles to the Channel and the Region.
A possible approach is to insert the external variables into the model; see for instance Hennig and Liao (2013), who make use of a latent class mixture model assuming conditional independence between variables. An important problem when dealing with continuous and categorical data in the same exercise lies in balancing both types of variables: which weights should the categorical and the quantitative variables receive? This question is not easy to answer since these variables do not shed the same light on the objects to be analysed; they carry information of a different nature and are difficult to compare. It may be more beneficial and safer to consider the categorical variables as illustrative variables, especially when the quantitative variables are more numerous than the categorical ones. They can then be used to assess clusterings derived from the continuous variables (see for instance Gordon 1999, p. 185). The Supported Integrated Completed Likelihood (SICL) criterion presented below aims at helping the user in this perspective.
For the sake of simplicity, let us first consider the situation where there is only one illustrative variable. The general aim is to build a classification z based on y in a situation where relating the classifications z and u could be of interest to get a suggestive and simple interpretation of z. Therefore, we propose to build the classification z in each model based on y only, but to involve u in the model selection step, particularly for the choice of the number of components. Hopefully u may highlight some solutions among which y alone would not enable one to decide clearly. This may help to select a model providing a good compromise between the fit of the mixture model to the data and its ability to lead to a classification of the observations well related to the external classification u. To derive our heuristics, y and u are supposed to be conditionally independent given z, which means that all the relevant information shared by u and y is caught by z. This is for example true in the particular case where u can be written as a function of z: u is then a reduction of the information included in z, and we hope to be able to retrieve more information about u using the (conditionally independent) information brought by y.
Here is our heuristics. It is based on the intent to find the mixture model maximizing the integrated completed likelihood
$$f(y, u, z; m) = \int f(y, u, z; m, \theta_m)\,\pi(\theta_m)\,d\theta_m.$$
Assuming that $y$ and $u$ are conditionally independent given $z$, which should hold at least for models with enough components, we can write, for any $\theta_m \in \Theta_m$,
$$f(y, u, z; m, \theta_m) = f(y, z; m, \theta_m)\,f(u \,|\, z). \qquad (5)$$
Let $n_{k\ell} = \mathrm{card}\{i : z_{ik} = 1 \text{ and } u_i = \ell\}$ denote the contingency table relating $u$ and $z$, and let $n_{k\cdot} = \sum_{\ell=1}^{U} n_{k\ell}$. The maximised term $\log f(u \,|\, z)$ is then
$$\log f(u \,|\, z) = \sum_{\ell=1}^{U}\sum_{k=1}^{K} n_{k\ell}\log\frac{n_{k\ell}}{n_{k\cdot}}. \qquad (6)$$
Now $\log \int f(y, z; m, \theta_m)\,\pi(\theta_m)\,d\theta_m$ can be approximated by ICL as in (4). Thus, from (5) and (6), this leads to the SICL criterion
$$\mathrm{SICL}(m) = \mathrm{ICL}(m) + \sum_{\ell=1}^{U}\sum_{k=1}^{K} n_{k\ell}\log\frac{n_{k\ell}}{n_{k\cdot}}.$$
The additional term $\sum_{\ell=1}^{U}\sum_{k=1}^{K} n_{k\ell}\log\frac{n_{k\ell}}{n_{k\cdot}}$ quantifies the strength of the link between the categorical variable $u$ and $z$. It can be regarded as minus the weighted mean entropy of $u$ given $z$. This might eventually be helpful for the interpretation of the classification $z$.
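In code, the adjustment term reduces to a contingency-table computation. A small sketch with hypothetical inputs: z_hat holds the MAP labels 0..K-1 as computed above and u codes the external variable as 0..U-1; empty cells contribute zero via the 0 log 0 = 0 convention:

```python
import numpy as np

def link_term(z_hat, u):
    """Compute sum_l sum_k n_kl * log(n_kl / n_k.), the SICL adjustment term."""
    K, U = z_hat.max() + 1, u.max() + 1
    n_kl = np.zeros((K, U))
    for k in range(K):
        for l in range(U):
            n_kl[k, l] = np.count_nonzero((z_hat == k) & (u == l))
    n_k = n_kl.sum(axis=1, keepdims=True)          # row margins n_k.
    mask = n_kl > 0                                 # convention 0 log 0 = 0
    ratios = n_kl[mask] / np.broadcast_to(n_k, n_kl.shape)[mask]
    return float(np.sum(n_kl[mask] * np.log(ratios)))

# With the icl_score sketch above: SICL(m) = icl_score(gmm, y) + link_term(z_hat, u)
```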
Taking several external variables into account. The same kind of derivation enables one to derive a criterion that takes several external variables $u_1, \dots, u_r$ into account. Suppose that $y, u_1, \dots, u_r$ are conditionally independent given $z$. Then the factorization in (5) becomes
$$\log f(y, u_1, \dots, u_r, z; m, \hat\theta^*_m) = \log f(y, z; m, \hat\theta^*_m) + \sum_{j=1}^{r}\log f(u_j \,|\, z),$$
with $\hat\theta^*_m = \arg\max_{\theta_m} f(y, u_1, \dots, u_r, z; m, \theta_m)$. As before, we assume that $\hat\theta_m \approx \hat\theta^*_m$ and apply the BIC-like approximation. Finally, and as before, $\log f(u_j \,|\, z)$ is derived from the contingency table $(n^j_{k\ell})$ relating the categorical variable $u_j$ and $z$: for any $k \in \{1, \dots, K\}$ and $\ell \in \{1, \dots, U_j\}$, $U_j$ being the number of levels of the variable $u_j$,
$$n^j_{k\ell} = \mathrm{card}\{i : z_{ik} = 1 \text{ and } u^j_i = \ell\}.$$
Finally, with $n_{k\cdot} = \sum_{\ell=1}^{U_j} n^j_{k\ell}$, which does not depend on $j$, we get the "multiple" external variables criterion:
$$\mathrm{SICL}(m) = \mathrm{ICL}(m) + \sum_{j=1}^{r}\sum_{\ell=1}^{U_j}\sum_{k=1}^{K} n^j_{k\ell}\log\frac{n^j_{k\ell}}{n_{k\cdot}}.$$
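With several external variables, the criterion simply accumulates one such term per variable; reusing the hypothetical link_term helper above:

```python
def sicl_multi(icl_value, z_hat, u_list):
    # "multiple" criterion: ICL(m) plus one link term per external variable u_j
    return icl_value + sum(link_term(z_hat, u_j) for u_j in u_list)
```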
Numerical experiments
All the numerical experiments rely on the RmixmodSICL package, which we make available and which itself relies heavily on the Rmixmod package for R (Lebret et al. 2012).
We first present an illustration of the behaviour of SICL on the Iris and the Crabs data sets. The behaviour of SICL is then analysed in various situations by numerical experiments on simulated data sets. Finally an application on the real data set Wholesale is presented.
Illustrative numerical experiments
This section is not devoted to emphasising the qualities of SICL but rather to indicating typical behaviours of this criterion. In particular, it illustrates that this approach can be regarded as a way to evaluate the agreement between a true classification and the clustering induced by the internal variables.
Iris
The first example is an application to the Iris data set (Fisher 1936) which consists of 150 observations of four measurements (y) for three species of Iris (u). Two of the three first principal components for these data are plotted in Fig
Simulated numerical experiments
All the parameters of the distributions from which the data sets of this section are drawn are detailed in the Appendix.
Random or True Label Experiments
We simulated 200 observations from a Gaussian mixture in $\mathbb{R}^2$ depicted in Fig. 3. In the first case (Fig. 4), which we call the True Label experiment, the variable u corresponds exactly to the Gaussian mixture component z* from which each observation actually arises. In the second case (Fig. 5), which we call the Random Label experiment, the variable u is not z*, as is apparent by comparing Figs. 3 and 5: the variable u for the right-hand-side components is simulated from a Bernoulli distribution with probability 1/2. In the first case, u is a function of z*, while in the second case u is not a function of z* but is still conditionally independent of y given z*. Thus the conditional independence assumption can be expected to hold in models with at least four components for the True Label experiment, but not in models with fewer components. For the Random Label experiment, the minimal number of components for the assumption to hold is three. Diagonal mixture models (i.e. with diagonal variance matrices: [p_k L_k B_k] in Celeux and Govaert 1995) with numbers of components K ranging from 1 to 10 were fitted. The criteria AIC, BIC, ICL and SICL as functions of K for one simulation are plotted in Fig. 6. We repeated this experiment with 100 different simulated data sets, cf. Table 4. BIC gets trapped in this difficult situation (overlapping Gaussian components with a reasonable number of observations): it mostly misses the true number of components (four) and selects three instead. ICL almost always selects three clusters, as expected. But SICL gives quite a different answer depending on the choice of u: when u is conditionally "random", it behaves like ICL. This illustrates that SICL does not necessarily push the selected number of clusters toward the number of classes of u: it depends on the quality of the relation between u and the classification obtained by the model-based clustering with the number of components at hand. Indeed, SICL mostly selects four clusters in the True Label situation where u = z*: in this case the classification obtained with the four-component model is mostly well related to u, since this is the true model. Let us notice, however, that the SICL answers are quite spread in this case. This is linked to the difficulty of the optimisation involved in fitting the model in this situation: the "true" distribution is mostly missed by the EM approximation of the MLE, even in the four-component model. Actually, it can be checked in the numerical results that SICL mostly selects four clusters when the "true" distribution is well estimated, and other numbers of clusters when it is not (cf. Figs. 7, 8 for examples).
Mixture of Mixtures Experiment
In this experiment SICL gives a relevant solution, different from the solutions selected with BIC or ICL. We simulated 600 observations from a diagonal three-component Gaussian mixture depicted in Fig. 9 (left), where the classes of u are in red and in black. (The red class consists of two "horizontal" Gaussian components while the black one consists of a single "vertical" Gaussian component.) The most general Gaussian mixture models with varying numbers of components K were fitted; the criteria for one simulation are plotted in Fig. 9 (right). We repeated this experiment with 100 different simulated data sets: from Table 5, BIC almost always recovers the three Gaussian components of the simulated mixture, ICL mostly selects one cluster, and SICL selects two classes well related to u.

About the conditional independence assumption. The heuristics leading to SICL assumes that u and y are conditionally independent given z (see Sect. 3). This assumption is questionable. This experiment, which we call Conditionally Dependent Label, aims at studying the behaviour of SICL when the assumption can be regarded as inappropriate even in the models with relevant numbers of components. We consider a two-component diagonal Gaussian mixture and a two-class partition u "orthogonal" to this mixture. In Fig. 10 (left) the classes according to u are in red and in black: $u = 1_{y>0}$. Diagonal mixture models with free volumes and proportions but fixed shapes and numbers of components K ranging from 1 to 10 are fitted ([p_k L_k B] in Celeux and Govaert 1995). The criteria AIC, BIC, ICL and SICL as functions of K for one simulation are plotted in Fig. 10 (right). We repeated this experiment with 100 different simulated data sets. As expected in this quite easy setting, BIC and ICL almost always select two clusters (see Table 6). SICL selects the two-component solution less systematically and also highlights the four-component solution a little. Actually, the conditional independence assumption does not hold in the two-component model, since the model is not fitted so that z corresponds to u here, but it roughly holds in the four-component model, for example, when it is fitted as in Fig. 11. The dispersion of the numbers of clusters selected with SICL (Table 6) illustrates that, when the conditional independence does not hold for the relevant numbers of clusters, SICL tends to select a higher number of clusters, for which the conditional independence assumption holds and for which the estimated z is well related to u.
Real data set: wholesale customers
As announced in Sect. 3, the segmentation of 440 customers of a wholesale distributor, described by six continuous variables and two categorical variables, is performed to illustrate the performance of the SICL criterion. The continuous variables are fresh products, milk products, grocery, frozen products, detergents and paper products, and delicatessen. The two categorical variables u 1 and u 2 (see Sect. 3) are summarised in Tables 7 and 8. A simple description of the continuous variables (not reported here) clearly shows that it is highly preferable to apply a log transformation before modelling them with a Gaussian mixture.
The clusterings considered below are represented in Fig. 12 on the first PCA plane which is easily interpreted with respect to the original variables (see Fig. 13): the first axis is a "Grocery products" axis and is discriminant for the Channel variable while the second axis is a "Food products" axis.
As explained in Sect. 3, it is possible to gather the log-transformed quantitative variables and the categorical variables in a composite mixture model with a conditional independence assumption for all the variables. This implies that the variance matrices of the quantitative variables are assumed to be diagonal. This model has recently been added to the Rmixmod package. Running Rmixmod in this configuration leads to a seven-component mixture with BIC and to a five-component mixture with ICL. See Fig. 12.
We now turn to a Gaussian mixture on the log-transformed quantitative variables using the default model in Rmixmod, the so-called [p_k L_k C] model (Celeux and Govaert 1995), namely a Gaussian mixture with free proportions and variance matrices with free determinants but a common orientation and shape. Here the component variance matrices are not restricted to be diagonal. In this case, as shown in Fig. 13, BIC selects a five-component mixture, ICL a two-component mixture and SICL a four-component mixture. The solutions selected by BIC and ICL are sensible from their respective points of view. The two-cluster solution of ICL is quite parsimonious, but it is not related at all to the two external categorical variables; it is, for instance, almost orthogonal to the Channel variable. On the contrary, the four-cluster solution selected with SICL is easily interpreted with respect to the two levels of the Channel variable. See Fig. 12.
Finally, the solution selected with SICL is interesting: it is more parsimonious, and thus easier to read and interpret, than those of BIC or even ICL with the composite mixture model. And it is better related to the external variables, here particularly to the Channel variable, than the even more parsimonious solution selected by ICL with the continuous variables only (see Fig. 12, bottom left). Thus it can be an interesting solution to consider in a study for which relating the clusters to the external variables can help to interpret or study them. Further, as compared to ICL with the composite mixture model, SICL also provides clusters more coherent with the
Discussion
The SICL criterion has been conceived, in the model-based clustering context, to choose a sensible number of clusters, possibly taking advantage of an external categorical variable, or a set of external categorical variables, of interest (variables other than those on which the clustering is based). This criterion can be useful to draw attention to a well-grounded classification related to these external categorical variables.
As already stated in Sect. 3, another possible approach consists of involving the continuous and categorical variables together in a composite mixture model. This is a natural approach which can provide interesting results but, besides the aforementioned difficulty of balancing the weights of different types of variables, the conditional independence assumption (of the variables given the latent class) is quite strong: it is then coherent to choose a model in which all the variables are conditionally independent, and if this assumption is violated, the criteria can be expected to select too large a number of clusters. The SICL criterion also relies on a conditional independence assumption, but only for the selection of the number of clusters and not directly for the design of the clusters. This assumption is therefore less critical than in the composite mixture model approach. First, our procedure enables one to model the possible dependences between the observed variables by fitting an appropriate model form, since the conditional independence assumption is not involved at this step (see Sect. 4.3). Second, if this assumption does not hold, the consequences are limited by the very fact that it is involved only in the selection of the number of clusters. It can be expected to hold at least for models with sufficiently many components: at worst, SICL can be led to select too large a number of clusters, particularly when some of the corresponding solutions are well related to the external variables, but the clusters can still be expected to be relevant. The risk with the composite latent-class model seems more severe in this situation, since the assumption is involved in the design of the clusters. Studying this more deeply, notably with a simulation study, could be further work. Moreover, as one of the referees noticed, it may be possible to obtain the same solution as SICL by including the external variables in the model, as is the case for example for the Iris data set. But not including them in the model provides stronger evidence of the link between the external variables and the clustering, for the very reason that the illustrative variables are not involved in the design of the clustering.
Finally, SICL can highlight partitions of special interest with respect to external categorical variables. Therefore, we think that SICL deserves to enter the toolkit of model selection criteria for clustering. In most cases it will select a sensible solution, and when it points out an original solution, this can be of great interest for practical purposes.
Acknowledgments The authors would like to thank Christophe Biernacki for his help in the numerical experiment with the composite mixture model and helpful comments and Christian Hennig for helpful discussion which in particular helped to make the point about the conditional independence assumption. The authors are grateful to two reviewers and the Editor for their helpful comments which led to improvements of the article.
Appendix: Details for the simulated numerical experiments
Random or True Label Experiments
Each data set is the observation of a sample of size 200 from a Gaussian mixture in $\mathbb{R}^2$ whose parameters are given in Table 9.
Mixture of Mixtures Experiment
Each data set is the observation of a sample of size 600 from a Gaussian mixture in $\mathbb{R}^2$ whose parameters are given in Table 10.
Conditionally Dependent Label Experiment
Each data set is the observation of a sample of size 200 from a Gaussian mixture in $\mathbb{R}^2$ whose parameters are given in Table 11.
"Computer Science",
"Mathematics"
] |
An Adaptive, Data-Driven Stacking Ensemble Learning Framework for the Short-Term Forecasting of Renewable Energy Generation
With the increasing integration of wind and photovoltaic power, the security and stability of power system operations are greatly influenced by the intermittency and fluctuation of these renewable sources of energy generation. The accurate and reliable short-term forecasting of renewable energy generation can effectively reduce the impacts of this uncertainty on the power system. In this paper, we propose an adaptive, data-driven stacking ensemble learning framework for the short-term output power forecasting of renewable energy. Five base-models are adaptively selected via the determination coefficient (R²) indices from twelve candidate models. Then, cross-validation is used to increase the data diversity, and Bayesian optimization is used to tune hyperparameters. Finally, the base-models, with different weights determined by minimizing the cross-validation error, are ensembled using a linear model. Four datasets in different seasons from wind farms and photovoltaic power stations are used to verify the proposed model. The results illustrate that the proposed stacking ensemble learning model for renewable energy power forecasting can adapt to dynamic changes in the data and has better prediction precision and stronger generalization performance than the benchmark models.
Introduction
With increasing global climatic warming and environmental issues, renewable energy sources are receiving increasing attention, especially wind and solar power. Due to the randomness and intermittency of wind and solar resources, the high penetration of wind and photovoltaic (PV) power generation causes uncertainty in the power system. Accurate and stable short-term forecasting of wind and PV output power is crucial to maintain the balance between the supply and demand of power systems, optimize the configuration of rotating reserve capacity, and make dispatching decisions in the power market environment [1,2]. Data-driven prediction models for wind and solar renewable energy combined with artificial intelligence and machine learning technology are widely used, owing to their strong ability to mine historical data [3].
For data-driven renewable energy generation prediction, a complex nonlinear mapping relationship between the input features and the output power usually needs to be constructed. Traditional time series models, such as the autoregressive (AR), AR moving average (ARMA), and AR integrated moving average (ARIMA) models, only define a linear mapping between input and output, and their prediction error increases with each forecast interval [4]. Advanced machine learning methods are capable of building a strong nonlinear input-output map through a black-box concept [5,6]. A number of regression models use black-box mapping, e.g., the artificial neural network (ANN) [7] and support vector machine regression (SVR) [8]. ANNs simulate the biological neural network constituting the brain, consisting of a number of connected neurons that carry and transmit signals. Deep neural network methods, such as autoregressive neural networks [9], convolutional neural networks [10], and long- and short-term memory neural networks [11,12], have developed rapidly due to their strong feature-capturing ability with little prior knowledge. Nevertheless, the network framework of deep learning is relatively complex, requires a large amount of training data, and cannot outperform other prediction models on a small sample. SVR uses a kernel function to transform the original feature space to a high-dimensional space and then constructs a linear map, overcoming the problem of dimensionality and achieving effective results with a small sample dataset. Therefore, SVR is selected as a candidate model in this paper.
In recent years, tree ensemble machine learning models [13,14], such as extreme gradient boosting [15] and gradient boosting trees [16], have received increasing attention in industry and academic research due to their open architecture, low computing cost, and robustness. The authors of [17] compared the performance of random forest (RF), extra regression trees (ET) and support vector machine regression (SVR) for the prediction of photovoltaic power; the ET model achieved the best performance in terms of forecasting accuracy, calculation cost, and stability indices. The authors of [18] described the advantages of tree ensemble learning models, including RF, gradient boosting regression trees (GBRTs), and extreme gradient boosting (XGB), for wind speed and solar radiation prediction in comparison with the SVR method. The authors of [19] evaluated the performance of the XGB and GBRT machine learning methods for solar irradiance prediction. Using a single model for forecasting renewable energy, as mentioned above, may lead to low prediction accuracy and insufficient generalizability when processing various non-stationary datasets.
A hybrid model based on ensemble learning can combine the advantages of different models to improve prediction accuracy and stability. Such models are more robust than a single model and are widely applied in energy generation prediction. The authors of [20] proposed a hybrid model combining ET with a deep neural network for the prediction of hourly solar irradiance. The authors of [21] combined a long short-term memory neural network with a convolutional neural network to predict solar irradiance. The authors of [22] adopted a stacking fusion framework based on RF regression trees, adaptive boosting (ADA), and XGB for the prediction of photovoltaic power and achieved improved prediction accuracy. The authors of [23,24] built new hybrid models based on multiple deep learning methods for wind power prediction. The methods mentioned above use combined models, improving prediction accuracy and stability to some extent but ignoring the complex, changing dynamic characteristics of the datasets. The factors affecting wind and PV output power are complicated, and the collected meteorological and historical data are high-dimensional and heterogeneous. Therefore, an ensemble learning framework that adaptively selects optimal base-models according to the data characteristics is a key technology to improve the accuracy and generalization performance of prediction models.
In this paper, we propose an adaptive, data-driven stacking ensemble learning framework for predicting renewable energy output power through the deep mining of historical data. Twelve diverse regression models that have been successfully used to mine information hidden in the raw datasets of renewable forecasts are applied as candidate forecast models [25-27]. To reduce the negative effects of the uncertainty hidden in the historical data and to enhance the generalization performance, an adaptive ensemble framework is developed, which adaptively selects the five optimal models based on measurement indices. The optimal hyperparameters of each base-model are tuned using Bayesian optimization, and a linear regression method is employed as the meta-model to combine the five selected base-models. The weight of each base-model is obtained adaptively according to the principle of cross-validation. Various case studies based on actual data from a wind farm and a PV station located in central China verify the effectiveness of the proposed adaptive stacking ensemble learning model for renewable energy output power forecasting. In summary, the key contributions of this paper are as follows: (1) A novel, data-driven, adaptive stacking ensemble learning framework is developed for the output power forecasting of renewable energy. The stacking structure and the different base-models deeply explore the information hidden in the raw data, thereby boosting the regression ability for multi-dimensional heterogeneous datasets. (2) Twelve independent candidate regression models, including bagging, boosting, linear, K-nearest neighbor and SVR methods, are comprehensively compared. Then, the five better-performing models are determined adaptively to form the stacking ensemble structure.
The diversity among the different base-models ensures the excellent stability and generalization performance of the stacking model. (3) A meta-model is constructed using the linear regression method. The weights of the base-models are determined by minimizing the cross-validation risk of the base-model estimators. (4) The hyperparameters of the base-models and meta-model are tuned and optimized using the Bayesian global optimization method, which further enhances the forecasting accuracy of the proposed model.
Adaptive Ensemble Learning Framework for Renewable Energy Forecast
Twelve methods with good performance for renewable energy power prediction in the current literature are used as candidate models: boosting algorithms such as adaptive boosting (ADA), GBRT, XGB and the light gradient boosting machine (LGBM); bagging-style algorithms such as decision tree (DT), bagging, RF and extra trees (ET); and the linear regression (LR), K-nearest neighbor regression (KNN), elastic net regression (ELAN) and SVR algorithms.
Algorithms with different principles and structures can measure the data from different perspectives, complementing each other. The diversity and forecasting ability of the base-models are crucial to the generalization and regression performance of the stacking ensemble learning framework. Generally, the first layer of the stacking learning framework selects three to five base learners: too few learners have little effect on the performance of the integrated model, while too many cause structural redundancy and higher computing costs, which does not help prediction accuracy. In this paper, the 12 candidate models are trained and tested on the same dataset, and the 5 models with the best prediction performance in terms of the R² evaluation index are selected as base learners. The base-models selected adaptively may vary across datasets, as shown in the base-model selection module in Figure 1.
K-fold cross-validation is applied to prevent the meta-model from overfitting the training data and to enhance the generalization performance of the model. Cross-validation is a resampling method used to evaluate machine learning models; K-fold means that a given dataset is split into K separate folds. K-1 folds are used to train the model and the remaining fold is used for validation; an overall estimate is then obtained by averaging the results of the K evaluations [28]. The model is thus trained and validated on every fold, increasing its fitness. In other words, the input data to the meta-model are the out-of-fold predictions from the multiple base-models. The overall framework of the proposed ensemble model for renewable energy output power forecasting is displayed in Figure 1; the procedure can be summarized as follows: (1) Twelve candidate models are trained and tested to select five base-models by evaluating the R² index.
For each base-model: a. select a 5-fold split of the training dataset; b. evaluate using 5-fold cross-validation; c. tune hyperparameters using the Bayesian optimization method; d. store all out-of-fold predictions.
(2) Fit a meta-model on the out-of-fold predictions by linear regression.
(3) Evaluate the model on a holdout prediction dataset.
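A compact sketch of this procedure, assuming scikit-learn and restricting the candidate list to models shipped with it (XGB and LGBM would require their own packages); the function names are ours, and the Bayesian tuning step (covered later) is omitted for brevity:

```python
import numpy as np
from sklearn.model_selection import cross_val_predict, cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import (RandomForestRegressor, GradientBoostingRegressor,
                              AdaBoostRegressor, ExtraTreesRegressor, BaggingRegressor)
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor

candidates = {
    "RF": RandomForestRegressor(), "GBRT": GradientBoostingRegressor(),
    "ADA": AdaBoostRegressor(), "ET": ExtraTreesRegressor(),
    "BAG": BaggingRegressor(), "SVR": SVR(), "KNN": KNeighborsRegressor(),
    "LR": LinearRegression(),
}

def fit_stacking(X_train, y_train):
    # Step (1): rank candidates by cross-validated R^2 and keep the best five
    scores = {name: cross_val_score(m, X_train, y_train, cv=5, scoring="r2").mean()
              for name, m in candidates.items()}
    base_names = sorted(scores, key=scores.get, reverse=True)[:5]

    # Steps a-d: out-of-fold predictions of each base-model on the training set
    oof = np.column_stack([cross_val_predict(candidates[n], X_train, y_train, cv=5)
                           for n in base_names])

    # Step (2): linear meta-model fitted on the out-of-fold predictions
    meta = LinearRegression().fit(oof, y_train)
    bases = [candidates[n].fit(X_train, y_train) for n in base_names]
    return bases, meta

def predict_stacking(bases, meta, X):
    # Step (3): evaluate on a holdout set via the stacked prediction
    return meta.predict(np.column_stack([b.predict(X) for b in bases]))
```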
Methodology
Ensemble learning is a machine learning method that combines a series of base learners according to certain rules to obtain a strong learner, giving a more robust performance than a single model. Ensemble techniques, including bagging, boosting and stacking, are popular and widely used in renewable energy generation prediction and load forecasting [29-31].
Regression Method Based on Boosting Learning
Boosting methods fit multiple weak learners on different versions of the training dataset and then combine the predictions of the weak learners sequentially with different weights until a suitably strong learner is achieved [32]. Tree-based boosting methods mainly include ADA, GBRT, XGB and LGBM.
AdaBoost uses the CART tree as its base learner and conducts multiple iterations of learning, minimizing the loss by changing the weights of the base learners at each iterative step [27,32]. GBRT uses a gradient boosting algorithm based on ADA and follows a shrinkage and regularization approach, which effectively improves the accuracy and stability of the prediction [27,33].
The XGB method adds several optimizations and refinements to the original GBRT, making the creation of ensembles more straightforward and more general. The details of XGB can be found in [20,22,27]. LGBM is a modified XGB algorithm proposed by Microsoft in 2017. Gradient-based one-side sampling (GOSS) and exclusive feature bundling (EFB) are used to enhance its histogram algorithm and decision tree growth strategy, improving the computing speed, stability, and robustness without reducing accuracy [18]. Taking LGBM as an example: given a dataset $D = \{(x_i, y_i) : i = 1, \dots, N\}$ with input time series $x_i$ and output $y_i$, the task is to construct the nonlinear mapping $y = f(x)$. Denoting the loss function $L(y, f(x)) = (y - f(x))^2$, the objective of model training is to find $f^*(x) = \arg\min_f E_{y,x}\, L(y, f(x))$. LGBM (Algorithm 1) builds this mapping as an additive ensemble of regression trees $T(x; \Theta_m)$, where $\Theta$ denotes the parameters of a regression tree: starting from an initial weight for the first regression tree, the optimal weight for each subsequent tree $T(x; \Theta_m)$ is calculated so as to minimize the loss on the current residuals.
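As a minimal sketch, assuming the lightgbm package and its scikit-learn-style interface, with illustrative (not tuned) hyperparameters and synthetic data standing in for the wind features:

```python
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                    # stand-in for wind speed and direction
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=1000)    # synthetic output power

# Illustrative hyperparameters; the paper tunes these with Bayesian optimization
model = lgb.LGBMRegressor(n_estimators=300, learning_rate=0.05, num_leaves=31)
model.fit(X[:800], y[:800])
y_pred = model.predict(X[800:])                   # predicted output power
```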
Regression Method Based on Bagging Learning
The bagging ensemble uses bootstrap replicates to obtain multiple different samples of the same training dataset as new training sets and fits a decision tree on each new set. Because each tree is trained on a perturbed sample, averaging the predictions of all the created decision trees reduces variance. The predictions are then combined, which improves accuracy and prevents overfitting in the bagging method [34,35].
Random forest (RF) is an extension of bagging technology that also uses bootstrap sampling to build a large number of training sample sets and fit different decision trees. Unlike bagging, to make the individual decision trees differ, RF randomly selects a subset of the input features as split candidates at each node [35]. Out-of-bag (OOB) error estimation is employed when constructing the forest, which ensures unbiasedness and reduces forecast variance [36,37].
The extra regression tree (ET, extremely randomized trees) method was developed as an extension of the RF approach; it employs a classical top-down procedure to construct an ensemble of unpruned regression trees. As in RF, a subset of features is randomly selected to train each base estimator. Unlike RF, ET draws the split threshold for each candidate feature at random and keeps the best of these random splits. Additionally, ET employs the whole training dataset to train each regression tree in the forest [36]. These differences are likely to reduce overfitting, as interpreted in [38].
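In scikit-learn the two variants differ by one class swap; a sketch on synthetic data, with RF's out-of-bag score shown as the validation-free error estimate described above:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=1000)

# RF: one bootstrap sample per tree; OOB score estimates generalization without a split
rf = RandomForestRegressor(n_estimators=200, bootstrap=True, oob_score=True).fit(X, y)
print("RF OOB R^2:", rf.oob_score_)

# ET: whole training set per tree, split thresholds drawn at random
et = ExtraTreesRegressor(n_estimators=200).fit(X, y)
```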
Other Regression Models
Linear regression is widely used in statistics to quantitatively analyse the dependence relationship between two or more variables. Basic linear regression describes a linear relationship between variables, and the least-squares method is commonly used to train the linear regression model. Elastic net was developed as an extension of linear regression. It adds L1 and L2 regularization parameters, integrating the benefits of the least absolute shrinkage and selection operator (lasso) and ridge regression, resulting in better prediction performance [39].
K-nearest neighbor regression (KNN) carries out prediction by measuring the distance to a sample's nearest neighbors. KNN finds the K nearest neighbors of a sample and assigns the mean value of selected features of these neighbors to the sample; in other words, this mean value is the prediction for the sample. The time series of wind power and PV power have a specific correlation in the time dimension. KNN is therefore theoretically suitable for wind and PV power forecasting and has been applied to renewable energy forecasting [40-42].
Support vector regression (SVR) solves regression problems by adopting kernel functions to construct a nonlinear mapping: the input space is mapped into a higher-dimensional feature space, and a linear regression is performed in that feature space. The traditional empirical risk minimization principle only minimizes the training error; in contrast, SVR uses the structural risk minimization principle to minimize an upper bound on the total generalization error with a certain confidence level. SVR is highly effective at solving nonlinear problems, even with small samples, and is popular in wind and PV power forecasting [36,43].
Stacking Ensemble
Stacking ensembles train different base-models on the same dataset and then use a meta-model to combine the predictions generated by the base-models into the ultimate prediction [44]. The two-layer stacking ensemble learning framework is displayed in Figure 2. The first layer consists of multiple different base learner models, whose input is the original training set. The second layer is called the meta learner; the predictions from the first-layer models are fed to the meta-model to make the ultimate prediction. The meta learner integrates the prediction ability of the base learner models to improve the performance of stacking ensemble learning. Given the input dataset $D = \{(x_i, y_i) : i = 1, \dots, N\}$, the data are divided into training, test and validation sets. Let $Z_h$ be the $h$-th base-model of the first layer, $Z_h(x_i)$ its out-of-fold prediction on the training data, and $Z^*_h(x_i)$ its prediction on the test data. The outputs $Z_h(x_i)$ of the first-layer models form a new training set for the meta-model $Z$, and the $Z^*_h(x_i)$ serve as its test inputs. The ultimate forecasting result can be written as
$$\hat{y}_i = Z\big(Z_1(x_i), \dots, Z_H(x_i)\big),$$
with $H$ the number of base-models.
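For reference, scikit-learn also ships a ready-made two-layer stacking implementation that matches this description; a sketch with an illustrative base-model list:

```python
from sklearn.ensemble import (StackingRegressor, RandomForestRegressor,
                              GradientBoostingRegressor)
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor()),
                ("gbrt", GradientBoostingRegressor()),
                ("svr", SVR())],
    final_estimator=LinearRegression(),  # meta-model Z combining the Z_h(x_i)
    cv=5,                                # out-of-fold predictions feed the meta-model
)
# Usage: stack.fit(X_train, y_train); stack.predict(X_test)
```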
Bayesian Hyperparameters Optimization
Bayesian optimization derives from the famous Bayes theorem; it uses a probabilistic surrogate model to fit the objective function and selects the most promising evaluation point by maximizing an acquisition function. This parameter optimization procedure reduces unnecessary sampling and makes full use of the complete historical information to improve search efficiency, obtaining a globally approximately optimal solution at low evaluation cost [45,46]. Traditional optimization algorithms, such as grid search, particle swarm optimization and simulated annealing, are not suitable for machine learning methods with large-scale parameters due to their expensive computing costs [46].
In this paper, the hyperparameters of the base-models and meta-model are tuned using Bayesian optimization, as shown in Figure 3. First, a hyperparameter space $\Theta \in \Lambda$ is defined (e.g., the number of leaf nodes of a tree and the learning depth). Given the dataset $D = \{(x_0, y_0), \dots, (x_{i-1}, y_{i-1})\}$, Bayesian global optimization can be described as $\Theta^* \in \arg\max_{\Theta \in \Lambda} F(\Theta)$, where $\Theta^*$ is the optimal hyperparameter setting and $F(\Theta)$ is the objective function, here the validation loss of the model with hyperparameters $\Theta$. Since $F(\Theta)$ cannot be observed directly, we can only obtain noisy observations $Y(\Theta) = F(\Theta) + \varepsilon$, $\varepsilon \sim N(0, \sigma^2_{\mathrm{noise}})$. The construction of a surrogate function and the selection of an acquisition function are the critical technologies of Bayesian optimization: the surrogate function expresses assumptions about the function to be optimized, and the acquisition function determines the next evaluation point. In this paper, the tree-structured Parzen estimator (TPE) is employed to model densities using kernel density estimators instead of directly modelling the objective function $F$ by a probabilistic model $p(f \,|\, D)$ [47,48]. More details about Bayesian optimization are discussed in [45-48].
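A sketch of TPE-based tuning using the hyperopt package (one common TPE implementation; the paper does not name its library, so this choice is our assumption), minimising the negative cross-validated R² of an LGBM model over an illustrative search space:

```python
import numpy as np
import lightgbm as lgb
from hyperopt import fmin, tpe, hp
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=500)

space = {
    "num_leaves": hp.quniform("num_leaves", 8, 128, 1),   # leaf nodes of the tree
    "max_depth": hp.quniform("max_depth", 3, 12, 1),      # learning depth
    "learning_rate": hp.loguniform("learning_rate", -5, 0),
}

def objective(params):
    model = lgb.LGBMRegressor(num_leaves=int(params["num_leaves"]),
                              max_depth=int(params["max_depth"]),
                              learning_rate=params["learning_rate"])
    # fmin minimises, so return the negative cross-validated R^2
    return -cross_val_score(model, X, y, cv=5, scoring="r2").mean()

best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=50)
```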
Data
Wind speed (WS) and direction (WD), the main meteorological features affecting wind output power, are selected as the inputs of the wind power prediction model, with wind power (WP) as the output. Data were collected from the SCADA system of a wind farm located in central China. The installed capacity of the wind farm is 200 MW, and the rated power of each wind turbine is 2 MW. The historical data cover the whole year of 2020 with a 15-min time resolution, divided into four seasonal datasets with 8832 samples each. Figure 4 gives an example of the historical dataset for spring.
In the PV power model, the main meteorological features affecting PV output power are selected as the inputs of the prediction model: total irradiance (T_irr), normal vertical irradiance (V_irr), horizontal irradiance (H_irr) and temperature (Tem). The data are derived from a 130 MW PV power station located in central China. Given the characteristics of PV output power, only the historical data from 07:00 to 18:00 are regarded as effective; they cover the whole year of 2020 with a 15-min time resolution. The dataset for each season contains 4095 time points. An example of the historical data for spring is shown in Figure 5.
From Figures 4 and 5, we can see differences between the characteristics of wind power and solar power. The wind power time series is random, whereas the solar power time series follows specific rules: during the day, PV power is generated only when the PV cells are irradiated by the sun, and at night the output power of the PV station is 0. The diversity between the two datasets can be used to verify the model's universality.
Data Standardization and Evaluation Indices
To reduce interference from outliers and differences between data dimensions, and to ensure fairness of the forecast, min-max normalization to (0, 1) is applied:
$$\tilde{x}_{ij} = \frac{x_{ij} - x_{i,\min}}{x_{i,\max} - x_{i,\min}},$$
where $x_{ij}$ is the $j$-th sample of the $i$-th variable, $\tilde{x}_{ij}$ is the corresponding normalized value, and $x_{i,\max}$ and $x_{i,\min}$ represent the maximum and minimum values of the $i$-th variable, respectively. Root-mean-square error (RMSE), mean absolute error (MAE) and the determination coefficient $R^2$ are usually selected as the evaluation indices of a prediction model [49,50]. The smaller the RMSE and MAE values, the smaller the prediction error. The determination coefficient $R^2$ measures the similarity between the actual and predicted values: the larger its value, the better the model fit. These indices can be written as follows:
$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\big(P_i - \hat{P}_i\big)^2}, \qquad \mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\big|P_i - \hat{P}_i\big|, \qquad R^2 = 1 - \frac{\sum_{i=1}^{N}\big(P_i - \hat{P}_i\big)^2}{\sum_{i=1}^{N}\big(P_i - \bar{P}\big)^2},$$
where $P_i$ and $\hat{P}_i$ denote the measured and predicted values, respectively, $\bar{P}$ is the average of the measured values, and $N$ is the number of samples.
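The normalization and the three indices are straightforward to implement; a sketch:

```python
import numpy as np

def min_max_normalize(x):
    """Scale each variable (column) of x to (0, 1)."""
    return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

def rmse(p, p_hat):
    return np.sqrt(np.mean((p - p_hat) ** 2))

def mae(p, p_hat):
    return np.mean(np.abs(p - p_hat))

def r2(p, p_hat):
    return 1 - np.sum((p - p_hat) ** 2) / np.sum((p - p.mean()) ** 2)
```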
Model Selection and Hyperparameter Optimization
As shown in Figure 1 of Section 2, the 12 candidate models are run on the four data cases to select the 5 best base-models. The original data are divided into a training dataset (80% of the data) and a validation dataset (20% of the data). For the different datasets in spring, summer, autumn, and winter, the five models with the highest scores are adaptively selected as the base-models according to the R² evaluation index; in particular, if the R² scores of two models are the same, the RMSE and MAE indices are used for further evaluation. The training and testing of the proposed model, implemented in Python 3.6, are conducted on a computer with an Intel(R) Core(TM) i7-8565U CPU and 8.00 GB of RAM.
The results of the wind power prediction on the validation dataset are displayed in Table 1. Five base-models are selected, with higher R² scores and lower RMSE and MAE values. For the spring dataset, the selected base-models are LGBM, GBRT, XGB, ADA, and RF, with corresponding R² values of 0.754, 0.746, 0.731, 0.701, and 0.698, respectively. For the summer dataset, the base-models are SVR, LGBM, GBRT, XGB, and ADA, with corresponding R² values of 0.689, 0.673, 0.667, 0.648, and 0.604, respectively. The base-models and their R² scores for the autumn dataset are XGB (0.869), GBRT (0.868), LGBM (0.867), RF (0.854), and KNN (0.853). For the winter dataset, they are GBRT (0.667), LGBM (0.662), SVR (0.634), ADA (0.633), and XGB (0.628). The R² scores of the same model vary greatly across datasets (see, e.g., LGBM, GBRT, and XGB), which indicates that a single forecasting model has certain limitations on different data. In addition, for the winter and summer datasets, the R² scores of all models are lower, and the RMSE and MAE values higher, than for the spring and autumn datasets, which is closely related to the fluctuation characteristics of the original wind speed, direction, and power data.
The results for the PV power forecast are listed in Table 2. For the spring dataset, the five models with the highest R² scores are bagging (0.791), LGBM (0.762), RF (0.758), SVR (0.746), and ADA (0.743). For the autumn dataset, the highest R² scores are RF (0.615), GBRT (0.613), KNN (0.611), XGB (0.604), and bagging (0.581). For the winter dataset, the highest R² scores are GBRT (0.908), KNN (0.906), XGB (0.904), RF (0.896), and LGBM (0.894). The R² values of the ELAN model are negative on all datasets, for both wind and PV power prediction, which indicates that the model is unsuitable for renewable prediction. As with wind power forecasting, the evaluation indices of a model for PV power forecasting differ across datasets. For all 12 models, the R² scores on the winter dataset are the highest, and the RMSE and MAE values the lowest, followed by summer, spring, and autumn.
Due to the significant differences between the wind power and PV power time series, the selected base-models also differ, indicating that different algorithms suit different data. For example, the RF model is selected as a base-model on all four datasets for PV power prediction, whereas it is selected only on the spring and autumn data for wind power forecasting, indicating that the performance of the RF method has certain limitations for data with stronger fluctuations. Similarly, the bagging method is selected only for PV power forecasting. The variations among the base-models for the different cases can be seen in Tables 1 and 2.
With the base-models selected, the next step is to choose the meta-model. Taking wind power prediction as an example, the RF, XGB, GBRT, LGBM, and LR models, which obtained the higher R² scores on the four datasets in the base-model experiments above, are tested and verified as meta-models. The results are shown in Table 3: the RMSE and MAE values with the linear model as the meta-model are lower, and its R² scores higher, on each dataset than with the other models. Therefore, the linear model is selected as the meta-model in this paper. In a similar manner, the LR model as the meta-model for PV power prediction achieves better prediction accuracy than the other models across the four seasons. To improve the prediction performance of the base learner models, the Bayesian global optimization method is adopted to optimize their main parameters; the parameter ranges are preset as listed in Table 4. For different datasets, the optimal parameters of a model may differ. In practical applications, hyperparameter optimization can use offline training with online prediction to save computation costs and improve the efficiency of model prediction.
Wind Power Forecasting and Results Analysis
Single base-models are employed as benchmarks for comparison with the proposed stacking ensemble model. The evaluation index values on the four test datasets are shown in Figure 6. The last day of each season, namely 29 February, 31 May, 31 August, and 31 December, is selected as the forecast day. The wind power forecast curves are shown in Figure 7.
In Figure 6, the base-models adaptively selected for each dataset differ across the four seasons. Furthermore, the RMSE and MAE values of the stacking ensemble method are lower than those of all the selected single base-models. In winter, the RMSE and MAE values of the stacking ensemble method are 0.152 and 0.102, respectively, the largest of the four seasons. Nevertheless, its prediction error is still much smaller than that of the benchmarks, namely the SVR, XGB, GBRT, ADA, and LGBM methods, whose RMSE and MAE values are 0.169 and 0.13; 0.181 and 0.132; 0.164 and 0.122; 0.168 and 0.134; and 0.17 and 0.124, respectively, indicating its excellent stability and robustness. The GBRT model is selected as a base-model on all four datasets, and its error values and R² scores are close to those of the stacking ensemble model, indicating that the prediction performance of GBRT has a certain stability and robustness. In addition, the R² scores of the stacking ensemble model are higher than those of the single base-models on all datasets. Taking the winter case as an example, the R² score of the stacking ensemble method is 0.702, the lowest of the four seasons; nevertheless, it is still much higher than those of the benchmark models, demonstrating its outstanding performance, i.e., improved prediction accuracy and enhanced generalization ability. From Figure 6, the prediction error for autumn is the smallest, followed by summer, spring, and winter, consistent with the weaker fluctuations of the corresponding data. It can be concluded that when the input data fluctuate greatly at some time points, the accurate prediction ability of the stacking ensemble model still needs improvement. However, compared to all the benchmark models on the different datasets, the prediction performance of the proposed method remains superior.
The prediction curves of the stacking ensemble model and the comparison benchmarks, with 96 time points for the selected prediction day in each of the four seasons, are shown in Figure 7. The stacking ensemble model tracks the actual output power trend better than the single benchmarks, indicating better prediction performance. In Figure 7a,c,d, the prediction curves are flat in some time periods due to the weak fluctuation of the input data, including wind speed and direction, and the predicted values closely follow the actual values. In Figure 7c, for autumn, the true measured power values of the predicted day fluctuate more strongly. According to the input data, wind speed and direction are random in the range of the 48-96 time points, and the wind speed reaches a limit of 14~15 m/s at some time points; therefore, the predicted power values during this period deviate from the real measured power. However, compared to the benchmark models, the prediction curve of the stacking ensemble model is closer to the true measured values. This demonstrates that the stacking ensemble model integrates multiple algorithms with different principles and adaptively tracks changes in the datasets. Compared with the benchmark models, the proposed model for wind power forecasting has a better fitting performance and produces more accurate point predictions along with better generalization performance and stability.
PV Power Forecasting and Results Analysis
Similar to the wind power forecasting cases, the proposed stacking ensemble model is further validated by forecasting the output power of a PV station. The division of the dataset and the selection of the forecast day are the same as in the wind power prediction case. The evaluation index values and prediction curves are presented in Figures 8 and 9.
In Figure 8, the base-models adaptively selected for photovoltaic power prediction differ from those for wind power prediction. For example, in spring, the base-models for photovoltaic prediction are SVR, bagging, LGBM, ADA, and RF, while for wind power prediction, the base-models are ADA, XGB, GBRT, RF, and LGBM, demonstrating the different performance of the different models in data mining. Furthermore, the proposed stacking ensemble model has a lower prediction error and higher R² scores than the other comparison models for all the study cases. Taking the autumn dataset as an example, in Figure 8c, the RMSE and MAE values of the stacking ensemble model are 0.098 and 0.062, respectively, and its R² score of 0.762 is the lowest of the four seasons. Nevertheless, compared to the benchmark models, its forecasting error is the lowest and its R² score is the highest, indicating the prediction superiority of the proposed method.
Due to the diversity of the data characteristics, the prediction error and fitting score vary across the seasons. In spring, summer, autumn, and winter, the RMSE values are 0.104, 0.099, 0.098, and 0.079, respectively; the MAE values are 0.063, 0.069, 0.062, and 0.05, respectively; and the R² scores are 0.894, 0.895, 0.762, and 0.942, respectively, which illustrates the ability of the data-driven stacking ensemble model to deeply mine the latent information in the data.
Figure 9 shows the prediction curves of the stacking ensemble model and the comparison models on the prediction day. In sub-graphs (b) summer and (c) autumn, the real measured values of PV power vary little, and the prediction curves of the stacking ensemble model closely follow the true output power curves, indicating a high prediction accuracy. In sub-graphs (a) spring and (d) winter, the actual power values of the predicted day fluctuate more strongly due to the variation of the input datasets. Therefore, there is a certain gap between the predicted values and the actual measured values, while the overall trend of the prediction curve follows the changes of the actual measured power curve, indicating the effectiveness and adaptiveness of the stacking ensemble method for PV power forecasting. In addition, for all datasets, at times with low PV output power, the prediction values of the proposed stacking model are similar to those of the benchmark models, indicating the difficulty of prediction at low power points. However, at times with high PV output, especially during periods with large fluctuations (black box marks in sub-figures (a) spring and (d) winter), the prediction curves of the proposed stacking model more closely follow the true power curve, indicating the significant superiority and reliability of the proposed method for PV power prediction.
Conclusions
In this paper, an adaptive, data-driven stacking ensemble model is proposed for the output power prediction of renewable energy, including wind power and PV power. The proposed model is validated using datasets collected from an actual wind farm and PV station. The following conclusions can be drawn: (1) Models with different algorithmic principles can deeply mine the spatial and structural characteristics of multi-dimensional heterogeneous datasets from multiple perspectives, realizing performance complementarity among algorithms. The proposed stacking ensemble learning framework can track the dynamic changes within the data, combining multiple base-models to improve the forecasting accuracy, as well as the generalization ability and adaptability. (2) The cross-validation and Bayesian hyperparameter optimization methods used in model training can effectively improve the model's prediction accuracy. (3) A linear model is employed as the meta-model to integrate the base-models, with the weight of each base-model determined by the minimum cross-validation error principle.
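To make conclusion (3) concrete, a minimal sketch (scikit-learn assumed; the data and base-model choices are placeholders, not the paper's setup) of fitting a linear meta-model on out-of-fold base-model predictions, so that each weight reflects cross-validated rather than in-sample performance:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
X = rng.random((1000, 4))
y = X.sum(axis=1) + 0.1 * rng.standard_normal(1000)

base_models = [GradientBoostingRegressor(random_state=0),
               RandomForestRegressor(random_state=0)]

# Out-of-fold predictions of each base-model form the meta-features,
# so the linear meta-model never sees in-sample base-model outputs.
meta_features = np.column_stack(
    [cross_val_predict(m, X, y, cv=5) for m in base_models])
meta_model = LinearRegression().fit(meta_features, y)
print("base-model weights:", meta_model.coef_)
```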
Figure 1. An adaptive stacking ensemble framework for renewable energy output power forecasting.

Figure 2. The framework of stacking ensemble learning. Given the input dataset $D = \{(x_i, y_i)\}_{i=1}^{m}$, the dataset is divided into the training dataset, test dataset, and validation dataset. $Z_h$ is the $h$-th base-model of the first layer, and the prediction output of the $Z_h$ model on the validation set is $Z_h(x_i)$.

Bayesian optimization relies on a density estimator, instead of directly modeling the objective function $F$ by a probabilistic model $p(f \mid D)$ [47,48]; more details about Bayesian optimization are discussed in [45-48].

Figure 3. Flow chart of the prediction model with Bayesian optimization.

Figure 4. Historical data example for spring from the wind farm.

Figure 5. Historical data example for spring from the PV station.

Figure 6. Comparison results of the different prediction models for wind power: (a) spring, (b) summer, (c) autumn, and (d) winter.

Figure 7. Wind power prediction curve of the different comparison models: (a) spring, (b) summer, (c) autumn, and (d) winter.

Figure 8. Comparison results of the different prediction models for PV power: (a) spring, (b) summer, (c) autumn, and (d) winter.
Table 1. Evaluation indices of the base-models for wind power prediction.

Table 2. Evaluation indices of the independent models for solar power prediction.

Table 3. Evaluation indices of the different meta-models.

Table 4. Hyperparameters of the different base learner models.

| 10,221.4 | 2023-02-16T00:00:00.000 | [ "Environmental Science", "Engineering", "Computer Science" ] |
Fine-mapping from summary data with the “Sum of Single Effects” model
In recent work, Wang et al introduced the “Sum of Single Effects” (SuSiE) model, and showed that it provides a simple and efficient approach to fine-mapping genetic variants from individual-level data. Here we present new methods for fitting the SuSiE model to summary data, for example to single-SNP z-scores from an association study and linkage disequilibrium (LD) values estimated from a suitable reference panel. To develop these new methods, we first describe a simple, generic strategy for extending any individual-level data method to deal with summary data. The key idea is to replace the usual regression likelihood with an analogous likelihood based on summary data. We show that existing fine-mapping methods such as FINEMAP and CAVIAR also (implicitly) use this strategy, but in different ways, and so this provides a common framework for understanding different methods for fine-mapping. We investigate other common practical issues in fine-mapping with summary data, including problems caused by inconsistencies between the z-scores and LD estimates, and we develop diagnostics to identify these inconsistencies. We also present a new refinement procedure that improves model fits in some data sets, and hence improves overall reliability of the SuSiE fine-mapping results. Detailed evaluations of fine-mapping methods in a range of simulated data sets show that SuSiE applied to summary data is competitive, in both speed and accuracy, with the best available fine-mapping methods for summary data.
Introduction

Fine-mapping is the process of narrowing down genetic association signals to a small number of potential causal variants [1-4], and it is an important step in the effort to understand the genetic causes of diseases [5,6]. However, fine-mapping is a difficult problem due to the strong and complex correlation patterns ("linkage disequilibrium", or LD) that exist among nearby genetic variants. Many different methods and algorithms have been developed to tackle the fine-mapping problem [2,7-19]. In recent work, Wang et al [17] introduced a new approach to fine-mapping, SuSiE (short for "SUm of SIngle Effects"), which has several advantages over existing approaches: it is more computationally scalable, and it provides a new, simple way to calculate "credible sets" of putative causal variants [2,20]. However, the algorithms in [17] also have an important limitation: they require individual-level genotype and phenotype data. In contrast, many other fine-mapping methods require access only to summary data, such as z-scores from single-SNP association analyses and an estimate of LD from a suitable reference panel [7,8,11-13,15,16,21]. Requiring only summary data is useful because individual-level data are often difficult to obtain, both for practical reasons, such as the need to obtain many data sets collected by many different researchers, and for reasons to do with consent and privacy. By comparison, summary data are much easier to obtain, and many publications share such summary data [22].
In this paper, we introduce new variants of SuSiE for performing fine-mapping from summary data; we call these variants SuSiE-RSS (RSS stands for "regression with summary statistics" [23]). Our work exploits the fact that (i) the multiple regression likelihood can be written in terms of a particular type of summary data, known as sufficient statistics (explained below), and (ii) these sufficient statistics can be approximated from the types of summary data that are commonly available (e.g., z-scores from single-SNP association tests and LD estimates from a suitable reference panel). In the special case where the sufficient statistics themselves are available, the second approximation is unnecessary and SuSiE-RSS yields the same results as SuSiE applied to the original individual-level data; otherwise, it yields an approximation. By extending SuSiE to deal with widely available summary statistics, SuSiE-RSS greatly expands the applicability of the SuSiE fine-mapping approach.
Although our main goal here is to extend SuSiE to work with summary data, the approach we use, and the connections it exploits, are quite general, and could be used to extend other individual-level data methods to work with summary data. This general approach has two nice features. First, it deals simply and automatically with non-invertible LD matrices, which arise frequently in fine-mapping. We argue, both through theory and example, that it provides a simpler and more effective solution to this issue than some existing approaches. Second, it shows how individual-level results can be obtained as a special case of summary-data analysis, by using the sufficient statistics as summary data.
By highlighting the close connection between the likelihoods for individual-level and summary data, our work generalizes results of [11], who showed a strong connection between Bayes Factors, based on specific priors, from individual-level data and summary data. Our results highlight that this connection is fundamentally due to a close connection between the likelihoods, and so will apply whatever prior is used (and will also apply to non-Bayesian approaches that do not use a prior). By focussing on likelihoods, our analysis also helps clarify differences and connections between existing fine-mapping methods such as FINEMAP version 1.1 [12], FINEMAP version 1.2 [21] and CAVIAR [7], which can differ in both the prior and likelihood used.
Finally, we introduce several other methodological innovations for fine-mapping. Some of these innovations are not specific to SuSiE and could be used with other statistical methods. We describe methods for identifying "allele flips" (alleles that are erroneously encoded differently in the study and reference data) and other inconsistencies in the summary data. (See also [24] for related ideas.) We illustrate how a single allele flip can lead to inaccurate fine-mapping results, emphasizing the importance of careful quality control when performing fine-mapping using summary data. We also introduce a new refinement procedure for SuSiE that sometimes improves estimates from the original fitting procedure.
Description of the method
We begin with some background and notation. Let $y \in \mathbb{R}^N$ denote the phenotypes of N individuals in a genetic association study, and let $X \in \mathbb{R}^{N \times J}$ denote their corresponding genotypes at J genetic variants (SNPs). To simplify the presentation, we assume the phenotypes y are quantitative and approximately normally distributed, and that both y and the columns of X are centered to have mean zero, which avoids the need for an intercept term in (1) [25]. We elaborate on the treatment of binary and case-control phenotypes in the Discussion below.
Fine-mapping from individual-level data is usually performed by fitting the multiple linear regression model
$$y = Xb + e, \qquad (1)$$
where $b = (b_1, \ldots, b_J)^\top$ is a vector of multiple regression coefficients, e is an N-vector of error terms distributed as $e \sim \mathcal{N}_N(0, \sigma^2 I_N)$, with (typically unknown) residual variance $\sigma^2 > 0$, $I_N$ is the $N \times N$ identity matrix, and $\mathcal{N}_r(\mu, \Sigma)$ denotes the r-variate normal distribution with mean $\mu$ and variance $\Sigma$. In this multiple regression framework, the question of which SNPs are affecting y becomes a problem of "variable selection"; that is, the problem of identifying which elements of b are not zero. While many methods exist for variable selection in multiple regression, fine-mapping has some special features (in particular, very high correlations among some columns of X, and a very sparse b) that make Bayesian methods with sparse priors a preferred approach (e.g., [7-9]). These methods specify a sparse prior for b, and perform inference by approximating the posterior distribution p(b | X, y). In particular, the evidence for SNP j having a non-zero effect is often summarized by the "posterior inclusion probability" (PIP),
$$\mathrm{PIP}_j := \Pr(b_j \neq 0 \mid X, y). \qquad (2)$$

The Sum of Single Effects (SuSiE) model

The key idea behind SuSiE [17] is to write b as a sum,
$$b = \sum_{l=1}^{L} b_l, \qquad (3)$$
in which each vector $b_l = (b_{l1}, \ldots, b_{lJ})^\top$ is a "single effect" vector; that is, a vector with exactly one non-zero element. The representation (3) allows that b has at most L non-zero elements, where L is a user-specified upper bound on the number of effects. (Consider that if single-effect vectors $b_1$ and $b_2$ have a non-zero element at the same SNP j, b will have fewer than L non-zeros.) The special case L = 1 corresponds to the assumption that a region has exactly one causal SNP; i.e., exactly one SNP with a non-zero effect. In [17], this special case is called the "single effect regression" (SER) model. The SER is particularly convenient because posterior computations are analytically tractable [9]; consequently, despite its limitations, the SER has been widely used [2,26-28].
For L > 1, Wang et al [17] introduced a simple model-fitting algorithm, which they called Iterative Bayesian Stepwise Selection (IBSS). In brief, IBSS iterates through the single-effect vectors $l = 1, \ldots, L$, at each iteration fitting $b_l$ while keeping the other single-effect vectors fixed. By construction, each step thus involves fitting an SER, which, as noted above, is straightforward. Wang et al [17] showed that IBSS can be understood as computing an approximate posterior distribution $p(b_1, \ldots, b_L \mid X, y, \sigma^2)$, and that the algorithm iteratively optimizes an objective function known as the "evidence lower bound" (ELBO).
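As a rough illustration of these ideas, and not the authors' implementation (prior-variance estimation, ELBO tracking, and convergence checks are all omitted), a minimal IBSS-style loop built on the analytic SER posterior might look like this in Python:

```python
import numpy as np

def ser_posterior(X, r, sigma2, sigma0_2):
    """Single Effect Regression: posterior assuming exactly one causal SNP.

    Returns alpha (probability each SNP is the single effect, under a
    uniform prior over SNPs) and mu1 (posterior mean effect given SNP j
    is the causal one), using a N(0, sigma0_2) prior on the effect.
    """
    xtx = (X ** 2).sum(axis=0)            # x_j' x_j for each SNP j
    xty = X.T @ r                         # x_j' r for each SNP j
    bhat = xty / xtx                      # marginal least-squares estimates
    s2 = sigma2 / xtx                     # their sampling variances
    # Log Bayes factor comparing N(0, sigma0_2) effect vs. no effect.
    logbf = (0.5 * np.log(s2 / (sigma0_2 + s2))
             + 0.5 * bhat**2 / s2 * sigma0_2 / (sigma0_2 + s2))
    alpha = np.exp(logbf - logbf.max())
    alpha /= alpha.sum()
    tau1_2 = 1.0 / (1.0 / sigma0_2 + xtx / sigma2)
    mu1 = tau1_2 * xty / sigma2           # posterior mean given SNP j causal
    return alpha, mu1

def ibss(X, y, L=5, sigma2=1.0, sigma0_2=0.1, n_iter=100):
    """Simplified IBSS: cycle through L single effects, each time fitting
    an SER to the residuals that exclude the current effect."""
    N, J = X.shape
    b_bar = np.zeros((L, J))              # posterior mean single-effect vectors
    for _ in range(n_iter):
        for l in range(L):
            r_l = y - X @ (b_bar.sum(axis=0) - b_bar[l])
            alpha, mu1 = ser_posterior(X, r_l, sigma2, sigma0_2)
            b_bar[l] = alpha * mu1
    return b_bar
```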
Summary data for fine-mapping
Motivated by the difficulties in accessing the individual-level data X, y from most studies, researchers have developed fine-mapping approaches that work with more widely available "summary data." Here we develop methods that use various combinations of the following summary data:
1. Vectors $\hat{b} = (\hat{b}_1, \ldots, \hat{b}_J)^\top$ and $\hat{s} = (\hat{s}_1, \ldots, \hat{s}_J)^\top$ containing estimates of marginal association for each SNP j, and corresponding standard errors, from a simple linear regression:
$$\hat{b}_j := \frac{x_j^\top y}{x_j^\top x_j}, \qquad \hat{s}_j := \sqrt{\frac{(y - x_j \hat{b}_j)^\top (y - x_j \hat{b}_j)}{N \, x_j^\top x_j}}.$$
An alternative to $\hat{b}, \hat{s}$ is the vector $\hat{z} = (\hat{z}_1, \ldots, \hat{z}_J)^\top$ of z-scores,
$$\hat{z}_j := \hat{b}_j / \hat{s}_j.$$
Many studies provide $\hat{b}$ and $\hat{s}$ (see [22] for examples), and many more provide the z-scores, or data that can be used to compute the z-scores (e.g., $\hat{z}_j$ can be recovered from the p-value and the sign of $\hat{b}_j$ [29]). Note that it is important that all $\hat{b}$, $\hat{s}$ and $\hat{z}$ be computed from the same N samples. (A minimal sketch of these computations appears after this list.)
2. An estimate, $\hat{R}$, of the in-sample LD matrix R, where R is the J × J SNP-by-SNP sample correlation matrix,
$$R := D_{xx}^{-1/2} \, X^\top X \, D_{xx}^{-1/2},$$
where $D_{xx} := \mathrm{diag}(X^\top X)$ is a diagonal matrix that ensures the diagonal entries of R are all 1. Often, the estimate $\hat{R}$ is taken to be an "out-of-sample" LD matrix, that is, the sample correlation matrix of the same J SNPs in a suitable reference panel, chosen to be genetically similar to the study population, possibly with additional shrinkage or banding steps to improve accuracy [14].
3. Optionally, the sample size N and the sample variance of y. (Since y is centered, the sample variance of y is simply $v_y := y^\top y / N$.) Knowing these quantities is obviously equivalent to knowing $y^\top y$ and N, so for brevity we will use the latter. These quantities are not required, but they can be helpful, as we will see later.
We caution that if the summary statistics come from a meta-analysis, the summary statistics should be computed carefully to avoid the pitfalls highlighted in [24]. Importantly, SNPs that are not analyzed in all the individual studies in the meta-analysis should not be included in the fine-mapping.
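A minimal sketch of how the summary data in items 1 and 2 are computed from centered genotypes X and phenotype y (NumPy assumed; this is not code from susieR, and it uses the residual-variance convention above, dividing by N, where other software may divide by N − 2):

```python
import numpy as np

def single_snp_summary(X, y):
    """Marginal least-squares estimates, standard errors, and z-scores."""
    xtx = (X ** 2).sum(axis=0)                 # x_j' x_j for each SNP
    bhat = (X.T @ y) / xtx
    rss = ((y[:, None] - X * bhat) ** 2).sum(axis=0)
    shat = np.sqrt(rss / (X.shape[0] * xtx))
    return bhat, shat, bhat / shat

def ld_matrix(X_panel):
    """SNP-by-SNP correlation matrix from a (reference) genotype panel."""
    return np.corrcoef(X_panel, rowvar=False)
```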
SuSiE with summary data
A key question, and the question central to this paper, is: how do we use summary data to estimate the coefficients b in a multiple linear regression (1)? And, more specifically, how do we use them to estimate the single-effect vectors $b_1, \ldots, b_L$ in SuSiE (3)? Here, we tackle these questions in two steps. First, we consider a special type of summary data, called "sufficient statistics," which contain the same information about the model parameters as the individual-level data X, y. Given such sufficient statistics, we develop an algorithm that exactly reproduces the results that would have been obtained by running SuSiE on the original data X, y. Second, we consider the case where we have access to summary data that are not sufficient statistics; these summary data can be used to approximate the sufficient statistics, and therefore approximate the results from individual-level data.
The IBSS-ss algorithm. The IBSS algorithm of [17] fits the SuSiE model to individual-level data X, y. The data enter the SuSiE model only through the likelihood, which from (1) is
$$\ell(b, \sigma^2; X, y) := \log p(y \mid X, b, \sigma^2) = -\frac{N}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\left(y^\top y - 2\, b^\top X^\top y + b^\top X^\top X\, b\right). \qquad (8)$$
This likelihood depends on the data only through $X^\top X$, $X^\top y$, $y^\top y$ and N. Therefore, these quantities are sufficient statistics. (These sufficient statistics can be computed from other combinations of summary data, which are therefore also sufficient statistics; we discuss this point below.) Careful inspection of the IBSS algorithm in [17] confirms that it depends on the data only through these sufficient statistics. Thus, by rearranging the computations we obtain a variant of IBSS, called "IBSS-ss", that can fit the SuSiE model from sufficient statistics; see S1 Text. We use IBSS(X, y) to denote the result of applying the IBSS algorithm to the individual-level data, and IBSS-ss($X^\top X, X^\top y, y^\top y, N$) to denote the result of applying the IBSS-ss algorithm to the sufficient statistics. These two algorithms give the same result:
$$\text{IBSS-ss}(X^\top X, X^\top y, y^\top y, N) = \text{IBSS}(X, y).$$
However, the computational complexity of the two approaches is different. First, computing the sufficient statistics requires computing the J × J matrix $X^\top X$, which is a non-trivial computation, requiring $O(NJ^2)$ operations. However, once this matrix has been computed, IBSS-ss requires $O(J^2)$ operations per iteration, whereas IBSS requires $O(NJ)$ operations per iteration.
(The number of iterations should be the same.) Therefore, when N ≫ J, which is often the case in fine-mapping studies, IBSS-ss will usually be faster. In practice, choosing between these workflows also depends on whether one prefers to precompute $X^\top X$, which can be done conveniently in programs such as PLINK [30] or LDstore [31].

SuSiE with summary data: SuSiE-RSS. In practice, sufficient statistics may not be available; in particular, when individual-level data are unavailable, the matrix $X^\top X$ is also usually unavailable. A natural approach to deal with this issue is to approximate the sufficient statistics,
then to proceed as if the sufficient statistics were available, by inputting the approximate sufficient statistics to the IBSS-ss algorithm. We call this approach "SuSiE-RSS".
For example, let $\hat{V}_{xx}$ denote an approximation to the sample covariance $V_{xx} = \frac{1}{N} X^\top X$, and assume the other sufficient statistics $X^\top y$, $y^\top y$, N are available exactly. (These are easily obtained from commonly available summary data and $\hat{R}$; see S1 Text.) Then SuSiE-RSS is the result of running the IBSS-ss algorithm on the sufficient statistics but with $N\hat{V}_{xx}$ replacing $X^\top X$; that is, SuSiE-RSS is IBSS-ss($N\hat{V}_{xx}, X^\top y, y^\top y, N$).
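To make the data flow concrete, here is a small sketch (NumPy assumed; the array shapes and reference-panel construction are illustrative, not the paper's pipeline) of forming the sufficient statistics from individual-level data, and approximating $X^\top X$ by $N\hat{V}_{xx}$ when only a reference panel is available:

```python
import numpy as np

rng = np.random.default_rng(0)
N, J = 5000, 200
X = rng.standard_normal((N, J))           # stand-in for centered genotypes
X -= X.mean(axis=0)
y = 0.2 * X[:, 3] + rng.standard_normal(N)
y -= y.mean()

# Sufficient statistics: the likelihood depends on the data only through these.
XtX, Xty, yty = X.T @ X, X.T @ y, y @ y

# With only summary data, X'X is unavailable; approximate it with N * Vhat_xx,
# where Vhat_xx is the sample covariance from a reference panel.
X_ref = rng.standard_normal((1000, J))    # stand-in for a matched panel
X_ref -= X_ref.mean(axis=0)
Vhat_xx = (X_ref.T @ X_ref) / X_ref.shape[0]
XtX_approx = N * Vhat_xx

# Fixing the residual variance, as recommended for approximate statistics:
sigma2_fixed = yty / N
```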
In practice, we found that estimating σ² sometimes produced very inaccurate estimates, presumably due to inaccuracies in $\hat{V}_{xx}$ as an approximation to $V_{xx}$. (This problem did not occur when $\hat{V}_{xx} = V_{xx}$.) Therefore, when running the IBSS-ss algorithm on approximate summary statistics, we recommend fixing the residual variance to $\sigma^2 = y^\top y / N$, rather than estimating it.
Interpretation in terms of an approximation to the likelihood. We defined SuSiE-RSS as the application of the IBSS-ss algorithm to the sufficient statistics, or to approximations of these statistics. Conceptually, this approach combines the SuSiE prior with an approximation to the likelihood (8).
To formalize this, we write the likelihood (8) explicitly as a function of the sufficient statistics,
$$\ell(b, \sigma^2; X^\top X, X^\top y, y^\top y, N) = -\frac{N}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\left(y^\top y - 2\, b^\top X^\top y + N\, b^\top V_{xx} b\right). \qquad (11)$$
Replacing $V_{xx}$ with an estimate $\hat{V}_{xx}$ is therefore the same as replacing the likelihood (11) with
$$\ell_{\mathrm{RSS}}(b, \sigma^2) := -\frac{N}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\left(y^\top y - 2\, b^\top X^\top y + N\, b^\top \hat{V}_{xx} b\right). \qquad (12)$$
Note that when $\hat{V}_{xx} = V_{xx}$, the approximation is exact; that is, $\ell_{\mathrm{RSS}}(b, \sigma^2) = \ell(b, \sigma^2; X, y)$. Thus, applying SuSiE-RSS with $V_{xx}$ is equivalent to using the individual-data likelihood (8), and applying it with $\hat{V}_{xx}$ is equivalent to using the approximate likelihood (12). Finally, fixing $\sigma^2 = \frac{1}{N} y^\top y$ is equivalent to using the following likelihood:
$$\ell_{\mathrm{RSS}}(b) := \ell_{\mathrm{RSS}}(b, \, y^\top y / N). \qquad (13)$$

General strategy for applying regression methods to summary data. The strategy used here to extend SuSiE to summary data is quite general, and could be used to extend essentially any likelihood-based multiple regression method for individual-level data X, y to summary data. Operationally, this strategy involves two steps: (i) implement an algorithm that accepts sufficient statistics as input and outputs the same result as the individual-level data algorithm; (ii) apply this algorithm to approximations of the sufficient statistics computed from (non-sufficient) summary data (optionally, fixing the residual variance to $\sigma^2 = y^\top y / N$). This involves replacing the exact likelihood (8) with an approximate likelihood, either (12) or (13).
Special case when X, y are standardized. In genetic association studies, it is common practice to standardize both y and the columns of X to have unit variance (that is, $y^\top y = N$ and $x_j^\top x_j = N$ for all $j = 1, \ldots, J$) before fitting the model (1). Standardizing X, y is commonly done in genetic association analysis and fine-mapping, and results in some simplifications that facilitate connections with existing methods, so we consider this special case in detail. (See [32,33] for a discussion on the choice to standardize.) When X, y are standardized, the sufficient statistics are easily computed from the in-sample LD matrix R, the single-SNP z-scores $\hat{z}$, and the sample size N:
$$X^\top X = N R, \qquad (14)$$
$$X^\top y = \sqrt{N} \, \tilde{z}, \qquad (15)$$
$$y^\top y = N, \qquad (16)$$
where we define
$$\tilde{z} := D_z^{1/2} \hat{z}, \qquad (17)$$
and we define $D_z$ to be the diagonal matrix in which the j-th diagonal element is $N/(N + \hat{z}_j^2)$ [21]. Note the elements of $D_z$ have the interpretation as being one minus the estimated PVE ("Proportion of phenotypic Variance Explained"), so we refer to $\tilde{z}$ as the vector of the "PVE-adjusted z-scores." If all the effects are small, the estimated PVEs will be close to zero, the diagonal of $D_z$ will be close to one, and $\tilde{z} \approx \hat{z}$.
Substituting Eqs (14)-(16) into (11) gives
$$\ell(b, \sigma^2) = -\frac{N}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\left(N - 2\sqrt{N}\, b^\top \tilde{z} + N\, b^\top R\, b\right). \qquad (18)$$
When the in-sample LD matrix R is not available and is replaced with $\hat{R} \approx R$, the SuSiE-RSS likelihood (13), with $\sigma^2$ fixed to $y^\top y / N = 1$, becomes
$$\ell_{\mathrm{RSS}}(b) = -\frac{N}{2}\log(2\pi) - \frac{1}{2}\left(N - 2\sqrt{N}\, b^\top \tilde{z} + N\, b^\top \hat{R}\, b\right). \qquad (19)$$
These expressions are summarized in Table 1.
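A small numeric sketch of these identities (NumPy assumed; the z-scores and LD matrix below are made up for illustration):

```python
import numpy as np

N = 50000
zhat = np.array([1.2, 6.0, -0.5])      # observed single-SNP z-scores
Rhat = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])     # LD matrix (in-sample or reference)

# PVE-adjusted z-scores (Eq 17): z_tilde = D_z^{1/2} zhat,
# with D_z[j,j] = N / (N + zhat_j^2).
Dz = N / (N + zhat ** 2)
z_tilde = np.sqrt(Dz) * zhat

# Sufficient statistics under standardized X, y (Eqs 14-16):
XtX = N * Rhat
Xty = np.sqrt(N) * z_tilde
yty = float(N)
print(z_tilde)   # close to zhat here because the effects are small
```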
Connections with previous work. The approach we take here is most closely connected with the approach used in FINEMAP (versions 1.2 and later) [21]. In essence, FINEMAP 1.2 uses the same likelihoods (18, 19) as we use here, but the derivations in [21] do not clearly distinguish the case where the in-sample LD matrix is available from the case where it is not. In addition, the derivations in [21] focus on Bayes Factors computed with particular priors, rather than focussing on the likelihood. Our derivations emphasize that, when the in-sample LD matrix is available, results from "summary data" should be identical to those that would have been obtained from individual-level data. Our focus on likelihoods draws attention to the generality of this strategy; it is not specific to a particular prior, nor does it require the use of Bayesian methods.

Several other previous fine-mapping methods (e.g., [7,8,12,16]) are based on the following model:
$$\hat{z} \sim \mathcal{N}_J(\hat{R} z, \hat{R}), \qquad (20)$$
where $z = (z_1, \ldots, z_J)^\top$ is an unobserved vector of scaled effects, sometimes called the non-centrality parameters (NCPs). (Earlier versions of SuSiE-RSS were also based on this model [34].) To connect our method with this approach, note that, when $\hat{R}$ is invertible, the likelihood (19) is equivalent to the likelihood for b in the following model:
$$\tilde{z} \sim \mathcal{N}_J(\sqrt{N}\, \hat{R} b, \hat{R}). \qquad (22)$$
(See S1 Text for additional notes.) This model was also used in Zhu and Stephens [23], where the derivation was based on the PVE-adjusted standard errors, which gives the same PVE-adjusted z-scores. Model (22) is essentially the same as (20) but with the observed z-scores, $\hat{z}$, replaced with the PVE-adjusted z-scores, $\tilde{z}$. In other words, when $\hat{R}$ is invertible, these previous approaches are the same as our approach except that they use the z-scores, $\hat{z}$, instead of the PVE-adjusted z-scores, $\tilde{z}$. Thus, these previous approaches implicitly make the approximation $X^\top y \approx \sqrt{N} \hat{z}$, whereas our approach uses the identity $X^\top y = \sqrt{N} \tilde{z}$ (Eq 15). If all effect sizes are small (i.e., PVE ≈ 0 for all SNPs), then $\tilde{z} \approx \hat{z}$, and the approximation will be close to exact; on the other hand, if the PVE is not close to zero for one or more SNPs, then the use of the PVE-adjusted z-scores is preferred [21]. Note that the PVE-adjusted z-scores require knowledge of N; in rare cases where N is unknown, replacing $\tilde{z}$ with $\hat{z}$ may be an acceptable approximation.

Table 1. Summary of SuSiE and SuSiE-RSS, the different data they accept, and the corresponding likelihoods. In the "likelihood" column, $\tilde{z} := D_z^{1/2} \hat{z}$ is the vector of adjusted z-scores; see (17). In this summary, we assume X, y are standardized, which is common practice in genetic association studies. Note that when SuSiE-RSS is applied to sufficient statistics and σ² is estimated (second row), the likelihood is identical to the likelihood for SuSiE applied to the individual-level data (first row). See https://stephenslab.github.io/susieR/articles/susie_rss.html for an illustration of how these methods are invoked in the R package susieR.
Approaches to dealing with a non-invertible LD matrix. One complication that can arise in working directly with models (20) or (22) is that $\hat{R}$ is often not invertible. For example, if $\hat{R}$ is the sample correlation matrix from a reference panel, $\hat{R}$ will not be invertible (i.e., singular) whenever the number of individuals in the panel is less than J, or whenever any two SNPs are in complete LD in the panel. In such cases, these models do not have a density (with respect to the Lebesgue measure). Methods using (20) have therefore required workarounds to deal with this issue. One approach is to modify ("regularize") $\hat{R}$ to be invertible by adding a small, positive constant to the diagonal [7]. In another approach, the data are transformed into a lower-dimensional space [35,36], which is equivalent to replacing $\hat{R}^{-1}$ with its pseudoinverse (see S1 Text). Our approach is to use the likelihood (19), which circumvents these issues because the likelihood is defined whether or not $\hat{R}$ is invertible. (The likelihood is defined even if $\hat{R}$ is not positive semi-definite, but its use in that case may be problematic, as the likelihood may be unbounded; see [37].) This approach has several advantages over the data transformation approach: it is simpler; it does not involve inversion or factorization of a (possibly very large) J × J matrix; and it preserves the property that results under the SER model do not depend on LD (see Results and S1 Text). Also note that this approach can be combined with modifications to $\hat{R}$, such as adding a small constant to the diagonal. The benefits of regularizing $\hat{R}$ are investigated in the experiments below.
New refinement procedure for more accurate CSs

As noted in [17], the IBSS algorithm can sometimes converge to a poor solution (a local optimum of the ELBO). Although this is rare, it can produce misleading results when it does occur; in particular, it can produce false positive CSs (i.e., CSs containing only null SNPs that have zero effect). To address this issue, we developed a simple refinement procedure for escaping local optima. The procedure is heuristic, and is not guaranteed to eliminate all convergence issues, but in practice it often helps in those rare cases where the original IBSS had problems. The refinement procedure applies equally to both individual-level data and summary data.
In brief, the refinement procedure involves two steps: first, fit a SuSiE model by running the IBSS algorithm to convergence; next, for each CS identified from the fitted SuSiE model, rerun IBSS to convergence after first removing all SNPs in the CS (which forces the algorithm to seek alternative explanations for observed associations), then try to improve this fit by running IBSS to convergence again, with all SNPs. If these refinement steps improve the objective function, the new solution is accepted; otherwise, the original solution is kept. This process is repeated until the refinement steps no longer make any improvements to the objective. By construction, this refinement procedure always produces a solution whose objective is at least as good as the original IBSS solution. For full details, see S1 Text.
Because the refinement procedure reruns IBSS for each CS discovered in the initial round of model fitting, the computation increases with the number of CSs identified. In data sets with many CSs, the refinement procedure may be quite time consuming.
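In pseudocode form, the refinement loop might look like the following sketch (the `fit_susie` and `credible_sets` helpers, and the `init` and `elbo` attributes, are hypothetical placeholders, not functions from the susieR package; see S1 Text for the authors' full procedure):

```python
def refine(X, y, fit_susie, credible_sets):
    """Heuristic refinement: try to escape local optima of the ELBO by
    refitting without each credible set's SNPs, then refitting with all SNPs."""
    fit = fit_susie(X, y)                        # initial IBSS fit
    improved = True
    while improved:
        improved = False
        for cs in credible_sets(fit):
            X_masked = X.copy()
            X_masked[:, list(cs)] = 0.0          # zero out (remove) the CS's SNPs
            fit_alt = fit_susie(X_masked, y)     # forces alternative explanations
            fit_alt = fit_susie(X, y, init=fit_alt)  # refit with all SNPs
            if fit_alt.elbo > fit.elbo:          # accept only if objective improves
                fit, improved = fit_alt, True
    return fit
```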
Other improvements to fine-mapping with summary data
Here we introduce additional methods to improve accuracy of fine-mapping with summary data. These methods are not specific to SuSiE and can be used with other fine-mapping methods.
Regularization to improve consistency of the estimated LD matrix. Accurate fine-mapping requires $\hat{R}$ to be an accurate estimate of R. When $\hat{R}$ is computed from a reference panel, the reference panel should not be too small [31], and should be of similar ancestry to the study sample. Even when a suitable panel is used, there will inevitably be differences between $\hat{R}$ and R. A common way to improve estimation of covariance matrices is to use regularization [38], replacing $\hat{R}$ with
$$\hat{R}_\lambda := (1 - \lambda)\hat{R}_0 + \lambda I_J,$$
where $\hat{R}_0$ is the sample correlation matrix computed from the reference panel, and λ ∈ [0, 1] controls the amount of regularization. This strategy has previously been used in fine-mapping from summary data (e.g., [8,37,39]), but in previous work λ was usually fixed at some arbitrarily small value, or chosen via cross-validation. Here, we estimate λ by maximizing the likelihood under the null (z = 0),
$$\hat{\lambda} := \operatorname*{arg\,max}_{\lambda \in [0,1]} \ \log \mathcal{N}_J(\tilde{z};\, 0,\, \hat{R}_\lambda).$$
The estimated $\hat{\lambda}$ reflects the consistency between the (PVE-adjusted) z-scores and the LD matrix $\hat{R}_0$; if the two are consistent with one another, $\hat{\lambda}$ will be close to zero.

Detecting and removing large inconsistencies in summary data. Regularizing $\hat{R}$ can help to address subtle inconsistencies between $\hat{R}$ and R. However, regularization does not deal well with large inconsistencies in the summary data, which, in our experience, occur often. One common source of such inconsistencies is an "allele flip", in which the alleles of a SNP are encoded one way in the study sample (used to compute $\hat{z}$) and in a different way in the reference panel (used to compute $\hat{R}$). Large inconsistencies can also arise from using z-scores that were obtained using different samples at different SNPs (which should be avoided by performing genotype imputation [23]). Anecdotally, we have found that large inconsistencies like these often cause SuSiE to converge very slowly and produce misleading results, such as an unexpectedly large number of CSs, or two CSs containing SNPs that are in strong LD with each other. We have therefore developed diagnostics to help users detect such anomalous data. (We note that similar ideas were proposed in the recent paper [40].) Under model (22), the conditional distribution of $\tilde{z}_j$ given the other PVE-adjusted z-scores is
$$\tilde{z}_j \mid \tilde{z}_{-j} \sim \mathcal{N}\!\left(\frac{\sqrt{N}\, b_j - \Omega_{j,-j}\tilde{z}_{-j}}{\Omega_{jj}},\ \frac{1}{\Omega_{jj}}\right), \qquad (25)$$
where $\Omega := \hat{R}^{-1}$, $\tilde{z}_{-j}$ denotes the vector $\tilde{z}$ excluding $\tilde{z}_j$, and $\Omega_{j,-j}$ denotes the j-th row of Ω excluding $\Omega_{jj}$. This conditional distribution depends on the unknown $b_j$. However, provided that the effect of SNP j is small (i.e., $b_j \approx 0$), or that SNP j is in strong LD with other SNPs, which implies $1/\Omega_{jj} \approx 0$, we can approximate (25) by
$$\tilde{z}_j \mid \tilde{z}_{-j} \sim \mathcal{N}\!\left(-\frac{\Omega_{j,-j}\tilde{z}_{-j}}{\Omega_{jj}},\ \frac{1}{\Omega_{jj}}\right). \qquad (26)$$
This distribution has been previously used to impute z-scores [41], and it is also used in DENTIST [40]. An initial quality control check can be performed by plotting the observed $\tilde{z}_j$ against its conditional expectation in (26), with large deviations potentially indicating anomalous z-scores. Since computing these conditional expectations involves the inverse of $\hat{R}$, this matrix must be invertible. When $\hat{R}$ is not invertible, we replace $\hat{R}$ with the regularized (and invertible) matrix $\hat{R}_\lambda$ following the steps described above. Note that while we have written (25) and (26) in terms of the PVE-adjusted z-scores, $\tilde{z}$, it is valid to use the same expressions for the unadjusted z-scores, $\hat{z}$, so long as the effect sizes are small (DENTIST uses z-scores instead of the PVE-adjusted z-scores).
A more quantitative measure of the discordance of $\tilde{z}_j$ with its expectation under the model can be obtained by computing standardized differences between the observed and expected values,
$$t_j := \frac{\tilde{z}_j - \mathbb{E}[\tilde{z}_j \mid \tilde{z}_{-j}]}{\sqrt{1/\Omega_{jj}}}. \qquad (27)$$
SNPs j with the largest $t_j$ (in magnitude) are most likely to violate the model assumptions, and are therefore the top candidates for follow-up. When any such candidates are detected, the user should check the data pre-processing steps and fix any errors that cause inconsistencies in the summary data. If there is no way to fix the errors, removing the anomalous SNPs is a possible workaround. Sometimes removing a single SNP is enough to resolve the discrepancies; for example, a single allele flip can result in inconsistent z-scores among many SNPs in LD with the allele-flip SNP. We have also developed a likelihood-ratio statistic based on (26) specifically for identifying allele flips; see S1 Text for a derivation of this likelihood ratio and an empirical assessment of its ability to identify allele-flip SNPs in simulations. After one or more SNPs are removed, one should consider re-running these diagnostics on the filtered summary data to search for additional inconsistencies that may have been missed in the first round. Alternatively, DENTIST provides a more automated approach to filtering out inconsistent SNPs [40].
We caution that computing these diagnostics requires inverting or factorizing a J × J matrix, and may therefore involve a large computational expense (potentially a greater expense than the fine-mapping itself) when J, the number of SNPs, is large.
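A compact sketch of both the λ estimation and the $t_j$ diagnostic (NumPy assumed; the grid resolution and inputs are illustrative, and this is not the susieR implementation):

```python
import numpy as np

def estimate_lambda(z_tilde, R0, grid=np.linspace(0.0, 1.0, 101)):
    """Pick lambda maximizing the null log-likelihood N_J(z_tilde; 0, R_lambda),
    where R_lambda = (1 - lambda) * R0 + lambda * I."""
    J = len(z_tilde)
    best_lam, best_ll = 0.0, -np.inf
    for lam in grid[1:]:                  # lam > 0 keeps R_lambda invertible
        R_lam = (1 - lam) * R0 + lam * np.eye(J)
        sign, logdet = np.linalg.slogdet(R_lam)
        ll = -0.5 * (logdet + z_tilde @ np.linalg.solve(R_lam, z_tilde))
        if ll > best_ll:
            best_lam, best_ll = lam, ll
    return best_lam

def z_diagnostics(z_tilde, R_lam):
    """Standardized differences t_j between observed z-scores and their
    conditional expectations under the model (Eqs 26-27)."""
    omega = np.linalg.inv(R_lam)
    cond_var = 1.0 / np.diag(omega)
    # E[z_j | z_-j] = -Omega[j,-j] @ z_-j / Omega[j,j], computed all at once:
    expected = z_tilde - cond_var * (omega @ z_tilde)
    return (z_tilde - expected) / np.sqrt(cond_var)
```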
Fine-mapping with inconsistent summary data and a non-invertible LD matrix: An illustration
A technical issue that arises when developing fine-mapping methods for summary data is that the LD matrix is often not invertible. Several approaches to dealing with this have been suggested, including modifying the LD matrix to be invertible, transforming the data into a lower-dimensional space, or replacing the inverse with the "pseudoinverse" (see "Approaches to dealing with a non-invertible LD matrix" above). In SuSiE-RSS, we avoid this issue by directly approximating the likelihood, so SuSiE-RSS does not require the LD matrix to be invertible. We summarize the theoretical relationships between these approaches in S1 Text. Here we illustrate the practical advantage of the SuSiE-RSS approach in a toy example.
Consider a very simple situation with two SNPs, in strong LD with each other, with observed z-scores $\hat{z} = (6, 7)$. Both SNPs are significant, but the second SNP is more significant. Under the assumption that exactly one of these SNPs has an effect (which allows for exact posterior computations), the second SNP is the better candidate, and should have a higher PIP. Further, we expect the PIPs to be unaffected by LD between the SNPs (see S1 Text). However, the transformation and pseudoinverse approaches, which are used by msCAVIAR [42] and in previous fine-mapping analyses [35,36], and are also used in DENTIST to detect inconsistencies in summary data [40], do not guarantee that either of these properties is satisfied. For example, suppose the two SNPs are in complete LD in the reference panel, so $\hat{R}$ is a 2 × 2 (non-invertible) matrix with all entries equal to 1. Here, $\hat{R}$ is inconsistent with the observed $\hat{z}$ because complete LD between SNPs implies their z-scores should be identical. (This could happen if the LD in the reference panel used to compute $\hat{R}$ is slightly different from the LD in the association study.) The transformation approach effectively adjusts the observed data $\hat{z}$ to be consistent with the LD matrix before drawing inferences; here it would adjust $\hat{z}$ to (6.5, 6.5), removing the observed difference between the SNPs and forcing them to be equally significant, which seems undesirable. The pseudoinverse approach turns out to be equivalent to the transformation approach (see S1 Text), and so behaves the same way. In contrast, our approach avoids this behaviour, and correctly maintains the second SNP as the better candidate; applying SuSiE-RSS to this toy example yields PIPs of 0.0017 for the first SNP and 0.9983 for the second SNP, and a single CS containing the second SNP only. To reproduce this result, see the examples accompanying the susie_rss function in the susieR R package.
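The (6.5, 6.5) adjustment can be checked directly: projecting $\hat{z}$ onto the column space of $\hat{R}$ via the pseudoinverse averages the two z-scores. A minimal NumPy check (this reproduces the transformation approach's behaviour, not SuSiE-RSS):

```python
import numpy as np

zhat = np.array([6.0, 7.0])
Rhat = np.ones((2, 2))                   # complete LD: singular 2x2 matrix

# Adjustment of zhat implied by the pseudoinverse/transformation approach:
z_adjusted = Rhat @ np.linalg.pinv(Rhat) @ zhat
print(z_adjusted)                        # [6.5, 6.5]
```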
Effect of allele flips on accuracy of fine-mapping: An illustration
When fine-mapping is performed using z-scores from a study sample and an LD matrix from a different reference sample, it is crucial that the same allele encodings are used. In our experience, "allele flips," in which different allele encodings are used in the two samples, are a common source of fine-mapping problems. Here we use a simple simulation to illustrate this problem, and the steps we have implemented to diagnose and correct it.
We simulated a fine-mapping data set with 1,002 SNPs, in which one of the 1,002 SNPs was causal, and we deliberately used different allele encodings in the study sample and reference panel for a non-causal SNP (see S1 Text for more details). The causal SNP is among the SNPs with the highest z-scores (Fig 1, Panel A), and SuSiE-RSS correctly includes this causal SNP in a CS (Panel B). However, SuSiE-RSS also wrongly includes the allele-flip SNP in a second CS (Panel B). This happens because the LD between the allele-flip SNP and other SNPs is incorrectly estimated. Fig 1, Panel C shows a diagnostic plot comparing each z-score against its expected value under model (22). The allele-flip SNP stands out as a likely outlier (yellow circle), and the likelihood ratio calculations identify this SNP as a likely allele flip: LR = 8.2 × 10³ for the allele-flip SNP, whereas all the other 262 SNPs with z-scores greater than 2 in magnitude have likelihood ratios less than 1. (See S1 Text for a more systematic assessment of the use of these likelihood ratios for identifying allele-flip SNPs.) After correcting the allele encoding to be the same in the study and reference samples, SuSiE-RSS infers a single CS containing the causal SNP, and the allele-flip SNP is no longer included in a CS.
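To see why a flip corrupts the LD estimate: recoding a SNP's alleles negates its (centered) genotype vector, which flips the sign of that SNP's correlation with every other SNP. A small NumPy illustration (toy matrix, not the simulation above):

```python
import numpy as np

R = np.array([[1.0, 0.8, 0.6],
              [0.8, 1.0, 0.7],
              [0.6, 0.7, 1.0]])         # true LD among three SNPs

# An allele flip at SNP 1 (0-based) in the reference panel negates
# that SNP's row and column of the estimated LD matrix:
flip = np.diag([1.0, -1.0, 1.0])
R_flipped = flip @ R @ flip
print(R_flipped[1])                     # [-0.8, 1.0, -0.7]
```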
Simulations using UK Biobank genotypes
To systematically compare our new methods with existing methods for fine-mapping, we simulated fine-mapping data sets using the UK Biobank imputed genotypes [43]. The UK Biobank imputed genotypes are well suited to illustrate fine-mapping with summary data due to the large sample size and the high density of available genetic variants after imputation. We randomly selected 200 regions on autosomal chromosomes for fine-mapping, such that each region contained roughly 1,000 SNPs (390 kb on average). Due to the high density of SNPs, these data sets often contain strong correlations among SNPs; on average, a data set contained 30 SNPs with correlation exceeding 0.9 with at least one other SNP, and 14 SNPs with correlations exceeding 0.99 with at least one other SNP.
For each of the 200 regions, we simulated a quantitative trait under the multiple regression model (1) with X comprising genotypes of 50,000 randomly selected UK Biobank samples, and with 1, 2 or 3 causal variants explaining a total of 0.5% of variation in the trait (total PVE of 0.5%). In total, we simulated 200 × 3 = 600 data sets. We computed summary data from the real genotypes and synthetic phenotypes. To compare how the choice of LD matrix affects fine-mapping, we used three different LD matrices: the in-sample LD matrix computed from the 50,000 individuals (R), and two out-of-sample LD matrices computed from randomly sampled reference panels of 500 or 1,000 individuals, denoted $\hat{R}_{500}$ and $\hat{R}_{1000}$, respectively. The samples randomly chosen for each reference panel had no overlap with the study sample but were drawn from the same population, which mimicked a situation where the reference sample is well matched to the study sample.
Refining SuSiE model fits improves fine-mapping performance. Before comparing the methods, we first demonstrate the benefits of our new refinement procedure for improving SuSiE model fits. Fig 2 shows an example drawn from our simulations where the regular IBSS algorithm converges to a poor solution and our refinement procedure improves the solution. The example has two causal SNPs in moderate LD with one another, which have opposite effects that partially cancel out each other's marginal associations (Panel A). This example is challenging because the SNP with the strongest marginal association (SMA) is not in high LD with either causal SNP; it is in moderate LD with the first causal SNP, and in low LD with the second causal SNP. Although [17] showed that the IBSS algorithm can sometimes deal well with such situations, that does not happen in this case; the IBSS algorithm yields three CSs, two of which are false positives that do not contain a causal SNP (Panel B). Applying our refinement procedure solves the problem; it yields a solution with a higher objective function (ELBO), and with two CSs, each containing one of the causal SNPs (Panel C).
Although this sort of problem was not common in our simulations, it occurred often enough that the refinement procedure yielded a noticeable improvement in performance across many simulations (Fig 2, Panel D). In this plot, power and false discovery rate (FDR) are calculated as
$$\mathrm{FDR} := \frac{\mathrm{FP}}{\mathrm{TP} + \mathrm{FP}}, \qquad \mathrm{power} := \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}},$$
where FP, TP, FN and TN denote, respectively, the numbers of false positives, true positives, false negatives and true negatives. In our remaining experiments, we therefore always ran SuSiE-RSS with refinement.
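These curves can be computed directly from PIPs and causal-status labels; a minimal sketch (NumPy assumed; the PIP vector and labels are placeholders):

```python
import numpy as np

def power_fdr_curve(pips, is_causal, thresholds=np.linspace(0, 1, 201)):
    """Power and FDR for reporting SNPs with PIP above each threshold."""
    pips = np.asarray(pips)
    is_causal = np.asarray(is_causal, dtype=bool)
    power, fdr = [], []
    for t in thresholds:
        called = pips >= t
        tp = np.sum(called & is_causal)
        fp = np.sum(called & ~is_causal)
        fn = np.sum(~called & is_causal)
        power.append(tp / max(tp + fn, 1))   # guard against empty denominators
        fdr.append(fp / max(tp + fp, 1))
    return np.array(power), np.array(fdr)
```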
Impact of LD accuracy on fine-mapping. We performed simulations to compare SuSiE-RSS with several other fine-mapping methods for summary data: FINEMAP [12,21], DAP-G [14,16] and CAVIAR [7]. These methods differ in the underlying modeling assumptions, the priors used, and in the approach taken to compute posterior quantities. For these simulations, SuSiE-RSS, FINEMAP and DAP-G were all very fast, usually taking no more than a few seconds per data set (Table 2); by contrast, CAVIAR was much slower because it exhaustively evaluated all causal SNP configurations. Other Bayesian fine-mapping methods for summary data include PAINTOR [8], JAM [15] and CAVIARBF [11]. FINEMAP has been shown [12] to be faster and at least as accurate as PAINTOR and CAVIARBF. JAM is comparable in accuracy to FINEMAP [15] and is most beneficial when jointly fine-mapping multiple genomic regions, which we did not consider here.
We compared methods based on both their posterior inclusion probabilities (PIPs) [44] and credible sets (CSs) [2,17]. These quantities have different advantages. PIPs have the advantage that they are returned by most methods, and can be used to assess familiar quantities such as power and false discovery rates. CSs have the advantage that, when the data support multiple causal signals, this is explicitly reflected in the number of CSs reported. Uncertainty in which SNP is causal is reflected in the size of a CS.
First, we assessed the performance of summary-data methods using the in-sample LD matrix. With an in-sample LD matrix, SuSiE-RSS applied to sufficient statistics (with estimated σ²) will produce the same results as SuSiE on the individual-level data, so we did not include SuSiE in this comparison. The results show that SuSiE-RSS, FINEMAP and DAP-G have very similar performance, as measured by both PIPs (Fig 3) and CSs ("in-sample LD" columns in Fig 4). Further, all four methods produced CSs whose coverage was close to the target level of 95% (Panel A in Fig 4). The main difference between the methods is that DAP-G produced some "high confidence" (high PIP) false positives, which hindered its ability to produce very low FDR values. Both SuSiE-RSS and FINEMAP require the user to specify an upper bound, L, on the number of causal SNPs. Setting this upper bound to the true value ("L = true" in the figures) only slightly improved their performance, demonstrating that, with an in-sample LD matrix, these methods are robust to overstating this bound. We also compared the sufficient-data (estimated σ²) and summary-data (fixed σ²) variants of SuSiE-RSS (see Table 1). The performance of the two variants was very similar, likely owing to the fact that the PVE was close to zero in all simulations, and so σ² = 1 was not far from the truth. CAVIAR performed notably less well than the other methods for the PIP computations. (Note the CSs computed by CAVIAR are defined differently from CSs computed by other methods, so we did not include CAVIAR in Fig 4.)

Next, we compared the summary data methods using different out-of-sample LD matrices, again using SuSiE-RSS with in-sample LD (and estimated σ²) as a benchmark. For each method, we computed out-of-sample LD matrices using two different panel sizes (n = 500, 1000) and three different values for the regularization parameter, λ (no regularization, λ = 0; weak regularization, λ = 0.001; and λ estimated from the data). As might be expected, the performance of SuSiE-RSS, FINEMAP and DAP-G all degraded with out-of-sample LD compared with in-sample LD; see Figs 4 and 5. Notably, the CSs no longer met the 95% target coverage (Panel A in Fig 4). In all cases, performance was notably worse with the smaller reference panel, which highlights the importance of using a sufficiently large reference panel [31]. Regarding regularization, SuSiE-RSS and DAP-G performed similarly at all levels of regularization, and so do not appear to require regularization; in contrast, FINEMAP required regularization with an estimated λ to compete with SuSiE-RSS and DAP-G. Since estimating λ is somewhat computationally burdensome, SuSiE-RSS and DAP-G have an advantage in this situation. All three methods benefited more from increasing the size of the reference panel than from regularization, again emphasizing the importance of sufficiently large reference panels. Interestingly, CAVIAR's performance was relatively insensitive to the choice of LD matrix; however, the other methods clearly outperformed CAVIAR with the larger (n = 1,000) reference panel.

The fine-mapping results with an out-of-sample LD matrix also expose another interesting result: if FINEMAP and SuSiE-RSS are provided with the true number of causal SNPs (L = true), their results improve (Fig 5, Panels C vs. D, Panels E vs. F). This improvement is particularly noticeable for the small reference panel. We interpret this result as indicating a tendency of these methods to react to misspecification of the LD matrix by sometimes including additional (false positive) signals. Specifying the true L reduces this tendency because it limits the number of signals that can be included. This suggests that restricting the number of causal SNPs, L, may make fine-mapping results more robust to misspecification of the LD matrix, even for methods that are robust to overstating L when the LD matrix is accurate. Priors or penalties that favor smaller L may also help. Indeed, when none of the methods are provided with information about the true number of causal SNPs, DAP-G slightly outperforms FINEMAP and SuSiE-RSS, possibly reflecting a tendency for DAP-G to favour models with smaller numbers of causal SNPs (due either to differences in prior or differences in approximate posterior inference). Further study of this issue may lead to methods that are more robust to misspecified LD.

Fig 2. Refining SuSiE model fits improves fine-mapping accuracy. Panels A, B and C show a single example, drawn from our simulations, that illustrates how refining a SuSiE-RSS model fit improves fine-mapping accuracy. In this example, there are 1,001 candidate SNPs, and two SNPs (red triangles "SNP 1" and "SNP 2") explain variation in the simulated phenotype. The strongest marginal association (yellow circle, "SMA") is not a causal SNP. Without refinement, the IBSS-ss algorithm (applied to sufficient statistics, with estimated σ²) returns a SuSiE-RSS fit identifying three 95% CSs (blue, green and orange circles); two of the CSs (blue, orange) are false positives containing no true effect SNP, one of these CSs contains the SMA (orange), and no CS includes SNP 1. After running the refinement procedure, the fit is much improved, as measured by the "evidence lower bound" (ELBO); it increases the ELBO by 19.06 (−70837.09 vs. −70818.03). The new SuSiE-RSS fit (Panel C) identifies two 95% CSs (blue and green circles), each containing a true causal SNP, and neither contains the SMA. Panel D summarizes the improvement in fine-mapping across all 600 simulations; it shows power and false discovery rate (FDR) for SuSiE-RSS with and without the refinement procedure as the PIP threshold for reporting causal SNPs is varied from 0 to 1. (This plot is the same as a precision-recall curve after flipping the x-axis, because precision = TP/(TP + FP) = 1 − FDR and recall = power.) Circles are drawn at a PIP threshold of 0.95. https://doi.org/10.1371/journal.pgen.1010299.g002

Table 2. Runtimes on simulated data sets with in-sample LD matrix. Average runtimes are taken over 600 simulations. All runtimes are in seconds and include the time taken to read the data and write the results to files.

Fig 3. FDR and power are calculated from 600 simulations as the PIP threshold is varied from 0 to 1. Open circles are drawn at a PIP threshold of 0.95. Two variants of FINEMAP and three variants of SuSiE-RSS are also compared: when L, the maximum number of estimated causal SNPs, is the true number of causal SNPs, or larger than the true number; and, for SuSiE-RSS only, when the residual variance σ² is estimated ("sufficient data") or fixed to 1 ("summary data").

Fig 4. Two variants of SuSiE-RSS are compared (see Table 1): when the residual variance σ² was estimated ("sufficient data"), or fixed to 1 ("summary data"). We evaluate the estimated CSs using the following metrics: (A) coverage, the proportion of CSs that contain a true causal SNP; (B) power, the proportion of true causal SNPs included in a CS; (C) median number of SNPs in each CS; and (D) median purity, where "purity" is defined as the smallest absolute correlation among all pairs of SNPs within a CS. These statistics are taken as the mean (A, B) or median (C, D) over all simulations; error bars in A and B show two times the standard error. The target coverage of 95% is shown as a dotted horizontal line in Panel A. Following [17], we discarded all CSs with purity less than 0.5. https://doi.org/10.1371/journal.pgen.1010299.g004

Fig 5. For out-of-sample LD, different levels of the regularization parameter λ are also compared: λ = 0; λ = 0.001; and estimated λ. Panels C-F show results for two variants of FINEMAP and SuSiE-RSS: in Panels C and E, the maximum number of causal SNPs, L, is set to the true value ("L = true"); in Panels D and F, L is set larger than the true value (L = 5 for FINEMAP; L = 10 for SuSiE-RSS). In each panel, the dotted black line shows the results from SuSiE-RSS with in-sample LD and estimated σ², which provides a baseline for comparison (note that all the other SuSiE-RSS results were generated by fixing σ² to 1, which is the recommended setting for out-of-sample LD; see Table 1). Some power vs. FDR curves may not be visible in the plots because they overlap almost completely with another curve, such as some of the SuSiE-RSS results at different LD regularization levels. https://doi.org/10.1371/journal.pgen.1010299.g005
Fine-mapping causal SNPs with larger effects. Above, we evaluated the performance of fine-mapping methods in simulations in which the simulated effects of the causal SNPs were small (total PVE of 0.5%). This was intended to mimic the typical situation encountered in genome-wide association studies [45,46]. Here we scrutinize the performance of fine-mapping methods when the effects of the causal SNPs are much larger, which might be more representative of the situation in expression quantitative trait loci (eQTL) studies [47–49]. FINEMAP and SuSiE (and therefore SuSiE-RSS with sufficient statistics) are expected to perform well in this setting [17,21]; but, as mentioned above, some summary-data methods make the (implicit) assumption that the effects are small (see "Connections with previous work"), which may hurt their performance in settings where this assumption is violated.
To assess the ability of the fine-mapping methods to identify causal SNPs with larger effects, we performed an additional set of simulations, again using the UK Biobank genotypes, except that here we simulated the 1–3 causal variants so that they explained, in total, a much larger proportion of the variance in the trait (PVE of 10% and 30%). To evaluate the methods at roughly the same level of difficulty (i.e., power), we simulated these fine-mapping data sets with much smaller sample sizes, N = 2,500 and N = 800, respectively (the out-of-sample LD matrix was calculated using 1,000 samples).
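As a concrete illustration of this simulation design, the residual variance can be chosen so that the causal effects explain exactly the target PVE. The sketch below makes this step explicit; the genotype matrix and seed are hypothetical placeholders, not the UK Biobank data.

```python
# Sketch: simulate a trait so that n_causal SNPs jointly explain `pve` of
# the trait variance, matching the design described above.
import numpy as np

def simulate_trait(X, n_causal, pve, rng):
    n, p = X.shape
    idx = rng.choice(p, size=n_causal, replace=False)  # causal SNPs
    b = rng.normal(size=n_causal)                      # causal effect sizes
    g = X[:, idx] @ b                                  # genetic component
    # Residual variance chosen so var(g) / var(y) equals the target PVE.
    sigma2 = np.var(g) * (1 - pve) / pve
    y = g + rng.normal(scale=np.sqrt(sigma2), size=n)
    return y, idx

rng = np.random.default_rng(1)
X = rng.binomial(2, 0.3, size=(800, 1000)).astype(float)  # toy genotypes
y, causal = simulate_trait(X, n_causal=3, pve=0.30, rng=rng)
```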
The results of these high-PVE simulations are summarized in Fig 6. As expected, SuSiE-RSS with an in-sample LD matrix performed consistently better than the other methods, which use an out-of-sample LD matrix, and it therefore provides a baseline against which the other methods can be compared.
Overall, increasing PVE tended to increase variation in performance among the methods. In all PVE settings, SuSiE-RSS with out-of-sample LD was among the top performers, and it most clearly outperformed the other methods in the highest PVE setting (30% PVE), where FINEMAP, DAP-G, and CAVIAR all showed a notable decrease in performance. For DAP-G and CAVIAR, this decrease in performance was expected due to their implicit modeling assumption that the effect sizes are small.

[Fig 6 caption] Each curve shows power vs. FDR for identifying causal SNPs with different effect sizes (total PVE of 0.5%, 10% and 30%). Each panel summarizes results from 600 simulations; FDR and power are calculated from the 600 simulations as the PIP threshold is varied from 0 to 1. Open circles depict power and FDR at a PIP threshold of 0.95. In addition to comparing different methods (SuSiE-RSS, FINEMAP, DAP-G, CAVIAR), two variants of FINEMAP and SuSiE-RSS are also compared: when L, the maximum number of estimated causal SNPs, is set to the true number of causal SNPs; and when L is larger than the true number. SuSiE-RSS with estimated residual variance σ² and in-sample LD (dotted black line) is shown as a "best case" method against which other methods can be compared. All other methods are given an out-of-sample LD matrix computed from a reference panel with 1,000 samples, and with no regularization (λ = 0). The simulation results for 0.5% PVE (top-left panel) are the same as the results shown in previous plots (Figs 3 and 5), but presented differently here to facilitate comparison with the results of the higher-PVE simulations. https://doi.org/10.1371/journal.pgen.1010299.g006
For FINEMAP, this drop in performance was unexpected, since FINEMAP also uses the PVE-adjusted z-scores to account for larger effects. Although this situation is unusual in fine-mapping studies (it is unusual for a handful of SNPs to explain such a large proportion of the variance in a trait), we examined these FINEMAP results more closely to understand why this was happening. (We also prepared a detailed working example illustrating this result; see https://stephenslab.github.io/finemap/large_effect.html.) We confirmed that this performance drop only occurred with an out-of-sample LD matrix; with an in-sample LD matrix, FINEMAP's performance was very similar to that of SuSiE-RSS with an in-sample LD matrix (results not shown). A partial explanation for the much worse performance with out-of-sample LD is that FINEMAP often overestimated the number of causal SNPs: in 17% of the simulations, FINEMAP assigned the highest probability to configurations with more causal SNPs than the true number. By contrast, SuSiE-RSS overestimated the number of causal SNPs (i.e., the number of CSs) in only 1% of the simulations. Fortunately, in settings where causal SNPs might have larger effects, FINEMAP's performance can be greatly improved by telling it the true number of causal SNPs ("L = true"), which is consistent with our earlier finding that restricting L in SuSiE-RSS and FINEMAP can improve fine-mapping with an out-of-sample LD matrix.
Discussion
We have presented extensions of the SuSiE fine-mapping method to accommodate summary data, with a focus on marginal z-scores and an out-of-sample LD matrix computed from a reference panel. Our approach provides a generic template for how to extend any full-data regression method to analyze summary data: develop a full-data algorithm that works with sufficient statistics, then apply this algorithm directly to summary data. Although it is simple, as far as we are aware this generic template is novel, and it avoids the need for any special treatment of non-invertible LD matrices.
In simulations, we found that our new method, SuSiE-RSS, is competitive in both accuracy and computational cost with the best available methods for fine-mapping from summary data, DAP-G and FINEMAP. Whatever method is used, our results underscore the importance of accurately computing out-of-sample LD from an appropriate and sufficiently large reference panel (see also [31]). Indeed, for the best-performing methods, performance depended more on the choice of LD matrix than on the choice of method. We also emphasize the importance of computing the z-scores at different SNPs from the exact same samples, using genotype imputation if necessary [50]. It is also important to ensure that alleles are consistently encoded in the study and reference samples.
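The allele-encoding check mentioned above reduces to a small bookkeeping step: align each study SNP with the reference panel and flip the sign of the z-score when the reference/alternative alleles are swapped. A hedged sketch follows; the data-frame column names are hypothetical, and a real pipeline must also handle strand flips and ambiguous A/T and C/G SNPs.

```python
# Sketch: align study z-scores to a reference panel's allele encoding.
import pandas as pd

def align_z(study: pd.DataFrame, panel: pd.DataFrame) -> pd.DataFrame:
    """study has columns snp, ref, alt, z; panel has snp, ref, alt."""
    m = study.merge(panel, on="snp", suffixes=("_s", "_p"))
    same = (m.ref_s == m.ref_p) & (m.alt_s == m.alt_p)
    swapped = (m.ref_s == m.alt_p) & (m.alt_s == m.ref_p)
    m.loc[swapped, "z"] *= -1        # swapped encoding: flip the z sign
    return m[same | swapped]         # drop unresolved allele mismatches
```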
Although our derivations and simulations focused on z-scores computed from quantitative traits with a simple linear regression, in practice it is common to apply summary-data fine-mapping methods to z-scores computed in other ways, e.g., using logistic regression on a binary or case-control trait, or using linear mixed models to deal with population stratification and relatedness. The multivariate normal assumption on the z-scores, which underlies all the methods considered here, should also apply in these settings, although as far as we are aware a theoretical derivation of the precise form (20) is lacking for these settings (although see [12,51,52]). Since the model (20) is already only an approximation, one might expect the additional effect of such issues to be small, particularly compared with the effect of allele flips or small reference panels. Nonetheless, since our simulations show that model misspecification can hurt the performance of existing methods, further research to improve the robustness of fine-mapping methods to model misspecification would be welcome.
Supporting information

S1 Fig. Likelihood ratio for detecting allele flips. These plots summarize the likelihood ratios LR_j for SNPs j in simulated fine-mapping data sets, separately for allele-flip SNPs with an effect on the trait (top row, right-hand side) and without an effect on the trait (top row, left-hand side), and for SNPs without a flipped allele that affect the trait (middle row, right-hand side) and that do not affect the trait (middle row, left-hand side). The two histograms in the bottom row show likelihood ratios after restricting to SNPs with z-scores greater than 2 in magnitude. The bar heights in the histograms in the middle and bottom rows are drawn on a logarithmic scale to better visualize the smaller numbers of SNPs with likelihood ratios greater than 1 (i.e., log LR_j > 0). (PDF)

S1 Text. Detailed methods. More description of the methods, including: the single effect regression (SER) model with summary statistics; the IBSS-ss algorithm; computing the sufficient statistics; approaches to dealing with a non-invertible LD matrix; estimation of λ in the regularized LD matrix; the likelihood ratio for detecting allele flips; the SuSiE refinement procedure; detailed calculations for the toy example; and more details on the UK Biobank simulations. (PDF)
"Mathematics"
] |
On the pitfalls of PTV in lung SBRT using type-B dose engine: an analysis of PTV and worst case scenario concepts for treatment plan optimization
Background: The PTV concept is presumed to introduce excessive and inconsistent GTV dose in lung stereotactic body radiotherapy (SBRT). Whether GTV median dose (D50) prescription and robust optimization are viable PTV-free solutions (ICRU report 91) to harmonize the GTV dose was investigated by comparison with PTV-based SBRT plans.

Methods: Thirteen SBRT plans were optimized for 54 Gy / 3 fractions and prescribed (i) to 95% of the PTV (D95) expanded 5 mm from the ITV on the average intensity projection (AIP) CT, i.e., PTV_ITV; (ii) to D95 of the PTV derived from the van Herk (VH) margin recipe on the mid-ventilation (MidV) CT, i.e., PTV_VH; (iii) to ITV D98 by worst-case scenario (WCS) optimization on the AIP, i.e., WCS_ITV; and (iv) to GTV D98 by WCS using all 4DCT images, i.e., WCS_GTV. These plans were subsequently recalculated on all 4DCT images and deformably summed on the MidV-CT. The dose differences between these plans were compared for the GTV and selected normal organs by Friedman tests, while the variability was compared by Levene's tests. The phase-to-phase changes of the GTV dose through the respiratory cycle were assessed as an indirect measure of a possible increase of photon fluence owing to the type-B dose engine. Finally, all plans were renormalized to GTV D50 and all dosimetric analyses were repeated to assess the relative influences of the SBRT planning concept and the prescription method on the variability of the target dose.

Results: With coverage prescriptions (i) to (iv), significantly smaller chest wall volume receiving ≥30 Gy (CW V30) and normal lung volume receiving ≥20 Gy (NL V20) were achieved by WCS_ITV and WCS_GTV compared to PTV_ITV and PTV_VH (p < 0.05). These plans differed significantly in the recalculated and summed GTV D2, D50 and D98 (p < 0.05). The inter-patient variability of all GTV dose parameters was, however, equal between these plans (Levene's tests; p > 0.05). Renormalizing these plans to GTV D50 reduced their differences in GTV D2 and D98 to insignificant levels (p > 0.05), as well as their inter-patient variability of all GTV dose parameters. None of these plans showed significant differences in GTV D2, D50 and D98 between respiratory phases, nor was their inter-phase variability significant.

Conclusion: Inconsistent GTV dose is not unique to the PTV concept but also occurs with PTV-free concepts in lung SBRT. GTV D50 renormalization effectively harmonizes the target dose among patients and among SBRT concepts of geometric uncertainty management.
Introduction
Stereotactic body radiotherapy (SBRT) for non-small cell lung carcinoma (NSCLC) is typically delivered under free-breathing conditions. To limit the negative impact of respiration-induced organ motion and setup errors on its clinical benefits, passive motion management is often pursued, using either the internal target volume (ITV) concept or the mid-ventilation (MidV) concept [1]. Alternatively, passive motion management can also be realized by directly incorporating the tumor motion into a four-dimensional (4D) optimization framework [2].
Regardless of the motion management technique and setup uncertainty, dose optimization and prescription are invariably performed with respect to the planning target volume (PTV) to ensure, for instance, 95% and 99% PTV coverage by 100% and 90% of the prescription dose (i.e., PTV D95 = 100% and D99 = 90%). As suggested by Lebredonchel et al. [3], when type-B and Monte Carlo (MC) dose algorithms that model lateral electronic equilibrium (LED) are used directly to optimize to PTV D95, a high photon fluence has to be deposited in the low-density lung tissue surrounding the gross tumor volume (GTV). As a consequence, the dose in the lung may increase. Worse still, the GTV dose may show increased variability during treatment delivery as the tumor moves in and out of the high-photon-fluence zone over the breathing cycle. As a workaround, Lacornerie et al. [4] proposed to use a type-A algorithm to optimize a homogeneous fluence, for which the dose distribution is ultimately calculated and renormalized to the desired prescription level using the more accurate type-B/MC algorithms. In fact, most of the major treatment planning systems (TPS) adopt type-A dose engines to increase the speed of inverse optimization for intensity-modulated radiotherapy (IMRT) or volumetric-modulated arc therapy (VMAT). A type-B dose engine is only used at certain intermediate steps to compute a background dose, the so-called intermediate dose, for the subsequent optimization, in order to minimize the impact of dose prediction and optimization convergence errors [5–7].
The latest International Commission on Radiation Units and Measurements (ICRU) report 91 [8] continues to recommend treatment dose prescription based on PTV coverage (the ICRU 91 coverage prescription) while acknowledging the increased variability of the internal GTV dose for lung SBRT using an advanced dose calculation engine. Potential solutions to improve the consistency in the reported dose, and hence in treatment outcomes, were discussed in the report, namely the GTV median dose (D50) prescription and robust optimization (RO), but no further guidelines were provided. Following up on the ICRU report 91 recommendations, eight ACROP (Advisory Committee in Radiation Oncology Practice) contributing centers recently reported the variation in their prescription practices, which led to large inter-institutional, and for four centers even large intra-institutional, variations of the GTV/ITV doses [9]. The ACROP further made five additional clarifications, one of them recommending a minimum GTV biologically equivalent mean dose of 150 Gy. A preliminary study from one ACROP center also demonstrated lower inter-patient variability for prescription/renormalization to ITV D50 than for prescriptions to PTV and ITV D98 [10]. However, their results did not consider the geometric uncertainty of the GTV. Current studies supporting GTV median and mean dose optimization and prescription were mostly based on real-time tumor-tracking SBRT, where tumor motion is largely constrained. More importantly, very few clinical outcomes have been published [11,12]. The impact of respiratory motion on the variability of the target dose is still unknown for the GTV D50 prescription/renormalization methods.
Unlike in proton therapy, where RO is in routine clinical practice [13], the clinical role of RO in photon therapy remains relatively undefined and exploratory. Since RO was introduced into commercial TPS, there have been a few studies of its clinical application to lung SBRT, mainly focusing on the dosimetric benefits and on validating the achieved degree of robustness [14]. For two example patient cases, Zhang et al. [15] showed that combining robust optimization with ITV-based prescription by D95 resulted in indistinguishable dose-volume histograms (DVH) of the ITV obtained on multiple breathing instances for a typical tumor motion of 1 cm. In another phantom study, Archibald-Heeren et al. modeled the tumor displacement as independent scenarios and performed RO for the worst-case scenario (WCS) [16]. They similarly found relatively stable tumor doses for displacements up to 2 cm by optimizing and prescribing to GTV D99. However, the potential of RO to overcome the limitations of the PTV has never been explored for the median dose (D50) prescription.
In the specific context of respiration-induced GTV displacement, the present study aims to test the following hypotheses:
1. that using a type-B dose engine with the PTV concept for dose optimization and prescription introduces significant variability of the target dose;
2. that RO-based planning (by the worst-case method in this study) is a viable alternative to the PTV concept in lung SBRT; and
3. that prescription by GTV median dose (D50) can minimize the inter-patient and inter-technique variability of the reported GTV dose.
For the first argument to be valid, we hypothesized that the GTV receives significantly variable doses between breathing phases. Two PTV-based optimizations, adopting the ITV and MidV concepts, were tested using the ICRU 91 coverage prescription method. To validate the second argument, we repeated the assessment of the first argument for two WCS-based robustness optimizations (hereafter called WCS optimization). The first approach is identical to that of Liang et al. [14], which used the ITV concept for motion encompassing. The second approach deployed all 4DCT images as independent breathing scenarios for robustness optimization. Furthermore, the dosimetric robustness was assessed by comparing the relative number of instances in which a given target or OAR dose limit was violated in different respiratory phases. For arguments 1 and 2, the inter-patient variability of the GTV dose resulting from the PTV- and WCS-optimized plans was also compared. To test the third argument, all PTV- and WCS-optimized plans that were prescribed by coverage according to the ICRU 91 recommendation were renormalized to the GTV median dose D50, and the above analyses were repeated.
Findings from this study are expected to provide important insight into the combination of SBRT planning concept and prescription method that produces the optimal dosimetric quality and robustness in target and organ dose during treatment, which will subsequently improve the consistency in dose reporting and multicenter clinical outcome assessment.
Methods and materials
Patient selection and pre-treatment preparation

Fourteen consecutive patients with peripherally located lung tumors who had previously received SBRT were selected for this retrospective planning study.
A helical four-dimensional computed tomography (4DCT) scan of each patient was acquired in 2-mm axial slices and binned into ten datasets according to respiratory phase. Using all the phase-binned 4DCT datasets, an average intensity projection (AIP) image dataset was also generated in the RayStation (RaySearch Laboratories, Stockholm, Sweden; version 8a) treatment planning system (TPS).
Definition of target and normal organs
The GTV was first defined on the 4DCT dataset closest to the mid-ventilation (MidV) phase (GTV_MidV) [17]. It was then transformed to all other phases according to the deformation vector fields (DVFs) derived from the anatomically constrained deformation algorithm ANACONDA [18]. Finally, the GTVs from the different phases were rigidly transferred onto the AIP images to produce the internal target volume (ITV). The same process was applied to the definition of the normal organs.
Treatment planning strategies for motion encompassing

PTV-based optimization

Two margin-based approaches were studied by optimizing (i) to the PTV expanded uniformly by 5 mm from the ITV, denoted PTV_ITV, on the AIP images, and (ii) to the PTV expanded from GTV_MidV on the MidV-CT, denoted PTV_VH, using the van Herk (VH) margin recipe [19]:

margin = 2.5·Σ_setup + β·√(σ_setup² + σ_motion² + σ_p²) − β·σ_p,

where Σ_setup and σ_setup are the residual systematic and random errors in the tumor position, including patient motion and tumor baseline drift after the online 4D cone-beam computed tomography (4DCBCT) setup correction (i.e., the intra-fractional positioning error), β = 0.52 (at a mean prescription isodose line of ~70%) and σ_p = 6.4. The motion amplitude of the individual tumor is modeled as σ_motion = amplitude/3 [19]. The GTV displacement due to respiration was accounted for implicitly by the ITV and the MidV PTV concepts, respectively.
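For concreteness, the margin recipe above can be evaluated per axis as in the sketch below. The example error values are hypothetical, while β = 0.52 and σ_p = 6.4 follow the text.

```python
# Sketch: van Herk-type mid-ventilation PTV margin (mm) for one axis.
import math

def midv_ptv_margin(Sigma_setup, sigma_setup, amplitude,
                    beta=0.52, sigma_p=6.4):
    """margin = 2.5*Sigma + beta*sqrt(s_setup^2 + s_motion^2 + s_p^2) - beta*s_p,
    with sigma_motion modeled as one third of the motion amplitude."""
    sigma_motion = amplitude / 3.0
    blur = math.sqrt(sigma_setup**2 + sigma_motion**2 + sigma_p**2)
    return 2.5 * Sigma_setup + beta * blur - beta * sigma_p

# e.g., 2 mm systematic and 2 mm random setup error, 9 mm peak-to-peak motion:
print(round(midv_ptv_margin(2.0, 2.0, 9.0), 1), "mm")
```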
Two partial volumetric-modulated arc therapy (VMAT) arcs were created using the rayArc optimization algorithm. The rayArc optimization process uses a type-A pencil-beam dose engine. For all VMAT optimizations, a type-B collapsed-cone convolution-superposition (CCCS) dose engine was introduced at the 15th iteration to calculate an intermediate dose used as a background dose for the subsequent optimization. At the end of the VMAT optimization, a final dose was calculated by CCCS. Each time further VMAT optimization was pursued, the final dose was taken to have the same effect as an intermediate dose. The final optimized dose was prescribed to PTV D95 at the 65-75% isodose line in all cases. A total dose of 54 Gy in three fractions was prescribed in all cases. Dose-volume histogram (DVH) limits for the different OARs were taken from the Radiation Therapy Oncology Group (RTOG) 0236 trial [20] and the German Society of Radiation Oncology (DEGRO) guidelines [21].
Robustness optimization
In RayStation, PTV-free planning can be realized by robustness optimization (RO) based on the composite worst-case method [22]. The setup uncertainty is discretized into a set of scenarios whose number (n_s) depends on the size of the error. Together with the nominal scenario, corresponding to the planning CT with no assumed error, the DVH objectives are optimized for the worst-case scenario (WCS), in which a robust function attains its highest value. It is important to note that RO in RayStation does not treat the systematic and random errors separately. Following ref. [23], the WCS can be approximated for the systematic error only, with the random error approximated as an additional systematic contribution. Based on the same Σ_setup and σ_setup as in the PTV recipe, the final WCS parameters were 3.4 mm (left-right), 5.3 mm (cranio-caudal) and 5.1 mm (antero-posterior). The remaining organ motion of individual patients was accounted for in two ways: (iii) implicitly, by the ITV concept in a static geometry on the AIP image; and (iv) explicitly, by the WCS method in a dynamic geometry realized by utilizing the 4DCT images of all breathing phases.
In the second WCS approach, each image set of the 4DCT constitutes one scenario in which all the setup scenarios are examined. The total number of scenarios considered in the WCS optimization is then n_s · n_i, where n_i is the number of 4DCT image sets. At each iteration, minimax optimization was applied to the scenario that attained the highest cost of the robust objective function, i.e., the WCS. The resulting optimized plan is robust not just against setup error but also against breathing-induced tumor motion and deformation in all ten 4DCT images, and is hence completely margin-less. In this study, robustness was imposed on all DVH objectives of the target and the OARs.
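The scenario bookkeeping described above amounts to a minimax over n_s · n_i scenarios. The sketch below illustrates the idea only; it is not RayStation's implementation, and the objective callback is hypothetical.

```python
# Conceptual sketch of the composite worst-case (minimax) objective:
# evaluate the robust cost in every setup scenario on every 4DCT phase
# and let the optimizer minimize the largest (worst) cost.
import itertools

# Setup shifts (mm) matching the WCS parameters quoted above, plus the nominal.
SETUP_SHIFTS = [(0.0, 0.0, 0.0),
                (3.4, 0.0, 0.0), (-3.4, 0.0, 0.0),
                (0.0, 5.3, 0.0), (0.0, -5.3, 0.0),
                (0.0, 0.0, 5.1), (0.0, 0.0, -5.1)]
PHASES = range(10)  # the ten 4DCT breathing phases

def composite_worst_case_cost(objective, plan):
    """objective(plan, shift, phase) -> scalar cost for one scenario.
    Returns the highest scenario cost; a minimax optimizer minimizes it."""
    return max(objective(plan, shift, phase)
               for shift, phase in itertools.product(SETUP_SHIFTS, PHASES))
```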
The same two partial VMAT arcs as in the PTV-based planning were optimized to the ITV and the GTV to achieve 99% prescription dose coverage (D99) in the first and second WCS approaches, denoted (iii) WCS_ITV and (iv) WCS_GTV, respectively. The same VMAT optimization process as in the PTV optimization was adopted with regard to the dose engines for optimization and prescription.
Comparative analysis of PTV and WCS optimizations in static geometry
First, we assessed the naïve plans optimized according to (i) to (iv) without explicit simulation of the geometric tumor displacement due to respiration. The MidV-CT was used as the common frame in which all the dosimetric metrics were obtained, including the median, near-minimum and near-maximum dose in the GTV (GTV D50, D98 and D2), the relative volume of the chest wall (CW) receiving 30 Gy (CW V30), the relative volumes of normal lung (NL) receiving 20 Gy (NL V20) and 5 Gy (NL V5), and the mean normal lung dose (MLD). For this, all PTV_ITV and WCS_ITV plans, whose optimizations were based on the AIP images, were recalculated on the MidV-CT.
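All of these metrics can be derived from the dose values inside a structure mask. A minimal sketch, assuming a 3-D dose array and boolean masks (both hypothetical):

```python
# Sketch: DVH point metrics from a dose grid and a structure mask.
import numpy as np

def D(dose, mask, x):
    """Dose (Gy) covering x% of the masked volume, e.g., D(dose, gtv, 98)
    is the near-minimum dose D98 (the (100 - x)th percentile of the dose)."""
    return np.percentile(dose[mask], 100 - x)

def V(dose, mask, d):
    """Relative volume (%) of the masked structure receiving >= d Gy."""
    return 100.0 * np.mean(dose[mask] >= d)
```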
Doses to the GTV and OARs of the PTV- and WCS-optimized plans were compared for their differences by Friedman tests and for their variances by Levene's tests using the Matlab statistics toolbox v.2019b (MathWorks Inc., Natick, MA, USA). In cases where the Friedman tests returned statistical significance at a p-value < 0.05, post-hoc multiple comparison tests were performed with p-values adjusted by Bonferroni's correction.
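The original analysis used the Matlab statistics toolbox; an equivalent workflow in Python/SciPy might look as follows. The data array is a placeholder, and pairwise Wilcoxon tests with Bonferroni correction stand in for Matlab's post-hoc multiple-comparison procedure.

```python
# Sketch: Friedman, Levene and Bonferroni-adjusted post-hoc tests.
# `d50` is a hypothetical 13-by-4 array: GTV D50 per patient (rows) and per
# planning method PTV_ITV, PTV_VH, WCS_ITV, WCS_GTV (columns).
import numpy as np
from scipy import stats

d50 = np.random.default_rng(0).normal(60, 2, size=(13, 4))  # placeholder data

chi2, p_friedman = stats.friedmanchisquare(*d50.T)  # differences between methods
w, p_levene = stats.levene(*d50.T)                  # equality of variances

if p_friedman < 0.05:
    pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]
    for i, j in pairs:
        _, p = stats.wilcoxon(d50[:, i], d50[:, j])
        print(i, j, min(p * len(pairs), 1.0))       # Bonferroni-adjusted p
```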
Analysis by individual respiratory phases
If the type-B dose engine does induce excessive fluence at the low-density PTV border, one would expect the dose received by the GTV to be higher in other breathing phases than in the planning phase. As validation, all PTV- and WCS-optimized plans were first recalculated on every image set of the 4DCT. The resulting doses to the GTV and organs-at-risk (OARs) in the individual breathing phases were statistically compared for their differences by Friedman tests and for their variances by Levene's tests, separately for the PTV- and WCS-optimized plans. Plan robustness was defined in this context by the relative count of instances in which the doses to the GTV and OARs deviated from their respective tolerance limits.
Analysis over all respiratory phases
Following the same line of argument, if the PTV concept using a type-B dose engine introduces excessive photon fluence, the GTV would eventually accumulate a significantly higher dose from the multiple displaced positions in the respiratory cycle. As the ultimate validation, the calculated doses in the individual 4DCT phase images were summed deformably, according to the DVFs, back onto the reference MidV-CT for every plan. Such deformably accumulated dose is unequivocally referred to as the summed dose throughout the text. Similar to the evaluation in the static geometry, the summed doses to the GTV and OARs were compared among all PTV- and WCS-optimized plans on the reference MidV-CT for their differences by Friedman tests and for their variances by Levene's tests.
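Conceptually, the deformable summation is a pull-back of each phase dose onto the reference grid through the DVF, followed by a phase average. The sketch below assumes voxel-unit DVFs in the pull-back convention and equal phase weights; clinical dose accumulation relies on validated registration tools instead.

```python
# Sketch: deformable dose accumulation onto a reference (MidV) grid.
import numpy as np
from scipy.ndimage import map_coordinates

def accumulate_dose(phase_doses, dvfs):
    """phase_doses: list of 3-D dose arrays, one per breathing phase.
    dvfs: list of (3, *shape) arrays mapping each reference voxel to its
    position in the corresponding phase (voxel units, pull-back convention)."""
    ref_shape = phase_doses[0].shape
    grid = np.indices(ref_shape).astype(float)
    total = np.zeros(ref_shape)
    for dose, dvf in zip(phase_doses, dvfs):
        coords = grid + dvf                             # MidV voxel -> phase voxel
        total += map_coordinates(dose, coords, order=1, mode="nearest")
    return total / len(phase_doses)                     # equal-weight phase average
```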
The overall plan robustness was defined in this context as the dosimetric change due to the motion effect from static to dynamic geometry, and was assessed separately for the different PTV- and WCS-optimized plans by the Wilcoxon signed-rank test.
Dosimetric implication of prescription by GTV median dose D50
Following the ICRU 91 report and other follow-up studies [1,3,16], the GTV D50 prescription was further explored for its potential to mitigate the variability of the GTV dose under GTV displacement by respiration. For this, all final PTV- and WCS-optimized plans were renormalized so that GTV D50 equaled 54 Gy on the respective primary planning CT images. The dosimetric and statistical analyses were then repeated as described above.
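The renormalization itself is a single global rescaling of the plan, sketched below (the dose array and GTV mask are hypothetical):

```python
# Sketch: renormalize a plan so the GTV median dose equals the prescription.
import numpy as np

def renormalize_to_d50(dose, gtv_mask, prescription=54.0):
    d50 = np.median(dose[gtv_mask])          # GTV median dose
    return dose * (prescription / d50)       # global rescaling of the plan
```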
Results

PTV- and WCS-based SBRT using the ICRU 91 recommended coverage prescription

Dosimetric analysis in static geometry
In the condition where no tumor displacement is considered, all PTV- and WCS-optimized plans met the dose constraints of the RTOG 0236 trial and the DEGRO guidelines on the reference mid-ventilation images, except for the CW. In general, the WCS-optimized plans produced lower doses than the PTV-optimized plans, not just in the OARs but also in the GTV, as summarized in Table 1. Figure 1 shows the DVHs of the GTV obtained on the MidV-CT for the individual patients.
On an individual-patient basis, the CW V30 constraint was not met in 3 cases by PTV_ITV and in 1 case each by PTV_VH and WCS_ITV, while it was met in all cases by WCS_GTV. For NL V20, PTV_ITV resulted in 6 minor deviations (within 10-15%) and PTV_VH in 5 minor deviations according to the RTOG 0236 dose constraint. By contrast, there were 4 and 2 minor deviations in the WCS_ITV and WCS_GTV plans, respectively. The separation between the PTV- and WCS-based SBRT plans is more pronounced in NL V20, and between PTV_ITV, PTV_VH and WCS_GTV. We found that WCS_GTV was able to reduce NL V5, on average, by 6.1%, NL V20 by 17.9% and the MLD by 12.5% compared to PTV_ITV. Figure 1 also shows the inter-patient variability of the GTV doses. The variances of each dose metric between all PTV-based and WCS-based SBRT plans were statistically tested (Levene's tests) and were found significant for neither the GTV (D98, D50 and D2; all p > 0.05) nor the OARs (CW V30, NL V5 and NL V20).
Dosimetric analysis by individual breathing phases
Recalculating the PTV- and WCS-optimized plans on every image set of the 4DCT found GTV D50 ≥ 54 Gy in all cases. PTV_ITV produced D98 > 54 Gy in all patients. One PTV_VH plan in one phase, two WCS_ITV plans in one phase, and one WCS_GTV plan in three phases showed D98 < 54 Gy, principally towards end-inhalation. The maximum differences of D98 (± 1 standard deviation; SD) between all 4DCT phases are 5.7 ± 1.3%, 11.4 ± 3.0%, 6.5 ± 1.6% and 6.1 ± 1.5% for PTV_ITV, PTV_VH, WCS_ITV and WCS_GTV, respectively, and for GTV D50 they are 1.6 ± 0.5%, 3.4 ± 0.8%, 1.5 ± 0.4% and 2.1 ± 0.5%, respectively. Figure 2 shows the variations of GTV D98, D50 and D2 for the 13 cases across the ten breathing phases. Over all patients, none of the PTV- and WCS-optimized plans showed statistically significant differences in GTV D98 and D50 (p > 0.05) between respiratory phases, and significance was found only for D2 with PTV_VH (Table 2). Furthermore, none of the SBRT planning concepts showed significant inter-phase variability (in terms of variances) in GTV D98, D50 and D2 (p > 0.05).

Dosimetric analysis over all breathing phases

Table 3 shows the results of the accumulated doses obtained from the PTV- and WCS-optimized plans. The accumulated GTV D98 reached 54 Gy in all plans. The changes of GTV D98, D50 and D2 were, on average, largely limited to 1.0 Gy. Figure 3 shows the DVHs of the GTV obtained from the accumulated dose for the individual patients. The inter-patient variability of the doses to both the GTV and the OARs was tested to be equal among all plans (Levene's tests; all p > 0.05).

For CW V30, one more patient failed the tolerance (i.e., 4 patients in total) in PTV_ITV, while one patient failed in each of PTV_VH, WCS_ITV and WCS_GTV. For NL V20, after dose summation over the tumor's excursion along the breathing cycle, one minor deviation became a major deviation in PTV_ITV, there was one fewer minor deviation (4 cases in total) in PTV_VH, and the same numbers of minor deviations were found in the WCS group.
PTV- and WCS-based SBRT after renormalization to GTV median dose D50

Dosimetric analysis by individual breathing phases
The statistical results of the dose differences for the GTV and selected OARs between phases are given in Table 5. None of the PTV- and WCS-based SBRT concepts showed statistically significant differences in GTV D98, D50 and D2 between respiratory phases.
Dosimetric analysis over all breathing phases
After dose accumulation, the GTV D98, D50 and D2 and the MLD changed by less than 0.5 Gy, on average. Figure 6 shows the resulting DVHs of the GTV obtained for the individual patients. The absolute changes of CW V30 and NL V5, of 0.6 cm³, are considered negligible despite reaching statistical significance. Detailed results for the GTV D50 renormalized plans are given in Table 6. Furthermore, the variances of all GTV dose metrics were tested to be equal among all renormalized PTV- and WCS-optimized plans.
Discussion
The criticism of the PTV concept in lung SBRT arises from the notion that its combination with type-B and Monte Carlo (MC) dose optimization would result in excessive and inconsistent GTV dose owing to an artificial increase of photon fluence in the low-density lung tissue. This limitation of current SBRT practice is also recognized in the recent ICRU report 91 on prescribing, recording, and reporting of stereotactic treatments with small photon beams, which further suggested the GTV median dose (D50) prescription and robust optimization as potential solutions. By analyzing the dosimetric variability and robustness resulting from two common PTV-based and two worst case-based robust optimization methods, this study is now able to provide further clarification of the pitfalls of the PTV concept in lung SBRT. Additionally, by analyzing the dosimetric results of the different prescription methods, namely the ICRU recommended coverage prescription and the GTV median dose prescription, we identified the dominant factor contributing to the variability of the GTV dose.

[Table 2/3 notes] p-values were obtained from Friedman's tests and Levene's tests comparing the differences and the variances between the ten breathing phases, respectively, in 13 patients per SBRT optimization method. Abbreviations are the same as in Table 1. The motion effect is evaluated by comparing the recalculated and accumulated doses on the MidV-CT.
PTV- and WCS-based SBRT using the ICRU 91 recommended coverage prescription
As expected, SBRT plans optimized and prescribed to the PTV resulted in significant overexposure of the GTV compared to the plans optimized for the WCS. The GTV receives a much higher dose, with the GTV median dose D50 about 17% and 22% above the prescription dose for the ITV-based and mid-ventilation-based PTV optimizations, respectively. Although a higher dose to the GTV is generally not a concern, and is even desired in SBRT, part of the excessive dose is in effect borne by the surrounding normal organs, including the normal lung and chest wall, that are encompassed in the PTV. For lesions close to the chest wall, the volume receiving ≥30 Gy (V30) was reduced significantly, by up to 29.5 cm³ (74%) and 31.4 cm³ (73%), by WCS optimization on the average intensity projection (AIP) image to the ITV and on all 4DCT images directly to the GTV, respectively, in comparison to the conventional PTV approach based on the ITV. The dosimetric benefit of WCS optimization in limiting the chest wall dose was also reported by Zhang et al. [15]. In their study, 8 of 20 patient plans optimized and prescribed to the PTV showed chest wall doses above the limit, whereas all WCS plans optimized to the ITV fulfilled the dose constraint. In this study, we showed that WCS optimized to the GTV can further improve the chest wall dose.
Besides the dosimetric inferiority to WCS optimization, the other major pitfall of the PTV concept for plan optimization is that inconsistent GTV doses between individual patients (i.e., inter-patient variability) occur even with the same PTV prescription. However, our results clearly demonstrate that inconsistent GTV dose is not unique to the PTV concept. Other methods that avoid the PTV concept in SBRT planning suffer equally from inconsistent GTV doses. Specifically, robust optimization that replaces the PTV concept by the worst-case method also shows inconsistent GTV dose. This was evidenced by the equivalent variances of GTV D98, D50 and D2 among all PTV- and WCS-optimized plans (Table 1). In principle, one would expect zero or minimal variability of GTV D98 at and close to the prescription point of GTV D98 or ITV D98 in the WCS-optimized plans. Recall that robust optimization in this study was implemented to ensure the prescription dose in the worst-case scenario; that is, the GTV D98 was optimized to equal or exceed 54 Gy in the worst-case scenario, but it could take any value > 54 Gy in the other scenarios. Since the nominal scenario does not necessarily coincide with the worst-case scenario, and in fact hardly ever does, the GTV D98 does not necessarily arrive exactly at 54 Gy in the nominal scenario, hence the variability. On the other hand, any renormalization made to equalize GTV D98 to 54 Gy in the nominal scenario would invalidate the plan robustness that was achieved to ensure the prescription dose in the worst-case scenario.

When Lacornerie et al. [4] initially argued against the type-B dose engine for dose optimization using the PTV concept, they claimed that "the GTV will be overexposed when it moves into the regions with increased photon fluence", but without providing results to assess the magnitude of the matter. Following this line of argument, if the type-B dose engine did induce excessive photon fluence at the low-density PTV border, one would expect the dose received by the GTV to be higher in other respiratory phases than in the planning phase. We therefore followed the phase-to-phase changes in the GTV doses. Our results show that all GTV dose parameters, except for D2 using the mid-ventilation concept, were statistically equal among the ten 4DCT images for the PTV-optimized plans. Guckenberger et al. [24] previously optimized for the PTV coverage D95 on the end-exhale CT, in which case the type-B dose engine would in principle drive the optimizer to deposit the maximal fluence at the opposite, end-inhale position. Interestingly, the authors found no significant GTV dose differences when these plans were recalculated on the end-inhale CT. Maximum differences of 6.9 ± 3.1% and 2.4 ± 1.8% for GTV D99 and D50 were reported, respectively. This study observed smaller maximum differences of 2.7 ± 1.4% and 0.9 ± 0.5% for GTV D98 and D50, respectively. The discrepancy is presumably attributable to the different planning CT datasets (end-exhale vs. AIP images) on which the fluence optimizations were carried out.

[Table notes] p-values were obtained from Friedman's tests and Levene's tests comparing the differences and the variances between the ten breathing phases, respectively, in 13 patients per SBRT planning method. Abbreviations are the same as in Table 1.

[Fig 6 caption] Each red line represents the GTV DVH of an individual patient obtained from the accumulated dose after prescribing to GTV D50. The black vertical line indicates the prescription dose at 54 Gy.
Here, we attempt to offer an explanation for the negligible GTV dose differences among breathing phases from the principles of conventional radiotherapy and SBRT. In conventional VMAT-based radiotherapy, a uniform dose profile (e.g., ±5%) across the PTV is often demanded and is achieved by a fluence profile that is typically characterized by horns at the PTV edge to compensate for the beam penumbra. Thus, the GTV may experience an increase in fluence when it moves towards the PTV border. The magnitude of this fluence horn increases from water density to lung density to counterbalance the deteriorating condition of charged-particle equilibrium. By contrast, SBRT allows a higher dose in the tumor center (as much as 167% when normalizing to the maximum dose, with 60% on the PTV surface). In this case, the "horn" effect diminishes, as the demand for photon fluence is counterbalanced by the lower dose allowed in the region around the PTV edge. The other possible reason could be that commercial planning systems generally switch from the type-A to the type-B dose engine only at certain steps for fluence correction during the dose optimization and in the final dose calculation.
Additionally, we examined the variances of the different GTV dose parameters among the ten respiratory phases. Our hypothesis was that, if the type-B dose engine did drive up the photon fluence in the PTV-optimized plans, the inter-phase variability of these GTV dose parameters would become significantly different. This hypothesis is based on the fact that individual patients have different characteristics (e.g., tumor size, motion amplitude, lung density), and hence the extent to which the photon fluence would be driven up would vary substantially. As the GTV moves through the different spatio-temporal positions of the respiratory cycle, it would receive a patient-dependent photon fluence of varying degree from phase to phase. Nonetheless, we found that both PTV and WCS optimizations resulted in equal variances of all GTV dose parameters among the ten respiratory phases. Interestingly, the inter-quartile ranges (IQR) of GTV D98 resulting from the WCS-optimized plans using all 4DCT images were found to be more variable than those from the PTV-optimized plans. This large but insignificant variability of GTV D98 is hypothesized to originate from the specific worst-case optimization method. Compared to the voxel-wise and objective-wise robust methods, the composite worst-case method implemented in the RayStation planning system tends to maximally minimize the objective value in the worst-case scenario at the cost of higher objective values, and thus larger dosimetric fluctuation, in many other possible scenarios [25]. Since the worst-case scenario may correspond to different breathing phases with different patient characteristics, a relatively large variability of D98 among breathing phases was observed. Nonetheless, by WCS optimization, particularly using all 4DCT images, the highest robustness was achieved in preventing the dose limits in the normal tissues from being exceeded when the target is displaced into different respiratory positions.
As the final validation, we compared the optimized dose on a single CT with the recalculated doses summed over all 4DCT images. Such comparisons offer clarifications on two important issues concerning the inconsistency of the PTV concept in lung SBRT. Firstly, if the type-B dose engine induced excessive fluence in PTV-based optimization, the GTV would eventually accumulate a significantly higher dose as it moved through the different breathing phases. However, no clear indication of overexposure of the GTV can be associated with PTV-based optimization (Table 3). The GTV D50 and D2 obtained from the PTV-optimized plans for the ITV and mid-ventilation concepts changed by only 0.3 Gy after dose summation and, on the contrary, decreased rather than increased. The significant increase of GTV D98 in the PTV-optimized plans based on the ITV concept does not appear to be related to the type-B dose engine, because it did not occur in the other PTV-optimized plans that adopted the mid-ventilation concept. Instead, it was presumably caused by the systematic change from using the AIP images for dose optimization to the mid-ventilation images for dose accumulation. For the rather extreme situation of using the end-exhale CT for fluence optimization, Guckenberger et al. [24] likewise observed no serious problem of excessive build-up of photon fluence at the opposite end-inhalation causing a significant change in the overall GTV dose. More interestingly, those authors found an increase, rather than a decrease, in the summed GTV dose (presumably D95) of less than 1%, or 0.7 Gy, only. Among all GTV dose parameters, D50 appears to be the most robust against changes, showing no statistical significance except for the ITV-based robust optimization. Based on these results, we conclude that the type-B dose engine, per se, does not significantly increase the GTV dose. The significantly higher GTV dose in the PTV-optimized plans compared to the WCS-optimized plans is rather a direct consequence of the prescription method.
Secondly, equal variances of the GTV dose parameters among the PTV- and WCS-optimized plans are still observed after dose summation over the ten 4DCT images. The inter-patient variability (one standard deviation) changed by only 0.1 Gy after dose summation in all but the GTV D98 of the WCS-optimized plans (0.9 Gy). This simply means that the inconsistency of the GTV dose cannot be resolved by migrating from the PTV concept to robust optimization, irrespective of the type-B dose engine [1,14]. By the same reasoning, we would argue that using two classes of dose engines, a type-A for fluence optimization followed by a type-B for subsequent dose calculation and renormalization, will not resolve the inconsistent GTV dose either. We would further argue that the PTV concept, in its very design to account for geometric uncertainty, should not be considered a pitfall. Consistency of clinical outcome reporting should not be compromised provided that advanced dose engines are used to estimate and report the GTV dose parameters following the ESTRO ACROP recommended guidelines [11].
PTV- and WCS-based SBRT by GTV median dose renormalization
Lebredonchel et al. [3] suggested that prescribing based on 50% of the mass of the PTV can somewhat stabilize the variability of the target dose, but they further concluded that moving away from the PTV concept for prescription remains the only solution when using a type-B dose engine. They came to this conclusion because the GTV median dose D50 differs substantially with variable lung density and tumor size when the prescription is made to the PTV. However, this conclusion is only partly true, because our results already showed that another PTV-free concept, the worst-case method, does not stabilize the target doses either when the ICRU recommended prescription by coverage (i.e., GTV D98 or ITV D98) is followed. Instead, it is the prescription method that has the major impact on the variability of the GTV dose. After renormalization based on GTV D50, the DVH families became much more tightly packed for all plans optimized using the different concepts (Fig. 4), compared to those obtained from prescription by coverage (Fig. 1). The resulting SDs of D98, D50 and D2 are limited to 1 Gy for the PTV- and 1.4 Gy for the WCS-optimized plans, respectively. Focusing on the ITV concept for motion encompassing, Lang et al. similarly showed that the SDs of PTV D98 and D50 and ITV D98 of 38 patients were limited to 1.5 Gy after ITV D50 renormalization to 57 Gy [18]. They also showed that ITV D50 renormalization is superior to renormalization by ITV/PTV coverage D98, as it can reduce the variability of the PTV and ITV dose parameters among delivery techniques (dynamic conformal arc vs. VMAT). More importantly, the differences of GTV D98, D50 and D2 among the optimized plans based on the PTV concept and the WCS method (Table 4) were found to reduce markedly. These results remain valid despite the variation of tumor position in the respiratory cycle, with GTV D50 being the only dose parameter that showed a statistically significant difference. However, the absolute difference of 0.2 Gy is deemed clinically unimportant. As with the coverage prescription results, the median dose turned out to be the most robust against uncertainty of tumor position among the GTV dose parameters.
The effect of GTV D50 renormalization is also marked at the phase-to-phase level (Fig. 5). The medians of all GTV dose parameters became much closer among the plans that adopted different concepts for setup and motion compensation. Compared to the prescription by coverage method recommended by the ICRU 91 report, the maximum inter-phase difference of GTV D98 was reduced by 2.4, 4.8, 2.4 and 1.0% for PTV optimization by the ITV and mid-ventilation concepts and WCS optimization to the ITV and GTV, respectively.
In summary, when SBRT plans are directly prescribed or renormalized to the GTV median dose D50:
1. the consistency of the GTV dose across the near-minimum, median, and near-maximum points is significantly improved, i.e., the inter-patient variability is reduced;
2. harmonization of the GTV dose is made possible for lung SBRT plans that adopt different concepts to handle the geometric uncertainty caused by respiratory motion.
The first point simply implies that one can continue with the PTV concept for dose planning. The second point implies that a consistent GTV dose can be ensured between SBRT centers employing either the PTV concept or the worst-case scenario concept in dose planning, and different delivery techniques, as indicated by Lang et al. [18].
On the other hand, one may question the value of robust optimization, given its computational overhead, if GTV D50 prescription alone can harmonize the GTV dose among optimization solutions. From the normal tissue dose perspective, our phase-by-phase analysis indicates that WCS optimization in general improved the dosimetric robustness, resulting in the fewest dose deviations from the OAR limits. Furthermore, lower NL V5 and MLD (Table 3) during respiration were consistently observed in the WCS optimization group regardless of the prescription method. In particular, WCS optimization to the GTV using all 4DCT images resulted in the lowest normal tissue dose and the highest robustness against deviations from the normal tissue dose limits among all optimization methods.
Limitations of the study
This study was designed assuming the same amount of geometric uncertainty from tumor motion and patient setup in the calculation of the PTV and in the definition of the WCS parameters. Nonetheless, our results considered exclusively the uncertainty of tumor position due to breathing motion. The validity of our results should hold because the uncertainty of respiratory motion, which is treated as systematic in our phase-to-phase analysis of the GTV dose changes, is much greater than that of setup, which is limited to millimeter accuracy with stereotactic image guidance.
The other limitation is the small number of patients, which may subject our results to bias. Only 2 of the 13 patients showed tumor motion of more than 1 cm. It is unclear whether our dosimetric results would remain unchanged if more patients with larger tumor motion amplitudes were included.
We also acknowledge that the exact formulation of the robustness optimization may influence the dosimetric results [22]. Despite the numerous robustness optimization algorithms, there is only one commercial planning system that makes robust optimization available for clinical use. This study, like many previous ones, was based on the worst-case scenario optimization from that planning system. Lastly, this study focused on a certain type (convolution-superposition) and class (type-B) of dose engine. Systematic differences between Monte Carlo and type-B dose engines are well known, especially in cases where extreme electronic disequilibrium exists [26]. Further investigation with a Monte Carlo dose engine is warranted to generalize the present findings.
Conclusions
The pitfalls of the PTV concept have no association with the type-B dose engine in lung SBRT. Inconsistent target dose is not unique to the PTV concept but also occurs with the worst-case method implemented in robust optimization. Prescription by coverage, whether to PTV D95 or GTV D98 as in common practice, has the major impact on the consistency of the GTV dose. GTV median dose prescription or renormalization can effectively decrease the inter-patient and inter-method (PTV vs. worst-case scenario) variability of the GTV dose.
"Physics",
"Medicine"
] |
A Brief Review of Some Interesting Mars Rover Image Enhancement Projects
The Curiosity rover has been on Mars since 2012. One of the instruments onboard the rover is a pair of multispectral cameras known as Mastcams, which act as the eyes of the rover. In this paper, we summarize our recent studies on some interesting image processing projects for the Mastcams. In particular, we address perceptually lossless compression of Mastcam images, debayering and resolution enhancement of Mastcam images, high-resolution stereo and disparity map generation using fused Mastcam images, and improved performance of anomaly detection and pixel clustering using combined left and right Mastcam images. The main goal of this review paper is to raise public awareness of these interesting Mastcam projects and to stimulate interest in the research community in further developing new algorithms for those applications.
Introduction
NASA has sent several rovers to Mars over the past few decades. The Sojourner rover landed on 4 July 1997. It operated only for a short while, because the communication link was broken after two months. Sojourner traveled slightly more than 100 m. Spirit, also known as the Mars Exploration Rover A (MER-A), followed in 2004. This paper focuses on the Curiosity rover. Onboard the Curiosity rover there are a few important instruments. The laser-induced breakdown spectroscopy (LIBS) instrument, ChemCam, performs rock composition analysis from distances as far as seven meters [2]. Another instrument is the pair of mast cameras (Mastcams) [3]. Each camera has nine bands, six of which overlap between the two cameras. The range of wavelengths covers the blue (445 nm) to the short-wave near-infrared (1012 nm).
The Mastcams can be seen in Figure 1. The right imager has three times better resolution than the left. As a result, the right camera is usually used for far-field data collection and the left for short-range image collection. The various bands of the two Mastcams are shown in Table 1 and Figure 2. There are a total of nine bands in each Mastcam. One can see that, except for the RGB bands, the other bands in the left and right images do not overlap, meaning that it is possible to generate a 12-band data cube by fusing the left and right bands. The dotted curves in Figure 2 show the "broadband near-IR cutoff filter", which has a filter bandwidth (3 dB) of 502 to 678 nm; its purpose is to assist the Bayer filter in the camera [3]. In a later section, the 12-band cube is used for accurate data clustering and anomaly detection.

Figure 1. The Mars rover Curiosity and its onboard instruments [4]. The Mastcams are located just below the white box near the top of the mast.
Table 1. The bands of the two Mastcams [4]. There are nine bands, with six overlapping bands, in each camera.

The Left Mastcam          The Right Mastcam
Filter  Wavelength (nm)   Filter  Wavelength (nm)
L2      445               R2      447
L0B     495               R0B     493
L1      527               R1      527
L0G     554               R0G     551
L0R     640               R0R     638
L4      676               R3      805
L3      751               R4      908
L5      867               R5      937
L6      1012              R6      1013

Figure 2. Spectral response curves for the left eye (top panel) and the right eye (bottom panel) [5].

The objective of this paper is to briefly review some recent studies by our team on the Mastcams. First, we review our work on perceptually lossless compression of Mastcam images. The motivation of this study was to demonstrate that, with the help of recent compression technologies, it is plausible to adopt perceptually lossless compression (ten-to-one compression) instead of lossless compression (three-to-one compression) for NASA's Mastcam images; this would save three times the precious bandwidth between Mars and Earth. Second, we review our recent study on debayering of Mastcam images. The Mastcams still use a debayering algorithm developed in 2004; our study shows that some recent debayering algorithms can achieve better artifact reduction and enhanced image quality. Third, we review our work on image enhancement for the left Mastcam images, where both conventional and deep learning approaches were studied. Fourth, we review our past work on stereo imaging and disparity map generation for Mastcam images; the approach was to combine left and right images for stereo imaging. Fifth, we summarize our study on fusing both Mastcam images to enhance the performance of data clustering and anomaly detection. Finally, we conclude the paper and discuss some future research opportunities, including Mastcam-Z, the new Mastcam imager onboard the Perseverance rover, and image enhancement and stereo imaging by combining left and right Mastcam images.
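Based on the band layout in Table 1, the left/right fusion into a 12-band cube mentioned earlier can be sketched as follows. This assumes the two 9-band cubes are already co-registered and ordered as in the table; in reality the two eyes differ in resolution and must first be registered and upsampled.

```python
# Sketch: fuse left and right Mastcam cubes into a 12-band data cube by
# keeping one copy of the six (nearly) overlapping bands and adding the
# three unique bands from each eye.
import numpy as np

LEFT_NM  = [445, 495, 527, 554, 640, 676, 751, 867, 1012]
RIGHT_NM = [447, 493, 527, 551, 638, 805, 908, 937, 1013]

def fuse_mastcam(left, right):
    """left, right: co-registered (H, W, 9) cubes ordered as in Table 1."""
    shared = left[:, :, :5]                          # 445-640 nm, also in right
    left_only = left[:, :, 5:8]                      # 676, 751, 867 nm
    right_only = right[:, :, 5:8]                    # 805, 908, 937 nm
    nir = (left[:, :, 8:9] + right[:, :, 8:9]) / 2   # ~1012/1013 nm, averaged
    return np.concatenate([shared, left_only, right_only, nir], axis=2)  # (H, W, 12)
```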
We would like to emphasize that one key goal of our paper is to publicize some interesting projects related to the Mastcam on the Curiosity rover, in the hope of stimulating interest from the research community to look into these projects and perhaps develop new algorithms that improve the state-of-the-art. Our team worked with the NASA Jet Propulsion Laboratory (JPL) and two other universities on the Mastcam project for more than five years. Few researchers know that NASA has archived Mastcam images, as well as data acquired by several other instruments (LIBS, Alpha Particle X-Ray Spectrometer (APXS), etc.) onboard the Mars rover Curiosity. The database is known as the Planetary Data System (PDS) (https://pds.nasa.gov/ accessed on 6 September 2021), and all these datasets are available to the public free of charge. If researchers are interested in applying new algorithms to demosaic the Mastcam images, there are millions of images available. Another objective of our review is to summarize preliminary algorithm improvements in five applications, so that interested researchers can consult this review alone to get an overview of the state-of-the-art algorithms for processing Mastcam images.
The NASA Mastcam projects are very specific applications, and few people are even aware of them. For all five applications, NASA JPL first implemented some baseline algorithms, and our team was the next to continue the investigations. To the best of our knowledge, no one else has performed detailed investigations in these areas. For instance, for the demosaicing of Mastcam images, NASA used the Malvar-He-Cutler algorithm, which was developed in 2004; since then, there have been tremendous developments in demosaicing. We worked with NASA JPL to compare a number of conventional and deep learning demosaicing algorithms and eventually convinced NASA that it is probably time to adopt newer algorithms. For the image compression project, NASA is still using the JPEG standard, which was developed in the 1990s; we performed thorough comparative studies and advocated the importance of using perceptually lossless compression. For the fusion of left and right Mastcam images, no one had done this before. Similarly, for anomaly detection and image enhancement, we are the only team working in this area.
Perceptually Lossless Compression for Mastcam Images
To date, NASA still compresses the Mastcam images losslessly using JPEG, a technology developed around 1990 [6]. JPEG is computationally efficient; however, it can achieve a compression ratio of at most about three to one in its lossless mode. In the past two decades, new compression standards were developed, including JPEG-2000 (J2K) [7], X264 [8], and X265 [9]. These video codecs can also compress still images, and lossless compression options are present in all of them.
In addition to the above codecs, some researchers developed the lapped transform (LT) [10] and, in recent years, incorporated it into a new codec known as Daala [11]. Daala can compress both still images and videos, and a lossless option is also present.
The objective of our recent study [12] was to perform thorough comparative studies and to advocate the importance of using perceptually lossless compression for NASA's missions. In particular, in [12] we evaluated five image codecs: Daala, X265, X264, J2K, and JPEG. The goal was to investigate which of these codecs can attain a 10:1 compression ratio, which we consider perceptually lossless. We emphasize that suitable metrics are required to quantify perceptual performance. In the past, researchers found that peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), two popular and widely used metrics, do not correlate well with human subjective evaluations. In recent years, metrics known as human visual system (HVS) and HVS with masking (HVSm) [13] were developed, and we adopted HVS and HVSm in our Mastcam compression studies. We could have used the CIELab metric as well, but did not do so because we wanted to compare with existing compression methods in the literature, which only used PSNR, SSIM, HVS, and HVSm. Moreover, we also evaluated the decompressed RGB Mastcam images by subjective assessment. We noticed that perceptually lossless compression can be attained even at 20-to-1 compression. Focusing on ten-to-one compression using Daala, the objective HVS and HVSm metrics are 5 to 10 dB higher than those of JPEG.
Our findings are as follows. Details can be found in [12].
• Comparison of different approaches. For the nine-band multispectral Mastcam images, we compared several approaches (principal component analysis (PCA), split band (SB), video, and two-step). The SB approach performed better than the others on actual Mastcam images.
• Codec comparisons. In each approach, five codecs were evaluated. In terms of the objective metrics (HVS and HVSm), Daala yielded the best performance among the codecs. At ten-to-one compression, more than 5 dB of improvement was observed using Daala compared to JPEG, the default codec used by NASA.
• Computational complexity. Daala uses the discrete cosine transform (DCT) and is more amenable to parallel processing. J2K is wavelet-based, which requires the whole image as input. Although X265 and X264 are also DCT-based, they did not perform well at ten-to-one compression in our experiments.
• Subjective comparisons. Using visual inspection of RGB images, we observed that at 10:1 and 20:1 compression, all codecs show almost no loss. However, at higher ratios such as 40:1, there are noticeable color distortions and block artifacts in JPEG, X264, and X265; in contrast, Daala and J2K still perform well even at 40:1 compression.
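For readers who want to reproduce a basic rate/quality study, the sketch below uses JPEG through OpenCV as a stand-in codec (Daala, X264, X265, and J2K do not all have convenient Python bindings) and plain PSNR instead of HVS/HVSm; the input file name is hypothetical.

```python
import cv2
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB between two uint8 images."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def operating_point(img, target_ratio=10.0):
    """Sweep JPEG quality downward until the target compression ratio
    (e.g., 10:1) is reached, then report (quality, ratio, PSNR)."""
    raw_bytes = img.size * img.itemsize
    for q in range(95, 4, -5):
        ok, buf = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, q])
        if ok and raw_bytes / len(buf) >= target_ratio:
            dec = cv2.imdecode(buf, cv2.IMREAD_UNCHANGED)
            return q, raw_bytes / len(buf), psnr(img, dec)
    return None

img = cv2.imread("mastcam_rgb.png")  # hypothetical Mastcam RGB frame
print(operating_point(img))
```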
Debayering for Mastcam Images
The nine bands in each Mastcam camera include the RGB bands. Unlike the other bands, the RGB bands are collected using a Bayer pattern filter, which was introduced in 1976 [14]. In the past few decades, many debayering algorithms have been developed [15][16][17][18][19]. NASA still uses the Malvar-He-Cutler (MHC) algorithm [20] to demosaic the RGB Mastcam images. Although MHC was developed in 2004, it is an efficient algorithm that can be easily implemented in the camera's control electronics. In [3], another algorithm, known as directional linear minimum mean square-error estimation (DLMMSE) [21], was also compared against the MHC algorithm.
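The sketch below, a minimal stand-in for such comparisons, simulates an RGGB Bayer mosaic from an RGB frame and demosaics it with two algorithms built into OpenCV (bilinear and the edge-aware VNG method); MHC and the deep learning methods discussed next are not shipped with OpenCV, and the input file name is hypothetical.

```python
import cv2
import numpy as np

def mosaic_rggb(bgr):
    """Sample an RGGB Bayer mosaic from an (H, W, 3) BGR image."""
    h, w, _ = bgr.shape
    bayer = np.zeros((h, w), dtype=bgr.dtype)
    bayer[0::2, 0::2] = bgr[0::2, 0::2, 2]  # R
    bayer[0::2, 1::2] = bgr[0::2, 1::2, 1]  # G
    bayer[1::2, 0::2] = bgr[1::2, 0::2, 1]  # G
    bayer[1::2, 1::2] = bgr[1::2, 1::2, 0]  # B
    return bayer

bgr = cv2.imread("mastcam_rgb.png")  # hypothetical file
bayer = mosaic_rggb(bgr)
# OpenCV's Bayer constants name the 2x2 tile starting at pixel (1, 1),
# so an RGGB mosaic uses the *BG* conversion codes.
bilinear = cv2.cvtColor(bayer, cv2.COLOR_BayerBG2BGR)
vng = cv2.cvtColor(bayer, cv2.COLOR_BayerBG2BGR_VNG)
```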
Deep learning has gained popularity since 2012. In [22], a joint demosaicing and denoising algorithm was proposed. For the sake of easy referencing, this algorithm can be called DEMOsaic-Net (DEMONET). Two other deep learning-based algorithms for demosaicing [23,24] have been identified as well. The objective of our recent work [4] is to compare a number of conventional and deep learning demosaicing algorithms and eventually convince NASA that it is probably time to adopt newer algorithms.
We have several observations from our Mastcam image demosaicing experiments. First, the MHC algorithm still generates reasonable performance on Mastcam images, even though some recent algorithms yield better performance. Second, some deep learning algorithms did not always perform well; only DEMONET generated better performance than the conventional methods, which shows that the performance of demosaicing algorithms depends on the application. Third, DEMONET performed better than the others only for right Mastcam images; it has comparable performance to a method known as exploitation of color correlation (ECC) [31] for the left Mastcam images.
Because there are no ground-truth demosaiced images, we adopted an objective blind image quality assessment metric known as the natural image quality evaluator (NIQE), where lower NIQE scores mean better performance. Figure 3 shows the NIQE metrics of the various methods; one can see that ECC and DEMONET perform better than the others. From Figure 4, we see obvious color distortions in the demosaiced images from bilinear, MHC, AP, LT, LDI-NAT, F3, and ATMF. One can also see strong zipper artifacts in the images from AFD, AP, DLMMSE, PCSD, LDI-NAT, F3, and ATMF. There are slight color distortions in the results of ECC and MLRI. Finally, the images from DEMONET, ARI, DRL, and SEM are more perceptually pleasing than the others.
Mastcam Image Enhancement
The left Mastcam images have three times lower resolution than the right. We have tried to improve the spatial resolution of the left images so that left and right images may be fused for applications such as anomaly detection. It should be noted that no one, including NASA, has done this work before. Here, we summarize two approaches that we have tried. The first is based on image deconvolution, a standard technique in image restoration; the second applies deep learning algorithms.
Model-Based Enhancement
In [35], we presented an algorithm to improve the left Mastcam images. There are two steps in our approach. First, a pair of left and right Mastcam bands is used to estimate the point spread function (PSF) using a sparsity-based approach. Second, the estimated PSF is then applied to improve the other left bands. Preliminary results using real Mastcam images indicated that the enhancement performance is mixed. In some left images, improvements can be clearly seen, but not so good results appeared in others.
From Figure 5, we can clearly observe the sharpening effect in the deblurred image (Figure 5f) compared with the aligned left image (Figure 5e). The estimated kernel in Figure 5c was obtained using a pair of left and right green bands. We can see better enhancement in Figure 5 for the LR band. However, in some cases in [35], some performance degradations were observed.
The mixed results suggest a new direction for future research, which may involve deep learning techniques for PSF estimation and robust deblurring.
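As a minimal sketch of the deconvolution step, the code below blurs a test image with a Gaussian stand-in for the estimated PSF and restores it with Wiener deconvolution from scikit-image; the sparsity-based PSF estimation of [35] is not reproduced here, and the Gaussian kernel is purely an illustrative assumption.

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage import data, restoration

def gaussian_psf(size=15, sigma=2.0):
    """Isotropic Gaussian kernel used as a stand-in for the estimated PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

left_band = data.camera() / 255.0   # placeholder for a left Mastcam band
psf = gaussian_psf()
blurred = fftconvolve(left_band, psf, mode="same")   # simulate the blur
deblurred = restoration.wiener(blurred, psf, balance=0.05)
```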
Deep Learning Approach
Over the past two decades, a large number of papers have been published on pansharpening, which is the fusion of a high-resolution (HR) panchromatic (pan) image with a low-resolution (LR) multispectral image (MSI) [36][37][38][39][40]. Recently, we proposed an unsupervised network for image super-resolution (SR) of hyperspectral images (HSI) [41,42]. Similar to MSI, HSI has found many applications. The key features of our work on HSI include the following. First, our proposed algorithm extracts both the spectral and spatial information from the LR HSI and the HR MSI with two deep learning networks, which share the same decoder weights, as shown in Figure 6. Second, sum-to-one and sparsity are imposed as two physical constraints on the HSI and MSI data representations. Third, our proposed algorithm directly addresses the challenge of spectral distortion by minimizing the angular difference of these representations. The proposed method is coined the unsupervised sparse Dirichlet network (uSDN). Details of uSDN can be found in our recent work [43].
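A toy PyTorch sketch of these ideas is given below: two per-pixel encoders share one linear decoder, a softmax enforces the sum-to-one constraint on the latent representation, and an angular loss penalizes spectral distortion. It is only a conceptual illustration under the assumption of paired pixels; the actual uSDN architecture, training schedule, and Dirichlet parameterization are described in [41-43].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedDecoderToy(nn.Module):
    """Two encoders (LR-HSI and HR-MSI pixels) feeding one shared decoder."""
    def __init__(self, hsi_bands=12, msi_bands=3, n_basis=16):
        super().__init__()
        self.enc_hsi = nn.Linear(hsi_bands, n_basis)
        self.enc_msi = nn.Linear(msi_bands, n_basis)
        self.decoder = nn.Linear(n_basis, hsi_bands, bias=False)  # shared weights

    def forward(self, hsi_px, msi_px):
        # softmax keeps each latent representation non-negative and sum-to-one;
        # an additional L1 penalty on the activations would encourage sparsity
        a_h = F.softmax(self.enc_hsi(hsi_px), dim=-1)
        a_m = F.softmax(self.enc_msi(msi_px), dim=-1)
        return self.decoder(a_h), self.decoder(a_m), a_h, a_m

def angular_loss(a, b, eps=1e-7):
    """Mean angle between paired representations (spectral-distortion term)."""
    cos = F.cosine_similarity(a, b, dim=-1).clamp(-1 + eps, 1 - eps)
    return torch.acos(cos).mean()
```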
Two benchmark datasets, CAVE [44] and Harvard [45], were used to evaluate the proposed uSDN; more details can be found in [41,42]. Here, we include results of applying uSDN to Mastcam images. As mentioned before, the right Mastcam has higher resolution than the left; consequently, the right Mastcam images are treated as the HR MSI and the left images as the LR HSI.
To generate objective metrics, we used the root mean squared error (RMSE) and the spectral angle mapper (SAM), which are widely used in the image enhancement and pansharpening literature; smaller values imply better performance. Figure 7 shows the images from our experiments, where the reconstructed image is comparable to the ground truth. Here, we only compare the proposed method with coupled nonnegative matrix factorization (CNMF) [46], which is considered a strong algorithm. The results in Table 2 show that the proposed approach outperformed CNMF in both metrics.
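Both metrics are straightforward to compute; the sketch below gives minimal numpy implementations of RMSE and SAM for (H, W, B) image cubes.

```python
import numpy as np

def rmse(ref, est):
    """Root mean squared error between two (H, W, B) cubes."""
    diff = ref.astype(np.float64) - est.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def sam_degrees(ref, est, eps=1e-12):
    """Mean spectral angle mapper, in degrees, over all pixels."""
    r = ref.reshape(-1, ref.shape[-1]).astype(np.float64)
    e = est.reshape(-1, est.shape[-1]).astype(np.float64)
    cos = (r * e).sum(1) / (np.linalg.norm(r, axis=1) * np.linalg.norm(e, axis=1) + eps)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean())
```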
Stereo Imaging and Disparity Map Generation for Mastcam Images
In the past few years, more research has investigated applying virtual reality and augmented reality tools to Mars rover missions [47][48][49]. For example, a software tool called OnSight was developed by NASA and Microsoft to enable scientists to virtually work on Mars using the Microsoft HoloLens [50]. Mastcam images have been used by OnSight to create a 3D terrain model of Mars. The disparity maps extracted from stereo Mastcam images are important because they provide depth information. Some papers [51][52][53] proposed methods to estimate disparity maps from monocular images. Since the two Mastcam imagers do not have the same resolution, generic disparity map estimation using the original Mastcam images may not exploit the full potential of the right Mastcam images, which have three times higher resolution. It would be more beneficial to NASA and other users of Mastcam images if a high-resolution disparity map could be generated.
In [54], we introduced a processing framework that can generate high resolution disparity maps for the Mastcam image pairs. The low-resolution left camera image was improved and the impact of the image enhancement on the disparity map estimation was studied quantitatively. It should be noted that, in our earlier paper [55], we generated stereo images using the Mastcam instruments. However, no quantitative assessment of the impact of the image enhancement was carried out.
Three algorithms were used to improve the left camera images. Bicubic interpolation [56] was used as the baseline technique. Another method [57] is an adaptation of the technique in [5] with pansharpening [57][58][59][60][61]. Recently, deep learning-based SR techniques [62][63][64] have been developed; we used the enhanced deep super-resolution (EDSR) network [65] as a representative deep learning-based algorithm in our experiments. It should be emphasized that no one, including NASA, has carried out any work related to this stereo generation effort; as a result, we do not have a baseline algorithm from NASA to compare with.
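The sketch below illustrates the overall flow with the bicubic baseline: the left image is upsampled to the right image's size and a disparity map is computed with OpenCV's semi-global block matcher. The file names and matcher parameters are hypothetical (not tuned for Mastcam geometry), and the pair is assumed to be rectified.

```python
import cv2
import numpy as np

left = cv2.imread("mastcam_left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical files
right = cv2.imread("mastcam_right.png", cv2.IMREAD_GRAYSCALE)

# Bicubic baseline: bring the lower-resolution left frame up to the right size.
left_up = cv2.resize(left, (right.shape[1], right.shape[0]),
                     interpolation=cv2.INTER_CUBIC)

# Semi-global block matching on the (assumed rectified) pair.
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5,
                             P1=8 * 5 * 5, P2=32 * 5 * 5)
disparity = sgbm.compute(left_up, right).astype(np.float32) / 16.0  # fixed-point to pixels
```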
Here, we include some comparative results. From Figure 8, we observe that the image quality with EDSR and the pansharpening-based method is better than that of the original and bicubic images. Figure 9 shows the objective NIQE metrics for the various algorithms. It is worth mentioning that, even though the pansharpening-based method provides the lowest NIQE values (best performance) and visually very appealing enhanced images, some pixel regions in its enhanced images do not appear to be well registered at the sub-pixel level. Since the NIQE metric does not take registration issues into consideration, it clearly favors the pansharpening-based method over the others, as shown in Figure 9. Other objective metrics (RMSE, PSNR, and SSIM) were used in [54] to demonstrate that the EDSR algorithm performed better than the other methods.
Figure 9. Natural image quality evaluator (NIQE) metric results for enhanced "original left Mastcam images" (scale: ×2) by the bicubic interpolation, pansharpening-based method, and EDSR.

Figure 10 shows the estimated disparity maps with the three image enhancement methods for Image Pair 6 in [54]. Figure 10a shows the estimated disparity map using the original left camera image, and Figure 10b-d show the resultant disparity maps with the three methods. Figure 10e shows the mask used when computing the average absolute error values. According to our paper [54], the disparity map shown in Figure 10d has the best performance. More details can be found in [54].
Anomaly Detection Using Mastcam Images
One important role of Mastcam imagers is to help locate anomalous or interesting rocks so that the rover can go to that rock and collect some samples for further analysis.
A two-step image alignment approach was introduced in [5]. The performance of the proposed approach was demonstrated using more than 100 pairs of Mastcam images, selected from over 500,000 images in NASA's PDS database. As detailed in [5], the fused images improved the performance of anomaly detection and pixel clustering applications. We would like to emphasize that this anomaly detection work had not been done before by NASA, so there is no NASA baseline approach. Figure 11 illustrates the proposed two-step approach. The first step uses the RANSAC (random sample consensus) technique [66] for an initial image alignment; SURF features [67] and SIFT features [68] are matched within the image pair.
The second step uses the diffeomorphic registration technique [69] to refine the alignment; we observed that this second step achieves subpixel alignment performance. After the alignment, we can then perform anomaly detection and pixel clustering with the constructed multispectral image cubes; a sketch of the first step is given below. Figure 11. A two-step image alignment approach to registering left and right images.
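The sketch below shows a minimal version of the first (RANSAC) step using OpenCV. SIFT is used in place of SURF, whose implementation is not always built into OpenCV, and the diffeomorphic refinement of the second step is not reproduced.

```python
import cv2
import numpy as np

def ransac_align(left_gray, right_gray):
    """Step 1 of the two-step alignment: feature matching + RANSAC homography."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(left_gray, None)
    k2, d2 = sift.detectAndCompute(right_gray, None)
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    # Lowe ratio test keeps only distinctive matches
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = right_gray.shape
    return cv2.warpPerspective(left_gray, H, (w, h))
```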
We used K-means for pixel clustering. The number of clusters is set to six following the suggestion of the gap statistic method [70]. Figure 12 shows the results; in each figure, we enlarge one clustering region to showcase the performance. There are several important observations: (i) the clustering performance improves after the first and second registration steps of our proposed two-step framework; (ii) the clustering performance of the two-step registration at the M-34 and M-100 resolutions is comparable; (iii) the pansharpened data show the best clustering results, with fewer randomly clustered pixels.
Figure 12. Pixel clustering results: (d) using the twelve-band MS cube after the first registration step at the lower (M-34) resolution; (e) using the twelve-band MS cube after the second registration step at the lower resolution; (f) using the twelve-band MS cube after the second registration step at the higher (M-100) resolution; (g) using pansharpened images by band-dependent spatial detail (BDSD) [71]; and (h) using pansharpened images by partial replacement adaptive CS (PRACS) [72].

Figure 13 displays the anomaly detection results of two LR-pair cases for the three competing methods (global-RX, local-RX, and NRS) applied to the original nine-band data captured only by the right Mastcam (second row) and the five twelve-band fused data counterparts (third to seventh rows). There is no ground-truth information about anomaly targets, so we relied on visual inspection. From Figure 13, we observe better detection results when both the RANSAC and diffeomorphic registration steps are applied, compared with RANSAC registration alone. Moreover, the results using BDSD and PRACS pansharpening produce less noise than the detection outputs of the purely registration-based MS data.
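Of the three detectors, global RX is the simplest to reproduce: it scores each pixel by its Mahalanobis distance from the global background statistics of the cube. The sketch below is a minimal numpy implementation; local-RX and NRS are not shown.

```python
import numpy as np

def global_rx(cube):
    """Global RX anomaly scores for an (H, W, B) multispectral cube."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(np.float64)
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(b)  # small ridge for stability
    icov = np.linalg.inv(cov)
    d = x - mu
    scores = np.einsum("ij,jk,ik->i", d, icov, d)     # per-pixel Mahalanobis distance
    return scores.reshape(h, w)
```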
Figure 13. Comparison of anomaly detection performance of an LR-pair on sol 1138 taken on 10-19-2015. The first row shows the RGB left and right images; the second to seventh rows are the anomaly detection results of the six MS data versions listed in [5], in which the first, second, and third columns are the results of the global-RX, local-RX [73], and NRS [74] methods, respectively.
Conclusions and Future Work
With the goals of raising public awareness and stimulating research interest in some interesting Mastcam projects, we briefly reviewed our recent projects related to Mastcam image processing. First, we studied various new image compression algorithms and observed that perceptually lossless compression at ten-to-one can be achieved; this will tremendously save the scarce bandwidth between the Mars rover and JPL. Second, we compared recent debayering algorithms with the default algorithm used by NASA and found that recent algorithms can yield fewer artifacts. Third, we investigated image enhancement algorithms for left Mastcam images; it was observed that, with the help of right Mastcam images, it is possible to improve the resolution of the left Mastcam images. Fourth, we investigated stereo image and disparity map generation by combining left and right Mastcam images; the fusion of enhanced left and right images can create higher-resolution stereo images and disparity maps. Finally, we investigated the fusion of left and right images to form a 12-band multispectral image cube and its application to pixel clustering and anomaly detection.
The new Mars rover Perseverance, which landed on Mars in 2021, carries a new generation of stereo instrument known as Mastcam-Z (https://mars.nasa.gov/mars2020/spacecraft/instruments/mastcam-z/ accessed on 7 September 2021). We are currently pursuing funding to continue customizing the algorithms described in this paper for Mastcam-Z images. In the stereo imaging work, more research is needed to deal with left and right images from different view angles. | 8,374.8 | 2021-09-08T00:00:00.000 | [
"Computer Science"
] |
State-of-Charge Estimation of Lithium-Ion Battery Pack Based on Improved RBF Neural Networks
Introduction
Lithium-ion batteries have been widely used as energy storage devices and in electric vehicles due to their desirable balance of energy and power densities. Compared with single lithium battery cells, a lithium battery pack with hundreds or even thousands of battery cells connected in parallel and series is able to provide the required power in various applications [1][2][3]. The battery management system (BMS) plays an important role in maintaining safe and efficient operation of the battery. The State-of-Charge (SOC) of a li-ion battery pack is a key parameter affecting battery life, safety, and efficient operation [4,5]. Based on accurate estimation of the SOC, effective management strategies can be developed to avoid overcharging/overdischarging, prolong the cycle life of batteries, and prevent the occurrence of security incidents [6]. Furthermore, with correctly estimated SOC information, drivers can also plan their driving time properly.
Due to the complex nonlinear characteristics of li-ion batteries, the SOC cannot be measured directly in real-time applications and needs to be inferred from other measurable variables [7]. Since a battery pack may consist of hundreds or even thousands of battery cells, the computational effort for modelling increases accordingly. Besides, the inconsistency of the cells in a battery pack varies over the life of the battery. Thus, it is a challenge to accurately estimate the SOC of the battery pack. Recently, a number of methods have been proposed to improve SOC estimation, and they can be grouped into three general approaches for battery pack SOC estimation. The first approach integrates the cell model into the structure of the battery pack [8,9]; however, the inconsistency between different cells in a battery pack is ignored.
In the second category, single-cell SOC estimation approaches are directly extended to battery packs, including the open circuit voltage method [10], the ampere-hour integral method [11], the Kalman filter [12,13], and the equivalent electric circuit model [8]. These methods treat the battery pack as a "big battery" [14], which makes the SOC estimation simpler and quicker. However, the simple model is based on the precise mechanism of single cells, and due to the inconsistency between different battery cells, estimation error inevitably exists. The third category includes various statistical methods. Plett first proposed the Bar-Delta Filter method in 2009 [15], which uses a Sigma Point Kalman Filter (SPKF) to estimate the average SOC of the battery pack and Delta Filters to estimate the variance between each cell's characteristics and the average characteristics. However, the accuracy of the battery SOC estimation is key, and it remains a challenge. Dai et al. [16] and Sun and Xiong [14] proposed a dual time-scale Kalman filter based on the equivalent electrical circuit model (EECM), where the differences in the internal resistance of battery cells are considered. The mean SOC model and the battery SOC differences proposed by Zheng et al. [17,18] use the extended Kalman filter (EKF) based on the cell mean model (CMM) and the cell difference model (CDM) to estimate the mean SOC value of the battery cells and their differences, respectively; this method still requires internal information about the battery pack. Deng et al. [19] proposed a data-driven method in which an efficient feature selection method is used to estimate the SOC of a battery pack using an autoregressive Gaussian process regression (GPR) model [20,21]. A challenge for GPR modelling is its computation time (O(N^3)).
In summary, despite the aforementioned progress in battery pack SOC estimation, developing a simple yet accurate model is still an important issue in real-life battery applications. Data-driven methods [22] have gained a lot of interest in recent years for solving highly nonlinear classification and regression problems. The advantages of data-driven methods are their flexibility and model-free [23] nature, which make it easy to create new models. As a class of data-driven methods [24], machine learning approaches, such as support vector regression [25], the Kalman filter [12,13,17], and backpropagation (BP) neural networks [26], have been successfully used in SOC estimation and prediction. However, the selection of the dataset and input features for building these models is still ad hoc, via trial and error.
To overcome some shortcomings of the aforementioned methods for battery pack SOC estimation, this paper presents an improved RBF method using a fast recursive algorithm (FRA) to estimate the SOC of a battery pack. The FRA method [27] can be used for both neural input selection [28] and hidden layer node selection [29][30][31] in the configuration of RBF networks. Compared to [32], the average cell temperature, the time-mean pack voltage, the time-mean pack temperature, and the time-mean loop current over 10-second intervals can also be added to the initial candidate pool of input variables; other input candidates can also be included, such as the maximum cell voltage, the minimum cell voltage, the average cell voltage, and the loop current. The statistical variables are adopted to reduce the complexity of the model, and the cell information is used to overcome the inconsistency among single cells.
Then, a compact subset of these candidate variables is selected as the model input by the FRA method. On this basis, an improved RBF model built by the FRA method is used to predict the SOC of the battery pack. The proposed RBF model is automatically constructed by selecting the hidden layer nodes using the FRA method. Furthermore, the parameters of the RBF kernel are optimized by the particle swarm optimization (PSO) algorithm. The rest of this paper is organized as follows. Section 1 introduces input selection based on the FRA method. Section 2 details the application of the improved RBF neural network to battery pack SOC estimation. The experimental and simulation results are compared in Section 3. Finally, Section 4 concludes the paper.
Input Selection Using FRA
Based on the theory of series expansion, polynomial NARMAX models can achieve the same modelling performance as various neural networks if certain conditions are satisfied [28]. The input selection of the RBF neural network is thus simplified to determining the structure of the polynomial NARMAX model. The structure of the polynomial NARMAX model can be efficiently detected by selecting important polynomial terms using the FRA method with low computational complexity [27].
FRA Method.
Consider the following multiple-input single-output system represented by a linear-in-the-parameters model:

y(t) = Σ_{k=1}^{n} θ_k φ_k(X(t)) + ε(t), (1)

where y(t), X(t) ∈ R^m, and ε(t) are the output variable, input variable vector, and model error at time instant t, respectively. Herein, m and n denote the number of input variables and model terms (mapping functions), respectively; φ_k is the k-th nonlinear mapping function and θ_k is its linear coefficient.
For given N training samples, the system model is expressed in the following matrix form:

y = Φθ + ε, (2)

where y ∈ R^N is the output vector, Φ = [φ_1, . . . , φ_n] ∈ R^{N×n} is the regression matrix, θ = [θ_1, . . . , θ_n]^T, and ε ∈ R^N is the error vector. Referring to [27], the minimal cost function under the least-squares solution is E = y^T y − y^T Φ(Φ^T Φ)^{−1} Φ^T y. The variance of the minimal cost function E induced by an additional mapping function φ_{k+1}, given in equation (7), can be rewritten using the propositions detailed in [27] as

ΔE_{k+1} = (y^T φ_{k+1}^{(k)})^2 / ((φ_{k+1}^{(k)})^T φ_{k+1}^{(k)}),

where φ_{k+1}^{(k)} = R_k φ_{k+1} and R_k is the residue matrix of the first k selected terms (equation (6)). Obviously, the variance ΔE_{k+1} only concerns the additional mapping function φ_{k+1}. Then, defining the recursive matrix A = [a_{i,j}]_{k×n} and the recursive vector A_y = [a_{y,j}]^T_{n×1}, whose elements are computed recursively as detailed in [27], the net contribution induced by φ_{k+1} is expressed by equation (10), and the linear coefficients are estimated by equation (11).

Algorithm 1 requires: the maximum voltage vector v of the battery cells, the average temperature tmp ∈ R^{N×1} of the battery cells, the circuit current I ∈ R^{N×1}, the maximal order of time lags for the inputs l_x = 10, the maximal order of time lags for the output l_y = 3, the maximal number of selected terms m, and the minimal training error e. It ensures (outputs) the SOC vector of the battery pack y ∈ R^{N×1}.
(1) Initialization: form the regression matrix for polynomial term selection.
(2) for i = 1 to n do
(3) calculate the recursive matrix A and vector A_y, where a_{j,j} and a_{y,j} (j = 1, . . . , m) are computed recursively;
(4) calculate the net contribution of the terms using equation (10);
(5) select the significant term;
(6) end for
(7) Input selection: find the order of the time lags from the selected model terms.
ALGORITHM 1: Input selection using the FRA algorithm.
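The essence of Algorithm 1 is greedy forward selection of the candidate term with the largest net contribution to the squared-error reduction. The sketch below implements that selection directly with Gram-Schmidt deflation rather than through the recursive quantities A and A_y of [27], so it is a functional illustration of the criterion, not the FRA's efficient recursion.

```python
import numpy as np

def forward_select(Phi, y, m, tol=1e-6):
    """Greedy forward selection of candidate terms by net contribution.

    Phi: (N, n) matrix of candidate term outputs; y: (N,) target.
    Returns the indices of up to m selected terms."""
    r = y.astype(np.float64).copy()        # current residual
    Q = Phi.astype(np.float64).copy()      # candidates, orthogonalized in place
    selected = []
    for _ in range(m):
        norms = np.einsum("ij,ij->j", Q, Q)
        norms[norms < 1e-12] = np.inf      # skip exhausted candidates
        contrib = (Q.T @ r) ** 2 / norms   # error reduction of each term
        contrib[selected] = -np.inf
        k = int(np.argmax(contrib))
        if contrib[k] < tol:
            break                          # minimal-training-error stop rule
        selected.append(k)
        q = Q[:, k] / np.sqrt(norms[k])
        r -= q * (q @ r)                   # deflate residual
        Q -= np.outer(q, q @ Q)            # orthogonalize remaining terms
    return selected
```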
Require: the selected input variable matrix Φ ∈ R^{N×m} in equation (2), the variable upper/lower bounds [X_min, X_max] and the velocity upper/lower bounds [v_min, v_max], the size of the population l, the maximum number of iterations T, the crossover factors CR = [c_1, c_2] ∈ [0, 1], and the acceleration of the particle velocity w_i. Ensure: the SOC vector of the battery pack y ∈ R^{N×1}.
(1) Initialization: set the initial centers c_{j,0} and widths σ^2_{j,0} of the RBF basis functions, where j = 1, . . . , l, thus forming the initial nonlinear parameters, and compute the recursive matrices A and A_y using Algorithm 1.
Here V_{k,i} and X_{k,i+1} denote the velocity and the particle at the i-th iteration for the k-th selection, and r_1 and r_2 are random numbers. (11) end for; (12) add the candidate feature with the minimal PRESS error to the regression matrix Φ, k = k + 1; (13) end while; (14) Identification: calculate the linear coefficients using equation (11).

The SOC of a battery pack is a time sequence, so both the model dependent variables and the model output measured in the past are critical to the estimation of the next SOC value. However, not all of the historical data are needed for SOC estimation, so the maximum order of time lags for these input variables should be determined in advance.
To select the RBF neural network inputs, the problem is converted into polynomial model construction. Thus, the input selection problem is formulated as in equation (1).
Herein, the mapping functions are selected from polynomial terms of the lagged inputs and outputs, where 0 ≤ n_{yk1} ≤ · · · ≤ n_{yki} ≤ l_y and 0 ≤ n_{xk1} ≤ · · · ≤ n_{xki} ≤ l_x, with l_x = 10 and l_y = 3. Then, the neural network model inputs can be identified by selecting the most significant polynomial terms using the FRA method. The input selection procedure is detailed in Algorithm 1, and a sketch of the candidate-term construction is given below.
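The candidate pool itself can be built mechanically; the sketch below forms lagged copies of each signal (input lags up to l_x = 10, output lags up to l_y = 3) and expands them into low-degree polynomial terms, a NARMAX-style pool that the selection step can then prune. Signal names are placeholders, and the pool size grows quickly with the degree.

```python
import numpy as np
from itertools import combinations_with_replacement

def lagged_candidates(signals, l_x=10, l_y=3, degree=2):
    """Build a candidate regressor matrix of lagged polynomial terms.

    signals: dict name -> 1D array of equal length; must include 'soc'
    (the output, which gets lags up to l_y; inputs get lags up to l_x)."""
    T = len(next(iter(signals.values())))
    start = max(l_x, l_y)
    cols, names = [], []
    for name, s in signals.items():
        max_lag = l_y if name == "soc" else l_x
        for lag in range(1, max_lag + 1):
            cols.append(np.asarray(s, dtype=float)[start - lag : T - lag])
            names.append(f"{name}(t-{lag})")
    base = np.stack(cols, axis=1)
    terms, term_names = [np.ones(base.shape[0])], ["1"]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(base.shape[1]), d):
            terms.append(np.prod(base[:, idx], axis=1))
            term_names.append("*".join(names[i] for i in idx))
    return np.stack(terms, axis=1), term_names
```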
Improved RBF Model for the SOC Estimation
This paper aims to develop an accurate yet simple model for battery pack SOC estimation. Deng et al. proposed a two-stage algorithm based on the leave-one-out method [30] to improve the performance of RBF neural networks, where the selection procedure is automatically terminated by the predicted-residual-sums-of-squares (PRESS) error so that the constructed RBF neural model is parsimonious and accurate. In this paper, the FRA method is used instead of the two-stage algorithm for RBF neural network construction, which reduces the modelling complexity. To ensure the accuracy of the model, the particle swarm optimization (PSO) algorithm is used to optimize the kernel parameters.
General RBF Neural Network.
An RBF neural model can be formulated as a linear-in-the-parameters model like equation (1):

y(t) = Σ_{k=1}^{n} θ_k φ_k(X(t); c_k, σ_k) + ε(t),

where φ_k(X(t); c_k, σ_k) is the radial basis activation function of the hidden nodes, often chosen as a Gaussian function; c_k ∈ R^m are the centers and σ_k ∈ R^1 denotes the RBF width.
Similar to equation (2), the RBF neural model is formulated in matrix form as

y = Φθ + ε,

where Φ = [φ_1, . . . , φ_n] ∈ R^{N×n} is the output matrix of the hidden nodes.
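For concreteness, a minimal numpy sketch of this model is given below: a Gaussian hidden-layer matrix Φ followed by least-squares output weights. The Gaussian normalization convention (σ² vs. 2σ²) varies across papers and is an assumption here.

```python
import numpy as np

def rbf_design(X, centers, widths):
    """Gaussian hidden-layer output matrix Phi in R^{N x n}.

    X: (N, m) samples; centers: (n, m); widths: (n,) positive scalars."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / widths[None, :] ** 2)

def fit_output_weights(Phi, y, ridge=1e-8):
    """Linear output weights theta by (lightly regularized) least squares."""
    n = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + ridge * np.eye(n), Phi.T @ y)
```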
Improved RBF Neural Model.
The performance of the RBF neural model is related to the number of hidden layer nodes and the kernel parameters. Therefore, the construction of the RBF network can be regarded as an optimization problem over the number of hidden layer nodes, the kernel parameters, and the connection weights. In order to improve the accuracy and real-time performance of li-ion battery pack SOC estimation, the FRA method is used to establish an accurate and compact RBF neural model.
Using the improved RBF neural model based on the FRA method, the hidden layer nodes are selected according to the net contribution of the hidden layer node outputs. At the same time, the nonlinear kernel parameters are optimized by the particle swarm optimization method. Particle swarm optimization (PSO) [33] is a nonlinear parameter optimization algorithm based on swarm intelligence and has been widely used for nonlinear parameter optimization; because the method is simple and easy to implement, it is applied here to the parameter optimization of the RBF kernel function. According to [30], leave-one-out (LOO) cross-validation and the associated predicted-residual-sums-of-squares (PRESS) error are used as an index to select the hidden layer nodes and to automatically terminate the selection procedure. The hidden layer nodes are selected so as to maximally reduce the PRESS error. Thus, the net contribution is changed to the PRESS-based criterion

Σ_{t=1}^{N} [e_k(t)/R_k(t, t)]^2, k = 1, 2, . . . , n, (15)

where N and n are the number of samples and the maximum number of hidden layer nodes, respectively, and e_k(t) and R_k(t, t) are the model error and the (t, t) entry of the matrix R_k defined in equation (6) at time instant t, respectively. Based on this net contribution, the improved RBF neural network optimized by the PSO method is shown in Algorithm 2.
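A minimal PSO loop of the kind used to tune the kernel parameters is sketched below; the fitness function (for example, the PRESS error or a validation RMSE of the RBF model) and all hyperparameters are placeholders.

```python
import numpy as np

def pso_minimize(fitness, dim, n_particles=20, iters=50,
                 x_bounds=(-1.0, 1.0), w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for a parameter vector (e.g., a
    flattened vector of RBF centers and widths). `fitness` maps a
    vector to a scalar cost to be minimized."""
    rng = np.random.default_rng(seed)
    lo, hi = x_bounds
    X = rng.uniform(lo, hi, size=(n_particles, dim))
    V = np.zeros_like(X)
    pbest, pbest_f = X.copy(), np.array([fitness(x) for x in X])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
        X = np.clip(X + V, lo, hi)
        f = np.array([fitness(x) for x in X])
        better = f < pbest_f
        pbest[better], pbest_f[better] = X[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())
```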
Battery Pack SOC Estimation.
As mentioned earlier, the battery pack SOC is estimated using the improved RBF neural network. The schematic diagram of the proposed method for battery pack SOC estimation is illustrated in Figure 1.
As shown in Figure 1, there are three parts in the proposed method. In the first part, the inputs are determined from the measurements, including the voltage of the battery cells (V_cell), the voltage of the battery pack (V_pack), the terminal current (I_cir), the temperature of the battery cells (T_cell), the SOC of the battery cells (SOC_cell), and the SOC of the battery pack (SOC_pack). Before the model inputs are determined by the FRA method, the candidate inputs are expanded by taking the maximum, the minimum, and the mean of V_pack, T_cell, and I_cir. Then, the delayed sequences obtained using the delay operators (z^{-1}, . . . , z^{-10}) are adopted to produce the polynomial terms. Thus, the inputs are selected from the terms of the resultant nonlinear autoregressive moving average with exogenous inputs (NARMAX) model. In the second part, the improved RBF model is trained using the FRA method combined with the PSO method. Finally, the SOC of the battery pack is predicted using the built RBF model, in which the kernel parameters (μ, σ), the number of hidden layer nodes (n), and the output weights (Θ) are optimized by PSO [33].
Simulation Results
We first consider a pack with 216 battery cells of the 18650 type connected in series; eight battery packs of the same configuration were tested. In these tests, the circuit current, the terminal voltage of the battery pack, the terminal voltage of each individual cell, and the temperature between two battery cells are measured every 1 s. The SOC of the battery pack and of the individual battery cells are estimated every 1 s by the battery management system. The data collected from a battery pack are often too large to be used directly to establish the estimation model. Since the ageing of the battery capacity can be ignored over a short period, the training samples are selected every 30 s to build the improved RBF model. Then, the model inputs are chosen by FRA for the battery pack SOC estimation.
Using the FRA method, the maximum voltage v_max(t) of the battery cells, the minimum voltage v_min(t) of the battery cells, the average cell voltage v_avg(t), the voltage of the battery pack v(t), the mean voltage v_m(t) of the past 10 measurements, the mean current i_m(t) of the past 10 measurements, the mean temperature tmp_m(t) of the past 10 measurements, the circuit current i(t), and the average temperature tmp(t) of the battery cells are adopted as the candidate inputs, with the estimated SOC soc(t) as the output; among the lagged candidate terms, tmp_m(t - 8) and soc(t - 1) are among those selected. To verify the selected inputs, the improved RBF model for SOC estimation built using the selected inputs is compared to one built using inputs selected by experience (trial and error). The performance using the different inputs is shown in Table 1.
Table 1 reports the RMSE (root mean square error) and the maximum absolute error. Clearly, the model using the selected inputs performs much better, with the absolute error almost always within ±0.08. The simulations are illustrated in Figures 2 and 3, which show that the SOC estimation using the selected inputs is more accurate, with a smaller generalization error than that using the experience-based inputs.
Then, the proposed model is compared with the conventional RBF method, the general least-squares support vector machine (LSSVM), and the improved RBF neural model optimized by the two-stage method (TSS_RBF) [30]. The performance of these methods is shown in Table 2.
According to Table 2, the proposed RBF method takes more time than the conventional RBF method to train the model, but the validation RMSE of the proposed RBF model is just half of that of the general RBF model. Meanwhile, the LSSVM model and the improved RBF model built by the two-stage method each take almost 50 times longer to train than the proposed RBF model, and the validation RMSE of the proposed RBF model is 0.02% lower than that of the other methods. The simulation results are shown in Figures 4 and 5. It is clear that the proposed RBF model has excellent generalization capability and obtains more accurate SOC estimates than the other methods.
Conclusions
In order to estimate the SOC of a battery pack accurately, it is necessary to adopt a data-driven method to handle the inconsistencies among the cells in a battery pack. This paper first uses the FRA method to select the input variables to improve the precision of the model, because the input features are important to ensure the accuracy of RBF neural networks. The experimental results show that better SOC estimation can be achieved when a compact set of model inputs is selected. Then, the FRA method is further used to improve the construction of the RBF neural network for battery pack SOC estimation. The hidden nodes of the RBF neural network are again selected using the FRA method, and the particle swarm optimization algorithm is used to optimize the kernel parameters. The results show that the improved RBF model can achieve high estimation accuracy at acceptable time cost.
Data Availability
The processed data used to support the findings of this study are included within the article. The data source is provided by the partner of the Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, and can be obtained from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 4,744.6 | 2020-11-30T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Maternal Vitamin and Mineral Supplementation and Rate of Maternal Weight Gain Affects Placental Expression of Energy Metabolism and Transport-Related Genes
Maternal nutrients are essential for proper fetal and placental development and function. However, the effects of vitamin and mineral supplementation under two rates of maternal weight gain on placental genome-wide gene expression have not been investigated so far. Furthermore, biological processes and pathways in the placenta that act in response to early maternal nutrition are yet to be elucidated. Herein, we examined the impact of maternal vitamin and mineral supplementation (from pre-breeding to day 83 post-breeding) and two rates of gain during the first 83 days of pregnancy on the gene expression of placental caruncles (CAR; maternal placenta) and cotyledons (COT; fetal placenta) of crossbred Angus beef heifers. We identified 267 unique differentially expressed genes (DEG). Among the DEGs from CAR, we identified ACAT2, SREBF2, and HMGCCS1 that underlie the cholesterol biosynthesis pathway. Furthermore, the transcription factors PAX2 and PAX8 were over-represented in biological processes related to kidney organogenesis. The DEGs from COT included SLC2A1, SLC2A3, SLC27A4, and INSIG1. Our over-representation analysis retrieved biological processes related to nutrient transport and ion homeostasis, whereas the pathways included insulin secretion, PPAR signaling, and biosynthesis of amino acids. Vitamin and mineral supplementation and rate of gain were associated with changes in gene expression, biological processes, and KEGG pathways in beef cattle placental tissues.
Introduction
Maternal physiologic adaptation to pregnancy includes an increased demand for nutrients to meet the maternal metabolic needs and nurture the developing fetus [1][2][3]. Furthermore, early-gestation nutritional exposure affects the uterine environment and fetal development [4]. The fetomaternal interface provided by the placenta acts as a nutrient sensor to coordinate maternal nutrient supply and fetal metabolic requirements [5,6]. Thus, proper fetal development and nutrition are supported by an adequate nutrient supply through the placenta [7,8]. The placenta has many functions that include nutrient and waste product transport and hormone synthesis [6]. In ruminants, physiological exchanges between mother and fetus are supported by the caruncular-cotyledonary unit of the placenta. All procedures were approved by the North Dakota State University Institutional Animal Care and Use Committee (IACUC A19012).
Diets were delivered once daily via a total mixed ration and consisted of triticale hay, corn silage, modified distillers' grains plus solubles, ground corn, and if indicated by treatment, mineral premix. To achieve MG (0.79 kg/d), heifers were fed the total mixed ration with the addition of the starch-based protein/energy supplement (a blend of ground corn, dried distillers' grains plus solubles, wheat midds, fish oil, urea, and ethoxyquin). The LG heifers were maintained on the basal total mixed ration and targeted to gain 0.28 kg/d. Based on the National Research Council [26], the total mixed ration provided 105%, 158%, 215%, and 250% of the mineral requirements for NoVTM_LG, NoVTM_MG, VTM_LG, and VTM_MG treatments, respectively. Diet composition is described in Supplementary Materials, Table S1. The two rates of gain and VTM levels supplied to the heifers were chosen to represent two nutritional states (weight gain versus maintenance) as well as conditions applicable to beef production systems.
The VTM treatment started 71 to 148 days before artificial insemination by providing 0.45 kg/heifer daily of a ground corn and vitamin and mineral premix (113 g·heifer^-1·d^-1 of Purina Wind & Rain Storm All-Season 7.5 Complete, Land O'Lakes, Inc., Arden Hills, MN, USA). Based on the VTM starting date, heifers were assigned to one of seven breeding groups so that the supplementation period was at least 60 days for all. At breeding, heifers were randomly assigned to either LG or MG treatments within their respective VTM treatment. Heifers were bred by artificial insemination using female-sexed semen from a single sire. Pregnancy diagnosis was performed 35 days after artificial insemination, and fetal sex was determined on day 65 using transrectal ultrasonography. Further details of animal management were described elsewhere [27].
The VTM and rate of gain treatments were carried out until day 83 ± 0.27 of gestation, when uteroplacental tissues were collected through ovariohysterectomy [28]. The largest placentome closest to the fetus was collected, and the maternal (CAR) and fetal (COT) portions were manually dissected [29], snap-frozen, and stored at −80 °C.
Total RNA Isolation, Library Preparation, Sequencing, and Data Analysis
Total RNA of eight female samples per treatment was isolated from the CAR and COT tissues using the RNeasy® kit (Qiagen®, Germantown, MD, USA) followed by on-column DNase treatment, according to the manufacturer's protocol. Sample integrity and purity were evaluated using the Agilent 2100 Bioanalyzer and agarose gel electrophoresis. Strand-specific RNA libraries were prepared using the NEBNext® Ultra™ II Directional RNA Library Prep Kit for Illumina (New England BioLabs®, Ipswich, MA, USA), and sequencing was carried out on the Illumina® NovaSeq 6000 platform. Library preparation and paired-end sequencing with 150-bp reads at a depth of 20 M reads/sample were carried out at Novogene Co. (Nanjing, China).
Sequencing adaptors, low-complexity reads, and reads containing low-quality bases were removed in an initial data-filtering step. Reads with a Phred score lower than 30 were filtered out. Quality control (QC) and read statistics were estimated with the FastQC v0.11.8 [30] and MultiQC v1.9 [31] software. After QC, 29 and 31 samples (seven or eight samples per group) remained for further analyses from CAR and COT, respectively. Reads were mapped to the Bos taurus reference genome (ARS-UCD1.2) [32] using the STAR aligner v2.7.3a [33]. Raw counts per gene were obtained using the --quantMode GeneCounts flag of STAR based on the gene annotation file (release 100, Ensembl). MultiQC, NOISeq [34], and edgeR [35] software were used to perform the post-mapping quality control.
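To make the read-quality step concrete, the sketch below drops reads whose mean Phred score is below 30. This is a minimal Python stand-in, not the pipeline used in the study (which relied on the provider's filtering plus FastQC/MultiQC); the FASTQ file name is a hypothetical placeholder, and treating the threshold as a per-read mean (rather than per-base) is an assumption.

```python
# Minimal sketch of a Phred >= 30 read filter (illustrative only).
# "sample_R1.fastq" is a hypothetical placeholder file name.

def mean_phred(quality_line: str, offset: int = 33) -> float:
    """Mean Phred score of one read, assuming Sanger/Illumina 1.8+ encoding."""
    scores = [ord(ch) - offset for ch in quality_line]
    return sum(scores) / len(scores)

def filter_q30(fastq_path: str, threshold: float = 30.0):
    """Yield (header, seq, qual) records whose mean Phred score >= threshold."""
    with open(fastq_path) as fh:
        while True:
            header = fh.readline().rstrip()
            if not header:
                break  # end of file
            seq = fh.readline().rstrip()
            fh.readline()  # the '+' separator line
            qual = fh.readline().rstrip()
            if mean_phred(qual) >= threshold:
                yield header, seq, qual

if __name__ == "__main__":
    kept = sum(1 for _ in filter_q30("sample_R1.fastq"))
    print(f"reads passing the Q30 filter: {kept}")
```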
Differential Expression and Functional Over-Representation Analyses
Genes with expression values lower than 1 count per million in 50% of the samples were filtered out. After filtering, the genes of CAR and COT tissues were analyzed using the DESeq2 v1.22.1 R-package [36] to identify DEGs. The median-of-ratios method from DESeq2 was employed to normalize the data for sequencing depth and RNA composition [36]. The differential expression analysis used a negative binomial generalized linear model to fit gene expression levels and Wald statistics to perform hypothesis testing [36]. The svaseq function of the R-package Surrogate Variable Analysis v3.30.0 [37] was adopted to estimate unknown sources of variation in the RNA-Seq data. The DESeq2 model was used to measure the treatment effect while controlling for batch effect differences, which included the surrogate variables and the heifer's birthplace (farm of origin). To make all pair-wise comparisons between the four treatment groups, six contrasts were created as follows: (1) VTM_MG vs. NoVTM_LG, (2) VTM_MG vs. VTM_LG, (3) VTM_MG vs. NoVTM_MG, (4) VTM_LG vs. NoVTM_LG, (5) VTM_LG vs. NoVTM_MG, and (6) NoVTM_MG vs. NoVTM_LG. Multiple testing adjustment of the p-values was performed using the Benjamini-Hochberg procedure for false discovery rate (FDR) [38]. Genes were identified as differentially expressed for each contrast when the FDR-adjusted p-value (padj) was ≤ 0.1 [11], and were classified as up- or downregulated based on the sign of the log2 fold change. The threshold (padj ≤ 0.1) was defined a priori based on our experimental design. Furthermore, we used stringent quality control to remove lowly expressed genes and reduce the number of false-positive genes tested. As these are exploratory analyses, this combined approach allowed us to identify significant biological processes while avoiding losing too much information.
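The statistical machinery itself ran in DESeq2 (R), but two of the generic steps above are easy to illustrate. The Python sketch below applies a counts-per-million filter and a Benjamini-Hochberg adjustment to a fabricated toy matrix; it illustrates the procedures only and is not the study's code — the toy counts and p-values are synthetic.

```python
# Illustrative numpy sketch of the CPM expression filter and the
# Benjamini-Hochberg FDR adjustment described above.
import numpy as np

def cpm_filter(counts: np.ndarray, min_cpm: float = 1.0, min_frac: float = 0.5):
    """Keep genes with CPM >= min_cpm in at least min_frac of samples.
    counts: genes x samples matrix of raw read counts."""
    lib_sizes = counts.sum(axis=0)                    # reads per sample
    cpm = counts / lib_sizes * 1e6                    # counts per million
    keep = (cpm >= min_cpm).mean(axis=1) >= min_frac  # fraction of samples passing
    return counts[keep], keep

def benjamini_hochberg(pvals: np.ndarray) -> np.ndarray:
    """BH-adjusted p-values (padj), monotone step-up procedure."""
    m = len(pvals)
    order = np.argsort(pvals)
    ranked = pvals[order] * m / np.arange(1, m + 1)
    # enforce monotonicity from the largest rank downward
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    padj = np.empty(m)
    padj[order] = np.clip(ranked, 0, 1)
    return padj

rng = np.random.default_rng(0)
toy_counts = rng.poisson(5, size=(1000, 8))  # 1000 genes, 8 samples (synthetic)
filtered, mask = cpm_filter(toy_counts)
toy_p = rng.uniform(size=filtered.shape[0])  # synthetic p-values
padj = benjamini_hochberg(toy_p)
print(filtered.shape, (padj <= 0.1).sum())
```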
Gene functional over-representation analysis was carried out using the ShinyGO v0.61 webtool [39] with the B. taurus annotation as background. This approach identified specific and common biological functions and KEGG pathways within and among the gene lists for each tissue and contrast. Results were considered significant after multiple testing adjustment at FDR ≤ 0.05.
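Over-representation tools of this kind typically rest on a one-sided hypergeometric test: given a background of annotated genes, how surprising is the overlap between a DEG list and a gene set? The sketch below shows that core calculation; the gene identifiers are made up, and ShinyGO's exact internals (background handling, FDR correction) may differ.

```python
# Hedged sketch of the hypergeometric over-representation test underlying
# tools like ShinyGO (the standard approach; not ShinyGO's actual code).
from scipy.stats import hypergeom

def enrichment_p(deg: set, gene_set: set, background: set) -> float:
    """P(X >= k): probability of drawing at least k set members among the DEGs."""
    M = len(background)                   # all annotated background genes
    n = len(gene_set & background)        # gene-set members in the background
    N = len(deg & background)             # number of DEGs tested
    k = len(deg & gene_set & background)  # DEGs that are set members
    return hypergeom.sf(k - 1, M, n, N)

background = {f"g{i}" for i in range(13252)}  # e.g., genes kept after filtering
pathway = {f"g{i}" for i in range(40)}        # hypothetical KEGG pathway members
degs = {f"g{i}" for i in range(10)} | {"g100", "g200"}
print(f"p = {enrichment_p(degs, pathway, background):.3g}")
```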
An overview of the experimental design and data analysis pipeline is presented in Figure 1.
Results
We applied an RNA-Seq-based approach to identify differentially expressed genes in maternal (CAR) and fetal (COT) placental tissues of beef heifers subjected to vitamin and mineral supplementation and two rates of gain. On average, sequencing generated 22.7 M reads with Phred scores > 30 across the 60 samples. The sequencing throughput and mapping rates per sample and tissue are reported in Table S2. On average, 97.0% and 96.2% of the reads from CAR and COT, respectively, were uniquely mapped to genes in the bovine reference genome (Table S2). After filtering, 13,252 genes from CAR and 12,795 from COT were analyzed to identify DEGs.
Differentially Expressed Genes
We identified 267 unique DEGs (padj ≤ 0.1) across all tissues and group comparisons. For the CAR tissue, gene expression analysis revealed 137 upregulated and 88 downregulated genes (Figure 2a), whereas in COT, 27 and 87 genes were upregulated or downregulated, respectively (Figure 2b). Our approach did not find significant DEGs for COT when comparing VTM_MG vs. NoVTM_MG and NoVTM_MG vs. NoVTM_LG.
The overlap between the sets of DEGs identified by the different contrasts is shown in Figure 2c,d. For CAR, we observed the greatest number of shared genes (n = 18) between the contrasts VTM_MG vs. VTM_LG and VTM_LG vs. NoVTM_MG. In the COT tissue, the VTM_LG vs. NoVTM_LG and VTM_MG vs. NoVTM_LG contrasts shared 14 genes. When we compared the DEGs across tissues, most of them were tissue-specific, with only five genes shared between CAR and COT. The common DEGs between CAR and COT were: DNMT3B, ESYT3, PRPFB1, FADS1, and TTC7A. The DEGs are reported for each of the significant contrasts along with the fold-change values and annotation in Table S3 (CAR) and Table S4 (COT).
Functional Over-Representation Analysis
We retrieved significant biological processes (BP) and KEGG pathways by querying the DEGs of each contrast using the ShinyGO tool. Our approach identified 15 and 20 KEGG pathways from CAR and COT, respectively, that were over-represented by the DEGs (FDR ≤ 0.05). Likewise, 76 and 49 gene ontology BP terms were identified from CAR and COT, respectively (FDR ≤ 0.05). Figure 3 shows the BP and KEGG pathways that were over-represented among the DEGs of the VTM_MG vs. VTM_LG and VTM_MG vs. NoVTM_LG comparisons. The BP underlying the DEGs from CAR (Figure 3a) included, for example, regulation of molecular function and catalytic activity, and organ morphogenesis and development (especially of the kidney and ureter). For CAR, among the over-represented pathways (Figure 3b) were fatty acid metabolism, steroid biosynthesis, and terpenoid backbone biosynthesis. Due to the reduced number of DEGs, the contrasts NoVTM_MG vs. NoVTM_LG and VTM_MG vs. NoVTM_LG from CAR did not retrieve any significantly enriched BP or KEGG pathways. Regarding COT, the BP underlying the DEGs (Figure 3c) included metal ion homeostasis, ion transport, and regulation of developmental processes and response to stress, whereas the over-represented pathways (Figure 3d) included PPAR signaling, thyroid hormone signaling, adipocytokine signaling, insulin resistance, HIF-1 signaling, and cysteine and methionine metabolism. The biological processes and pathways identified for all the comparisons are provided in Tables S3 and S4.
Discussion
Nutrient demand increases throughout gestation to meet the requirements of the developing fetus. Growing evidence has shown that poor maternal nutrition, including vitamin and mineral deficiency, has adverse impacts on early placental development, with long-lasting effects on fetal programming of pre- and postnatal growth and development [2][3][4]16]. In this study, we examined the impact of maternal vitamin and mineral supplementation (from pre-breeding to day 83) and two rates of gain (low or moderate) during the first 83 days of pregnancy on the gene expression of maternal (CAR) and fetal (COT) placental tissues. Our findings demonstrate that vitamin and mineral supplementation and rate of gain led to differential gene expression in CAR and COT tissues. However, the effect of rate of gain seems to be stronger on the maternal side, as for COT, few or no genes were identified as differentially expressed for the comparisons VTM_MG vs. VTM_LG and NoVTM_MG vs. NoVTM_LG. These findings suggest potential placental adaptations in response to maternal vitamin and mineral supplementation and rate of gain, as indicated by the over-represented biological processes and pathways.
While our model achieved the targeted rates of gain (as designed), we did not find significant differences in fetal size or gravid uterine weight among the treatments [27]. On the other hand, fetal liver weight was greater (p-value = 0.05) in fetuses from dams fed VTM than from those fed NoVTM [27]. Likewise, amino acid concentrations in maternal serum and in allantoic and amniotic fluid from the same animals used in the current study were affected by vitamin and mineral supplementation and/or rate of gain [25]. Evidence suggests that the biological mechanisms regulating normal growth, development, and nutrient utilization are programmed in utero for postnatal growth and adult function even during the earliest stages of development [40]. Additionally, a large amount of epidemiological data has shown that an impaired intrauterine environment has long-term consequences (reviewed in [4,41]). The key role performed by the placenta is mediated by changes in gene expression that lead to differential programming of fetal tissues. Thus, imbalances in maternal vitamin and mineral availability during critical windows of development play a role in fetal tissue development, as observed for fetal liver size.
Although no signs of vitamin and mineral deficiency or overload were observed in these pregnant beef heifers, the changes in gene expression and in the concentrations of amino acids in fetal fluids suggest physiological adaptations to meet fetal and maternal metabolic needs. Furthermore, the changes in gene expression in CAR seem to be greater than in COT, as more DEGs were identified and were mainly upregulated. Changes in maternal metabolism are sensed by the placenta to meet fetal nutrient requirements based on the maternal resources available [5,42]. Thus, the dam may insulate the fetus against short-term nutrient imbalances by using her body reserves to sustain fetal growth [5]. Alternatively, the fetal placenta (COT) has specific homeostatic mechanisms to insulate the fetus, as suggested by the differential expression of glucose transporter genes.
Pathways Underlying Caruncular Differential Gene Expression
The placenta can adapt its capacity to supply nutrients in response to insults in the maternal-fetal environment [6]. Here, we found that vitamin and mineral supplementation combined with low or moderate gain affected pathways related to energy metabolism. Additionally, several BP and KEGG pathways underlying fatty acid metabolism, hormone biosynthesis, and amino acid degradation were identified in CAR. These are processes that require vitamins and minerals as structural components or enzymatic co-factors [14].
Underlying the fatty acid metabolism pathway, we identified FADS1 and ACAT2 as DEGs in CAR. The FADS1 gene codes for a rate-limiting enzyme involved in the metabolism and degradation of polyunsaturated fatty acids, such as docosahexaenoic acid and arachidonic acid [43]. Likewise, the protein encoded by ACAT2 acts in lipid biosynthesis and regulates the synthesis of cholesteryl esters [44]. Additional genes related to cholesterol metabolism include SREBF2, which was upregulated in VTM_LG vs. NoVTM_MG and downregulated in VTM_MG vs. VTM_LG in CAR. Sterol regulatory element-binding proteins (SREBPs) are transcription factors involved in cholesterol homeostasis and fatty acid uptake [45]. Steroid biosynthesis and the sterol metabolic process were over-represented in our functional analysis of DEGs in CAR. Among the differentially expressed sterol-regulated genes underpinning cholesterol biosynthesis, we identified HMGCS1, FDFT1, MSMO1, and SQLE, all downregulated in VTM_MG vs. VTM_LG in CAR. Cholesterol is important for fetal development as a component of the cell membranes of the growing placenta and fetus [46]. Furthermore, cholesterol is the precursor of all steroid hormones, such as progesterone, that are required for normal gestation and fetal development [46]. The production of estrogens from cholesterol is supported by enzymes present in the bovine trophoblast. According to Schuler et al. [47], the estrogen synthesized in the trophoblast likely acts as a local regulator of caruncular growth to produce a histotroph-like cell detritus; the histotroph in turn serves as an important source of nutrients for the fetus.
From the VTM_MG vs. NoVTM_MG contrast, we identified the genes CALM2, ATP2B4, CAMK2G, and BDKRB2 as over-represented in the calcium signaling and cyclic guanosine monophosphate (cGMP)-PKG signaling pathways. Calcium is not only essential for fetal development but is also an intracellular messenger that regulates, for example, gene transcription and cell proliferation [48]. Furthermore, calcium-mediated systems may activate the steroidogenic activity of bovine placentomes [49]. The cGMP-PKG pathway plays a key role in vascular homeostasis and is mediated by nitric oxide and decreased calcium concentrations [50,51]. Previous studies have shown that maternal dietary treatments may impact placental vascularity and uterine blood flow [7,10,29]. Although we did not measure vascular development in the current study, we identified blood circulation, smooth muscle contraction, and circulatory system processes among the over-represented BP in CAR of VTM-supplemented heifers. Despite the lack of information regarding the role of vitamins and minerals in "driving" the increase in placental vascularity in bovines, Gernand et al. [16] reported that the human placenta is rich in micronutrient-dependent antioxidant enzymes that support normal maternal-fetal circulation. Moreover, vitamins E and D are suggested to enhance the expression of angiogenic factors in the placenta [52,53].
Interestingly, we identified several BP related to kidney morphogenesis that were over-represented among the DEGs from the VTM_MG vs. VTM_LG contrast. The PAX2 and PAX8 genes were among the DEGs annotated to BP such as kidney epithelium development and metanephros morphogenesis. These genes encode transcription factors that orchestrate kidney development, which is important for the regulation of cardiovascular function, including blood pressure, later in life [54]. Additionally, the protein encoded by PAX8 plays a key role in the development of other organs and tissues by interacting with the WT1 transcription factor, which has an essential role in the normal development of the urogenital system [55]. According to Christian et al. [15], changes in micronutrient availability may lead to hormonal adaptations and, consequently, affect kidney development and function. Moreover, Mao et al. [56] reported changes in the expression of placental genes involved in kidney function in mice fed high-fat or low-fat diets.
Pathways Underlying Cotyledonary Differential Gene Expression
The coordinated development and function of the CAR and COT placental tissues is responsible for providing the fetus with nutrients to support its metabolic demands [57]. Nonetheless, under nutritional stress, the placenta may increase the number and surface area of cotyledons to improve the efficiency of placental transport [58]. These adjustments are related not only to placental vascular growth and angiogenesis [7,57] but also to the regulation of genes encoding nutrient transporters [59]. As proposed by the placental nutrient sensing and fetal demand models [59,60], different mechanisms and placental responses underlie fetal-maternal nutrient cross-talk. Based on these models, maternal downregulation of genes encoding nutrient transporters may lead to upregulation of fetal genes, and vice versa, to balance maternal nutrient availability and fetal nutrient demand [59]. We observed that most of the nutrient transport DEGs from COT were downregulated in the supplemented groups. In light of the abovementioned models, this may suggest that the fetuses from supplemented dams met their nutritional requirements, whereas the non-supplemented fetuses optimized nutrient transport by upregulating gene expression. Furthermore, the rate of gain (i.e., maternal dietary intake to support a moderate or low rate of gain) does not appear to affect COT gene expression under the conditions tested in the current study. According to Thayer et al. [5], the dam may homeostatically regulate macronutrient availability by mobilizing her body reserves to supply the fetus. On the other hand, the authors argue that the body's available store of micronutrients, such as vitamins and minerals, is limited, which may compromise their supply to the fetus when the maternal diet is micronutrient-restricted [5].
Placental nutrient transporters are important for delivering nutrients such as glucose, amino acids, and fatty acids to the fetus [61]. Biological processes related to nutrient transport and ion transport were over-represented in our findings. Pathways related to insulin secretion and resistance, biosynthesis of amino acids, and PPAR signaling underlie the DEGs from the VTM_MG vs. NoVTM_LG comparison in COT. Insulin is a potent hormone involved in energy metabolism and is essential for regulating glucose uptake and serum glucose levels [62]. Due to their role as cofactors in metabolic pathways, some minerals have been suggested to enhance insulin action [2]. For example, chromium improves glucose homeostasis through increased insulin sensitivity [63]. Glucose is the primary metabolic fuel for fetal metabolism, and it is crucial for fetal development [59]. Batistel et al. [64] reported that methionine supplementation during late gestation changed the expression profile of genes related to the transport of amino acids, fatty acids, glucose, and vitamins in placentomes from dairy cows. Among the DEGs encoding glucose transporters, we identified SLC2A1 (GLUT1) and SLC2A3 (GLUT3). We also identified the INSIG1 gene as differentially expressed. The protein encoded by INSIG1 regulates cholesterol metabolism, lipogenesis, and glucose homeostasis. Furthermore, INSIG1 controls cholesterol synthesis through the SREBP and HMGCS1 proteins [65].
In addition to glucose, fatty acids are an important source of energy for placental function and fetal growth [66]. According to Lewis et al. [67], fatty acids are the precursors for PPAR transcription factors. PPARs are nuclear hormone receptors that are active in embryonic development and tissue differentiation through the regulation of gene expression [66]. In the PPAR signaling pathway, we identified ACSL3, SLC27A4, and PLIN2 as over-represented DEGs in COT from the VTM_MG vs. NoVTM_LG contrast. The ACSL3 and SLC27A4 genes encode proteins that activate long-chain fatty acids [68]. According to Nakahara et al. [69], the ACSL3 protein is involved in fatty acid uptake for the synthesis of cellular lipids and degradation via beta-oxidation, while SLC27A4 acts in fatty acid transport [59]. The PLIN2 gene is important for trophoblastic lipid droplet accumulation [70]. We also identified BP related to lipid biosynthesis and metabolism among the DEGs from the VTM_MG vs. VTM_LG comparison. These DEGs play roles in sphingolipid biosynthesis (DEGS2) [71] and in the phospholipid biosynthesis and remodeling (LPCAT1) of lipid droplets [72].
Among the DEGs, we found that AARS1, IARS1, GARS1, and NARS2 were upregulated in the VTM_LG vs. NoVTM_LG comparison in COT. These genes were over-represented in the aminoacyl-tRNA biosynthesis pathway, and they encode key enzymes required for protein biosynthesis [73]. Furthermore, the ARG2 and MTR genes, which are involved in the biosynthesis of amino acids, were downregulated in the VTM_MG vs. NoVTM_LG comparison. The ARG2 protein is involved in the conversion of L-arginine into L-ornithine, a precursor of the polyamines that support cell proliferation [74], whereas MTR catalyzes the final step in methionine biosynthesis [75]. Menezes et al. [25] reported increased concentrations of methionine and arginine in allantoic fluid in response to vitamin supplementation and a moderate rate of gain when evaluating the pregnant heifers used in the current study, which further supports the current findings.
Conclusions
By applying a genome-wide transcriptomic analysis, we identified differentially expressed genes in caruncular and cotyledonary placental tissues of pregnant heifers in response to maternal nutrition. Vitamin and mineral supplementation and a low or moderate rate of gain were associated with changes in gene expression in placental tissues. Functional analysis of the DEGs pointed to pathways underlying energy metabolism, hormone synthesis, and nutrient transport. These findings shed light on the mechanisms through which maternal nutrition may regulate placental function and, potentially, fetal growth and development. Furthermore, our findings, for the first time, unravel putative placental adaptations in response to maternal vitamin and mineral supplementation from pre-breeding through to the first trimester (until day 83) of gestation.
"Biology"
] |
Phase Analysis of Alkali-Activated Slag Hybridized with Low-Calcium and High-Calcium Fly Ash
This paper investigates the hydrated phase assemblage, microstructure, and gel composition of sodium hydroxide (NaOH)-activated fly ash–slag blends with either low-calcium or high-calcium fly ash. The results show that the nature of the precipitated calcium–aluminosilicate–hydrate (C-A-S-H) and alkali–aluminosilicate–hydrate (N-A-S-H) depends on the fly ash composition and the slag-to-fly ash ratio. However, regardless of fly ash composition and slag-to-fly ash ratio, a universal linear compositional relationship exists between the Al/Ca ratio and the Si/Ca ratio in the precipitated gels. This indicates that there exists a structural limitation on the substitution of Al³⁺ for Si⁴⁺ in the tetrahedral silicate of C-A-S-H, N-A-S-H, or metastable N-C-A-S-H gels. In a hybrid slag–fly ash system, the framework structure of the precipitated gels is an assemblage of aluminosilicate units with heterogeneous Ca²⁺ and Na⁺ distribution. The amount and reactivity of calcium and alkalis seem to play a critical role in determining the structure and properties of the precipitated gels in hybrid systems. The low cementitious capability of alkali-activated high-calcium fly ash may be attributed to an unstable N-C-A-S-H gel structure with concurrently high Na and Ca contents.
Introduction
Pulverized fly ash is an industrial byproduct of the combustion of pulverized coal in thermal power plants and comprises heterogeneous spherical particles with glassy and crystalline phases (e.g., mullite and quartz). Although the bulk of fly ash is mainly made up of silicon, aluminum, and iron oxides, it can be further categorized into Class F (FA-F) or Class C fly ash (FA-C) depending on its composition. In particular, FA-F has a low calcium content and exhibits pozzolanic properties, while FA-C contains up to 20% CaO and exhibits cementitious properties [1,2]. On the other hand, ground-granulated blast furnace slag is an amorphous byproduct of the iron industry that is relatively richer in calcium and lower in aluminum compared with fly ash [2,3]. The alkaline activation of sole slag (i.e., without fly ash incorporation), FA-F, FA-C, or fly ash-slag blends as novel binders has gained significant attention recently, since contemporary environmental concerns and economic realities demand that these industrial byproducts be effectively re-utilized [4][5][6][7]. For instance, alkali-activated slag binders have demonstrated ultrahigh strength [8,9], resistance to fire-induced explosive spalling [10,11], and remarkable durability against chloride ingress [12,13] and sulfate attack [14,15]. Alkali-activated FA-F binders feature outstanding acid resistance and thermal stability [16,17]. Since fly ash and slag are the most popular choices of raw materials in the production of alkali-activated binders, their combined use can potentially provide engineering properties and performance more favorable than using either alone. For instance, a small amount of slag incorporation can considerably shorten the setting time of NaOH-activated fly ash at room temperature [18]. However, due to the differences in reactivity and composition among raw slag, FA-F, and FA-C, the chemistry of an alkali-activated hybrid slag-fly ash system is complex.
The main hydrated phase in pure alkali-activated slag (AAS) is calcium-aluminosilicate-hydrate (C-A-S-H), with a strong structural similarity to tobermorite minerals [19,20]. The main hydrated phase in alkali-activated pure FA-F is alkali-aluminosilicate-hydrate (N-A-S-H), where tetrahedral units of silicate (Si⁴⁺) and aluminate (Al³⁺) are connected to create a three-dimensional spatial structure, with alkali (Na⁺) and/or calcium (Ca²⁺) cations compensating the electrical charge imbalance associated with the substituted Al³⁺ [21]. N-A-S-H is a zeolite precursor and may crystallize as a result of continued condensation, polymerization, and reorganization [22,23]. As the composition of FA-C places it between FA-F and slag, the coexistence of C-A-S-H and N-A-S-H in alkali-activated pure FA-C has been reported [24,25]. Likewise, for hybrid slag and FA-F systems with intermediate levels of Ca and Al, the coexistence of C-A-S-H and N-A-S-H is manifested during alkaline activation [24,26]. In addition, metastable N-C-A-S-H, a hybrid-type phase of N-A-S-H and C-A-S-H, can exist in these systems [27]. The nature of the precipitated C-A-S-H and N-A-S-H in an alkali-activated FA-C system or hybrid slag-fly ash system differs from the respective C-A-S-H or N-A-S-H formed in pure AAS or alkali-activated FA-F. Many attempts have been made towards a better understanding of the nature of, and compatibility among, the co-existing C-A-S-H and N-A-S-H in alkali-activated FA-C or fly ash-slag blend systems [28][29][30]. For example, it has been reported that Ca, although it significantly modifies the composition of N-A-S-H, probably via ionic exchange with Na, does not alter the gel structure of N-A-S-H [29,30]. Alternatively, a geopolymeric gel incorporating a substantial percentage of Ca (i.e., N-C-A-S-H), while similar in composition to C-A-S-H, can have a different molecular structure [5]. Recent advances in thermodynamic modeling allow the prediction and management of phase assemblage and gel composition based on the raw binder composition [31]. Nevertheless, the nature of the precipitated gels in hybrid slag-fly ash systems has not been fully understood.
Alkali-activated sole FA-C has been shown to have poor cementitious capability and low compressive strength and hence is not recommended for producing building materials [32]. As such, most FA-C is currently dumped in landfills or utilized for soil stabilization. The causes of the low cementitious and mechanical properties of alkali-activated FA-C have not been exclusively investigated, although they have been postulated to be associated with the chemical forms of calcium in FA-C [24]. It remains unclear how the chemical composition of fly ash affects the phase formation and microstructure development of alkali-activated hybrid slag-fly ash systems, knowledge of which would allow better and optimized utilization of FA-C in green concrete construction. Therefore, examining the relationship between the composition of the starting reactants, the nature of the precipitated gels, and the engineering properties of alkali-activated binders will result in significant advancements in material selection and mixture design.
The objective of this paper is to investigate the hydrate assemblage and composition in a NaOH-activated hybrid slag-fly ash system with varying fly ash compositions and slag-to-fly ash ratios. This study aims to contribute to a better understanding of the nature of the precipitated gels in hybrid slag-fly ash systems, as well as the reactivity and compatibility of C-A-S-H and N-A-S-H.
Materials and Methods
Grade 120 ground-granulated blast furnace slag (SL), Class F fly ash (FA-F), and Class C fly ash (FA-C) were used in this study. The major chemical composition of SL is: 43 3 . The granulometric size distributions of the three raw SL, FA-F, and FA-C materials are displayed in Figure 1, and the X-ray powder diffractograms of all raw materials are shown in Figure 2. Raw slag has a primarily amorphous structure (see the humps at 2θ = 30° to 31.6°, a result of the short-range order of the CaO-SiO2-Al2O3-MgO glass [20]) with trace amounts of gypsum and alite. FA-F and FA-C are both basically amorphous materials (see the large diffraction humps at 2θ = 20-35°), with some amounts of quartz and mullite. In contrast to FA-F, FA-C also contains some hatrurite.
This study investigates NaOH-activated hybrid fly ash-slag binders, with all mixtures having an initial liquid (alkaline activator)-to-solid (raw FA-F or FA-C, and slag) volumetric ratio of 0.75. The initial volumetric ratios between raw fly ash (FA-F or FA-C) and slag in the starting solids were set to 100/0, 80/20, 50/50, 20/80, and 0/100. The compositions of the starting solid materials (a total of nine different mixtures) are plotted in the CaO-SiO2-Al2O3-H2O ternary diagram in Figure 3. The activator used in this study was a 6 M NaOH solution with a pH of 14.7 and a density of 1.13 g/cm³. The effects of the activator (e.g., pH and dissolved silicate) on the hydrate assemblage and composition of hybrid fly ash-slag systems have been reported in other studies [19,[33][34][35]] and are hence not considered here. Each mixture was poured into 100 mL containers after mixing, sealed with parafilm, and stored in ambient conditions (~20 °C) until further characterization.
Figure 3. The CaO-SiO2-Al2O3-H2O ternary diagram for the composition of the starting solid materials (regions marked low Ca, intermediate Ca, and high Ca; series include slag-Class C fly ash blends and Class C fly ash).
X-ray diffraction (XRD) was utilized to characterize the phase assemblages of the samples, using a PANalytical Empyrean diffractometer in a conventional Bragg-Brentano θ-2θ configuration. Cu Kα X-rays (λ = 1.5418 Å) were generated at 40 mA and 45 kV. The ground sample powders were frontally packed into a zero-background plate and scanned continuously between 5° and 43° 2θ (0.033453°/s); the total acquisition time of the XRD diffractogram for each sample was about 15 min. The samples were characterized after hydration for 1 d and 28 d, so that the time-resolved phase evolution could be observed.
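For readers interpreting the scans, peak positions in 2θ map to d-spacings through Bragg's law, nλ = 2d sin θ, with the Cu Kα wavelength quoted above. The small Python helper below performs that conversion; the example peak positions are placeholders rather than values from Table 1.

```python
# Convert a peak position (2-theta, degrees) into a d-spacing via Bragg's law,
# n*lambda = 2*d*sin(theta), using the Cu K-alpha wavelength from the text.
import math

CU_KALPHA = 1.5418  # Angstrom

def d_spacing(two_theta_deg: float, wavelength: float = CU_KALPHA, n: int = 1) -> float:
    """First-order (n=1) d-spacing in Angstrom for a peak at 2-theta degrees."""
    theta = math.radians(two_theta_deg / 2.0)
    return n * wavelength / (2.0 * math.sin(theta))

for two_theta in (29.4, 31.6):  # placeholder peaks in the C-S-H hump region
    print(f"2-theta = {two_theta:5.1f} deg  ->  d = {d_spacing(two_theta):.3f} A")
```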
Scanning electron microscopy equipped with energy-dispersive spectroscopy (SEM/EDS) was used to characterize the composition of the hydrates at 28 d, using an FEI ESEM Quanta 200. Prior to chemical analysis, the samples were epoxy-impregnated and surface-polished down to 1 µm. To reduce the random errors of EDS acquisition, about 50 data points were collected and analyzed for each polished specimen.
X-ray Diffractometry
Table 1 lists the X-ray diffractometry information for the major peaks identified in the hybrid blast furnace slag and Class C fly ash-based geopolymer samples. It should be noted that some crystalline phases (e.g., quartz) originally exist in the raw materials and are mainly inert during the alkaline activation. As such, the remnant crystalline phases are not explicitly labeled, and the emphasis is placed on the differences in hydrate assemblage and evolution due to the differences in starting material composition.
Figure 4 shows the XRD diffractograms for the alkali-activated 100% slag, 100% FA-F, and 100% FA-C. For AAS, the main hydrates are alumina-modified C-S-H (I), C-S-H, a hydrotalcite-type phase, portlandite, and some amount of a sulfate-bearing phase due to the presence of gypsum [33,36]. In comparison with C-S-H, C-S-H (I) has a relatively highly ordered structure and presents a diffusive basal reflection corresponding to the mean interlayer spacing of its layered silicate structure [36,37]. The introduction of large amounts of aluminum into the C-S-H (I) structure (i.e., forming C-A-S-H) can inflate the thickness between silicate layers by lowering the coherent domain [38].
Table 1. X-ray diffractometry information (d-spacings) for the main diffraction peaks identified in the hybrid blast furnace slag and Class C fly ash-based geopolymer samples. Note: a—alkali-activated blast furnace slag; b—pure NaOH-activated blast furnace slag and alkali-activated Class C fly ash; c—(h k *) denotes a band head related to a two-dimensional lattice.
For alkali-activated fly ash, the type of precipitated zeolite-type crystalline phases and the shape of the amorphous hump (corresponding to N-A-S-H gels) evolve appreciably as a function of fly ash composition and age. First, a shift in the position of the hump ascribed to the amorphous glass phase in the initial fly ash to slightly higher angular values, around 25-40°, indicates the formation of geopolymeric N-A-S-H gels [25,39]. The difference in the shapes of the humps between alkali-activated FA-F and FA-C indicates that the structures of the vitreous N-A-S-H gels in these materials are different, since the XRD profiles of the two kinds of raw fly ash are rather similar. At 28 d, different types of zeolite-type phases and crystalline geopolymeric sodium-aluminosilicate-hydrates (N-A-S-H) can be clearly detected, which are mainly absent at early age. This suggests the transformation of amorphous N-A-S-H gels into crystalline phases during geopolymerization as a result of continued condensation, polymerization, and reorganization [22]. Contrary to FA-F, the alkali activation of FA-C creates a magnified hump at around 30°, indicating the formation of C(-A)-S-H. This C(-A)-S-H in alkali-activated FA-C is more amorphous than the C-S-H (I) in AAS, which may be attributed to a different composition, nanostructure, or degree of crystallinity. In addition, strong peaks corresponding to hydrotalcite-type phases are detected in the FA-C system, due to the high magnesium content of the raw FA-C.
Figure 5 shows the XRD diffractograms for the alkali-activated hybrid 20% slag and 80% fly ash systems. The precipitation of zeolite-type crystalline phases is inhibited by the incorporation of slag in FA-F-slag blends. On the contrary, the small amount of slag incorporation does not conspicuously affect the precipitation of zeolite-type phases in FA-C-slag blends. This observation may indicate that the dissolved Ca from slag can modify the N-A-S-H structure of alkali-activated FA-F, inhibiting its crystallization, while in the FA-C system the precipitated phase already comprises a significant amount of Ca (see Section 3.2 for SEM/EDS data), which makes it less influenced by the additional Ca dissolved from slag. Moreover, a new hump at around 30° corresponding to C(-A)-S-H is identified in the FA-F-slag system as a result of slag hydration. In addition, the hump corresponding to C(-A)-S-H in the FA-C-slag system becomes wider and more diffusive, which is reasonable since slag itself is a glassy reactive material and its hydration can also promote the formation of C(-A)-S-H.
Figure 6 shows the XRD diffractograms for the alkali-activated hybrid 50% slag and 50% fly ash systems. Most of the crystalline zeolite-type phases have disappeared. This indicates that the dissolution and reaction processes of slag and fly ash are not independent but interactive: fly ash-slag micro-scale interactions exist during alkaline activation or geopolymerization, which can chemically, structurally, and kinetically affect the hydrate assemblages and microstructure formation [33].
On the other hand, a coexistence of two types of C-S-H gels (i.e., C-S-H (I) and C-S-H) and a partial transformation from C-S-H (I) to C-S-H are observed in FA-C-slag blends, as shown in Figure 7. The gradual conversion from C-S-H (I) to C-S-H is accompanied by a simultaneous enlargement of the basal spacing (i.e., the distance between two layers). In terms of C-S-H (I), the degree of crystallinity tends to decrease, and the basal spacing, if present, also decreases with an increase in the Ca/Si ratio, according to Taylor [37]. However, given that the composition of C-S-H and the basal peak shown in XRD patterns are not strongly correlated [40], it is impossible to postulate the change in the Ca/Si ratio of C-S-H here. Considering that C-S-H (I) is the main detected C-S-H phase in AAS while C-S-H is the main detected C-S-H phase in alkali-activated FA-C, the transformation demonstrates reactions among the various hydrates due to thermodynamic incompatibility.
It is known that the calcium in a C-S-H nanostructure model, particularly in tobermorite-based models [41], is located either in the CaO polyhedra layer (i.e., the intralayer) or in the interlayer region as Ca²⁺ ions. The CaO polyhedra layer is a double plane of Ca²⁺ ions 6- or 7-coordinated by central O²⁻ ions, to which disordered silica chains are grafted on each side [41]. It is postulated that the calcium dissolved from slag can contribute to the extensive formation of a CaO polyhedra layer, which is magnified by the strong peak corresponding to the interlayer plane (002 plane) in XRD, whilst the calcium dissolved from FA-C behaves differently and is present mostly as Ca²⁺ adsorbed on the surfaces or interlayers of the gels. It is probable that the calcium in the CaO polyhedra layer in some sense affects the overall properties of C-S-H. As such, the high calcium content of FA-C may reduce the mechanical strength of the materials, while the calcium in slag increases the mechanical strength [24].
Figure 8 shows the XRD patterns for the alkali-activated hybrid 80% slag and 20% fly ash systems. Regardless of fly ash composition, the basal reflection of C-S-H (I) for the hybrid system shifts to higher d-spacing values compared with that in pure AAS. This may be explained by the insertion of a large amount of aluminum from FA-C or FA-F into the C-S-H (I) structure [38]. However, the d-spacing for the FA-C-slag system is slightly larger than that of the FA-F-slag system, although the amount of alumina in FA-F is substantially higher than that in FA-C. This may suggest that the C-S-H (I) from AAS is metastable in the presence of dissolved Ca from FA-C.
SEM-EDS Microanalysis
Figure 9 shows representative BSE images of polished surfaces of the hybrid high-calcium fly ash-slag samples with various fly ash-slag ratios. Decreasing the fly ash-slag ratio produces a more homogeneous microstructure. Similar observations can be made for the hybrid fly ash-slag samples with low-calcium fly ash (not shown). Figure 10 shows the relationship between the Al/Ca and Si/Ca atomic ratios in the solid hydrated phases of the alkali-activated hybrid fly ash-slag systems. Regardless of the fly ash composition or slag-to-fly ash ratio, the Al/Ca and Si/Ca ratios show a strong linear correlation; for the hybrid FA-C-slag system: Al/Ca = 0.49862 × Si/Ca − 0.02591 (R² = 0.814). The linear trend indicates that there is a structural limitation on the substitution of Al³⁺ for Si⁴⁺ in the tetrahedral silicate of C-A-S-H, N-A-S-H, or N-C-A-S-H. In C-A-S-H, it has been widely demonstrated that the Al is dominantly confined to the bridging sites of dreierkette-based silicate chains [42]. For N-A-S-H, it has been observed that its final Si/Al ratio approaches approximately 2.0 regardless of initial conditions, probably for reasons of thermodynamic stability [22,43].
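The regression behind the equation above is an ordinary least-squares line through the per-point EDS atomic ratios. The Python sketch below reproduces that fit on synthetic stand-in data generated around the reported slope and intercept; it illustrates the calculation only and does not use the measured EDS points.

```python
# OLS fit of Al/Ca against Si/Ca, mimicking the ~50 EDS points per specimen.
# The data here are synthetic, generated around the reported coefficients.
import numpy as np

rng = np.random.default_rng(1)
si_ca = rng.uniform(0.5, 3.0, size=50)                      # Si/Ca atomic ratios
al_ca = 0.49862 * si_ca - 0.02591 + rng.normal(0, 0.1, 50)  # Al/Ca with noise

slope, intercept = np.polyfit(si_ca, al_ca, 1)
pred = slope * si_ca + intercept
r2 = 1 - np.sum((al_ca - pred) ** 2) / np.sum((al_ca - np.mean(al_ca)) ** 2)
print(f"Al/Ca = {slope:.5f} x Si/Ca {intercept:+.5f}  (R^2 = {r2:.3f})")
```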
The accuracy of EDS in identifying the gel composition of a hybrid system is questionable, since the electron beam size of the chemical probe is likely to be larger than the sizes of homogeneous phases like C-A-S-H and N-A-S-H. However, the present study indicates that, although gels of various types and compositions are likely intermixed with each other, the framework structure of the precipitated gels is an assemblage of aluminosilicate units with varying Ca and Na contents. Depending on the structural roles and binding sites of Ca and Na, the gels may have totally different structures and properties. In terms of gel composition, the heterogeneity of a hybrid system could be a result of the heterogeneous spatial distribution of Ca and Na in the microstructure. It should be noted that this observation may be applicable only to NaOH-activated systems; for systems activated by sodium silicates, the heterogeneous dispersion of dissolved silica from the activator may increase the compositional heterogeneity of the microstructure [19,33].
Figure 11 shows the relationship between the Na-Si and Ca-Si atomic ratios in the solid hydrated phases of the alkali-activated hybrid fly ash-slag systems. For the FA-F-slag system, with increased loading of slag, the N-A-S-H phase gradually converts into a metastable N-C-A-S-H phase with high Na and high Ca contents and wide ranges of Na/Si and Ca/Si ratios. This N-C-A-S-H has a Ca-Si ratio similar to that of the C-A-S-H in AAS, but with a much higher Na content. For the FA-C-slag system, with increasing loading of slag, the precipitated gels show a gradually reduced Na-Si ratio with a relatively stable average Ca-Si ratio. However, it should be noted that for the mixtures containing FA-C, the composition is greatly heterogeneous.
Theoretically, Ca²⁺ and Na⁺ will compete with each other to compensate the negative surface charge of silicate due to deprotonation (i.e., ≡SiOH to ≡SiO⁻) and to balance the charge deficiency of Al³⁺ substituting for Si⁴⁺ (i.e., M⁺ + Al³⁺ = Si⁴⁺). However, the concurrently high Na and Ca contents in the hydrated products of the alkali-activated FA-C system and of hybrid slag-fly ash systems with intermediate proportions may indicate that the precipitated gels are thermodynamically unstable and structurally distorted. This hypothesis can explain the low cementitious properties of alkali-activated FA-C, as well as the reduced compressive strength of hybrid FA-F-slag systems with intermediate proportions. Moreover, it is suggested that the relative amount and reactivity of Ca and Na are critical in determining the nature and properties of the precipitated gels in a hybrid system.
Conclusions
In this paper, the hydrate assemblage and composition in an alkali-activated hybrid slag-fly ash system with different starting material compositions were studied. The following conclusions can be drawn:
(1) The type and composition of the precipitated C-A-S-H, N-A-S-H, and N-C-A-S-H depend on the fly ash composition and the slag-to-fly ash ratio.
(2) There is a structural limitation on the incorporation of Al3+ for Si4+ in the tetrahedral silicate chains of C-A-S-H, N-A-S-H, and N-C-A-S-H, regardless of the starting material chemistry.
(3) In an NaOH-activated hybrid slag-fly ash system, the framework structure of the precipitated gels is an assemblage of aluminosilicate units of varying Ca and Na contents. The compositional heterogeneity of the microstructure could be a result of the heterogeneous spatial distribution of Ca and Na cations.
(4) The C-S-H (I) formed in hydrated AAS seems to be reactive with the dissolved calcium, alumina, and silicate species from fly ash, resulting in a gradual conversion to a distinct C-S-H with probably different compositions, nanostructures, and degrees of crystallinity at an early age.
(5) The low cementitious capability of alkali-activated FA-C may be attributed to the unstable N-C-A-S-H gel structure with concurrently high Na and Ca contents.
Informed Consent Statement: Not applicable.
Data Availability Statement: All data, models, and code generated or used during the study appear in the submitted article.
Conflicts of Interest: The authors declare no conflict of interest.
Some properties of Blumberg's hyper-log-logistic curve
The paper considers the sigmoid function defined through the hyper-log-logistic model introduced by Blumberg. We study the Hausdorff distance of this sigmoid to the Heaviside function, which characterises the shape of switching from 0 to 1. Estimates of the Hausdorff distance in terms of the intrinsic growth rate are derived. We construct a family of recurrence generated sigmoidal functions based on the hyper-log-logistic function. Numerical illustrations are provided.
Keywords: hyper-log-logistic model, Heaviside function, Hausdorff distance, upper and lower bounds
Mathematics Subject Classifications (2010): 41A46; 68N30
I. INTRODUCTION
The logistic function belongs to the important class of smooth sigmoidal functions arising from population and cell growth models. The logistic function was introduced by Pierre François Verhulst [1]-[3], who applied it to human population dynamics. Verhulst proposed his logistic equation to describe the mechanism of the self-limiting growth of a biological population.
Analysis of continuous growth models in terms of generalized logarithm and exponential functions can be found in [14]. A very good kinetic interpretation of log-logistic dose-time response curves is given in [15] (see also [16]).
In artificial neural networks [17], sigmoid functions are used as activation or transfer functions between two states, usually 0 and 1.
In all of their applications, the shape of the sigmoid function is an essential factor determining the properties of the underlying biological, chemical or artificial system. An important characteristic related to the shape of a sigmoid is how far it deviates from the Heaviside function, also referred to as the step function, binary switch, or binary activation, depending on the context. As shown in [18]-[19], an appropriate measure of this deviation is the Hausdorff distance of the sigmoid to the interval Heaviside function. Some approximation and modelling aspects are discussed in [20]-[23]. In this paper we discuss the Hausdorff distance of the hyper-log-logistic sigmoid curve to the interval Heaviside function.
II. MODEL
In 1968 Blumberg [9] introduced a modified Verhulst logistic equation, the so-called hyper-log-logistic equation:

dN/dt = k N^α (1 − N)^γ,  (1)

where k is the rate constant and α and γ are shape parameters. The equation (1) is consistent with the Verhulst logistic model when α = γ = 1.
We will consider the following modification of the hyper-log-logistic equation (1) (see for instance [12]):

dN/dt = k N^(1−1/β) (1 − N)^(1+1/β),  (2)

where β is a shape parameter. For β → ∞ the equation (2) reduces to the Verhulst equation. The equation (2), in essence, provides a parametric interpolation between the logistic equation (β → ∞) and second order kinetics (β = 1).
An explicit form of the solution is derived as follows.
Let the function N(t) be defined by the following nonlinear equation:

(N(t)/(1 − N(t)))^(1/β) = 1 + kt/β.  (3)

After differentiation of both sides of Eq. (3), we have

(1/β) (N/(1 − N))^(1/β − 1) N′/(1 − N)² = k/β.

From here it follows that N′ = k N^(1−1/β) (1 − N)^(1+1/β) and, therefore, the function N(t) satisfies the hyper-log-logistic differential equation (2).
The equation (3) can be rewritten as:

N(t) = 1/(1 + (1 + kt/β)^(−β)).  (4)

Further, we see that

N(0) = 1/2.  (5)

Since equation (2) satisfies the conditions for local existence and uniqueness while N > 0, the function N(t) given in (4) is a unique solution of equation (2) satisfying the condition (5). The function is defined on (−β/k, +∞). The definition can be extended in a unique way on the rest of the t-axis as zero.
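Assuming the forms of (2) and (4) given above, the following Python sketch (an illustration added here, not part of the original derivation) checks numerically that the explicit solution satisfies the hyper-log-logistic differential equation:

import numpy as np

k, beta = 20.0, 21.0

def N(t):
    # Explicit solution (4), valid for t > -beta/k; extended by zero elsewhere.
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    mask = t > -beta / k
    out[mask] = 1.0 / (1.0 + (1.0 + k * t[mask] / beta) ** (-beta))
    return out

t = np.linspace(-0.9, 2.0, 7)
h = 1e-6
dN_numeric = (N(t + h) - N(t - h)) / (2 * h)           # central difference
rhs = k * N(t) ** (1 - 1 / beta) * (1 - N(t)) ** (1 + 1 / beta)
print(np.max(np.abs(dN_numeric - rhs)))  # expected to be small (finite-difference accuracy)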
III. PRELIMINARIES
As stated in the Introduction, our main interest is the Hausdorff distance from the hyper-log-logistic function in (4) to the interval Heaviside function. We recall here the relevant definitions.
Definition [28]. The one-sided Hausdorff distance ρ⃗(f, g) between two interval functions f, g on Ω ⊆ R is the one-sided Hausdorff distance between their completed graphs F(f) and F(g), considered as closed subsets of Ω × R:

ρ⃗(f, g) = sup_{A ∈ F(f)} inf_{B ∈ F(g)} ||A − B||,

where ||·|| is a norm in R². We recall that the completed graph of an interval function f is the closure of the graph of f as a subset of Ω × R. The Hausdorff distance between completed graphs defines a metric in the set of all S-continuous interval functions. The topological and algebraic structure of the space of S-continuous functions and its subspaces is studied in [24]-[27].
In this paper we apply only the concept of the one-sided Hausdorff distance.
IV. MAIN RESULTS
Our main interest is characterizing the shape of N as a switching curve from 0 to 1. To this end, we use as a characteristic the one-sided Hausdorff distance from N to h, as in [19].
The following theorem gives upper and lower bounds for ρ⃗(N, h).
Theorem 3. The one-sided Hausdorff distance ρ⃗(N, h) from the function N given in (4) to the Heaviside function h given in (6) satisfies the inequalities (7) for k > 0.

Proof: First we consider the interval [0, +∞). Taking into account the sigmoid shape of the function N(t) in (4), the one-sided Hausdorff distance from N to the Heaviside function h on the interval [0, +∞) is a root of the equation (8). Clearly, F is an increasing function of t ∈ [0, +∞); hence, if (8) has a root, then it is unique. We use the well-known inequalities (9), where α ∈ R, x > 1 and α + x > 0. Using the first and second inequalities in (9) we obtain the bounds (10) and (11), respectively. Considering the derivative of ϕ, one sees that ϕ is a decreasing function of k for k > 0. Since F is an increasing function, the inequalities (10) and (11) imply that (8) has a unique root in the interval (d_l, d_r).
Secondly, we consider the interval (−∞, 0]. Similarly to the interval [0, +∞), using the shape of the sigmoid, the Hausdorff distance from N to h is a root of the equation (12). Clearly, G is a decreasing function of θ ∈ [0, min{β/k, 1}]; hence, if (12) has a root, then it is unique. Using the first and second inequalities in (9) we obtain the bounds (13) and (14), respectively. It is easy to see that G(d_r) in (14) is also a decreasing function of k for k > 0. Since G is a decreasing function of θ, the inequalities (13) and (15) imply that (12) has a unique root in the interval (d_l, d_r). This completes the proof.
The model (4) for β = 21, k = 20 is visualized in Fig. 1. From the equations (8) and (12) it can be seen that the estimates (7) of the one-sided Hausdorff distance of the Blumberg sigmoidal function to the Heaviside function match those obtained for the Verhulst sigmoidal function. This should not surprise us: we already mentioned that the equation (2) is consistent with the Verhulst logistic model when β → +∞. As is known, the Verhulst logistic function (with V(0) = 1/2) is of the form

V(t) = 1/(1 + e^(−kt)).

A comparison between the functions V(t) and N(t) at fixed k = 20 and β = 21 is shown in Fig. 3.
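As a numerical companion (an illustrative sketch, assuming the standard characterization used in this literature that on [0, +∞) the one-sided Hausdorff distance d is the root of N(d) = 1 − d), the distance can be computed by bisection:

def N(t, k=20.0, beta=21.0):
    if t <= -beta / k:
        return 0.0
    return 1.0 / (1.0 + (1.0 + k * t / beta) ** (-beta))

def hausdorff_right(k=20.0, beta=21.0, tol=1e-12):
    # Solve F(d) = N(d) + d - 1 = 0 on [0, 1] by bisection; F is increasing.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if N(mid, k, beta) + mid - 1.0 > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(hausdorff_right())  # distance for k = 20, beta = 21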
The Hausdorff distance from the Verhulst function to the interval Heaviside function is studied in detail in [19], [24]. Specifically, in the article [19], one may find more accurate estimates.
The hyper-log-logistic function can be used to recurrently generate a family of sigmoidal functions N_i(t), i = 0, 1, 2, ..., where N_0(t) = N(t) is the function given in (4). We refer to this family shortly as recurrence generated sigmoidal hyper-log-logistic functions.
The recurrence generated sigmoidal hyper-log-logistic functions N_0(t), N_1(t), N_2(t) and N_3(t) for k = 4 and β = 21 are visualized in Fig. 4. This type of family of functions can find application in the field of debugging and test theory [39]-[40]. Further, the results can be of interest to specialists working in the field of constructive approximation by superposition of sigmoidal functions [29]-[38].
V. CONCLUSIONS
In the areas of population dynamics, chemical kinetics and neural networks it is important to study the shape of the involved sigmoidal curve, since it relates to fundamental properties of the respective system. To study the shape, the curve is usually divided into a lag phase, a growth phase and a saturation phase [41]. These are defined in different ways in the literature, but in essence there is little or no growth in the lag and saturation phases, while most of the growth occurs in the growth phase; hence the latter is also called the exponential phase. In [19] the Hausdorff distance to the interval Heaviside function is considered as a rigorously defined characteristic of the shape. One may consider that the points where the value of the one-sided Hausdorff distance is attained are precisely the points dividing the curve into the three mentioned segments. Then, the time length of the growth phase is exactly twice the value of this distance.
In this paper we study the properties of the hyper-log-logistic curve produced by the Blumberg model through the one-sided Hausdorff distance of this curve to the interval Heaviside function. Lower and upper estimates of this distance are derived in terms of the intrinsic growth parameter, and some possible applications are discussed.
Modeling and analysis of brushless DC motor system based on intelligent controllers
The continuous innovative advancement of brushless DC motors (BLDCMs) has opened a wide scope of applications. For instance, underground electric vehicles, drones, and underwater bikes already rely on high-performance BLDCMs. However, their adoption requires control systems that monitor torque, speed and other performance characteristics. This paper presents the design of multiple intelligent controllers and command-line programming to build, modify and simulate intelligent control of BLDCMs. A newly designed graphical user interface (GUI) supports multiple controllers: a conventional proportional integral derivative (PID) controller and intelligent fuzzy-based controllers of type 1, interval type 2, and modified interval type 2 (T1FLC, IT2FLC, and MIT2FLC, respectively). Different phases of the control system design process, from the initial description to the final implementation, can be carried out with the modified toolbox. MATLAB Ver. 2019b was used to simulate and design the whole GUI process. The satisfactory results of the MIT2FLC have been validated; the study documents the GUI program through measurements, figures, flowcharts and code, giving insight into the practical considerations that may arise in such a low-cost trainer for BLDCMs. Finally, the results obtained through several simulation experiments confirm the validity of the presented mathematical model and the design of the intelligent controllers.
INTRODUCTION
Brushless DC motors (BLDCMs) are widely used because of their advantages over traditional brushed DC motors, and because of the accelerated growth and improved efficiency of control electronics and power semiconductor technology [1]. A BLDC motor is a permanent magnet synchronous machine (PMSM) supplied by a six-transistor inverter whose on/off switching is determined by the rotor position. The performance of a BLDCM is similar to that of a DC motor, but it operates without brushes, which lowers maintenance costs. It is also called an electronically commutated motor (ECM); it is fuelled by DC electricity and offers reliable operation [2]. BLDC motors are commonly utilized for many applications in fields such as automation and medical devices [3]. The BLDCM is becoming more common owing to several attractive characteristics: high instantaneous torque, long life, the capacity to regulate speed over a large range with very little maintenance, low inertia, a high power-to-volume ratio, and low friction [4]. The key drawbacks are the high design and development cost, and the fact that the BLDC motor controller is much more complicated than a traditional motor controller [5]. BLDCMs have a better energy density than other types of motors (e.g., induction machines (IM)); since there is no commutation inside the rotor, there are no rotor copper losses, so they have been utilized extensively in manufacturing and are ideal for high-performance applications [6], [7]. These factors add to the popularity of brushless DC motors in efficiency-critical applications and in applications where switching-induced spikes are undesirable. The commutation requires the use of an inverter and a rotor position sensor [8], [9]. However, the position sensor adds cost and size to the system, reduces efficiency, and is susceptible to noise. This motivates research on sensorless drives, which control position, velocity and/or torque without a shaft-mounted position sensor [10], [11].
Emerging intelligent strategies do not require precise models and are thus widely used to enhance or replace traditional control strategies. The fuzzy logic (FL) of Zadeh has been used to construct such controllers [12]. The benefit of the fuzzy control technique is its insensitivity to the accuracy of the model of a complicated dynamic system. Interval type-2 (IT2) fuzzy logic controllers add the ability to manage uncertainty. The type-2 fuzzy set theory generalizes the well-established type-1 fuzzy set theory from which it was first developed.
BLDC MOTORS MATHEMATICAL FORMULATION
A BLDC motor is a permanent magnet motor with a smooth trapezoidal back electromotive force (EMF) waveform. A BLDCM is a rotating electrical machine with a classical three-phase stator similar to that of an induction motor (IM) [13], [14]. The rotor holds permanent magnets, as seen in Figure 1. The rotor flux linkage depends on the magnet material; therefore, magnetic flux saturation is typical for this kind of motor. A BLDCM is fed from a three-phase voltage source. The source need not be sinusoidal; a square wave or any other waveform can be applied, provided the peak voltage does not exceed the maximum voltage limit of the motor. The armature winding of the BLDC motor is modelled in a corresponding way [15], [16].
Figure 1. Basic BLDC motor construction [17]

Figure 2 is a block diagram of the BLDC motor drive. Suppose that all windings have the same stator resistance and constant self- and mutual inductances. The three-phase voltage equation can then be represented as in (1); rotor currents induced by stator harmonic fields are ignored, iron and stray losses are neglected, and damper windings are not modelled [18], [19]:

v_as = R i_a + L di_a/dt + e_as,
v_bs = R i_b + L di_b/dt + e_bs,  (1)
v_cs = R i_c + L di_c/dt + e_cs,

where R is the resistance per phase and L is the inductance per phase. The equation of motion is

T_e = T_l + J dω_m/dt + B ω_m,  (2)

where T_e is the electromagnetic torque, T_l is the load torque, J is the moment of inertia, and B is the friction constant. The rotor displacement can be found as

θ_r = (P/2) ∫ ω_m dt,  (3)

where P is the number of poles. The back EMF will be of the form [20]

e_as = K_b f_as(θ_r) ω_m,  e_bs = K_b f_bs(θ_r) ω_m,  e_cs = K_b f_cs(θ_r) ω_m,  (4)

and the electromagnetic torque developed is

T_e = (e_as i_a + e_bs i_b + e_cs i_c)/ω_m.  (5)

Figure 2. Overall control scheme for a BLDC motor [21]

The phase voltages are (v_a, v_b, v_c) and the phase currents are (i_a, i_b, i_c); L_s is the stator inductance, R_s the stator resistance, M the mutual inductance, with L = L_s − M; (e_a, e_b, e_c) represent the phase back EMFs [22], [23]. The mechanical angular speed is ω_m. Figure 3 indicates that the torque ripple is minimized by injecting a square-wave phase current within a fraction of the steady back-EMF magnitude. The work in this paper applies a modified interval type-2 (MIT2) fuzzy controller to the current and speed control of a BLDC motor in order to achieve the required dead-beat response in high-performance applications [24]. The plant transfer function can vary with operating conditions; maintaining the required output with classical techniques requires suitable adjustments of the controller parameters. The schematic diagram of a conventional BLDCM controller is shown in Figure 3 [25], [26]. Two controllers are used: one in the internal loop (to regulate the current) and one in the external loop (for speed control) by adjusting the voltage on the DC bus. After tuning, both controllers can be replaced by a single intelligent controller that does not require tuning, which improves the precision of the response and overcomes the problem of in-application tuning of the control parameters [27].
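To make the mechanical part of the model concrete, the sketch below integrates the equation of motion (2) with a simple PI speed loop acting on the torque command; all numerical values are illustrative assumptions rather than the parameters of the studied motor, and the electrical dynamics are omitted.

J, B = 1e-3, 1e-5          # inertia [kg*m^2] and friction [N*m*s/rad] (assumed)
Kp, Ki = 20.0, 15.0        # PI gains, as reported after tuning
w_ref, TL = 300.0, 0.0     # speed reference [rad/s], load torque [N*m]
dt, t_end = 1e-5, 0.2      # Euler integration step and horizon [s]

w, integ = 0.0, 0.0
for step in range(int(t_end / dt)):
    t = step * dt
    if t >= 0.1:
        TL = 0.11          # illustrative load step at t = 0.1 s
    e = w_ref - w
    Te = Kp * e + Ki * integ
    if -5.0 < Te < 5.0:
        integ += e * dt    # simple anti-windup: freeze integrator in saturation
    else:
        Te = max(min(Te, 5.0), -5.0)   # crude torque saturation
    w += dt * (Te - TL - B * w) / J    # Euler step of J*dw/dt = Te - TL - B*w
print(f"final speed: {w:.1f} rad/s (reference {w_ref} rad/s)")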
DESIGN SIMULATION AND ANALYSIS
In order to evaluate the viability and functionality of the suggested controller system, an interval type 2 fuzzy logic controller (IT2FLC) was developed for DC motor speed control. A MATLAB 2019b toolbox was developed to apply several types of intelligent controllers to the dynamic system using a graphical interface for type-2 fuzzy inference. The performance requirements are the settling time and the peak overshoot. The step response plots were obtained with MATLAB/Simulink, and the controllers were then compared in terms of settling times and overshoots. The reliability of the controllers was also compared using a widely known output performance index, the mean square error of the output,

MSE = (1/n) Σ_k (y_desired(k) − y_out(k))²,

where y_out(k) is the system design output and y_desired(k) the desired output. The BLDC motor control system is based on a proportional integral derivative (PID) controller. The control system has two loops: a speed control loop for the motor speed and a current (torque) control loop, as in Figure 3. The model allows several of the parameters of the current and speed controllers to be replaced by type 1 or type 2 fuzzy controllers. The traditional controller structure is shown in Figure 4. The BLDC motor simulation and study were done with MATLAB/Simulink: a six-step voltage inverter built from MOSFETs of the SimPowerSystems library feeds a three-phase motor (parameters are shown in Table 1). The DC bus voltage is regulated by a speed controller. The inverter gate signals are generated by decoding the motor's Hall-effect signals, and the three-phase output of the inverter is applied to the stator windings of the PMSM block. The load torque applied to the machine's shaft is first set to 0 and steps at t = 0.1 s to its set value (11 N·m).
The internal loop controls the inverter gate signals and synchronizes them with the back electromotive forces (EMFs). After tuning, the best trial results over several parameter sets were obtained with final PI parameter values Ki = 15 and Kp = 20 (see Table 2). The BLDCM controller structure depends on the six-step decoder-fed inverter modelling technique and a modular truth-table implementation. Table 3 gives the inverter switching truth table.
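The decoder logic can be expressed compactly as a lookup table from the three Hall signals to the conducting phases. The mapping below is one common 120-degree commutation pattern, given as an assumed illustration, since the exact entries depend on the Hall sensor placement of the specific machine (compare Table 3).

# Map (Ha, Hb, Hc) -> (energized high-side phase, energized low-side phase).
# One common 120-degree conduction sequence; the actual entries depend on the
# Hall sensor alignment of the specific machine.
COMMUTATION = {
    (1, 0, 1): ("A", "B"),
    (1, 0, 0): ("A", "C"),
    (1, 1, 0): ("B", "C"),
    (0, 1, 0): ("B", "A"),
    (0, 1, 1): ("C", "A"),
    (0, 0, 1): ("C", "B"),
}

def gate_signals(hall):
    """Return six inverter gate states (AH, AL, BH, BL, CH, CL) for a Hall state."""
    high, low = COMMUTATION[hall]
    gates = dict(AH=0, AL=0, BH=0, BL=0, CH=0, CL=0)
    gates[high + "H"] = 1
    gates[low + "L"] = 1
    return gates

print(gate_signals((1, 0, 0)))  # -> AH and CL on: current flows from phase A to C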
For simplicity and accuracy, the system model of the BLDCM assumes equal stator resistances, constant mutual and self-inductances, ideal inverter semiconductor devices, negligible iron losses, and identical back-EMF waveforms for all phases. The derivation is centred on the dynamic equation (1) and the equivalent circuit of the BLDC motor and VSI system (see Figures 1 and 2). The equation of motion is as given in (2).

Table 3. Inverter switch truth table based on the back-EMF sequence

Here T_e is the electromagnetic torque, J the moment of inertia (in kg·m²), T_L the load torque (in N·m), B the friction coefficient, ω_r the rotor speed in electrical rad/s, and ω_m the rotor speed in mechanical rad/s.
BLDCM SYSTEM SIMULATION BASED ON MULTI INTELLIGENT (T1FLC, IT2FLC) CONTROLLER
The simulation of the BLDCM system with multiple intelligent controllers is illustrated in Figure 5, with the performance of the system design shown in Figures 6 and 7. T1FLCs have been applied effectively to a variety of engineering problems. Their key benefit is the capacity to express knowledge through linguistic fuzzy rules, which humans can easily understand and create. Moreover, T1FLCs can deal with linguistic uncertainty and ambiguity. A T1FLC is typically composed of four main sections: the fuzzy inference system, input fuzzification, fuzzy rules, and output defuzzification. The fuzzy system is a mapping from non-fuzzy inputs to non-fuzzy outputs. In a type 2 fuzzy logic system, the output stage consists of two steps: first, the type-2 sets are mapped to a type-1 fuzzy set, a step named type reduction (or order reduction); the reduced set is then defuzzified. The order-reduction methods in type 2 fuzzy logic systems mirror the defuzzification approach of type 1 systems. The speed controller uses two input variables (e1, e2) and a single output (u).
The error e1 is the difference between the reference speed ω_r and the current motor speed ω_m, i.e., e1(k) = ω_r(k) − ω_m(k). Equation (15) is used to calculate the error change, e2(k) = e1(k) − e1(k − 1), with e1(k − 1) being the previous error value.
The FLC method specifies two parameters for normalisation (e1N and e2N, for the inputs) and one parameter for de-normalisation (uN, for the output). The normalized values are scaled into the range (−1, +1), and in de-normalisation the output of the fuzzy controller is translated into a value matched to the control range of the actuator. The fuzzy values generated by the fuzzy inference process must be defuzzified into a crisp output (u). The nine clusters described in the previous section and Figure 7 define triangular fuzzy membership functions (MFs) for the input and output values.
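A minimal sketch of the controller input stage under the assumptions stated above (two normalized inputs, one output, triangular membership functions); the scaling factors and membership breakpoints are illustrative assumptions:

def tri_mf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_inputs(w_ref, w_m, e_prev, e1N=0.01, e2N=0.1):
    e1 = w_ref - w_m          # speed error
    e2 = e1 - e_prev          # change of error, as in eq. (15)
    # Normalize into [-1, 1]; the gains e1N, e2N are assumed values.
    e1n = max(min(e1 * e1N, 1.0), -1.0)
    e2n = max(min(e2 * e2N, 1.0), -1.0)
    return e1, e1n, e2n

e1, e1n, e2n = fuzzy_inputs(w_ref=3000.0, w_m=2950.0, e_prev=60.0)
print(e1n, e2n, tri_mf(e1n, 0.0, 0.5, 1.0))  # membership degree of one input cluster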
The adaptive neuro-fuzzy inference system (ANFIS) was implemented, including both FIS and GFS structures, in this graphical user interface (GUI). The three kinds of controllers (FT1, IFT2 and classical) for the BLDC motor are designed and simulated through five interface windows, as seen in Figure 6. The initial window introduces the concept behind the modified FT2 controller and launches all design simulation windows. The third window implements the classical controller. The fourth window shows the ANFIS configuration of the FT1 controller. Four control buttons are shown in this window: the first calls all the MFs used to build the fuzzy logic controller; the second opens the fuzzy rule viewer and the error surface (Figure 7 shows this output; the FT1 controller was connected to the BLDC motor after construction); the third button enables the simulation (Figure 7); and the fourth calls the ANFIS window of the FT1 controller to analyse the main fuzzy inference function.
A final window displays the design of the modified IFT2 controller. This window also has four buttons, of which the first is the most significant. The new kind of MF is the MIFT2-FLC found in the modified design. The key benefit of this feature is that the parameter values can increase without significant constraints, which helps the controller reach a correct answer without overshoot. Furthermore, a broad MF base supports a design solution to the problem of the initial operating state (the transient response of the BLDC motor). These kinds of MFs are used in the IFT2 controller design, as seen in Figure 7. A good GUI gives the user a comfortable, familiar environment to work in. In the GUI we can simulate the system equations that describe the electrical and mechanical transients. The GUI simulation helps to transfer the scientific skills gained to actual devices and makes it possible to track and analyse the response of useful physical quantities. After increasing the load, the motor speed and the back-EMF voltages across the stator windings decrease during the three-phase rise.
Figure 8 shows the simulation results of the conventional system, which has the most beneficial responses, with fast settling and a stable error rate of 0.99% at an output of ω_m = 2982. Compared, for example, with [27], which has a fast dynamic response with a settling time of approximately 0.025 s, our system settles in almost 0.015 s, as seen in Figure 8. The outcome indicates the performance of the inverter, which precisely fed the motor according to the connection tables and the encoder module. Moreover, as in [28], the proposed technique is shown to have good performance characteristics under transients and rapid load disturbances. Such a demonstration is extremely valuable for studying a drive before approaching a console design dedicated to assessing dynamic drive performance. The reproduced results compare a conventional PID controller and a fuzzy three-phase PID controller for the BLDCM. From the simulation results, it is clear that, for the same operating conditions, BLDC speed control using PID-fuzzy control technology performs better than the conventional PID controller, especially when the motor runs at lower and higher speeds. The time-series results demonstrate the ability of the proposed GUI to give direct control of the BLDCM based on multiple intelligent controllers, and to compare the results of the four types of controllers. The use of graphical user interface (GUI) technology permits simultaneous operation and examination of control devices. The design and implementation first performed using the IT2FLS toolbox are potentially significant for research in interval type 2 fuzzy logic, as the IT2 fuzzy control design toolbox addresses complex issues in various applications.
Figure 5. Simulation results of the BLDCM with the multi-intelligent controller
Figure 7. FT1 controller rules and its error surface
Figure 8. Testing of the simulated BLDCM model based on multi-intelligent controllers
Table 2. Decoder-fed inverter truth table
Propagation structure feature of entertainment news in the Weibo online social network
Entertainment news caters to people's desires for leisure and gossip. Recently, with the popularization of the Internet, online social networks have become a convenient platform for netizens to receive and spread entertainment news. Previous studies of entertainment news focused on analyzing its psychological background and economic significance. However, little attention was paid to the realistic propagation structure features of entertainment news, which reflect the essential propagation mechanisms. This research analyzes different types of news in the online social network, comparing entertainment news and other news with respect to different structural features. It is found that entertainment news propagates with more indirect re-postings and a polycentric structure. These unique characteristics are stable across different stages of propagation. This structural feature offers a better understanding of entertainment propagation and may inspire further analysis of entertainment news based on propagation traces.
Introduction. - The origin of entertainment news can be traced back to the 1830s, when Benjamin Day established the first successful penny paper, The New York Sun, in the United States in 1833, marking the beginning of the era of the penny press [1]. The major coverage of entertainment news in the penny press added to its readership and success [2]. Alfred Harmsworth founded the Daily Mirror in 1903 and used "tabloid" to refer to it [3]. Since the paper was half the size of broadsheets, with soft and entertainment news, the word "tabloid" gradually began to refer to small-sized newspapers reporting celebrities' private lives, including emotional situations, sexual scandals, changes of appearance, violent crimes, weird stuff and all kinds of advertisements [4]. Developing from the newspaper, entertainment news gradually penetrated other media such as radio, television, and nowadays the online media [5]. The emergence and growth of the Internet also accelerated the spread and popularity of entertainment news [6]. Nowadays, the trend towards entertainment has overwhelmed the whole society from various perspectives, offering the general public relaxation and recreation. Acting as a complex system [7], the online social network also accelerates the spread of entertainment information and expands the entertainment market as well [8]. The giant social network companies in the USA, such as Facebook and Twitter, promote the explosive and overwhelming spread of entertainment news to a new level and propagate it to the whole world [9]. For example, the Twitter social network has 500 million tweets per day, and 74% of Twitter users claim they use the network to get their news [10]. The most followed person on Twitter already has more than one hundred million followers. As for Northeast Asian countries like China, Japan and South Korea, the entertainment industry, as a pillar industry, helps entertainment news to penetrate people's daily lives [11]. By the end of 2018, Sina Weibo, the most famous social network site in China, announced in its official reports that it possessed 200 million daily active accounts, and entertainment news is the top reading topic for users [12]. Entertainment news propagation redefines the entertainment industry, which engages in content innovation and media entrepreneurship to incubate media brands and aggregate fan communities [13].
In the new era of the Internet, studies of entertainment news in social networks come mainly from the psychology and economics areas rather than from the study of propagation patterns. For instance, with respect to the psychology of entertainment preferences, individuals differ in their preferences for such entertainment [14], and the drive for entertainment on Facebook [15] and WeChat [16] is found to be related to extrovert personality and narcissism. Entertainment also satisfies people's need for group behavior and conformity [17]. Therefore, many users consider social network sites a platform for fun and become attached to them. For example, a study based on the QQ zone of China found that entertainment motivation could be a major predictor of social network site usage [18], and in Belgium, Facebook use is positively related to entertainment-oriented purposes, especially for adolescents [19]. Some social network sites even use entertainment news to attract and bind users: an up-to-date entertainment website named Celebrity Watch selects important readers and finds potential customers based on their browsing history [20]. Entertainment news not only changes social network sites but also influences the whole media industry [13]. Traditional entertainment spreads a one-directional message, whereas online social networks are bi-directional platforms where people congregate to communicate gossip and entertainment news [21].
Apart from the psychology and economy research mentioned above, the study of structural features is also indispensable and provides the opportunity to understand the infectious mechanism of entertainment. For example, many works analyze information propagation with epidemic models [22][23][24][25]. Some studies focus on the propagation structure features of a specific kind of news. For example, several works study fake news and find that its structural features differ significantly from those of other news [26][27][28]. The propagation of fake news on Twitter is found to be farther, faster, deeper, and broader than that of real news, and fake news inspires fear, disgust, and surprise [27]. Interestingly, some structural features may emerge at an early stage [29]. Apart from fake news, entertainment news is another kind of infectious news, which has a great influence on people's modern lives. The topology of entertainment news propagation reflects the essential properties and mechanism of how this influence spreads. However, the structural features of entertainment news have rarely been analyzed.
On the one hand, studies of entertainment news are mainly from psychology and economy perspectives rather than propagation structure. On the other hand, research about structural features has rarely focused on entertainment news. As a result, in this work, we analyze the entertainment news of social networks from the structural perspective. We select realistic entertainment news and compare it with other news from the perspective of topology. The entertainment news networks are found to have particular structural features: the ratio between the indirect and direct re-postings of entertainment news is relatively larger than that of other types of news, and the networks of entertainment news often have a smaller Gini coefficient [30]. This indicates that entertainment news is driven by multiple users, which describes the propagation pattern. We also test these features in the early stage of propagation. These unique structural features of entertainment news will help us to understand its infection mechanism better.
Methods. -
Dataset description. The dataset of anonymous users and their retweeting traces is collected through the Sina Weibo social network platform, the largest social network in China. In this work, we choose relatively larger networks (greater than or equal to 200 re-postings) to analyze, because they have a certain influence. Hence, we study 1691 news propagation networks with 1.5 million re-postings from 01.01.2016 to 08.30.2018. The users are selected randomly without considering the types of users. More details are shown in Table 1.
Definition of entertainment news. Acting as the top reading topic for users [12], entertainment news mainly reports celebrities' gossip and scandals, and aims to arouse the ordinary audience's interest and cater to their taste. Specifically, in this work, the dataset containing 1691 news networks is classified according to content: politics news, economy news, social news, entertainment news, and education and knowledge news. Due to the importance of news topics for this work, we carefully classify the dataset manually, with a three-person voting model to avoid subjective bias; if the three judges fail to give the same conclusion, the majority opinion is adopted. If the content of one network covers several topics, the manual classification conveniently selects the main type from the reader's perspective. Because of the importance of the news topic and the convenience of selecting the most important topic, we choose a manual three-person voting model rather than natural language processing in this work; for much larger datasets, automatic approaches such as natural language processing would be required. After the classification, there are 479 entertainment news networks. Because the other four kinds of news have similar structural features, we study them together in order to compare them with entertainment news more conveniently. Hence there are 1212 other news networks. Information about the numbers of anonymous users and re-postings is shown in Table 1.
Network establishment.
We plot the schematic diagram to explain how to establish the re-posting trace network as shown in fig. 1(a). In the propagation network, the node stands for user and the edge stands for re-posting relationship. This propagation network is a directed network with the direction of information spreading (from source to receiver). The direction and other concepts such as the layer, direct re-postings and indirect re-postings are shown in this figure. This example network contains eight users as well as seven re-postings among them.
Ratio between the indirect and direct re-postings. The layer groups re-postings that share the same distance from the creator. As shown in fig. 1(a), the seven re-postings can be divided into layer one, layer two and layer three. The re-postings in the first layer are the direct re-postings (out-degree) from the creator, whereas the re-postings in the remaining layers, such as layer two and layer three, are indirect re-postings. Therefore, the ratio between the indirect and direct re-postings is defined as

r = n_i / n_d,

where n_d is the number of direct re-postings and n_i is the number of indirect re-postings.
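A minimal sketch (assuming the propagation network is stored as a directed edge list from source to receiver) that assigns each re-posting to a layer by breadth-first search from the creator and computes r; the example edges form a small tree consistent with the description of fig. 1(a):

from collections import deque

def layer_sizes(creator, edges):
    """edges: iterable of (source, receiver); returns {layer: number of re-postings}."""
    children = {}
    for src, dst in edges:
        children.setdefault(src, []).append(dst)
    sizes, queue = {}, deque([(creator, 0)])
    while queue:
        node, depth = queue.popleft()
        for child in children.get(node, []):
            sizes[depth + 1] = sizes.get(depth + 1, 0) + 1
            queue.append((child, depth + 1))
    return sizes

# Eight users and seven re-postings, with creator "a" (an assumed topology).
edges = [("a","b"),("a","c"),("a","d"),("b","e"),("b","f"),("c","g"),("e","h")]
sizes = layer_sizes("a", edges)
n_d = sizes.get(1, 0)                              # direct re-postings (layer one)
n_i = sum(v for k, v in sizes.items() if k > 1)    # indirect re-postings
print(sizes, "r =", n_i / n_d)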
Concentration ratio.
We measure the concentration ratio of the propagation network by the Herfindahl-Hirschman Index [31] from the network perspective. A given network has one concentration ratio, defined as the sum of the squares of the out-degrees of the nodes within the given propagation network:

C = Σ_{j=1}^{N} (k_j^out)²,

where N is the number of nodes in the network and k_j^out is the out-degree of node j (see the sketch after the Gini coefficient definition below).
Gini coefficient [30]. The Gini coefficient is originally a single parameter measuring the degree of inequality of an income distribution. Based on this, we analyze the distribution of the out-degrees of the nodes within a given network to plot the Lorenz curve. The Lorenz curve is originally a graph showing the proportion of overall income or wealth held by the bottom x% of the people; here it shows, for the bottom x% of nodes, what percentage (y%) of the total out-degree these nodes have. Theoretically, the Gini coefficient can range from zero (complete homogeneity) to one (complete heterogeneity). Furthermore, we use the complementary Gini coefficient instead of the Gini coefficient in order to better demonstrate its distribution in logarithmic coordinates. The complementary Gini coefficient is defined as

G′ = 1 − G,  G = A/(A + B),

where A is the area between the line of equality and the Lorenz curve and B is the area under the Lorenz curve, with x the cumulative portion of nodes ranked by out-degree and y the cumulative portion of out-degrees.

[Fig. 1 caption: (a) Schematic of the re-posting network. The creator is marked by node a. Node b reposts node a's message, producing a direct re-posting, whose direction is the information spreading direction (from source to receiver). Node e reposts node b's message, creating an indirect re-posting. The re-postings are divided into layers by distance from the creator. (b) A propagation network with 454 nodes and 514 edges, whose content is publicity shots of a famous Chinese actor; the creator "i" is marked. The figures are plotted with Pajek using the 2D Fruchterman-Reingold layout. (c) Another typical propagation network of entertainment news, with 722 nodes and 732 edges; the creator "j" is marked. This news is a video about a Korean TV series.]

[Fig. 2 caption (partial): the different stages of propagation show that the first layer is dominating and that entertainment news has a relatively smaller first layer. (c) The proportion of influential nodes at different layers over all networks of the same group; influential spreaders are defined as nodes whose out-degree is above 5% of the whole network size. (d) The proportion of influential nodes at each layer at the early stage of propagation (considering only the re-postings within one day from the first re-posting).]
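Given the out-degree sequence of a network, both inequality measures defined above reduce to a few lines; the following sketch follows the definitions literally (the concentration ratio as the sum of squared out-degrees, and the Gini coefficient from the Lorenz curve):

import numpy as np

def concentration_ratio(out_degrees):
    k = np.asarray(out_degrees, dtype=float)
    return np.sum(k ** 2)

def gini(out_degrees):
    k = np.sort(np.asarray(out_degrees, dtype=float))
    n = k.size
    y = np.concatenate(([0.0], np.cumsum(k) / k.sum()))   # Lorenz curve ordinates
    x = np.concatenate(([0.0], np.arange(1, n + 1) / n))  # cumulative node portion
    B = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))       # area under Lorenz curve
    return 1.0 - 2.0 * B                                  # G = A/(A+B), A+B = 1/2

degs = [7, 0, 0, 0, 0, 0, 0, 0]     # star-like network: one dominant spreader
print(concentration_ratio(degs), gini(degs), 1.0 - gini(degs))  # last: complementary Gini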
Results. -To establish the news propagation network, we apply an approach based on re-posting relationships among users regarding the same news [29]. Based on the identified propagation networks, we demonstrate the propagation networks of entertainment news in fig. 1. For example, in fig. 1(b), the network is evolving with the participation of not only the creator "i" but also other influential re-posters. Apart from the creator, the latter re-posters could also play an important role in the further spreading of entertainment news. To check this, the proportion of indirect re-postings in entertainment networks is studied.
The investigation of layers in typical propagation networks helps to understand the distribution of entertainment news. For the whole lifespan (fig. 2(a)) and the early stage (fig. 2(b)), we study the layer distribution for entertainment news and other news, respectively. Although the first layer is dominating, entertainment news networks tend to have a relatively smaller first (direct) layer and larger indirect layers. Moreover, considering the influential re-posters (those whose out-degree is above 5% of the whole size of the given propagation network) rather than all the re-postings, we first count the numbers of influential nodes and of all nodes for the different layers, and then calculate the proportion of influential nodes. As shown in fig. 2(c), the proportions of influential re-posters are generally higher in entertainment news than in other news. As a result, the layer distributions show different evolution paths for entertainment and other news. In fig. 2(d), however, at the early stage of propagation, the proportions of influential spreaders of entertainment and other news are low and differ only slightly when considering the re-postings within one day. This indicates that, for entertainment news, the influential spreaders may appear at a later stage of spreading. For earlier stages, such as within one hour and within five hours, the proportions of influential spreaders are even lower, so we do not show them here.
The propagation is highly influenced by the proportion of the first layer, since it is the majority. As a result, we adopt the ratio between the indirect and direct re-postings (see the section "Methods") to express the amplification effect of propagation in the two groups of news. As shown in fig. 3(a), for the whole lifespan, entertainment news networks turn out to have a larger r than other news networks. Entertainment can attract more re-posters out of interest. The spreading of entertainment news is contributed not only by the creator but also by influential later re-posters. With the participation of some influential re-posters in entertainment news, more re-postings exist in the indirect layers rather than in the first one. Here we conduct hypothesis testing to measure whether the means of the two sets of data are significantly different from each other. We choose the Mann-Whitney U test [32], which is suitable for non-normally distributed, large samples. Furthermore, the difference between entertainment news and other news already emerges at the early phase. We further study the r between the two groups of news at the early phase, including one hour, five hours and one day from the first re-posting (fig. 3(b), (c) and (d)). The separation of the distributions emerges at one hour (fig. 3(b)), and the difference already becomes significant, with p-values of the Mann-Whitney U test below 0.01. The differences between entertainment and other news seem stable across time.
Apart from the creator, we further study the out-degrees of all the nodes through the concentration ratio. It is originally used in economics, where it measures market shares in an industry and illustrates the degree of oligopoly. Here we use the concentration ratio to measure the degree of inequality of the node out-degrees from the network perspective. Specifically, the concentration ratio calculates the sum of the squares of the out-degrees (see the section "Methods"). A concentrated network has a relatively larger concentration ratio. As shown in fig. 4(a), the peak of entertainment news is obviously lower than that of other news, and the concentration ratio corresponding to the peak is relatively large. This indicates that entertainment news networks turn out to have smaller concentration ratios. Other kinds of news have a few nodes that contribute a lot to the structural diffusion process, since they have broadcast-like diffusion processes [33,34]. Compared with other news, entertainment news propagates in a viral way. Similarly to the analysis of r, we check the stability of this phenomenon from the temporal perspective, and the finding persists at the early stage (fig. 4(b), (c) and (d)). The concentration ratio reflects the inequality from the network perspective; when we look deeper, at the node perspective, the Gini coefficient demonstrates a more obvious difference between types of news, as follows.
The ratio between the indirect and direct re-postings and the concentration ratio mentioned above are global parameters. For example, r divides the edges of a network into two groups, direct re-postings and indirect re-postings, which does not consider the microscopic differences of propagation connections, and the concentration ratio studies the degree of concentration from the network perspective. In more detail, from the node perspective, we analyze the Gini coefficient of a given network to measure the out-degree heterogeneity of its nodes. For example, the random regular network is one of the most homogeneous networks, since every node has the same out-degree; its Gini coefficient is zero. The star network, in contrast, is the most heterogeneous network, and its Gini coefficient is one. Realistic propagation networks lie between these two limiting cases. Therefore, we use the Gini coefficient to measure the degree of heterogeneity (see the section "Methods"). In fig. 5(a), the complementary Gini coefficient of entertainment news is considerably larger than that of other news. Entertainment news networks have a viral-like structural diffusion process, since their propagation is more homogeneous and involves several influential broadcasters. On the contrary, other news demonstrates a broadcast-like layout due to one dominant creator acting as the unique influential disseminator. Again, we test whether this difference is stable at the early stages by analyzing the distribution of the complementary Gini coefficient within one hour (fig. 5(b)), within five hours (fig. 5(c)) and within one day (fig. 5(d)). It also shows a difference, with a p-value below 0.01.
Discussion. -Recently, entertainment news has gained great popularity among different types of news media [12], which influences the entire society in all perspectives. Here, we study the propagation structure feature based on the complex network theory. Considering different driving factors including social contagion behaviors or strategic considerations [35], the topology of the network helps to explore the hidden structure of entertainment news. Compared with the popularity of entertainment, studies of its propagation mechanism did not draw enough attention. The underlying mechanism of entertainment news still has many unsolved problems currently, indicating the significance and necessity of study on propagation features. In this work, we focus on the re-posting ratio and the heterogeneity level of networks. The entertainment news has a peculiar distribution of these two features and these characteristics could appear at the early stage of spreading. During the entertainment news propagation, the masses are willing to post entertainment news to satisfy the psychological need of curiosity, conformity or narcissism [16], and the entertainment practitioners add fuel to the gossip fire deliberately. These enthusiastic audiences and entertainment reporters could constitute influential re-posters. As a result, the entertainment news spreads often by the contribution of both the creator and influential latter re-posters, which reflects its multiple-spreader structure compared to the single dominant spreader in other news. And the r of entertainment news is larger, demonstrating that entertainment news has a stronger infectivity as a result of cascading dynamics [36][37][38]. The fan economy is a kind of economic mode that obtains benefits from the relationship between fans and the people who are followed like stars, idols, and celebrities. The mechanism that entertainment news generates special structural features will be useful in the study of the fan economy, which could drive further research in the future. Besides, negative and unhealthy entertainment news can be intentionally and effectively supervised and controlled according to these structural features.
The entertainment industry is influenced by its local culture. For example, since the last century, the UK and US have had a tabloid culture, in which a kind of newspaper serves to spread entertainment news. The entertainment industry has also become a pillar industry in Northeast Asian countries like China, Japan and South Korea, where entertainment news plays an indispensable role in people's lives. Additionally, different social network platforms have different operating mechanisms, including user categories, text language and length, multimedia attachments and so on. This diversity of culture and platform makes it difficult to study entertainment news only by user property analysis or natural language processing. With respect to propagation structure features, however, we find in this work that entertainment news propagates with more indirect re-postings and a polycentric structure, and such structural properties probe deep into its "DNA" and break the shackles of different cultures and platforms. Moreover, the structural propagation findings of this work require verification on other platforms from other cultures. In our previous work about fake news, similar features were found for both the Weibo and Twitter platforms, which come from different cultures [29]. For data reasons, this work only studies the dataset from Weibo, and the analysis of other platforms may drive further research in the future.
Design of an enhanced mechanism for a new Kibble balance directly traceable to the quantum SI
The “Quantum Electro-Mechanical Metrology Suite” (QEMMS) is being designed and built at the National Institute of Standards and Technology. It includes a Kibble balance, a graphene quantum Hall resistance array and a Josephson voltage system, so that it is a new primary standard for the unit of mass, the kilogram, directly traceable to the International System of Units (SI) based on quantum constants. We are targeting a measurement range of 10 g to 200 g and optimize the design for a relative combined uncertainty of 2 × 10⁻⁸ for masses of 100 g. QEMMS will be developed as an open hardware and software design. In this article, we focus on the design of an enhanced moving and weighing mechanism for the QEMMS based on flexure pivots.
Introduction
The Kibble balance is the instrument for the precise primary realization of the unit of mass in the International System of Units (SI) based on electro-mechanical metrology principles and quantum physics. In the context of the design of a new version of the Kibble balance at the National Institute of Standards and Technology (NIST), as part of the so-called "Quantum Electro-Mechanical Metrology Suite" (QEMMS), this article describes the components integrated in the QEMMS as well as the design of the new flexure-based moving and weighing mechanism.
Figure 1. The relation between the IPK, natural constants and the unit of mass in the Kibble balance in traditional and reversed operation

The revised SI, in which the numerical values of the Planck constant, h, the speed of light, c, and the hyperfine transition frequency of Caesium, ν_Cs, are fixed, provides a means to realize mass through electrical metrology using absolute precision balances without the need to compare to a physical object.
Until 2018, only the values for c and ν Cs were fixed, and the International Prototype Kilogram (IPK) was the last physical artifact defining one of the seven SI base units. Using the Kibble balance principle and decades of hard work, scientists were able to measure and define a numerical value for h based on the mass of the IPK, the speed of light and the hyperfine transition frequency of Caesium. As a result, the Kibble balance can be operated in a reversed fashion and realize virtually any macroscopic mass value directly from the defined value of h. A visualization of this relation is shown in Fig. 1.
The Kibble balance uses two modes of operation for an absolute mass measurement. In the first one, the weighing mode, the gravitational force of a test mass is directly compensated by a counter force such that the test mass stays at a defined null position. The counter force is provided by a magnet-coil system and can be controlled by adjusting the current through the coil, see Fig. 2.
The equation for this mode of operation is

m g = N (∂Φ/∂z) I,  (1)

where m is the mass of the test mass, g the local gravitational acceleration, N the number of turns of the wire in the coil, Φ the magnetic flux through the coil, z the position of the coil along the vertical direction, and I the electric current through the coil. Since the magnetic flux gradient N ∂Φ/∂z is very hard to obtain with sufficient uncertainty, a second mode of operation, the velocity mode, allows for a direct measurement of N ∂Φ/∂z with the required precision. Here, the coil is moved up and down in the magnetic field of the magnet such that a voltage is induced in the coil. Derived from Faraday's induction law, it yields

V = N (∂Φ/∂z) v,  (2)

where V is the voltage induced in the coil in the velocity mode, and v is the vertical velocity of the coil with respect to the magnet, see Fig. 3. The velocity in high precision Kibble balances is usually measured with an interferometer, and vacuum operation of the balance creates an environment with a stable index of refraction. Solving both equation (1) and equation (2) for N ∂Φ/∂z and equating them yields

m g v = V I.  (3)
Figure 2. The principle of operation of the Kibble balance in the weighing mode. The gravitational force of a test mass m is compared with the magnetic force produced by the magnet-coil system to keep the test mass in a defined, feedback-controlled position.

On the left hand side of equation (3), we see the expression for mechanical power, and on the right hand side, the one for electrical power. The original name of the Kibble balance, "watt balance", is a reflection of the power balance principle described here.
A direct measurement of mass traceable to the new SI as in the QEMMS would not be possible without two quantum effects: The Josephson effect and the quantum Hall effect.
The electric current I through the coil in the weighing mode is usually measured according to Ohm's law by monitoring the voltage drop over a traditional resistor R with a very precisely known value (see Fig. 2), which is calibrated against a quantum Hall resistor (QHR) standard; the voltages themselves are measured against a programmable Josephson voltage standard (PJVS). Furthermore, the velocity and the local gravitational acceleration in equation (4), the Kibble equation m = VI/(gv) obtained by solving equation (3) for m, are measured using primary standards of length and time based on the definitions of the second and the meter, traceable to ν_Cs and c, respectively.
More detailed information on the link of the quantum effects with the Kibble balance equation can be found in [7,8].
The QEMMS
There are multiple Kibble balance experiments around the globe. The design of each Kibble balance is unique; however, one feature seems to be common to all: the resistor in the electrical circuit for the weighing mode (see Fig. 2) is calibrated using transfer standards against a QHR as the primary standard for resistance. QEMMS will overcome the inconvenience of external resistor calibration, and thus eliminate the resistance calibration uncertainty, by implementing a QHR directly in the Kibble balance's electrical circuit. This QHR is based on the quantization effect of monolayer graphene [9]. Among the advantages offered by graphene-based QHR standards is their ability to maintain the fully quantized state at higher currents and temperatures. NIST researchers fabricated thirteen single-element graphene QHR standards in parallel, operated at the Landau level i = 2 plateau, giving a nominal value of R_K/26 ≈ 992.8 Ω [10]. Even though the graphene QHR array in Panna et al. shows experiments with maximum currents up to 0.3 mA, measurements have already been published showing single Hall bar graphene devices maintaining quantization for currents up to 0.7 mA [11]. Furthermore, creating arrays for higher currents is the subject of extensive research efforts at NIST. In the context of the QEMMS, these currents allow for a measurement of masses up to 200 g, depending on the design of the coil in the magnetic field [12].
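The nominal array value follows directly from the defining constants; a short check (the constants are the exact SI values):

```python
# Nominal resistance of the graphene QHR array described above:
# 13 Hall bars in parallel, each on the i = 2 plateau (R_K / 2 per device).

h = 6.62607015e-34      # Planck constant, J s (exact in the SI)
e = 1.602176634e-19     # elementary charge, C (exact in the SI)

R_K = h / e**2          # von Klitzing constant, ~25812.807 ohm
R_device = R_K / 2      # single graphene Hall bar on the i = 2 plateau
R_array = R_device / 13 # 13 devices in parallel -> R_K / 26

print(f"R_K = {R_K:.3f} ohm, array = {R_array:.1f} ohm")  # -> ~992.8 ohm
```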
Hence, this will be the first integrated instrument in the world to feature all three: a Kibble balance for the actual mass measurement, a PJVS, and a primary reference resistor directly implemented in the instrument, which provides an outlook for improved overall measurement uncertainty for masses. The entire QEMMS will fit in one room of approximately 4 m × 5 m (width × length). Through the direct link to all three natural constants h, c and ν_Cs, it provides a direct SI realization of the units of mass, time, length, electrical resistance, voltage, and current. Thus, the QEMMS can be seen as a metrology institute in a single experiment.
As opposed to traditional Kibble balances [8], which were designed and optimized to measure 1 kg, the QEMMS is being built to cover a mass range of 10 g to 200 g. We optimize the design specifically targeting a relative combined measurement uncertainty of 2 × 10⁻⁸ at a nominal mass value of 100 g. Here, we want to take advantage of the direct implementation of the new quantum SI in the instrument for mass scaling. A factor of 20 in the total mass range has proven convenient for systematic studies on the balance, which motivates the covered range around the nominal value. Furthermore, smaller nominal mass values allow for a more robust and compact design, which makes the technology more accessible for other metrology institutes as well. Since a QHR has been designed for this purpose [10], we put the focus on the redesign and optimization of the Kibble balance. Each individual subsystem of the Kibble balance is being investigated regarding new opportunities for improvement. The permanent magnet system [12] and a vacuum chamber [13] have already been acquired. In the current design step, improvement of the mechanical components in the balance, especially the mechanism, is of key interest, since the state of the art seems to be one of the limiting factors for the overall balance uncertainty. Hence, the design of the QEMMS mechanism is the focus of the following chapters.
Requirements to the QEMMS mechanism
The mechanism in the QEMMS has two functions: It provides suspension and defines the trajectory of moving components. The former entails suspending all functional components of the balance such as the mass pan, the coil, a counterweight for balancing, the coil suspension and optics for the interferometer. The latter requires the mechanism to move the coil in a vertical trajectory through the magnet in velocity mode. We are seeking to integrate these two main functions in the same mechanism so that one mechanical system is used to perform both modes of operation in the QEMMS.
A set of important requirements pre-defines a successful mechanism design. The most important aspects that can be directly impacted by conceptual decisions in an early stage of design are listed in Table 1.
In general, it is important to keep all uncertainty contributions of the mechanical system to the measurement sufficiently small that their combination does not exceed the targeted uncertainty. For the QEMMS, we define this as on the order of parts in 10⁹. In the following, each requirement from Table 1 is explained in more detail regarding the background of its origin and importance.
Coil travel in the velocity mode
For a measurement of N∂Φ/∂z in the velocity mode, the coil will be moved up and down in the magnetic field of the permanent magnet for the QEMMS. At first, it is accelerated to a velocity of v_z = 2 mm s⁻¹. Then it will travel with constant speed for 40 mm. Finally, the coil is decelerated to a stop at the end of each sweep. The permanent magnet for the QEMMS has been designed and built to provide a uniform ∂Φ/∂z profile over the course of the travel with constant speed. This area is referred to as the precision air gap [12]. Hence, the travel range of the coil is given by the length of the precision air gap plus additional travel for acceleration and deceleration of the coil. A conservative estimate of the total travel length based on experience with previous Kibble balance designs is 60 mm. We will use this number as an input parameter for the mechanism design of the QEMMS.
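A quick plausibility check of this travel budget; the constant-speed section is fixed by the air gap, while the acceleration value used below is purely hypothetical:

```python
# Back-of-the-envelope check of the coil travel budget in the velocity mode.

v_z     = 2.0e-3   # constant sweep velocity, m/s
L_const = 40e-3    # constant-velocity travel, m (precision air gap)
L_total = 60e-3    # total travel estimate, m
a       = 1.0e-3   # assumed coil acceleration, m/s^2 (hypothetical)

L_accel = v_z**2 / (2 * a)     # distance needed to reach v_z, per ramp
t_sweep = L_const / v_z        # time spent at constant velocity
print(f"ramp: {L_accel*1e3:.1f} mm per end, sweep: {t_sweep:.0f} s")
# -> 2 mm per ramp and a 20 s sweep fit comfortably in the 60 mm budget.
```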
Stiffness
To keep measurement errors at the necessary minimum and to achieve the required resolution, the balance mechanism needs to have a low stiffness. More importantly, we want the stiffness to be constant in time and over all possible coil positions, for the following three reasons: (a) The noise in the balance position feedback converts to force noise by multiplying the position noise with the stiffness of the mechanism. Hence, a low stiffness produces a small force noise at a stable position, which reduces averaging time. Since QEMMS will be optimized to measure 100 g mass artifacts with an overall relative standard uncertainty of 2 × 10⁻⁸, we are aiming for a balance resolution of 1 × 10⁻⁹ on 100 g. This yields an absolute mass resolution of 0.1 μg, which is roughly equivalent to a force resolution of 1 nN. We assume we can average the position feedback uncertainty down to 100 nm, which means that the mechanism needs to have a stiffness of ≤ 0.01 N m⁻¹ at the weighing position to be within the resolution requirements (see the numeric check at the end of this section).
(b) A constant stiffness value along the coil travel is important for an accurate and stable feedback control during the velocity mode. A higher order stiffness term causes the balance to become unstable or stiffen up along the travel if the stiffness at the weighing position, which is in the middle of the coil travel, is adjusted down to 0.
(c) The stiffness needs to be constant with time, because changes over time result in a change in the restoring force of the mechanism, which leads to a drifting equilibrium position of the balance and therefore drift in the force signal, which might be limiting for the balance performance.
At present, we are confident that a mechanism stiffness of ≤ 0.01 N m⁻¹ will satisfy the above requirements. Experimental validation will follow in the near future. Optionally, we include a stiffness adjustment unit. This can be realized by applying a negative stiffness to the mechanism by means of, e.g., an inverted pendulum, astatic stiffness adjustment, or a negative spring restoring torque [14][15][16].
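A minimal numeric check of the resolution argument in point (a), using the values stated above:

```python
# Numeric check of the stiffness requirement from point (a) above.

g       = 9.81      # local gravitational acceleration, m/s^2 (approximate)
m_res   = 0.1e-9    # targeted mass resolution: 0.1 ug, in kg
F_res   = m_res * g # equivalent force resolution, N
x_noise = 100e-9    # averaged position feedback uncertainty, m

k_max = F_res / x_noise   # stiffness at which position noise fills the budget
print(f"F_res = {F_res*1e9:.2f} nN, k_max = {k_max:.3f} N/m")
# -> F_res ~ 1 nN and k_max ~ 0.01 N/m, matching the stated requirement.
```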
Guiding of the coil
For QEMMS, the maximum horizontal parasitic motion along the vertical coil travel is 10 μm. Furthermore, we are seeking to create a trajectory with the QEMMS mechanism that is repeatable between weighing and velocity mode as well as between each sweep in the velocity mode.
The coil speed in the velocity mode is measured with a heterodyne interferometer reflecting a laser beam off a retroreflector that is attached to the moving coil (see Fig. 3). The most basic criterion for running a measurement in the velocity mode is that the interference detector does not lose the beam signal in the sweeping phase: the laser beam of the interferometer needs to be fully reflected. Horizontal displacements and deviations of the trajectory from the vertical can cause the beam to be clipped. However, even before the beam is clipped, other impacts to the measurement occur due to such trajectory imperfections [17,18]. Limiting the explanations to the x-direction, these are proportional to the term v_x/v_z, where v_x is the horizontal and v_z the vertical coil velocity. In Appendix A, we discuss three biases that occur with horizontal motion. For all of them, the figure of merit is v_x/v_z = Δx/Δz ≈ 1.67 × 10⁻⁴.
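The quoted figure of merit follows directly from the two numbers above (10 μm of horizontal motion over 60 mm of travel); a one-line check:

```python
# Figure of merit for parasitic horizontal motion in the velocity mode.
dx = 10e-6   # maximum horizontal parasitic motion, m
dz = 60e-3   # vertical coil travel, m
print(f"vx/vz = dx/dz = {dx/dz:.2e}")   # -> 1.67e-04
```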
Low hysteretic force
Non-linearities from mechanical hysteresis can be a problem in Kibble balances. If the hysteresis is too large, a precise measurement cannot be made, because hysteresis then limits the resolution of the balance.
The magnitude of a hysteretic force generally depends on the load on the pivots and on excitations of the balance, e.g., due to the response of the balance during a mass transfer or the mechanism movement during the velocity mode. There are two general ways to reduce hysteretic forces in balances: (1) minimizing the excursion during mass placement by adjusting the mass lift speed and by optimizing the feedback control; (2) using an erasing procedure, where the center pivot point is exercised in a damped sinusoidal motion after each excursion of the balance. However, a small part of the hysteresis remains in the system. The goal is to have a remaining hysteretic loss in the mechanism equivalent to ≤ 0.1 μg.
The cause and magnitude of a hysteretic force is also driven by the type of mechanism used in the balance. All balance mechanisms can be divided based on their type of center pivot into two groups: knife edge-based and flexure pivot-based balances. In the QEMMS mechanism, flexure pivots will be employed.
The main advantage of a flexure over a knife edge with regard to hysteresis is that in the flexure only elastic deformation contributes to this effect, whereas in a knife edge both elastic and plastic deformation need to be considered. After a change in load and a deflection, a flexure shows anelastic behavior, because it takes time to relax back to its internal equilibrium state. Nevertheless, the hysteretic forces in a flexure are known to be smaller than in a knife edge. Reviewing the literature, T. Quinn [19] published results for a 1 kg mass comparator stating that a flexure-based balance performs mass measurements with a precision of up to six orders of magnitude better than a knife edge-based balance. Note that he admits that the knife edge-based balance was probably not ultimately optimized in terms of knife and flat materials and geometry [19][20][21].
Note also that all published anelasticity data concerning flexures in precision balances were gathered using mass comparators which only use the weighing mode. The anelastic characteristic of flexures was never shown in Kibble balances where the weighing and the velocity mode are both performed by a single flexure mechanism. Conducting a series of experiments is necessary to learn about this effect because it cannot be quantitatively pre-determined during the design process. Unfortunately, theoretical calculation or simulation cannot provide quantitative knowledge regarding hysteretic forces because theories on this topic are not very well developed or validated. There are only general design recommendations that can be applied for minimizing the amount of anelasticity in flexures, for example: (1) designing the flexure as thin as possible in the region of bending, (2) designing it long and (3) using a flexure material with low internal damping coefficient. These recommendations minimize the geometric and elastic part of the stiffness, the internal material friction and therefore the energy stored and dissipated in the flexure during deflection, which is a measure for low hysteresis according to Speake [21].
Carrying capacity and installation space
The central pivot in the QEMMS mechanism needs to support a total load of 15 kg. Furthermore, with this load it must be capable of rotating by ±7° to provide the desired ±30 mm of vertical travel, assuming the maximum dimension of a beam/wheel limited by the maximum installation space in the QEMMS. Since the Kibble balance in the QEMMS is designed to be similar in size to commercial high precision vacuum mass comparators, we allow for a maximum horizontal dimension of the mechanism of ≈ 500 mm. This limits the maximum dimension of one beam or wheel arm in the balance to 250 mm, considering a symmetric balance design. Also, the mechanism itself needs to be as compact as possible to keep free space for optical systems and a mass exchange unit in the vacuum chamber. The sensitivity of a balance scales as the square of the balance arm [16]; hence it should be as large as is practically allowed. Figure 4 visualizes the above explained requirements in a drawing.

Figure 4

The requirements to the mechanism in the Kibble balance of the QEMMS summed up in a drawing
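The rotation and travel requirements above are mutually consistent with the 250 mm arm bound; a one-line geometry check:

```python
# Geometry check: a +/-7 deg rotation of a 250 mm arm gives roughly the
# required +/-30 mm of vertical travel.
import math

l_arm = 0.250                     # maximum beam/wheel arm length, m
phi   = math.radians(7.0)         # maximum pivot rotation, rad
print(f"travel = +/-{l_arm * math.sin(phi) * 1e3:.1f} mm")   # -> ~30.5 mm
```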
Design of the QEMMS mechanism
We are quite confident that a flexure-based mechanism design will keep the mechanical hysteresis in the QEMMS mechanism small and provide a repeatable movement. Furthermore, the new mechanism should be more compact, lighter, and require fewer auxiliary control features to define a one-dimensional coil trajectory than existing versions of Kibble balance mechanisms. Hence, we favor passive mechanical components built from flexure elements over auxiliary electrical components for defining the balance trajectory. Figure 5 shows the separation of the QEMMS mechanism function into two subcomponents integrated in a single mechanical system. One component is for balancing, the other one for defining a linear trajectory of the coil. We will take a look into how to realize each functional component in Fig. 5 with a flexure design and integrate them into a mechanism that satisfies the requirements for the QEMMS.
Main flexure
The trade-off in the design of the main flexure is to keep the maximum stress in the flexure at the maximum deflection angle at an acceptable level. A safety factor of 2 to 3 is desired. Also, a low elastic flexure stiffness is recommended in order to reduce the anelastic effect in the flexure after deflection [21]. This calls for a flexure as thin as possible.
Furthermore, a material with low internal damping/anelasticity must be used in order to avoid hysteretic effects as much as possible from the outset. A metal with very high yield strength (≈ 1000 MPa) and a small anelastic after-effect is a hardened copper-beryllium alloy, as used for flexures in balance experiments, e.g., in [19].

Figure 5

Functional sketch of the QEMMS mechanism: a force/torque for the velocity mode is put into the mechanism. Here, F and τ denote a force or torque applied to the mechanism by the motor used to move the coil in the velocity mode. Furthermore, ϕ and ω are the deflection angle and angular velocity of the main pivot. Finally, z and v are the translational displacement and velocity of the coil in the vertical direction, constrained by a guiding mechanism
These materials are typically machined using wire electrical discharge machining, grinding, high speed milling or etching. In notch flexures, minimal flexure thicknesses of ≈ 50 μm can be achieved.
The main flexure needs to be carefully designed and analyzed by means of finite element analysis in order to evaluate stress-reducing parameters in the flexure accurately. One key factor to evaluate is the stress concentration near the axis of rotation of the flexure due to the high axial load and high deflection. Widely used hinge geometries such as flat or conventional circular hinges have limitations for application in the QEMMS, because they show high stress values in the simulations in Fig. 6. Maximum equivalent stresses of > 1000 MPa exceed the static yield strength of the high strength flexure material we chose (≈ 1000 MPa).
In the investigated conventional flexure geometries, the bending radius is very small, which causes a high stress concentration in the region of flexing. Thus, we explore a modified flexure geometry that causes the bending radius of the flexure to become larger. This has a positive effect on the maximum equivalent stress in the flexure. A representation of the modified geometry with an acceptable maximum stress value is shown in Fig. 7. Figure 8 only represents one set of parameters for a flexure suitable for application in the QEMMS mechanism. Other variations are possible, but further elaboration on this matter is not within the scope of this manuscript. The optimization of the geometry, not only regarding the stress in the flexure but also the elastic stiffness, is work in progress at NIST and will be published in a later article.
Kinematics
In addition to the main pivot, a sub-mechanism for guiding with a proper link to the main flexure completes the kinematics of the balance. Examples where a guiding mechanism is combined and linked with a balance beam can be found, e.g., in commercial weighing cells or in the Planck balance at the Physikalisch-Technische Bundesanstalt (PTB) [22]. An approach for an integrated design of main flexure and guiding mechanism in a parallel four bar linkage was used at NIST for electrostatic force balance experiments [16,23]. The mechanisms in [16,22,23] show a systematic arc motion caused by the use of a parallel four bar linkage. Design principles for correcting such behavior are addressed in the dissertation thesis [24], which covers the correction of unwanted parasitic motions as well as the change of the degree of freedom in a kinematic structure in order to achieve a certain trajectory. Such principles are, e.g., the design of parallel or serial kinematic structures, or applying nested or external linkage structures to avoid underconstraint in a mechanism [25]. An example based on a planar parallelogram linkage is provided in Fig. 11 in Appendix B.
However, due to its simplicity, the correction of lateral error motions, its compactness, and the convenience of machining a planar flexure mechanism, we favor the folded parallelogram linkage as the guiding mechanism for the QEMMS. We will integrate a weighing beam/wheel as the external linkage, as shown in option (4.2) in Fig. 11 (Appendix B), and use this directly as the link to the main flexure. A visualization of the kinematic system for the QEMMS mechanism is shown in Fig. 9.
In the next step, a comparison between using a beam balance and a wheel balance is addressed.
Beam balance vs. wheel balance
There is a fundamental difference between the kinematics of the two balance types. The end points of the beam balance perform a rotational motion along a fixed circle. Only with the use of a further guiding mechanism can the movement of suspended components take place parallel to the vector of gravity, as shown in Fig. 12 (Appendix C).
In contrast, with the wheel balance, a band rolls on the wheel. In theory, this results in a direct conversion of a rotational movement of the wheel into a linear displacement of suspended components parallel to the vector of the gravitational force. In order to achieve a further improvement in the quality of the linear motion and to avoid unwanted oscillations, a guiding mechanism as shown in Fig. 9 can be used. The resulting overconstrained design can be dealt with by adjustment.
At first glance, this difference between a beam and a wheel balance seems negligible for application in a balance. Indeed, in a mass comparator, where the mechanism operates closely around its defined zero position, there is no theoretical functional advantage to either solution: all parasitic effects in the kinematics are small due to the small deflections of the flexures. Convenience in machining and assembling small planar (sometimes monolithic) mechanisms seems to explain the preference for beam balances there.
However, with the large travel required in the QEMMS, not all of the usually negligible effects in the mechanism remain harmless to the mechanical properties of the instrument.
In fact, there is one effect that causes a major disadvantage of a beam balance compared to a wheel balance from a kinetic perspective: a horizontal force acts upon the connecting links between the guiding mechanism and the beam, because the effective horizontal length of the balance beam shortens with the cosine of the deflection angle.
This results in a non-linear stiffness term and parasitic forces on the guiding mechanism, which cause unwanted deformations and error motions in the guide. An analysis of this effect is provided in Appendix C.
We could think of compensating the parasitic horizontal force on the stages by applying a symmetric compensation design approach, adding more components to a beam balance-based mechanism. However, this would increase the complexity of the mechanism to a level that we do not desire. The fact that the wheel balance prevents this horizontal force by design leads us to favor the wheel over the beam balance for the QEMMS mechanism.
Modular design
The mechanism has been designed based on a wheel balance, and a rendering of the frame and the moving parts can be seen in Fig. 10. We favor a prototype of modular build to allow for variation of single components in experiments. Interesting elements for variation would be, e.g., the main flexure or the guiding mechanism flexures, with a view on the hysteretic properties of the mechanism or the guiding quality of the compliant structure. The modular design furthermore provides us with the opportunity to methodically investigate the effect of certain misalignments on the movement and hysteretic properties of the QEMMS mechanism. This helps to further clarify which design parameters matter not just from a theoretical, but from a practical point of view, and highlights practical requirements to precision in assembly.
Conclusion
In this article we discuss the requirements and design of a new mechanism for the Kibble balance in the Quantum Electro-Mechanical Metrology Suite (QEMMS) currently under construction at NIST. Summarizing the explanations on the previous pages, we designed a fully mechanical, flexure-based balance mechanism that can execute both modes of operation in the Kibble balance experiment, the weighing and the velocity mode. This mechanism constrains the trajectory of the moving components to a purely linear, one-dimensional motion by design, without requiring auxiliary active control features.
Experiments regarding the guiding quality and repeatability of the mechanism will show the technological limitations of the flexure design. Due to the modular design, we can optimize the mechanism after evaluation of experimental results methodically piece by piece to converge to a design with the properties we desire for the QEMMS mechanism.
A prudent design is to theoretically minimize the known factors contributing to the hysteretic effect, such as the spring constant and loss factor. However, ambiguity remains about the contribution of mechanical hysteresis to the uncertainty budget.
Appendix A
Here, we discuss three errors introduced to the measurement with the Kibble balance in the velocity mode, stemming from imperfections in the motion of the coil from the vertical. We limit the explanations to the x-direction and to linear assumptions for simplicity. The errors describe systematic biases that we estimate with upper limit values. In reality, once we build the experiment, we will measure all imperfection terms in situ (e.g., the wavefront distortion, the beam diameter, the horizontal motion of the balance mechanism) and apply corrections for these biases.
A.1 Voltage bias
Horizontal velocities also cause a bias e_V in the readouts of the induced voltage in the velocity mode. It stems from horizontal forces F_x on the coil produced in the weighing mode (when the electrical center of the coil is not aligned with the magnetic center of the permanent magnet) combined with horizontal velocities in the velocity mode. The relative bias is expressed as

e_V = (F_x/F_z)(v_x/v_z),

where F_z is the force applied in the vertical direction by the magnet-coil system in the weighing mode. The factor F_x/F_z is typically on the order of 1 × 10⁻⁵ [17,18]; thus this bias yields e_V = 1.7 × 10⁻⁹. However, this effect can be cancelled according to [26] when employing a balance mechanism that performs the exact same motion in the weighing and the velocity mode, which is the goal for the QEMMS mechanism.
A.2 Beam shear bias
A further contribution to a bias in the velocity mode is the beam shear error e_BS in the interferometer. It occurs when the coil moves horizontally and the back-reflected interferometer beam gets displaced horizontally, such that there is a change in the overlap between the reference beam and the beam reflected off the moving retroreflector at the coil. The relevant quantities are the wavelength of the laser, λ, the wavefront distortion at the optics (we assume λ/10), the beam shear, i.e., the horizontal coil motion, x, and the diameter of the laser beam in the interferometer, d_Beam; the expression for this relative bias is derived from [27]. With the laser wavelength λ = 633 nm and a beam diameter of d_Beam = 6 mm, this yields a bias of e_BS = 3.52 × 10⁻⁹.
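Since the closed-form expression from [27] is not reproduced in this text, the sketch below uses an assumed functional form, e_BS = 2 (ΔW/d_Beam)(Δx/Δz), chosen only because it reproduces the quoted 3.52 × 10⁻⁹ from the stated inputs; it is an illustration, not the published formula:

```python
# Beam shear bias e_BS, evaluated with an ASSUMED functional form that is
# consistent with the quoted inputs and the stated result of 3.52e-9.

lam    = 633e-9     # laser wavelength, m
dW     = lam / 10   # assumed wavefront distortion at the optics, m
x      = 10e-6      # beam shear = horizontal coil motion, m
d_beam = 6e-3       # interferometer beam diameter, m
dz     = 60e-3      # vertical coil travel, m

e_BS = 2 * (dW / d_beam) * (x / dz)   # assumed form, not from [27] directly
print(f"e_BS = {e_BS:.2e}")           # -> 3.52e-09
```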
A.3 Velocity bias
Another measurement bias, e_v, comes from horizontal velocities during the velocity mode. These derive from a misalignment of the interferometer with respect to gravity, α, and from the horizontal motion, x, along the vertical coil travel z [18,28]. The bias is

e_v = α (Δx/Δz).

Assuming a reasonable value α = 45 μrad, we get a relative measurement bias of e_v = 7.5 × 10⁻⁹, which is on the order of magnitude we desire to have.
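A quick evaluation of the voltage bias (A.1) and the velocity bias (A.3) with the expressions given above; both reproduce the quoted values:

```python
# Evaluation of the A.1 and A.3 biases with the expressions stated above.

Fx_over_Fz = 1e-5          # horizontal-to-vertical force ratio [17,18]
dx, dz     = 10e-6, 60e-3  # horizontal motion and vertical travel, m
alpha      = 45e-6         # interferometer misalignment to gravity, rad

e_V = Fx_over_Fz * (dx / dz)   # A.1 -> 1.7e-09
e_v = alpha * (dx / dz)        # A.3 -> 7.5e-09
print(f"e_V = {e_V:.1e}, e_v = {e_v:.1e}")
```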
Appendix B

Figure 11 explains design principles to correct for motion imperfections in kinematic chains transitioning from a purely rotational to a purely translational motion of a moving member in a mechanism.

Figure 11
In a first approximation, flexure mechanisms can be modelled as rigid links connected by rotational pivots. This diagram shows an example of the correction of unwanted motions in the kinematic structure of mechanisms consisting of rotational joints, starting with a purely rotational joint (1.) and ending with a linear motion provided by a chain of rotational joints, (4.1) and (4.2). The principles of serial and parallel setups of pivots as well as an external linkage are applied. A purely rotational motion (1.) is turned into a quasi-linear motion of a plane with a systematic horizontal part (x′) through the parallel setup of two beams as in a parallel four bar linkage (2.). Now we can use a serial (folded) chain of these linkages: by moving (z″) half the distance of (z′), we can correct for the parasitic horizontal part of the motion in the final stage. However, this is a movable system with two degrees of freedom, where the movement of the intermediate stage on the right (z″) is not coupled to the final stage on the left (z′) (3.). Furthermore, if the equation z′ = 2z″ is not fulfilled, there is going to be a certain x′ in the final trajectory. In a final step, applying an external linkage (4.1) or a parallel setup (4.2), we can constrain the motion to a single degree of freedom, as long as the condition z′ = 2z″ is fulfilled by design.
Option (4.1) was used, e.g., as the mechanism in an electrostatic balance at NIST [29], and option (4.2) as the dedicated guiding mechanism for the Mark II Kibble balance of the Federal Institute of Metrology (METAS) of Switzerland [30].
Appendix C

The moment induced at the balance pivot by the parasitic horizontal force F_H grows as M = mg(l₁²/l₂)ϕ³, which points out a third order dependency of the induced moment on the deflection angle ϕ.
For simplification, we assume that we compensate the constant, elastic part of the flexure stiffness in the mechanism entirely, e.g., by using an inverted pendulum. Taking the derivative of the previous equation with respect to ϕ yields the non-linear rotational stiffness term induced by the parasitic horizontal force F_H:

K = dM/dϕ = 3mg(l₁²/l₂)ϕ². (10)
Unfortunately, this term becomes highly dominant at larger deflections, as can be seen in the upper plot in Fig. 13. With reasonable values for the system parameters shown in Fig. 12, a study of a simplified beam balance-based mechanism model using finite element analysis was conducted. Figure 14 shows the model and the boundary conditions used.

Figure 14

Total deformation plot of a finite element analysis of the beam balance model for the QEMMS mechanism. The final stage was displaced step-wise by ±30 mm in the z-direction, and the force reaction as well as the error motion in the x-direction were monitored. The result is shown in Fig. 13.
The force over displacement curve of the final stage of the mechanism was monitored and compared to the analytical model. The rotational stiffness in equation (10) can be transferred to a linear stiffness of the guiding mechanism in the vertical direction by dividing equation (10) by the square of the balance lever arm, l₁². Furthermore, ϕ ≈ z/l₁. Note that the constant offset of the dashed from the solid green line in Fig. 13 stems from a residual constant stiffness part in the simulation model, which can, in practice, be compensated by adjusting the center of mass of the beam [16].
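To illustrate how quickly this term grows, the sketch below evaluates the linearized stiffness at the guiding stage; m and l₁ follow the requirements stated earlier, while l₂ is a hypothetical stand-in for the value given in Fig. 12, which is not reproduced here:

```python
# Evaluation of the parasitic stiffness term K = 3*m*g*(l1^2/l2)*phi^2 for a
# beam balance, linearized at the stage via k = K / l1^2 and phi = z / l1.
# Parameter values are placeholders for the ones given in Fig. 12.

m  = 15.0    # suspended load, kg (central pivot requirement)
g  = 9.81    # m/s^2
l1 = 0.25    # balance lever arm, m (maximum allowed by the 500 mm envelope)
l2 = 0.05    # connecting-link length, m (assumed)

for z in (0.0, 0.015, 0.030):                   # stage displacement, m
    phi = z / l1                                 # beam deflection, rad
    K_rot = 3 * m * g * (l1**2 / l2) * phi**2    # rotational stiffness, N m/rad
    k_lin = K_rot / l1**2                        # linear stiffness, N/m
    print(f"z = {z*1e3:4.0f} mm: k = {k_lin:8.3f} N/m")
# Even modest deflections yield stiffness far above the 0.01 N/m budget,
# illustrating why the wheel balance is preferred for the QEMMS.
```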
Furthermore, the horizontal force F_H acting upon the flexure-based guiding mechanism affects the quality of guiding. The simulation yielding the results for the stiffness of the mechanism in Fig. 13 also provides insights into this horizontal trajectory error, dx, along the vertical displacement axis in the z-direction, displayed in the lower plot in Fig. 13. We see that the total horizontal error motion is approximately 20 times larger than allowed.
"Physics"
] |
Challenges and prospects for unmanned urban transport
The article deals with current problems and prospects of the development of urban unmanned transport. The rapid development of autonomous transport, artificial intelligence, and other information technologies makes it possible to introduce unmanned vehicles into urban public transport systems, primarily buses. The technological factors and obstacles for the development of unmanned public transport systems are summarised. Although the capacity of such buses in current use is still small (a maximum of about 15 people), the routes are relatively short, and the use is mainly in test mode, the use of these vehicles, especially in large urban agglomerations, seems undoubtedly promising. The article presents an analysis of the main features and incentives for the development of unmanned public transport, gives a brief overview of pilot systems of autonomous public transport in European cities, considers obstacles to the development of these systems and the experience of developing unmanned public transport in Russia, and formulates assumptions about the future development of this transport segment. The first steps towards full autonomy and widespread use of unmanned urban public transport have been taken; however, this path will not be completed quickly.
Introduction
Taeihagh A. and Lim H. S. M. mention that "autonomous systems are characterised as systems capable of making decisions independently of human interference" [1]. Such vehicles can be divided into three groups: private unmanned vehicles, shared unmanned vehicles or taxis, and unmanned means of public urban transport, primarily buses [2]. In recent years, the topic of the development of unmanned transport has become increasingly relevant.
According to a study by the UN Department of Economic and Social Affairs, by 2050 two thirds of the world's population will live in cities. As shown in Figure 1, given the overall growth of the world's population, this will add 2.5 billion urban dwellers, a trend that will primarily affect relatively poor countries.

The estimated and projected urban population of the world, the more developed regions and the less developed regions from 1950 to 2050 is shown in Figure 1 [3].
Relevance
The availability of fast, well-connected and affordable public transport is fundamental to making urban areas accessible. Large and densely populated cities depend on high-capacity public transport trunk lines to meet travel demand. The trunk lines are, in most cases, surface rail or the metro. However, as trunk lines are not directly accessible to the entire population due to the large size of large cities, additional, more flexible solutions are required to bring passengers to them. Increased automation, i.e., the development of vehicles whose use requires minimal human involvement, is seen as one such solution.
The development of unmanned public transport is also relevant given the importance of the environmental agenda. Most countries worldwide have set quantitative targets for reducing greenhouse gas emissions over the coming decades. According to the European Environment Agency, traditional road transport is one of the main contributors to carbon dioxide pollution in cities, and 60% of road transport emissions come from private and public passenger transport. Yet, as Figure 2 shows, its relative negative impact continued to increase between 2005 and 2019. Unmanned public transport can contribute to a drastic reduction in the negative impact on environmental quality, not only in terms of greenhouse gases but also with regard to the concentration of particulate matter and noise pollution in the air [4], as well as harmful emissions of sulphur and nitrogen compounds [5].
Figure 2 shows the sectoral trends and progress towards the 2020 and 2030 targets in the EU-27 [6].
Innovations in autonomous driving technology, according to some experts, will eventually allow citizens without a driving licence, such as minors, to gain access to a car. Hörl S., Ciari F. and Axhausen K. W. note that "induced demand and a wider user base will lead to an increase in the number of cars on the streets" [7]. Although the widespread use of autonomous vehicles can help to significantly increase road capacity, the rapid influx of new cars can lead to increased traffic congestion. This gives added urgency to the issue of developing not only personal but also public unmanned urban transport.
The potential for increased safety through the use of unmanned public transport is also a factor of relevance to the study. According to the US Department of Transportation (National Motor Vehicle Crash Causation Survey), drivers are the main cause of accidents in a large proportion of cases. Frequent causes of accidents include their inability to correctly recognise traffic situations, poor driving decisions or reduced attention to the road. The introduction of unmanned vehicles, although not an absolute panacea, may help. Anton Smirnov, Evgeniy Smolokurov and Yuriy Fir mention in their article a passenger train that derailed in Taiwan on April 2, 2021: "The accident killed 50 people and more than 70 passengers were seriously injured. In total, there were 494 people on the train during the accident." Earlier, in 2018, there was an accident in Taiwan when a train derailed in Yilan County; then 22 people became victims of the accident and more than 170 passengers were injured [9].
Literature review
Research into the development of unmanned urban public transport is a relatively new topic in the academic literature. Mouratidis K. and Serrano V. C. note that some studies based on hypothetical models and simulations conclude that "autonomous cars - both private and shared - will result in increased vehicle miles travelled, shifts from public transport and active travel modes to more car travel, and more urban sprawl" [10]. Fraedrich E., Heinrichs D., Bahamonde-Birke F. J. and Cyganski R. note that the position dominating the research in recent years is that "transport planners think that shared autonomous vehicles as a complement to public transport systems are more appropriate to support urban development strategies" [11].
Based on the fact that in traditional urban public transport systems the bus is the main and most common mode of transport, research on unmanned public transport also focuses on innovative buses. For example, Patrick M. Bosch, Felix Becker, Henrik Becker and Kay W. Axhausen analyse the different types of costs associated with urban transport, concluding that "Not only is demand bundling, when possible, more economic than point-to-point service, there is also a user preference for high-frequency, line-based service over dynamic services" [12]. Certainly, one cannot disagree with this position. A considerable part of the research is devoted to particular aspects and consequences of the functioning of unmanned public transport systems. Zhuang Dai, Xiaoyue Cathy Liu, Xi Chen and Xiaolei Ma consider "optimization where there are already running buses (i.e., including both HBs and ABs) on the transit line at the beginning of the studied horizon" [13]. Moradzadeh and Kaffafi examine unmanned public transport in the context of strategies for "urban development and minimisation of environmental damage through the use of low-carbon technologies" [14]. It should be noted that the implications of unmanned means of passenger transport for sustainable development are mainly analysed in terms of the dichotomy "public transport - private car". At the same time, insufficient attention has so far been paid to the comparative analysis of unmanned and traditional public transport, both in the context of environmental impacts and of broader implications (e.g., an increase in unmanned buses may lead to higher unemployment rates among drivers). The topic of shaping unmanned public transport systems in relation to the triad of sustainable development (economic, environmental and social aspects) requires further study.
A separate block of scientific and analytical literature is related to the study of public perception of unmanned transport, including urban public transport. S. Nordhoff, M. Kyriakidis, Bart van Arem and R. Happee show that "Automated vehicle acceptance (AVA) is a necessary condition for the realisation of higher-level objectives such as improvements in road safety, reductions in traffic congestion and environmental pollution" [15]. Indeed, this is true. However, passenger attitudes towards innovations in public transport related to the introduction of unmanned vehicles have improved over time.
López-Lambas and Alonso, based on the analysis of focus group surveys, note positive factors in the development of the transport in question mentioned by passengers, such as: "Autonomous driving would be energy-wise more efficient and controlled. Vehicles would be programmed to minimise energy consumption by employing eco-driving patterns" [16]. However, there are still concerns and elements of negative public perception due to the high cost of unmanned vehicles and associated infrastructure, potential safety risks and a possible increase in unemployment. Salonen and Haavisto show that "By explaining the clear purpose of driverless shuttle buses, the development of distorted perceptions can be prevented and interest and desire created" [17]. This view is also valid. For an analysis of other studies on the perception of unmanned public transport, refer to [18].
Problem statement
Analysis of scientific and analytical publications on the subject under consideration makes it possible to formulate the main theoretical and practical problems associated with the development of urban unmanned transport systems: identifying the advantages and disadvantages of the existing global pilot programmes for the development of urban autonomous transport; identifying technological barriers to the development of the transport systems under consideration; and identifying the features of regulation of the unmanned transport sector and the legislative and regulatory changes necessary to support it.
Taking into account the problems mentioned, this study analyses existing examples of the use of autonomous vehicles in urban public transportation systems, as well as the prospects for the development of this transport segment and the gradual replacement of traditional vehicles by autonomous ones.
Aim, objectives and hypothesis
The purpose of this paper is to identify the future of urban unmanned public transport.
In order to achieve the goal, it seems necessary to perform the following tasks: -to analyse the main features and incentives for the development of autonomous transport, and specifically public transport, at present; -to consider the features and results of the functioning of pilot systems of autonomous public transport in cities around the world; -to investigate the technological factors and barriers to the development of unmanned public transport systems; -to analyse the regulatory aspects of the development of the systems in question and to consider the available data on user attitudes towards unmanned public transport; -to determine the prospects for enhancing the use of unmanned vehicles in urban public transport systems, taking into account various factors; -to formulate recommendations on overcoming barriers to the development of unmanned public transport. It is to be expected that unmanned public transport will make effective use of the potential of autonomous vehicles in passenger transport systems, providing citizens with a convenient, comfortable and sustainable alternative to traditional mobility.
Methods
This article is based on theoretical research methods, including a comparative analysis of factors for the development of unmanned public transport systems; statistical analysis of data on the use of unmanned transport; the method of induction and generalisation, which allows drawing conclusions from a review of the operation of the first unmanned transport projects in some cities around the world; and the method of synthesis, which provides an understanding of the prospects for the development of autonomous transport systems taking into account various groups of factors.
The empirical methods used include expert evaluation and forecasting, which allow formulating recommendations regarding the use of unmanned vehicles in urban public transport systems, including those in Russia.
Results and discussion
Unmanned urban public transport (mainly autonomous minibuses) belongs to a new group of mobility tools. There are several differences between traditional mobility and its new automated models that could revolutionise travel in the following ways (Figure 3) [19]: 1. Ridesharing (Blablacar is an example) and the use of taxi services are merging with carsharing due to driverlessness. 2. Public transport could, through automation, become more efficient for more frequent trips with fewer passengers, and could also gain the ability to operate on demand, making it more attractive relative to other types of unmanned mobility. 3. There is less change in the personal car segment; instead of a revolution, there will be an evolution of the traditional car, which can already brake, park autonomously, or hold a lane. How exactly the balance between these trends plays out will depend on a number of factors, including the extent to which unmanned public transport programmes are actively supported. Over the past few years, the number of autonomous public transport pilot projects has increased rapidly. Such projects have begun to generate interest in various cities, universities and private companies. The development of autonomous urban public transport systems is currently at an early stage. Most of the projects are characterised by relatively short route lengths and small numbers of passengers carried. It should also be noted that most of the autonomous buses used in the aforementioned projects operate at a speed of around 12 km/h, and only in some pilot projects do the buses reach a speed of 20 km/h. The accelerated development of unmanned urban public transport is constrained by a number of factors, primarily technological.
Electrification and autonomy are the two key technologies enabling the formation of the next generation of unmanned transport systems. However, full electrification of public transport cannot yet be considered economically feasible, due to the high cost of batteries and their low energy density (i.e., specific energy) compared to fossil fuels.
The full electrification of public transport systems requires both vehicles and infrastructure to be adapted accordingly. Infrastructure must be upgraded with charging stations, and vehicles must be equipped with high-capacity batteries. An important consideration in infrastructure development is the choice between large stations (centralised architecture) and many small charging points (distributed architecture). S. Sachan and N. Kishor in their paper "have allocated centralized charging station in distribution network with the objective of minimizing system costs including power loss and voltage deviation" [20].
In terms of vehicles, electrification can bring significant benefits associated with reduced pollution, lower noise levels and improved safety. As already noted, the main disadvantage of using batteries compared to fossil fuels for transportation purposes is the lower energy density. In particular, the energy density of diesel fuel is about 13,440 Wh/kg, whereas a lithium-ion battery has an energy density of about 220 Wh/kg [21]. This means that, in order to store the same amount of energy as with fossil fuels, batteries would need to weigh about 60 times as much. At the same time, electric motors have a higher efficiency (over 90%) than internal combustion engines (less than 30% in optimal conditions and less than 20% in normal use). Consequently, for the development of unmanned public transport infrastructure, it is important to focus on improving the energy capacity of batteries and the efficiency of the recharging process.
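The arithmetic behind the "about 60 times" figure, with an illustrative second step that folds in the efficiency numbers quoted above (the internal combustion engine efficiency used below is an assumed midpoint):

```python
# Arithmetic behind the "about 60 times" battery mass figure quoted above.

diesel_wh_per_kg = 13_440   # energy density of diesel, as quoted
liion_wh_per_kg  = 220      # energy density of a lithium-ion battery

mass_ratio = diesel_wh_per_kg / liion_wh_per_kg
print(f"raw mass ratio: {mass_ratio:.0f}x")          # -> ~61x

# Usable energy per kg improves for the battery because the electric motor
# is far more efficient than an internal combustion engine (ICE):
eta_em  = 0.90   # electric motor efficiency, as quoted
eta_ice = 0.25   # ASSUMED midpoint of the quoted 20-30% ICE range
usable_ratio = (diesel_wh_per_kg * eta_ice) / (liion_wh_per_kg * eta_em)
print(f"usable-energy mass ratio: {usable_ratio:.0f}x")  # -> ~17x
```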
Sukhvinder P. S. Badwal, Sarbjit S. Giddey, Christopher Munnings, Anand I. Bhatt and Anthony F. Hollenkamp note that "there are two variants of rechargeable Li-air technology - a non-aqueous and an aqueous form, both of which offer at least ten times the energy-storing capability of the present lithium-ion batteries" [22]. There is also the potential to improve the lithium-ion batteries that are already in active use. According to the US Geological Survey, there is enough lithium in the United States alone to equip more than 30 billion vehicles with lithium-ion batteries [23]. Lithium carbonate, primarily needed for battery production, and the other materials used, such as cobalt oxide, manganese oxide, copper and aluminium, are relatively inexpensive and widely available in nature.
To achieve the ambitious goal of safe and fully autonomous use in urban environments, vehicles must be equipped with a large number of sensors, essentially turning them into a kind of robot with new control functions such as perception of reality and artificial intelligence (AI). The basic concepts related to modern autonomous vehicle technology are still controversial among experts, for example, which specific models of sensors and environmental perception systems to use, and which approach to take: traditional modelling or AI [24].
The problem of unmanned, or more broadly, autonomous control of any system has historically been solved by classical control theory, involving procedures for analysing a physical process and creating a controller for it. At the same time, a controller can only be created if the physical process is fully known. In the case of unmanned vehicles, the physical process can vary significantly depending on vehicle geometry, loading and pavement properties (e.g., in the case of rain or snow).
Vishnukumar H. J., Müller C., Butting B. and Sax E. conclude in their paper that "A future research aspect is to extend this methodology for complete autonomous testing and validation for the future generation of ADAS and autonomous vehicles with complete autonomously controlled environments in the real-world" [25]. However, such systems, built using classical control theory, lack the element of informed decision-making required to create a fully unmanned vehicle. In this case, an artificial intelligence approach may be useful.
The AI approach has demonstrated effectiveness in many practical scenarios. The most active debate is over the level at which artificial intelligence should be implemented in unmanned transport systems. The most conservative approach involves minimal use of AI functions, limiting its task to recognising the surroundings, such as detecting pedestrians, cyclists and other vehicles, and recognising obstacles. In a more 'technological' approach, AI is also used for reactive navigation and obstacle avoidance, i.e., it is included as an element in safety-related decision-making. However, the question of the optimal level of AI implementation to ensure full vehicle autonomy remains open.
The successful implementation of autonomous public transport also requires the development and adoption of appropriate regulation. However, a fully unmanned vehicle cannot be licensed, because it does not comply with European legislation, in particular UNECE (United Nations Economic Commission for Europe) rules and broader international law. UNECE rules require automated vehicles to be designed in such a way that the driver can, at any time, by a deliberate act, deactivate the automated driving function. Additional legal restrictions on unmanned vehicles, including public transport, relate to technical regulations in some countries that require seat belts, mechanical brakes and other features not required for fully autonomous vehicles. The presence of a so-called 'vehicle operator' in the passenger compartment can solve some legal problems.
Conclusions
Unmanned public transport uses constantly evolving technologies that will nevertheless require lengthy testing and refinement before autonomous vehicles can fully replace traditional vehicles.
The analysis presented in this paper leads to the following conclusions: 1. Unmanned vehicles are making significant changes to traditional mobility models and have the potential to revolutionise passenger transport. Exactly how the relationships between the trends associated with the development of unmanned transport will play out depends on how actively governments and businesses support research and testing programmes for new technologies and vehicles, and how effectively barriers to the introduction of fully autonomous transport are removed. 2. The development of unmanned urban public transport systems is still at an early stage. Most projects in this area are implemented in the format of testing and running-in of technologies for future scaling up. 3. The development of autonomous urban public transport is hindered by a number of barriers, primarily technological and legal. The two main technologies enabling the formation of unmanned transport systems are electrification and autonomy. 4. Wider adoption of unmanned public transport is constrained by the need to develop and adopt appropriate regulations. There are legal issues related to licensing, liability in case of accidents, safety and other aspects of using unmanned vehicles.
5. Despite the objective difficulties, the number of projects in the field of unmanned public transport is growing, including projects of this kind being implemented in Russia. Nevertheless, such projects are only the first step towards the mass use of unmanned vehicles in public transport systems on a par with traditional vehicles and, in the long term, the complete replacement of the latter.
However, the benefits and advantages of unmanned public transport in terms of cost-effectiveness, environmental sustainability, passenger convenience, high levels of safety (in the long term) and other factors make efforts to address the emerging challenges worthwhile. In this regard, particularly in the Russian context, it is recommended: 1. to study and take into account the experience of pilot projects of unmanned public transport in other countries when developing policies; 2. to intensify the implementation of the state policy in the field of AI, including on the basis of Presidential Decree No. 490 of 10.10.2019 "On Development of Artificial Intelligence in the Russian Federation"; 3. to support and provide sufficient funding for research related to new battery technologies, as well as other devices necessary for unmanned vehicles, such as sensors, and related infrastructure; 4. to take measures to amend the legal and regulatory framework governing the use of unmanned vehicles, including those intended for public transport, taking into account the key challenges presented in the article; 5. to integrate unmanned public transport initiatives with other transport and urban development projects and programmes.
The implementation of the proposed measures will make it possible to effectively use the potential of autonomous vehicles in passenger transportation systems, providing citizens with a convenient, comfortable and sustainable alternative to traditional mobility.
Fig. 1. Estimated and projected urban populations of the world, the more developed regions and the less developed regions, 1950-2050.
Fig. 2. Sectoral trends and progress towards achieving the 2020 and 2030 targets in the EU-27.
Fig. 3. Possible consequences of implementing automated driving in four contemporary modes of transport.
"Computer Science"
] |
Viral persistence in colorectal cancer cells infected by Newcastle disease virus
Background Newcastle disease virus (NDV), a single-stranded RNA virus of the family Paramyxoviridae, is a candidate virotherapy agent in cancer treatment, and promising responses have been observed in clinical studies. Despite its high potential, the possibility that the virus develops a persistent form of infection in cancer cells has not been investigated. Occurrence of persistent infection by NDV in cancer cells may cause the cells to be less susceptible to virus killing. This would give rise to a population of cancer cells that remains viable and resistant to treatment. Results During an infection experiment in a series of colorectal cancer cell lines, we adventitiously observed the development of persistent infection by NDV in SW480 cells, but not in the other cell lines tested. This cell population, designated SW480P, showed resistance towards NDV killing in a re-infection experiment. The SW480P cells retained the NDV genome and produced virus progeny with reduced plaque-forming ability. Conclusion These observations show that NDV can develop persistent infection in cancer cells, and this factor needs to be taken into consideration when using NDV in clinical settings.
Introduction
Newcastle disease virus is a negative-stranded RNA virus belonging to the family Paramyxoviridae, genus Avulavirus. Its high preference for infecting cancer cells [1] has made it one of the most widely studied candidate agents for oncolytic virotherapies. Due to their promising potential, the NDV strains NDV-HUJ, PV701, MTH-68/H and NV1020 are now being tested as cancer therapeutics in clinical trials [2][3][4]. Despite the promising preclinical findings [4], further advancement in the clinical use of NDV is still debatable, even though the mechanistic insights are well studied [reviewed in 4]. The risks involved in using viruses as therapeutic agents include problems with mutant viruses having increased pathogenic properties [5] and the potential development of persistent infections [6].
Viruses have the ability to establish persistent infections in host cells. This type of infection may exist as a silent or a productive infection [6]. Cells persistently infected by viruses remain viable in the forms of latent, chronic or slow infections [6]. The ability of enveloped negative-strand RNA viruses such as arboviruses, paramyxoviruses, vesicular stomatitis virus (VSV) and rabies virus to develop persistent infections in host cells has been well documented [7][8][9][10]. The forms of 'incomplete' viruses [10][11][12] involved in these persistent infections are referred to as defective interfering particles (DIPs). For oncolytic viruses, the development of persistent infections can lead to reduced virus-induced cytotoxicity [13]. It can also create populations of cells with reduced permissiveness to wild-type viruses [10][11][12][13]. This phenomenon was previously reported for the oncolytic reovirus, where wild-type reovirus causes reduced infection in persistently infected cells [14]. The development of persistent infections by NDV was first reported in the late 1960s and early 1970s [7][8][9][10][11][12][13]. These studies, however, only reported the occurrence of persistent infections in normal cells. Since NDV has shown great potential as an anticancer agent in clinical studies [2,3,15,16], detailed information regarding its involvement in persistent infection of cancer cells is imperative. Thus far, persistent infection of cancer cells by a velogenic strain of NDV has not been described, even though its establishment in normal cells has been reported [8][9][10][11][12]. Therefore, in the present study, we report the development of persistent NDV infection in a subpopulation of SW480 colon cancer cells. These findings contribute additional data needed in tailoring NDV, particularly a velogenic strain, as an oncolytic virotherapy agent in clinical settings.
Results
Detection of a subpopulation of cancer cells that are resistant to NDV cytolysis
During the course of our investigation into the oncolytic activities of NDV strain AF2240 in human colorectal cancer (CRC) cell lines, we inadvertently observed the existence of a subpopulation of the SW480 cell line that was resistant to NDV killing. To characterize these resistant cancer cells, we attempted further culturing and propagation of the cells based on their continued proliferation and morphology. This population of cells originated from the SW480 cells remaining viable after 96 hpi (Figure 1A). Even though the SW620, DLD-1, Dks8, HCT116 p53+/+, HCT116 p53-/- and HT29 CRC cell lines also contained approximately 10-30% viable cells at this time post-infection, those cells died within 7 days of media replacement. The remaining cells in the infected SW480, on the other hand, stayed viable and continued to proliferate following media replacement. The resulting population of cells was designated SW480P. These cells retained morphological features similar to the parental SW480 population (Figure 1B). Complete confluency of the SW480P on flask surfaces was achieved within approximately 10 days after the media replacement.
SW480P cells are less susceptible to NDV-induced cytolysis
The existence of a subpopulation of cancer cells resistant to NDV killing was suggestive of a persistent type of viral infection [6]. For oncolytic viruses, such persistently infected cells tend to be less susceptible to virus-induced cytolysis in subsequent infections [13]. To investigate whether this phenomenon occurs in SW480P, re-infection studies were performed. Re-infection of the SW480P cells at the same multiplicity of infection (MOI) as the initial SW480 infection did not result in cell death even after 96 hpi (Figure 2A). The cells instead continued to divide at almost the same multiplication rate as the mock-infected SW480 population. Even though a slightly slower replication rate was noted in the infected SW480P during the first 24 hours of infection, it stabilized to the level of SW480 afterwards. Infected SW480, on the other hand, displayed the expected pattern of reduced cell viability as the infection progressed. In addition to the absence of susceptibility to NDV killing, infected SW480P cells also failed to show any obvious cytopathic effects (CPE) after 72 hpi (Figure 2B, left panel). A parallel infection of the parental SW480, on the other hand, showed the typical pattern of CPE due to NDV infection (Figure 2B, right panel).
NDV genes and proteins were detectable in the SW480P cells
Resistance of the SW480P towards NDV-induced killing during the re-infection experiment was suggestive of a persistent NDV infection in the cells. Characteristics of persistently infected cells include reduced virus-induced cytotoxicity and a deregulated antiviral response [17]. This likely coincides with ongoing viral RNA and protein synthesis [11] within the hosts' transcriptional and translational machineries. To investigate whether the SW480P cells retained NDV genes and proteins, we performed RT-PCR and immunofluorescence analyses as described in the Materials and methods. RT-PCR amplification showed a DNA band of 1.7 kb, the expected size of the amplified nucleocapsid protein (NP) gene of NDV. This band was seen in the positive control (Figure 3A, lane C2) and in the SW480P samples (Figure 3A), regardless of whether the cells were infected with NDV. No such band was seen in the negative control (Figure 3A, lane C1) or the mock-infected SW480 samples (Figure 3A).
Since a DNA band of a size similar to the amplified NP gene was detected in the mock-infected SW480P cells, we were interested to see whether NDV proteins were also present in the samples. Immunofluorescent staining using a monoclonal antibody against the NP protein of NDV gave a positive detection in the mock-infected SW480P cells (Figure 3B, left panel). A speckled pattern of antigen staining (green) in the cytoplasm of cells, particularly in the perinuclear regions, was noted. Almost all the cells had this staining pattern. Positive staining for the NP protein was also seen in the infected SW480P as well as in the infected SW480 cells. No such staining was present in the mock-infected SW480, suggesting that the staining was specific for the NP protein of NDV.
SW480P cells maintained a productive NDV infection, secreting virus progeny with reduced plaque-forming ability
Detection of the NDV genome as well as proteins in the mock-infected SW480P suggested that the cells had been persistently infected with the virus and were actively producing viral proteins. To further characterize the type of persistent infection [6] that occurred in the cells, we determined whether infectious viral progeny or just DIPs were being secreted by the cells. To this end, we performed a plaque assay [18] using spent culture media of the mock-infected SW480P cells. Plaques were formed (Figure 4A, top panel); however, their sizes were smaller than those of the plaques formed from the media of infected parental SW480 (Figure 4A, middle panel). These smaller plaque-forming virus progenies were designated 'mutant' NDV (mNDV) particles. The majority of plaques from the mock-infected SW480P media had diameters of less than 1 (±0.05) mm, while those from the infected SW480 media ranged from 1-4 mm. Media from the re-infected SW480P cells (after 96 hpi; Figure 4A, bottom panel) gave rise to a mixed morphology of plaques with diameters ranging from 0.5-4 mm. Upon closer examination, the small plaques (less than 1 mm) were outnumbered by the bigger ones (1-4 mm) by a ratio of approximately 1:10.
Counting of the plaques showed that the undiluted spent media of the mock-infected SW480P contained 66 pfu/ml of infectious virus progeny (Figure 4B), almost all of which formed plaques less than 1 mm in size. This was in contrast to the media from the infected parental SW480 and re-infected SW480P, where higher numbers of bigger plaques (1-4 mm) were seen. The number of total secreted infectious virus progeny was significantly higher in the infected parental SW480 cells compared to the SW480P cells, even when the SW480P cells were re-infected with NDV. Virus progeny secreted by persistently infected cells are often incomplete particles termed DIPs [10][11][12], which are unable to sustain infection without the presence of suitable helper viruses. The plaque assay results in the present study showed that mNDV particles from the mock-infected SW480P culture media were able to infect and give rise to plaques, albeit of smaller sizes. To investigate whether their infectivity on another colorectal cancer cell line was also retained, we performed infections in HT29 cells, previously shown to be susceptible to NDV oncolytic activities [19]. Spent culture media collected from the mock-infected SW480P cells were used to infect HT29 as described in the Materials and methods section. After 24 hpi, viability of the infected HT29 cells remained around 90% (Figure 5). This contrasted with cells infected with the wild-type NDV, where only around 10% of cells were still viable. A higher percentage of viable HT29 cells was seen in the mNDV-infected culture compared to the wild-type NDV-infected cells throughout the infection period. Even after 96 hpi, more than 70% of the mNDV-infected HT29 cells remained viable. This finding confirmed that mNDV remained infectious, albeit with lower infectivity, in HT29 CRC cells.
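As a side note, the plaque-count arithmetic above follows the standard titer relation. The snippet below is a minimal sketch of how pfu/ml is derived from a plaque count, dilution factor and inoculum volume; the function name and the 1 ml inoculum are illustrative assumptions, not values taken from the paper's raw data.

```python
def titer_pfu_per_ml(plaque_count: int, dilution_factor: float, inoculum_ml: float) -> float:
    """Titer (pfu/ml) = plaques / (dilution factor * inoculum volume)."""
    return plaque_count / (dilution_factor * inoculum_ml)

# Undiluted spent SW480P media (dilution factor 1), hypothetical 1 ml inoculum:
print(titer_pfu_per_ml(66, 1.0, 1.0))  # -> 66.0 pfu/ml, matching the reported value
```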
Discussion
Development of persistent infection by oncolytic viruses may interfere with their efficacy as anticancer agents. This form of infection in target cells can affect wild-type virus tropism as well as the cells' permissiveness to re-infection [1]. It also causes a decrease in virus-induced cytotoxicity [13,20]. Occurrences of persistent infections by a number of oncolytic viruses, such as reovirus, vesicular stomatitis virus (VSV), measles virus, adenovirus and herpes simplex virus (HSV), were previously reported [20][21][22][23][24]. Up to now, there has been no report on the relationship between velogenic strains of NDV and persistent infection in cancer cells, despite the fact that lentogenic strains of the virus are now in phase 1 and 2 clinical trials [4]. The development of persistent NDV infection in normal cells, on the other hand, has been well studied [8][9][10][11][12], but not in cancer cells.
In the present study, we observed that a viscerotropic-velogenic strain of NDV [25] was able to develop persistent infection in one type of CRC cells, specifically the SW480 cell line. No such infection was observed in the other CRC cell lines tested. The contribution of genotypic differences among the cell lines to NDV susceptibility and infection outcome is currently being investigated in our laboratory. The persistently infected cells, designated SW480P, arose from a small subpopulation of cells that survived the primary round of NDV infection. The SW480 cell line was also shown to be susceptible to cytolysis by other oncolytic viruses such as echoviruses [26], consistent with our finding that only a small subpopulation of SW480 cells survives NDV infection and becomes persistently infected.
Persistent infection can be divided into latent, chronic, and slow infections [6], each with its own characteristics that influence cellular changes. The lack of morphological changes in SW480P versus SW480 cells, as well as the presence of viral NP protein in almost all of the SW480P cells, was initially suggestive of a slow type of persistent infection [6]. However, the absence of cell death, the secretion of infectious virus and the high fraction of antigen-positive cells in SW480P narrowed the classification down to a chronic diffuse type of persistent infection.
Previously, a plaque assay on chicken embryonic fibroblasts using media from persistently infected normal mouse cells also showed the formation of smaller plaques [10]. Even though the host cells differed between the assays, those data support our observation that NDV can cause persistent infection. In the current study, we observed the occurrence of a persistent infection in cancer cells, an observation that has not been previously reported for NDV infections.
These findings suggest that even though SW480P cells maintain a low level of productive NDV infection, they themselves are still susceptible to infection by wild-type NDV. On another note, the progeny virus, mNDV, secreted by the SW480P cells retained its infectivity and hence is unlikely to consist of DIPs per se; DIPs are characterized by their inability to infect cells on their own due to large genetic mutations [27]. The mNDV was also able to infect another colorectal cancer cell line, HT29 [19], albeit with lower cytotoxicity. This suggests that mNDV maintains its infectivity in other cancer cells besides the parental cells.
The fact that NDV was able to establish persistent infection in SW480 cells, but not in the other cell lines tested, highlighted the specificity of either the mechanism of NDV infection in SW480 cells or the cells' responses to the infection. To the best of our knowledge, only lentogenic strains of NDV have been evaluated in clinical studies so far [18]. This might be due to the specific regulations by the World Organization for Animal Health on the use of notifiable diseases and viruses with velogenic properties. Further investigation into the pathogenesis and oncolytic properties of viscerotropic-velogenic strains of NDV, such as the one used in this study, would add to the understanding of their detailed mechanistic actions. These data would contribute towards tailoring velogenic NDV specificity in clinical settings.
Figure 5. Infectivity of mNDV on HT29 cells. mNDV was infectious towards HT29 cells but produced a weaker killing effect. The cells were infected using spent culture media from the mock-infected SW480P cells as described in the Materials and methods section.
Materials and methods
NDV propagation
A viscerotropic-velogenic NDV strain, AF2240 [25], was propagated in 9-day-old embryonated chicken eggs as previously described [28]. After 48 h of incubation, the infected allantoic fluid was harvested and clarified by centrifugation at 5000 × g. Virus was purified by sucrose gradient (20-60%) centrifugation at 200,000 × g for 4 h. A band observed around the middle of the gradient, representing the concentrated virus, was pipetted out and diluted with 1 × PBS (Sigma), followed by another centrifugation at 200,000 × g for 4 h. The resulting pure virus pellet was resuspended in 1 × PBS, aliquoted and kept at -80 °C until use. NDV is endemic in Malaysia and is categorized as a BSL2 pathogen based on US NIH and Japanese guidelines [29,30].
All experiments were performed in a BSL2 biosafety cabinet. Extra precautions were taken to ensure there was no leakage of the virus into the environment. Waste and all instruments were sterilized and decontaminated after use.
Cell lines and infection by NDV
CRC cell lines SW620, SW480, DLD-1, Dks8, HCT116 p53+/+, HCT116 p53-/- and HT29 were generous gifts from Prof. Eric J. Stanbridge, University of California, Irvine, USA. Cells were maintained in RPMI1640 media (PAA, Austria) supplemented with 10% (v/v) fetal bovine serum (FBS; PAA, Austria). For infection, cells were seeded overnight and then infected with NDV [18] at an MOI of 2.0. Cell viability was determined using the trypan blue exclusion assay. Experiments were repeated at least twice and representative results are presented.
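For readers unfamiliar with the trypan blue readout, viability is simply the live (unstained) fraction of all counted cells. The helper below is an illustrative sketch; the function name and the counts are hypothetical, not the authors' data.

```python
def percent_viable(live_cells: int, dead_cells: int) -> float:
    """Trypan blue exclusion: viable (unstained) cells as % of all counted cells."""
    total = live_cells + dead_cells
    return 100.0 * live_cells / total if total else 0.0

# Hypothetical hemocytometer counts at 96 hpi:
print(percent_viable(27, 73))  # -> 27.0 %
```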
Rescue of viable cells following NDV infection
After 96 h of infection, media containing floating cells were removed. Fresh growth media was added to the remaining attached cells. Cells were incubated for approximately two weeks with an addition of 4 ml of fresh growth media every 3 days. The surviving cells were then trypsinized and sub-cultured into new tissue culture flasks. For a re-infection experiment, the recovered cells were seeded and infected as described in the infection procedure above. Confirmation of NDV infection was performed using RT-PCR and immunofluorescent staining. Virus infectivity was quantitated using the plaque assay method [18].
RT-PCR
Total RNA samples were harvested using TriReagent (Invitrogen, USA) following the manufacturer's protocols. First-strand cDNA was then synthesized using the Reverse Transcription System (Promega) in a thermal cycler (MJ Research Inc., USA). Forward (5′-AAT GAA TTC TG ATG TCT TCC GTA TTC GAT G-3′) and reverse (5′-AAT CTC GAG C TCA ATA CCC CCA GTC GGT GT-3′) primers were used to amplify the NDV NP gene over 30 PCR cycles. The conditions were 94 °C for 30 s, 55 °C for 1 min and 72 °C for 1.5 min, followed by a final extension step at 72 °C for 7 min. The resulting PCR product was electrophoresed, stained with ethidium bromide and viewed with a Gel Documentation Imaging System (BioRad, USA).
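To make the cycling program easier to reuse, here is the same protocol expressed as a small data structure; this is only a hedged restatement of the conditions above in Python, not vendor thermocycler syntax.

```python
# NP-gene PCR cycling program transcribed from the text (30 cycles).
PCR_PROGRAM = {
    "cycles": 30,
    "steps": [
        ("denaturation", 94, 30),   # (name, temperature in C, seconds)
        ("annealing",    55, 60),
        ("extension",    72, 90),
    ],
    "final_extension": (72, 7 * 60),
}

total_s = (PCR_PROGRAM["cycles"] * sum(sec for _, _, sec in PCR_PROGRAM["steps"])
           + PCR_PROGRAM["final_extension"][1])
print(f"Approximate block time: {total_s / 60:.0f} min")  # ~97 min, ramp times excluded
```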
Immunofluorescent detection of NDV NP protein
Infected cells on cover slips were washed with 1 × PBS and fixed with 4% paraformaldehyde, followed by permeabilization with 0.1% Triton X-100 for 10 min. All reagent dilutions were performed in PBS. After blocking with 1% BSA, cells were probed overnight with a primary monoclonal antibody against the NP protein of NDV [31]. The bound antibody was then detected with an FITC-conjugated secondary antibody (Santa Cruz Biotechnology, sc-2010) and counterstained with propidium iodide. Samples were visualized using a fluorescence microscope (Leica DM2500) and images were captured using a Leica DFC420C camera.
Statistical analysis
Student's t-test was used to analyze the experimental data throughout the study. Results are expressed as mean ± standard error of the mean of at least two independent experiments. Statistical significance was defined as a p-value < 0.05. All tests were performed using Microsoft Excel 2010 for Windows (Microsoft Corporation, Redmond, WA, USA).
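Although the authors used Excel, the same two-sample comparison can be reproduced with SciPy; this is a minimal sketch with made-up replicate values, not the study's measurements.

```python
from scipy import stats

# Hypothetical viability replicates (%) for control vs. infected cultures:
control = [92.1, 89.4, 94.0]
infected = [31.5, 28.9, 35.2]

t_stat, p_value = stats.ttest_ind(control, infected)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # significant if p < 0.05
```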
"Medicine",
"Biology"
] |
Laser 3D Printing of Inorganic Free-Form Micro-Optics
A pilot study on laser 3D printing of inorganic free-form micro-optics is experimentally validated. Ultrafast laser direct-write (LDW) nanolithography is employed for structuring the hybrid organic-inorganic material SZ2080 TM, followed by high-temperature calcination post-processing. The combination allows the production of 3D architectures, and the heat treatment converts the material into inorganic substances. The produced miniature optical elements are characterized and their optical performance is demonstrated. Finally, the concept is validated for manufacturing compound optical components such as stacked lenses. This opens new directions and applications for laser-made micro-optics under harsh conditions (high-intensity radiation, elevated temperature, acidic environments, pressure variations), including open space, astrophotonics, and remote sensing.
Introduction
Ultrafast-laser-written photonic circuits in transparent materials form a steadily growing scientific field that is approaching practical implementation [1]. The laser direct-write (LDW) technique based on ultrashort pulses allows prototyping of dense hierarchical integrated devices from organic photopolymers [2] as well as manufacturing of ultra-high-performance devices made in diamond [3]. Both examples are remarkable achievements individually; however, a trade-off between the CAD-CAM design freedom of a prototype and functional materials with limited processing options is inevitable. Here, we employ our previously developed laser additive manufacturing technique for Si/ZrO2 tunable inorganic 3D micro-/nano-structures [4]. The approach combines laser-assisted precision additive manufacturing with an advanced thermal post-processing step. Laser direct-write 3D lithography, enabled by non-linear light-matter interaction [5], is already a well-established technique for routine fabrication of diverse organic [6] and hybrid or composite materials [7]. Until now, however, it was quite limited for direct processing of transparent inorganics in ceramic and crystalline phases [8].
On the other hand, current advances of the technique are driven largely by rapid progress in manufacturing various micro-optical and nano-photonic monolith elements as well as fully assembled complex 3D components [9][10][11][12][13]. Here we therefore demonstrate the combination of ultrafast laser 3D nanolithography and thermal post-treatment, opening a route for the production of free-form inorganic structures, specifically free-form micro-optics. Up to now, this was only partially realized and restricted to 2D/2.5D structures [14], millimeter-scale dimensions [15,16], or non-transparent components [17]. It is clearly anticipated that the laser-induced damage threshold (LIDT) of such inorganic optical components will reach the higher values preferable in practical micro-optics [18] and nano-photonic applications [19], especially considering high-temperature, high-intensity, chemically harsh and heavy-duty applications [20]. Another wide application area for calcinated micro-optical elements is specialized spectral applications, e.g., astro-photonics, where UV-transparent micro-optical elements are required for coupling light into optical fibers [21]. Currently, such micro-optical elements can be made by laser ablation of UV-transparent sapphire [22]; however, better optical surface quality and unmatched freedom in 3D sculpting are achievable only via calcination.
The aim of the work was to validate the concept that free-form micro-optics made of inorganic materials, transparent in the visible range, can be produced via a straightforward combination of laser 3D printing and high-temperature calcination. The implemented workflow is graphically shown in Figure 1.
Figure 1. Schematics of the proposed approach: (a) LDW process; (b) wet chemical development bath (placing the sample at an angle results in a cleaner development process, reducing polymer waste); (c) calcination process; (d) geometrical comparison of a micro-optical structure prior to and post calcination, showing the pre-compensation angle of the legs and the expected shrinkage; (e) optical performance characterization of the micro-structure prior to and post calcination; (f) imaging and measuring the resolving power with a calibration test target.
In order to achieve this goal, a series of micro-optical components were fabricated, calcinated and characterized geometrically, and their functional performance was validated. The obtained glass-ceramic 3D micro-lenses demonstrated sufficiently high optical transmittance, imaging and magnification. Altogether, this proves a novel and feasible way to produce various micro-optical components with a high degree of freedom in geometry. Additionally, stacking of individual elements into compound monolithic optical components was benchmarked, proving its feasibility for making custom integrated optical micro-systems. Further studies are planned towards precise evaluation of the refractive index (n) and potentially tuning it towards specific demands.
Used Materials
We used the hybrid organic-inorganic material SZ2080 TM (IESL-FORTH, Heraklion, Greece) with a 1 wt% concentration of Irgacure 369 [23]. This photopolymer was chosen for its superior structuring and fabricated-object physical performance, its known and tunable chemical composition, and its proven compatibility with the calcination process [4]. A drop-drying technique was used for sample preparation. The sol-gel prepolymer solution was drop-cast on a fused silica substrate, chosen for its high melting temperature. The prepolymer drop was dried on a hot plate using the following temperature sequence: (1) ramping for 5 min from room temperature to 40 °C and holding for 10 min; (2) ramping for 5 min to 70 °C and holding for 10 min; and (3) ramping for 5 min to 90 °C and holding for 40 min. In this way the temperature increases gradually, ensuring that the prepolymer acquires a uniform hard-gel form suitable for fabrication of the desired micro-optical structures. After the LDW fabrication process, the samples were developed overnight in a chemical bath of 4-methyl-2-pentanone to remove non-polymerized material and leave the self-standing structures attached to the surface of the glass substrate. An optimization of the development process relative to the standard flat arrangement is shown in Figure 1b: inclining the sample during chemical development reduces polymer waste on the final sample. The sample was then left to dry at room temperature in ambient conditions prior to further examination.
Geometry
The 3D micro-structures were designed to consist of two main parts: the micro-lens itself and its supports. The micro-lenses were formed by concentric circles, which allowed high control over the key fabrication parameter, i.e., the light intensity at the center of the lens, to avoid burning and defects, as shown in Figure 2a. We designed a micro-lens of 50 µm diameter, 65 µm radius of curvature and 5 µm height; these characteristics are shown in Figure 2b.
The supports consist of two main parts: a straight bottom section and a bent top section. Raster scanning at a faster scanning speed and higher intensity was used for fabrication of the supports, since their surface quality was not relevant and only enough rigidity for mechanical stability was required. Three arbitrarily selected compensation angles were implemented in the bent section of the micro-lens supports: 0°, 16° and 26°. Figure 2c shows a preliminary view of the micro-lens design and the different support angles chosen to pre-compensate for bending of the supports due to the expected shrinkage of the micro-structure to roughly 70% of its original size (a reduction factor of ≈1.4) after the calcination process.
Employed Equipment
The employed laser fabrication equipment and the production procedure were described in detail previously [24]. The micro-lenses were fabricated by 3D LDW, photo-exposing the prepolymer with the second-harmonic femtosecond beam (λ centered at 515 nm) of a Yb:KGW laser with a pulse duration of 300 fs and a repetition rate of 200 kHz. The beam was focused into the prepolymer drop by a Zeiss 63×, 1.4 NA microscope objective using oil immersion (Immersol 518 F), as shown in Figure 1a. The fabrication intensity (I0) was on the order of 0.23 TW/cm2, with an approximately 10% reduction in intensity, Ic = 0.9 I0, at the center of the micro-lens to avoid burning and defects. The supports were fabricated at a higher raster scanning speed and exposure intensity, 500 µm/s and 0.37 TW/cm2, for more rapid fabrication. They consisted of straight and tilted parts in order to match the pre-compensation angle α, as shown in Figure 2c.
The polymerization was confined by thresholded non-linear (multi-photon) absorption; some linear absorption, avalanche ionization and thermal effects could contribute to the fabrication as well. The technique is nevertheless compatible with typical LDW two-photon polymerization setups employing ultrafast lasers.
Figure 2. Fabrication strategy characteristics: (a) top view of a micro-lens fabricated by concentric circles with tunable intensity (within the central part of the scanning, a 10% decrease from the nominal value I0 was applied); (b) schematic of the micro-lens design parameters; (c) micro-structure design for support pre-compensation angles α = 0°, 16° and 26°: the supports consist of two parts (straight part marked in yellow, tilted part in green) to adjust the pre-compensation angle α of the micro-lens support.
Calcination
In order to convert the laser-3D-fabricated organic-inorganic micro-optics, a calcination process was applied: heat treatment at 1100 °C to remove the organic components, as previously applied for diverse architectures [4,8]. The calcination was performed in a high-temperature Nabertherm oven, raising the temperature over 12 h to 1100 °C and maintaining it for 3 h. A cooling curve was designed to decrease the sample temperature slowly from 1100 °C to room temperature. This specific temperature was found empirically: 1000 °C does not always result in transparent phases, while 1200 °C might already induce undesired melting and loss of shape. This was not studied systematically within the present work; the knowledge was obtained during manufacturing experiments and is published in detail elsewhere [4,8]. Although in principle all photopolymers experience shrinkage during sintering, only hybrid ones enable conversion into inorganic phases. For comparison, the SZ2080 TM hybrid was shown to be superior for quality structure production with respect to OrmoComp® [25].
Optical and Scanning Electron Microscopy Characterization
The initial characterization was made using an optical microscope (OM) to confirm that the micro-objects survived development. The physical dimensions and surface quality of the micro-optical structures were then examined using scanning electron microscopes (SEM; Hitachi TM-1000, Tokyo, Japan, and Thermo Scientific Quanta 250, Agawam, MA, USA); no metallic coating was applied.
Performance Evaluation
The optical performance was evaluated by judging the resolving power of the micro-lenses before and after the calcination process, using the optical microscope for imaging. A Thorlabs R3L1S4P positive 1951 USAF target unit provided the resolution lines used to obtain the resolving power of the micro-lenses. The micro-lenses were placed directly over the target unit and manually aligned to different line groups, which were imaged in the transmission mode of the optical microscope (bright-field white-light illumination). By translating the cover slip on which the micro-structures were standing, as shown in Figure 1e, the line groups of the target unit can be imaged and the resolving power of the micro-lenses obtained at both stages of the experiment, prior to and post calcination, to compare any change in optical performance (Figure 1f). The same equipment and methodology were employed for assessing n: the distance between the micro-lens and its focal plane was measured by translating it with stages, then recalculated ('reverse engineered') to the corresponding value for the exact shape of the produced micro-lenses.
Results
We used the light intensity at the sample as the key irradiation parameter determining the non-reversible light-matter interaction, since photo-modification occurs only when and where a certain light intensity level is reached. The intensity I, which takes into account the measured average laser optical power P, the transmittance T of the optical system (including the objective), the pulse repetition rate R, the nominal beam waist radius w0 and the single pulse duration τ, can be expressed (as the peak intensity of a Gaussian beam) as:
I = 2PT / (π w0^2 R τ).
More specifics and details regarding laser exposure parameters, including polymerization and optical damage intensity threshold behavior, can be found in the recent review covering spatio-temporally confined light and the physical-chemical polymerization mechanisms it initiates [5]. Based on this single parameter, the light processing conditions can be reproduced significantly more easily across various pulsed laser sources with different average powers, pulse durations, repetition rates and energies, exposure doses and durations.
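As a quick sanity check of the expression above, the snippet below evaluates it numerically. The pulse energy of 0.3 nJ and repetition rate of 200 kHz are quoted elsewhere in the paper (giving an average power of 60 µW); T = 1 and the effective waist w0 ≈ 0.49 µm are assumptions chosen for illustration, not the exact experimental settings.

```python
import math

def peak_intensity_TW_cm2(P_avg_W, T, R_Hz, w0_cm, tau_s):
    """Gaussian-beam peak intensity: I = 2*P*T / (pi * w0^2 * R * tau), in TW/cm^2."""
    return 2 * P_avg_W * T / (math.pi * w0_cm**2 * R_Hz * tau_s) / 1e12

# 0.3 nJ pulses at 200 kHz -> 60 uW average power; w0 assumed as 0.49 um:
I = peak_intensity_TW_cm2(P_avg_W=60e-6, T=1.0, R_Hz=200e3, w0_cm=0.49e-4, tau_s=300e-15)
print(f"I = {I:.2f} TW/cm^2")  # -> ~0.27 TW/cm^2, close to the reported writing intensity
```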
The microstructures were fabricated using the concentric-circle scanning method at an intensity of 0.23 TW/cm2. This scanning method offers high control of the fabrication parameters, allowing a 10% reduction of the intensity at the center, which proved effective at avoiding burning and defects at the center of the micro-lens that would degrade optical performance. The SEM images prior to calcination in the upper row of Figure 3 show the three arbitrarily chosen pre-compensation support angles.
The resolving power of the calcinated micro-lenses was determined to be 4.39 µm (line-pair period), i.e., 228 lp/mm, based on group 7, element 6 of a positive 1951 USAF target, as shown in Figure 4. The value is close to those recently reported in the literature. One should note, however, that the calcinated micro-lenses are of sub-100 µm dimensions and made of inorganic glass, in contrast to the mm-scale lenses reported elsewhere [12,26].
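The USAF-1951 group/element pair maps to spatial frequency through a standard power-of-two relation; the short calculation below reproduces the quoted 228 lp/mm figure. The formula is the standard target definition, not anything specific to this paper.

```python
def usaf_lp_per_mm(group: int, element: int) -> float:
    """USAF-1951 target: resolution in line pairs per mm for a group/element pair."""
    return 2 ** (group + (element - 1) / 6)

lp_mm = usaf_lp_per_mm(7, 6)
print(f"{lp_mm:.0f} lp/mm, line-pair period {1000 / lp_mm:.1f} um")  # -> 228 lp/mm, 4.4 um
```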
The dimensions of the fabricated micro-structures were retrieved by two different methods. The diameter (D) of the thin spherical micro-lenses was obtained from top-view images taken by the optical microscope in transmission mode and compared with side-view SEM images of the structures, both analyzed using ImageJ [27]. Table 1 compares the sizes from the optical microscope and SEM images prior to and post the calcination process. The micro-lens diameters of 50 ± 1.5 µm demonstrate the repeatability of the fabrication. The results after calcination at 1100 °C show shrinkage of the structures by around 40%. No fissures of the structures were observed within the breadth of the experiments carried out. Fracture could potentially emerge if the structures were fully attached to the substrate and formed larger contact areas with the non-shrinking substrate, which would induce stress accumulation; the support structures, however, ensure isotropic shrinkage without fragmentation or delamination of the volumes/surfaces. The structure dimensions were retrieved from the SEM images in Figure 3, and the focal distance was measured as visualized in Figure 1; the shrinkage ratio and averaged values were calculated accordingly. Within the limitations of the techniques and equipment employed in the current study, we could not obtain the precise shape of the lenses and measure the exact focal distance in order to confidently estimate n. Preliminary measurements using bright-field white-light illumination indicated an average value of ≈1.609, which is reasonable but needs to be confirmed by alternative method(s). Due to densification of the material during calcination, n is expected to increase, as was observed for glasses prepared from ZrO2-SiO2 sol-gels. Namely, for the SZ2080 TM-like material it can reach n = 1.617, as previously reported for non-structured material in Reference [28].
Thermal Effects of the Hybrid Organic-Inorganic Material SZ2080 TM
Recent works showed that one of the best ways to achieve high-quality free-form 3D manufacturing of inorganic materials is to combine LDW with thermal post-processing. More specifically, thermal treatment of SZ2080 TM up to 1100 °C in an air atmosphere decomposes the organic component of the hybrid material and yields purely inorganic metal (Zr) and semimetal (Si) oxide structures with properties close to glass. In the process, the SZ2080 TM material loses 28% of its weight, while substantial downscaling reduces a structure by up to 21% of the initial size in the 2D line case and 60% in the 3D case [8,18].
This indicates that the obtained inorganic structures become denser compared to the original polymeric structures. The shrinkage is also completely isotropic in all directions, making it very predictable. In addition to shrinkage, densification and the demonstrated transparency, high-temperature treatment provides SZ2080 TM derivatives with resistance to aggressive environmental conditions, such as chemicals (acids and bases, for instance Piranha solution), extreme temperatures (-200 °C to 1100 °C) and mechanical effects (rinsing in an ultrasonic bath, metal sputtering in vacuum, repeated handling of the sample) [4].
Findings in a Wider Context
LDW 3D nanolithography produces an exposure-dose-dependent modification depth (degree of conversion/polymerization), which can offer a 4D option for tuning n but at the same time requires even more precise adjustment of laser processing parameters, as the geometry is not fully compliant with the optical density [29,30]. It was reported that thermal post-curing can serve as an efficient strategy for eliminating the sensitivity of the mechanical properties of two-photon polymerized materials to the process parameters [31]. We therefore see the proposed calcination route as a way to exclude n variation in the micro-optics: the conversion to an inorganic substance after evaporation of the organic components leaves no memory effect of the exposure. This is an important issue now being recognized in free-form 3D micro-optics. Furthermore, calcination eliminates any photo-initiator used during laser photopolymerization; as an organic molecule, it is undesirable due to its absorbance [32] and yellowing effects [26], both of which limit the optical performance of micro-optics.
Benchmarking Achievements
Finally, we produced benchmarking structures resembling micro-optical components as free-standing, mounted and assembled miniature systems, shown in Figure 5a-d, respectively. As the geometry of the manufactured pristine element can be arranged freely to pre-compensate for the calcination-induced sintering, it can be adjusted for any architecture simply by including the angle α with respect to the non-shrinking surface of the substrate. The different benchmarking micro-optical components were produced using slightly different exposure parameters (adjusted for the specific geometries) and characterized only geometrically (architecture-wise, not in terms of optical performance). This was done purely to validate the feasibility of the proposed method for manufacturing non-spherical optical surfaces (aspherical with a concentric grating, Figure 5a,b; flat-faced, Figure 5c) and already assembled devices (Figure 5d).
All flat-faced surfaces of the elements were raster printed with typical layer hatching and slicing periods of 250 and 400 nm, respectively. Aspherical surfaces were printed by scanning spiral loops separated by 200 nm laterally and 250 nm axially. In all cases, the linear writing velocity was set to 500 µm/s with an intensity of 0.27 TW/cm2 (which, in other terms, corresponds to a ~2.5 nm separation between E = 0.3 nJ femtosecond pulses). A triplet lens consisting of three co-axially aligned singlet lenses (96, 66 and 34 µm in diameter) was 3D-printed and heat-treated in an air atmosphere at 300 °C for 4 h. After calcination, the diameters of the lenses were found to be 77.5, 53 and 27 µm, respectively, showing uniform down-scaling of the whole compound to a factor of 0.8 ± 0.006 of its initial dimensions. In addition, no visible distortion of the triplet alignment was observed.
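The uniformity claim can be checked directly from the quoted diameters; the snippet below computes the per-lens scaling factors and their spread, which is pure arithmetic on the numbers in the text.

```python
from statistics import mean, stdev

initial = [96, 66, 34]    # singlet diameters before heat treatment, um
final = [77.5, 53, 27]    # diameters after heat treatment, um

factors = [f / i for f, i in zip(final, initial)]
print(f"scaling factor = {mean(factors):.2f} +/- {stdev(factors):.3f}")
# -> scaling factor = 0.80 +/- 0.007, consistent with the reported 0.8 +/- 0.006
```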
All structures were printed on foldable supports, pre-compensated by an inclination angle α such that the supports stand perpendicular to the substrate after heat treatment. The inclination angle depends on several parameters (namely the down-scaling factor and the initial diameter and height of the structure) and was calculated to be α ≈ 11°.
Future Research Directions
With regard to space applications, there is rising interest both in technological light-based solutions (e.g., materials providing resistance to ionizing radiation, space-grade optical fibers) and in light-driven natural phenomena that could be exploited, especially for outer-space applications (e.g., solar sails). We anticipate the reported findings to be valuable for advancing this direction.
It is worth noting that the developed approach is in principle compatible with using bio-derived (plant-based) resins as the organic ingredient of the hybrid photopolymer prior to calcination, thus evaporating renewable resources instead of fossil-originating synthetic polymers [33]. Furthermore, the proposed route is compatible with the multi-scaling options of various existing and emerging platforms [34]. Fields such as soft robotics [35], especially light-mediated manipulation of actuators [36], alongside flexible electronics [37] enabled by femtosecond LDW technology [38], are expected to benefit from the reported findings.
Conclusions
The performed conceptual work was experimentally validated: glassy 3D micro-lenses were made, their transparency and imaging properties were confirmed, and the achieved resolving power reached 4.39 µm. The findings reach far beyond the experiments carried out, as the approach is compatible with various laser 3D lithography setups and hybrid materials, including commercial ones. Overall, it can be summarized in the following essential conclusions:
1. Laser multi-photon 3D nanolithography of hybrid materials combined with high-temperature calcination enables (nano-)additive manufacturing of free-form micro-optics out of transparent and pure inorganic glasses without fissures or crucial geometrical distortions.
2. The proposed method offers the advantage of homogenizing the material with respect to the laser lithography 3D structuring and development process, making n insensitive to the specific exposure conditions by improving internal homogeneity and surface quality [39].
3. Future work will target improving the element itself by additionally pre-compensating the lens shape (it can be made concave initially to balance the volume of the material), optimizing the calcination treatments (taking into account the specific heating/cooling steps), and modifying the pristine material (different Si:Zr ratios as well as validating other inorganic ingredients).
4. We anticipate this as a strategy for extending the additive manufacturing of inorganic 3D structures, offering highly complex integrated devices for micro-sensing [40], nano-fluidic [41] and astro-photonic [42] applications.
5. Finally, the developed methodology offers the production of highly resilient 3D micro-optical components for harsh chemical, mechanical, pressure and temperature-variation environments, including a high optical damage threshold.
This pioneering work creates a new dimension for true 3D, free-form inorganic micro-optics and extends the possibilities of well-established laser multi-photon 3D lithography as a mature LDW technology. The calcination process removes the organic photo-initiator and thus keeps the inorganic micro-optics free from absorbing/coloring agents, which could cause loss of light [18] and even induce damage [19]. On the other hand, LDW 3D nanolithography of glass-ceramics is in principle compatible with doping by inorganic active compounds, preserving their functionality while embedded in a 3D structure's matrix [43]. This makes selective material removal possible, optimizing 3D components targeted for high-duty performance. Further efforts will be invested in quantitative measurement of the effective n, which we anticipate might additionally depend on the specific geometries of the micro-optical elements.
| 5,255.6 | 2021-11-08T00:00:00.000 | ["Materials Science", "Physics", "Engineering"] |
Chitosan Graft Copolymers with N-Vinylimidazole as Promising Matrices for Immobilization of Bromelain, Ficin, and Papain
This work aims to synthesize graft copolymers of chitosan and N-vinylimidazole (VI) with different compositions to be used as matrices for the immobilization of the cysteine proteases bromelain, ficin and papain. The copolymers are synthesized by free-radical solution copolymerization with a potassium persulfate-sodium metabisulfite blend initiator and show relatively high grafting frequencies and yields. All the synthesized graft copolymers are water-soluble, and their solutions are characterized by DLS and laser Doppler microelectrophoresis. The copolymers self-assemble in aqueous solutions; they have a cationic nature and a pH-sensitivity correlating with the VI content. The FTIR data demonstrate that the synthesized graft copolymers conjugate cysteine proteases. The synthesized copolymers adsorb more enzyme macromolecules than non-modified chitosan of the same molecular weight. The proteolytic activity of the immobilized enzymes is increased by up to 100% compared to the native ones. The immobilized ficin retains up to 97% of its initial activity after a one-day incubation, the immobilized bromelain retains 69% of its activity after a 3-day incubation, and the immobilized papain retains 57% of its initial activity after a 7-day incubation. Therefore, the synthesized copolymers can be used as matrices for the immobilization of bromelain, ficin and papain.
Introduction
Biopolymers are macromolecules obtained from living organisms, and they are attracting considerable attention from researchers. The most important reasons for this are their renewability, low toxicity, and biocompatibility. Traditionally, such materials are used in the food, cosmetic, and pharmaceutical industries [1][2][3]. However, with the recent development of biomedical technologies, the greatest attention is paid to the natural macromolecules in this field [4][5][6]. For example, they are used as components in drug delivery systems [7,8] or wound dressing [9][10][11][12].
One of the promising biomaterials for medicine is chitosan [13,14]. This natural polysaccharide exhibits the properties of a polycation, which is quite rare for such polymers, and it can thus interact with a wide range of biologically active substances [15]. For example, chitosan has successfully proven itself as a matrix for the immobilization of hydrolytic enzymes [16,17]. By adsorption, it is possible to obtain immobilized papain or ficin characterized by high stability, antibacterial activity, and the ability to destruct microorganism biofilms [18,19], increasing the effectiveness of antibacterial substances.
On the other hand, chitosan is characterized by a limited pH range of solubility (pKa ~6.5) [20]. Because of this, it is difficult to obtain preparations containing a sufficient amount of the immobilized enzyme. In addition, the solubility range of chitosan may not coincide with the activity optima of the enzyme. Modifying chitosan to tune its properties can solve these problems. One promising modification method is graft copolymerization of chitosan with vinyl monomers [21]. For example, the N-vinylimidazole homopolymer, poly(N-vinylimidazole) (PVI), is water-soluble, which expands the solubility range of the polysaccharide [22]. In addition, the strong complexing ability of PVI towards various compounds is well known [23][24][25], as is its ability to form protein-like (co)polymers [26]. Based on these facts, macromolecules combining a backbone of non-toxic, biocompatible chitosan with side chains grafted from PVI should provide high enzyme-binding efficiency while forming a conjugate of low toxicity. However, it is well known that immobilized forms of enzymes are usually characterized by lower catalytic activity than native enzymes [16]. Therefore, when choosing a matrix for immobilizing an enzyme, one should consider not only the stability of the resulting enzyme preparation but also its catalytic activity.
Plant cysteine proteases are of particular interest for producing enzyme formulations with a PVI-based matrix. Like many enzymes, however, these proteins have low stability in aqueous solutions, which can be increased by creating immobilized forms. The active site of cysteine proteases, consisting of cysteine, aspartic acid and histidine residues [27], contains an azole ring. We can therefore expect efficient formation of a conjugate between the enzyme and a graft copolymer of chitosan with N-vinylimidazole, making it interesting to evaluate the effect of this interaction on the preservation of the proteolytic activity of the obtained preparations. In addition, cysteine proteases have found wide applications in food technologies, pharmaceuticals and biomedicine [27], which enhances the practical value of research into their immobilized forms.
In this regard, this work was aimed at synthesizing graft copolymers of chitosan and N-vinylimidazole of various compositions and studying the possibility of using them as matrices for the immobilization of the cysteine proteases bromelain, ficin and papain.
Synthesis of the Cht-g-PVI Copolymers
For a typical experiment, 0.5 g of Cht and 50 cm3 of 2% w/v acetic acid were placed in a thermostatically controlled reactor equipped with a mechanical stirrer. The mixture was kept at 40 ± 2 °C with stirring until the polymer dissolved completely. Then the calculated amount of the PPS and SMB mixture (1:1 mol) was added to obtain an initiator concentration of 4 × 10^-3 mol/L, and after the mixture had been maintained for 15 min, the calculated amount of aqueous VI solution containing HCl (1:0.1 mol, to prevent the PVI chain-transfer reaction) was added (Table 1). After 24 h the reaction mixture was placed in a beaker containing 200 cm3 of acetone and stirred. The precipitate that formed was filtered off using a Buchner funnel and washed. The resulting copolymers were purified by dialysis against distilled water for 1 week using cellulose dialysis tubing (12 kDa cutoff) and freeze-dried to constant weight.
Instrumental Section
The IR spectra were recorded on an IR-Affinity 1 FTIR spectrometer (Shimadzu Instruments, Japan) in ATR mode with a ZnSe prism over a frequency range of 700-4000 cm−1 at a resolution of 4 cm−1. The analyzed samples were dry powders. The compositions of the copolymers were calculated from the FTIR data using the ratio of the areas of the absorption bands related to the vibrations of the 1,4-glycosidic bonds of Cht and the C-N bonds of the imidazole ring, at 1189 cm−1 and 915 cm−1, respectively.
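The band-area ratio itself is straightforward to extract from a digitized spectrum. The sketch below integrates the two bands over assumed windows with a simple trapezoidal rule; the window limits and the mapping from the ratio to composition (via a calibration constant) are assumptions, since the paper does not state them.

```python
import numpy as np

def band_area(wavenumber: np.ndarray, absorbance: np.ndarray, lo: float, hi: float) -> float:
    """Integrate absorbance over [lo, hi] cm^-1 with the trapezoidal rule."""
    mask = (wavenumber >= lo) & (wavenumber <= hi)
    x, y = wavenumber[mask], absorbance[mask]
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def vi_to_cht_band_ratio(wn: np.ndarray, ab: np.ndarray) -> float:
    a_cht = band_area(wn, ab, 1170, 1210)  # 1,4-glycosidic band near 1189 cm^-1 (window assumed)
    a_vi = band_area(wn, ab, 895, 935)     # imidazole C-N band near 915 cm^-1 (window assumed)
    return a_vi / a_cht                    # composition follows via a calibration constant
```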
The grafting efficiency (GE) was calculated by the following equation:
GE (%) = (m_Cht-g-PVI − m_Cht) / m_VI × 100,
where m_Cht-g-PVI, m_Cht and m_VI are the masses (g) of the obtained graft copolymer, the chitosan and the VI used in the polymerization, respectively.
The frequency of grafting (FG) is expressed as the number of grafted polymer chains per anhydrous glucosamine unit (AGU) of the backbone polymer and is obtained from the relationship [21]:
FG = (PVI% / M_PVI) / (Cht% / M_AGU),
where PVI% is the percentage of grafted PVI in Cht-g-PVI, M_PVI is the molecular weight of the grafted PVI chains, M_AGU is the average molecular weight of an anhydrous glucose unit of Cht, and Cht% is the percentage of Cht in Cht-g-PVI.
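A numeric sketch of these two relations is given below; the masses, compositions and molecular weights are illustrative placeholders, not values from Table 1 or Table 2.

```python
def grafting_efficiency(m_copolymer_g: float, m_cht_g: float, m_vi_g: float) -> float:
    """GE (%) = mass of grafted VI relative to the mass of VI fed, x100."""
    return 100.0 * (m_copolymer_g - m_cht_g) / m_vi_g

def frequency_of_grafting(pvi_pct: float, cht_pct: float, M_pvi: float, M_agu: float = 161.0) -> float:
    """Grafted chains per anhydroglucosamine unit (M_agu ~161 g/mol assumed)."""
    return (pvi_pct / M_pvi) / (cht_pct / M_agu)

print(grafting_efficiency(0.82, 0.50, 0.50))     # -> 64.0 %
print(frequency_of_grafting(40.0, 60.0, 2.0e4))  # -> ~0.0054 chains per AGU
```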
To determine the molecular weight of the grafted PVI chains, the synthesized copolymers were subjected to oxidative degradation [22] followed by viscometry of the isolated grafted chains in ethanol at 20 °C. The molecular weights were calculated from the viscosity data using the Mark-Kuhn-Houwink-Sakurada equation [28]:
[η] = K M^a.
The electrokinetic potential (ζ-potential) of the polymer particles in aqueous solutions was determined by laser capillary Doppler microelectrophoresis with a Malvern ZetaSizer Nano instrument (Malvern Instruments, UK) in cuvettes equipped with a gold electrode at 25 °C.
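Returning to the viscometry step above, inverting the Mark-Kuhn-Houwink-Sakurada relation gives the viscosity-average molecular weight. The constants K and a below are placeholders for demonstration only; the values actually used for PVI in ethanol at 20 °C come from [28] and are not reproduced here.

```python
def mkhs_molecular_weight(intrinsic_viscosity: float, K: float, a: float) -> float:
    """Solve [eta] = K * M**a for M (viscosity-average molecular weight)."""
    return (intrinsic_viscosity / K) ** (1.0 / a)

# Placeholder constants, for illustration only:
M = mkhs_molecular_weight(intrinsic_viscosity=0.25, K=1.0e-4, a=0.70)
print(f"M = {M:.3g} g/mol")  # -> ~7.14e+04 g/mol with these assumed constants
```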
The hydrodynamic diameters Dh of the polymer particles in aqueous solution were determined by DLS with a Malvern Zetasizer Nano instrument (Malvern Instruments, UK). A He-Ne laser with λ = 633 nm was used as the light source; the scattering angle was 170°.
Transmission electron microscopy (TEM) was performed to corroborate the DLS data, using a Carl Zeiss Libra 120 electron microscope. A droplet of the sample liquid was deposited on a Formvar-coated copper grid and air-dried for 1 min, and the excess solution was then blotted off.
Enzyme's Immobilization
The immobilization of bromelain, papain and ficin on the synthesized copolymers was performed using the adsorption approach developed previously [16]. The protein content in the immobilized formulations was determined by the Lowry method [29]. The proteolytic activity of the enzymes was measured using azocasein as the substrate [30]. The statistical significance of differences between control and experimental values was determined by Student's t-test (p < 0.05), since all results followed a normal distribution.
Molecular Docking
The enzyme structures were prepared for docking according to the standard scheme for the AutoDock Vina package: the atoms (together with their coordinates) of the solvent, buffer and ligand molecules were removed from the input PDB file. Before carrying out the numerical calculations, charges were placed on the protein surfaces using MGLTools. The center of the molecule and the box ('cell') parameters were set manually, ensuring that the whole protease molecule fit inside the box.
The Cht-g-PVI copolymer structure model was drawn in the HyperChem molecular designer; this structure was optimized first in the AMBER force field and then quantum-chemically with PM3. The ligands had maximum conformational freedom in the docking calculations; rotation of the functional groups around all single bonds was allowed. Charge placement on the Cht-g-PVI copolymer molecule and its protonation/deprotonation were performed automatically in the MGLTools 1.5.6 package (https://ccsb.scripps.edu/mgltools/1-5-6/, accessed on 15 May 2022) [16].
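For orientation, a typical AutoDock Vina run driven by a plain-text configuration file looks like the sketch below. The file names and box values are illustrative assumptions, not the study's actual inputs; the configuration keys (receptor, ligand, center_*, size_*, exhaustiveness) and the command-line flags are Vina's standard interface.

```python
import subprocess

# Illustrative Vina configuration; receptor/ligand files and box values are assumed:
config = """\
receptor = papain_prepared.pdbqt
ligand   = cht_g_pvi_fragment.pdbqt
center_x = 0.0
center_y = 0.0
center_z = 0.0
size_x = 60
size_y = 60
size_z = 60
exhaustiveness = 8
"""

with open("vina.conf", "w") as fh:
    fh.write(config)

# Standard command-line invocation of AutoDock Vina:
subprocess.run(["vina", "--config", "vina.conf", "--out", "docked.pdbqt"], check=True)
```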
Synthesis and Characterizations of the Cht-g-PVI Copolymers
The graft copolymers of chitosan and N-vinylimidazole with different compositions, denoted Cht-g-PVI, were synthesized via free-radical solution polymerization in the presence of a PPS-SMB system. The temperature and pH of the reaction medium have a significant effect on the polymerization process and the architecture of the resulting polysaccharide-based polymers. According to published data [31], low pH values and high temperatures during polymerization lead to the formation of carbohydrate block copolymers, while mild conditions favor graft copolymers with a polysaccharide backbone and synthetic side chains. Moreover, heating a highly acidic solution of carbohydrates (pH ~1-2) increases the polydispersity of the resulting product due to destruction of the initial macromolecules. Acid degradation of polysaccharides can also lead to the accumulation of toxic degradation by-products, which can have a carcinogenic effect. For this reason, we chose mild conditions that favor the production of graft copolymers.
Another important factor determining the efficiency of the graft polymerization process is the initiator. Various types of initiation are used to obtain graft copolymers, but for solution polymerization, redox initiators triggering a radical process are used most often. Transition metal compounds such as cerium(IV), manganese(VII), iron(II), etc., are often employed in such systems [32]. However, the most complete and efficient redox reaction occurs in an acidic medium, under which undesirable degradation of polysaccharides can occur. Another widely used initiator is potassium persulfate, which effectively initiates polymerization over a wide pH range. The operating temperature of potassium persulfate starts at 50 °C, and the PPS-SMB system is used to reduce it. Such a combination is known not only to extend the operating range of PPS but also to reduce its destructive effect on polysaccharide macromolecules. That is why we used the PPS-SMB initiating system at a 1:1 molar ratio in this work.
It is well known that VI tends to participate in the side chain transfer reaction due to the formation of macroradical resonance forms (Scheme 1) [33]: It is well known that VI tends to participate in the side chain transfer reaction due to the formation of macroradical resonance forms (Scheme 1) [33]: To prevent this, a quantity of HCl (VI:HCl = 1:0.1 mol) was added. The pH of the resulting polymerization mixture was in the range of 3.9-4.5 which allows it to achieve mild polymerization conditions resulting in a graft polymer.
The polymerization mechanism includes the following steps. Firstly, PPS and SMB dissociate in an aqueous medium, and metabisulfite ions interact with water molecules turning into hydrosulfite anions. Persulfate anions react with HSO3 − ions forming sulfate ion-radicals and hydrosulfite radicals which interact with the water molecules resulting in hydroxide radicals [34]. Then, all the types of the formed radicals react with the OH or NH2 groups of the chitosan generating macromolecule radicals. These radicals interact with N-vinylimidazole molecules forming growing polymeric chains. The chain terminations proceed due to the recombination. The polymerization scheme is represented below (Scheme 2).
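For the reader's convenience, the initiation steps just described can be written out explicitly. The following scheme is a plausible reconstruction based on standard persulfate-metabisulfite redox chemistry and the verbal description above; it is not a reproduction of Scheme 2 (R• denotes any of the primary radicals, and Cht-OH the chitosan hydroxyl groups, with the amino groups behaving analogously):

$$\begin{aligned}
&\mathrm{S_2O_5^{2-} + H_2O \rightarrow 2\,HSO_3^-}\\
&\mathrm{S_2O_8^{2-} + HSO_3^- \rightarrow SO_4^{2-} + SO_4^{\bullet-} + HSO_3^{\bullet}}\\
&\mathrm{SO_4^{\bullet-} + H_2O \rightarrow HSO_4^- + HO^{\bullet}}\\
&\mathrm{Cht{-}OH + R^{\bullet} \rightarrow Cht{-}O^{\bullet} + RH,\qquad Cht{-}O^{\bullet} + n\,VI \rightarrow Cht\text{-}g\text{-}PVI^{\bullet}}
\end{aligned}$$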
The structure of the synthesized copolymers was confirmed by FTIR (Figure 1). The FTIR spectrum contains the following characteristic absorption bands: at 915 cm−1, attributed to the deformation vibrations of the imidazole cycles; at 1057 and 1098 cm−1, corresponding to the symmetric skeletal stretching vibrations of the pyranose cycles and the C-O-C bond, respectively; at 1189 cm−1, due to 1,4-glycosidic bonds; at 1228 cm−1, relating to the stretching vibrations of the C-N bonds and the deformation vibrations of the C-H bonds of the imidazole cycles; at 1588 cm−1, corresponding to the deformation vibrations of the N-H bonds of the chitosan amino groups; a band at 2875 cm−1, ascribed to the stretching vibrations of the methylene groups; and a wide band in the region of 3000-3500 cm−1, related to vibrations of the OH and NH2 groups of chitosan [35]. Several bands at 1320, 1412, and 1500 cm−1, corresponding to the vibrations of the dissociated carboxylic groups, indicate that the obtained copolymers were in the acetate salt form. The absence of the low-intensity absorption band near 1610 cm−1 confirms that graft polymerization occurs through the opening of the C=C bond of N-vinylimidazole [35,36].
The graft copolymer compositions were also determined from the FTIR data and are presented in Table 2. As can be seen, the VI content in the copolymers grows with an increase in the VI content of the polymerization mixture. The same pattern is observed for the molecular weights of the grafted PVI chains, which also rise with the VI content in the reaction feed. However, the graft-copolymer yields decrease with increasing VI content in the reaction feed, owing to enhanced side processes such as VI homopolymerization. The yield data correlate with the grafting efficiency (GE) values, which also decrease because of these side processes.
The frequency of grafting (FG) obtained was relatively high despite the chitosan's high molecular weight. This shows that the chitosan functional groups were available for interactions with the radicals and the monomer, because the chitosan concentration in the polymerization solution was below the critical coil-overlap concentration [22].
In summary, the formation of chitosan and N-vinylimidazole graft copolymers with different compositions via radical solution polymerization was confirmed by FTIR. Graft-copolymer yields and grafting efficiency decrease with increasing VI content in the polymerization feed, whereas the molecular weights of the grafted PVI chains and the frequency of grafting increase with the monomer content. Moreover, all the synthesized graft copolymers are characterized by a high frequency of PVI grafting.
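As an illustration of how the grafting parameters discussed above are typically computed from gravimetric data, consider the minimal sketch below. It assumes the textbook definitions of grafting efficiency (GE, the fraction of charged monomer incorporated into grafts) and frequency of grafting (FG, grafts per backbone chain); the authors' exact formulas may differ, and the numbers are purely illustrative, not taken from Table 2.

```python
def grafting_efficiency(m_grafted_pvi: float, m_monomer: float) -> float:
    """GE (%): fraction of the charged monomer incorporated as grafted chains.

    m_grafted_pvi: mass of PVI attached to chitosan after removal of
                   homopolymer (g); m_monomer: mass of VI charged (g).
    """
    return 100.0 * m_grafted_pvi / m_monomer

def frequency_of_grafting(m_grafted_pvi: float, mn_graft: float,
                          m_chitosan: float, mn_chitosan: float) -> float:
    """FG: average number of PVI grafts per chitosan macromolecule."""
    n_grafts = m_grafted_pvi / mn_graft        # moles of grafted PVI chains
    n_chains = m_chitosan / mn_chitosan        # moles of chitosan backbones
    return n_grafts / n_chains

# Illustrative numbers only (not from Table 2):
print(grafting_efficiency(m_grafted_pvi=0.8, m_monomer=1.0))        # 80.0
print(frequency_of_grafting(0.8, mn_graft=5e3, m_chitosan=1.0,
                            mn_chitosan=2e5))                        # 32.0
```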
Copolymer Solution Properties
It is well known that chitosan is widely used in biomedicine owing to its beneficial properties, such as biocompatibility, low toxicity, and film-forming ability. It is soluble in aqueous solutions at pH < 6.5 through amino-group protonation, turning into an ionized polycation, which is a rarity among biopolymers. Grafting water-soluble side polymeric chains can broaden the pH solubility range while keeping chitosan's unique cationic behavior. Poly(N-vinylimidazole) is a water-soluble synthetic polymer with weakly basic properties (pKa ~6) and a high complexing ability toward a variety of substances [22][23][24][25]. Cht-g-PVI copolymers are therefore expected to retain polycationic behavior and to be pH-sensitive in solution. The next part of our investigation thus deals with the copolymer solution properties.
All the synthesized copolymers are soluble in distilled water (pH = 6.7 ± 0.02), and their 1% w/v aqueous solutions can be obtained. At this concentration the solutions are opalescent, viscous liquids; thus, 0.1% w/v solutions were chosen for further research.
The relatively high viscosity of the obtained solutions can indicate self-association of the copolymer macromolecules. To study the copolymer association behavior in solution, the DLS method was used. The results of the hydrodynamic diameter (Dh) measurements are presented in Table 3. The data show that the copolymer macrochains are aggregated, and the Dh values increase significantly with the PVI content of the copolymers. Therefore, the PVI units act as a trigger for self-association of the macromolecules. These results correlate with data obtained earlier for azole-containing copolymers [36], and the DLS size data are in good agreement with the sizes obtained by TEM (Figure 2). The copolymer particles have an oval-like shape and are characterized by a dense core with a less dense corona. The self-associated polymers are characterized by a critical coil-overlap concentration, c*, which was determined by DLS (Figure 3A). The solutions were studied in the concentration range of 10−3-0.1% w/v, and a sharp increase in the size of the copolymer particles was taken as c* (Table 3). As can be seen, the c* values increase with the PVI content of the copolymer and the molecular weight of the grafted PVI chains. These results are in good agreement with earlier data [22]. Table 3. The characterization of the 0.1% w/v Cht-g-PVI copolymer aqueous solutions (pH = 6.5 ± 0.02).
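The determination of c* from the DLS concentration series (Figure 3A) amounts to locating the concentration at which the particle size rises sharply. A minimal sketch of this procedure with hypothetical data (the real measurements are in Table 3):

```python
import numpy as np

# Hypothetical DLS series: concentration (% w/v) vs. hydrodynamic diameter (nm)
conc = np.array([1e-3, 3e-3, 1e-2, 3e-2, 6e-2, 1e-1])
d_h  = np.array([120., 125., 130., 140., 320., 450.])   # sharp rise near c*

# Take c* as the concentration preceding the largest relative jump in D_h
jumps = np.diff(d_h) / d_h[:-1]
c_star = conc[np.argmax(jumps)]
print(f"estimated c* ~ {c_star:.3f} % w/v")              # 0.030 for this data
```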
To confirm the polycationic behavior of the synthesized Cht-g-PVI copolymers, ζ-potential measurements were carried out (Table 3). Cht-g-PVI copolymer solutions are characterized by positive ζ-potential values in the range of 44-55 mV. Moreover, the ζ-potential values rise with the PVI content of the copolymer. The mobility and conductivity measurements correlate with the ζ-potential, indicating the contribution of the PVI units to the charge of the copolymers. The high ζ-potential values can also be attributed to the formation of acetate salts of the chitosan amino groups and imidazole cycles, owing to the presence of acetic acid in the polymerization feed.
The presence of a polymer charge may indicate pH-sensitive properties. Therefore, the effect of pH on the ζ-potential and Dh values was studied by DLS (Figure 3B). As expected, both parameters increase in an acidic medium and decrease in an alkaline one. In an acidic medium, the available imidazole rings and chitosan amino groups are protonated, and the electrostatic repulsion of similarly charged fragments of the macromolecular associates is enhanced; this increases their ζ-potential and size. In an alkaline medium, protonation does not occur, and the size and potential of the particles decrease.
In brief, the synthesized Cht-g-PVI copolymers are water-soluble and self-assemble in the concentration range of 10−3-0.1% w/v. The critical coil-overlap concentration of the synthesized copolymers rises with increasing PVI content and molecular weight. Cht-g-PVI copolymers are characterized by a cationic nature and a pH sensitivity that correlates with the PVI content.
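The pH response described above follows directly from the acid-base equilibrium of the imidazole units (pKa ≈ 6, as noted earlier). The sketch below applies the generic Henderson-Hasselbalch relation to estimate the protonated fraction as a function of pH; this is standard acid-base arithmetic, not the authors' model:

```python
import numpy as np

def protonated_fraction(ph: np.ndarray, pka: float = 6.0) -> np.ndarray:
    """Fraction of imidazole groups carrying a positive charge at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

ph = np.arange(3.0, 10.0, 1.0)
for p, f in zip(ph, protonated_fraction(ph)):
    print(f"pH {p:4.1f}: {100 * f:5.1f} % protonated")
# High protonation in acid -> stronger repulsion, larger zeta-potential and Dh;
# near-zero protonation in alkali -> both quantities drop, as observed.
```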
Enzyme Immobilization and Interactions with the Cht-g-PVI Copolymer
Polymer-immobilized enzymes are widely used in biotechnology, cosmetology, and pharmacy, because immobilization increases biocatalyst stability and protects the enzyme from the environment. The most promising immobilization method is adsorption, i.e., a process without covalent binding of the protein to the polymer. Whether a polymer can be used as a matrix for enzyme immobilization is determined by the ability of the two to interact. Therefore, the possibility of interaction between the synthesized copolymer and the enzymes was studied by FTIR before the immobilization experiments. The Cht-g-PVI-3 copolymer was chosen because it has the lowest molecular weight of the grafted side chains and the smallest particle size in solution, which should provide greater steric availability of the copolymer functional groups for interaction with the enzymes.
The FTIR spectrum of the graft copolymer-ficin blend shows the characteristic absorption bands of the copolymer described above (Figure 1). Moreover, there is an intensity increase of the bands at 915, 1057, 1228, and 2875 cm−1, and some bands shift: from 1057 to 1030 cm−1 (pyranose cycles) and from 1588 to 1596 cm−1 (chitosan amino groups). This indicates conjugate formation between the enzyme and the Cht-g-PVI-3 copolymer, with the hydroxyl and amino groups and the imidazole cycles significantly involved in the conjugation process. The FTIR spectra of the immobilized papain and bromelain are similar and correlate with the results obtained for ficin.
As mentioned above, cysteine proteases are hydrolytic enzymes united by the presence of a cysteine residue in the active site; they catalyze the hydrolysis of proteins and peptides. Their catalytic activity arises from a catalytic dyad (according to some sources, a triad) that includes the imidazole base of a histidine residue and an activating nucleophilic center, the thiol group of cysteine [37]. If the active site of these enzymes is considered a triad, it also includes the carboxyl group of aspartic acid, which shifts the electron density in the azole ring. The thiol group is a strong reducing agent that is easily oxidized under fairly mild conditions. Therefore, for industrial use of cysteine proteases, it is advisable to choose their immobilized forms, in which the active site is protected from the negative effects of the environment and the catalytic activity of the enzyme is thus maintained for a longer time [16].
The results of the enzyme-content evaluation after immobilization are presented in Table 4. As can be seen, grafting VI units onto chitosan significantly increases the amount of bound enzyme owing to the appearance of new centers for interaction with proteins. The proteolytic activity (U × mL−1) of the immobilized enzymes was 77.5 ± 1.7 for bromelain, 39.4 ± 8 for ficin, and 96.6 ± 1.5 for papain. These values are lower than those of native bromelain and ficin but the same as for native papain (Table 4), indicating that bromelain and ficin change their conformations to less active ones, whereas the papain conformation remains largely unchanged. Moreover, the intensity increase of the absorption bands attributed to the imidazole cycles in the FTIR spectrum of immobilized ficin indicates that the enzyme interacts with the Cht-g-PVI copolymer via a histidine azole ring, probably including the histidine residue of the ficin active site. The proteolytic activity of the enzymes immobilized on chitosan is higher than that of the Cht-g-PVI-immobilized enzymes, which confirms the interaction of the PVI chains with the proteins. In the next series of experiments, we evaluated the stability of the immobilized enzymes by measuring the residual catalytic activity (U × mL−1) after incubation at 37 °C in 50 mM Tris-HCl buffer, pH 7.5. Despite some decrease in catalytic activity upon immobilization, the immobilized enzymes retain their properties better over time.
After a one-day incubation, the immobilized ficin retains more than 97% of its activity, while the native enzyme retains only about 51%; bromelain retains 84% and 53% of its proteolytic activity in the immobilized and native forms, respectively (Figure 4). The immobilized papain retains 80% of its activity after a five-day incubation, versus only 57% for the native enzyme. Moreover, at later times the immobilized ficin, still possessing 57% activity, is six times more active than the native one. After 7 days or more, the differences in the loss of catalytic ability between the native and immobilized enzymes become more pronounced. After 21 days of incubation, the activity of the free enzymes was 3.8% for ficin, 15% for papain, and 5.8% for bromelain, whereas for the immobilized enzymes the values were 32%, 21%, and 20%, respectively (Figure 4).
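To make the stability comparison more quantitative, one can treat the activity loss as approximately first order, A(t) = A0 exp(−kt), and extract apparent rate constants from the retention values reported above. A back-of-the-envelope sketch (the single-exponential assumption is ours; only the 21-day data points from the text are used):

```python
import numpy as np

# (time in days, residual activity fraction) pairs taken from the text
data = {
    "ficin native":       (21, 0.038),
    "ficin immobilized":  (21, 0.32),
    "papain native":      (21, 0.15),
    "papain immobilized": (21, 0.21),
}

for name, (t_days, frac) in data.items():
    k = -np.log(frac) / t_days            # apparent first-order rate constant
    t_half = np.log(2.0) / k              # corresponding half-life
    print(f"{name:20s}: k ~ {k:.3f} 1/day, t1/2 ~ {t_half:4.1f} days")
```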
For a detailed analysis of our results, we conducted in silico molecular-docking experiments. The higher the modulus of the enzyme's affinity (kcal/mol) for Cht-g-PVI (Table 5), the greater the amount of protein sorbed onto the carrier (Table 4); for the catalytic activity, however, the situation is more complicated. Figure 5 and Table 5 show the bonds and interactions between bromelain (PDB ID: 1W0Q), ficin (PDB ID: 4YYW), papain (PDB ID: 9PAP), and Cht-g-PVI that arise upon enzyme immobilization. It was established that, for all three enzymes, during sorption immobilization the polymer molecule is located near the cleft with the active site between the two domains. This should be reflected, and according to our data is indeed reflected, in the catalytic activity of the samples. Ficin, which forms eight hydrogen bonds with Cht-g-PVI after immobilization, loses its activity to a greater extent than the other enzymes. Papain, which forms two hydrogen bonds, does not lose activity at all; moreover, in the papain molecule hydrogen bonds are formed with the amino acids of the active site, Cys25 and His159, evidently stabilizing its catalytically optimal conformation. Bromelain occupies an intermediate position with regard to activity loss upon immobilization and forms six hydrogen bonds with Cht-g-PVI; its contacts with the active site (Cys26 and His158) are physical interactions rather than chemical bonds. Thus, the synthesized Cht-g-PVI copolymers form conjugates with ficin, papain, and bromelain. The interaction mechanism between the synthesized copolymer and the cysteine proteases was studied by FTIR and modeled by molecular docking.
The results of the in silico research and the proteolytic-activity measurements are in good agreement. Relative to the native enzymes, the immobilized bromelain retains 80% of its proteolytic activity and ficin retains 41%, whereas papain maintains full proteolytic activity. Moreover, the immobilized enzymes are more stable in solution, retaining up to 84% and 32% of their initial activity after 3-day and 21-day incubations, respectively.
Conclusions
By conducting this research, we demonstrate that a potassium persulfate-sodium metabisulfite blend is a suitable initiator for obtaining chitosan-poly(N-vinylimidazole) graft copolymers with relatively high grafting frequency and efficiency. The synthesized copolymers are water-soluble regardless of their PVI content, demonstrate a polycationic and pH-sensitive nature alongside self-assembly, and conjugate cysteine proteases such as ficin, papain, and bromelain. The immobilized proteins retain up to 100% of their proteolytic activity compared to the native ones and, moreover, are more stable in solution. The immobilized ficin retains up to 97% of its initial activity after a one-day incubation, while the native one retains only about 51%; the immobilized bromelain retains 69% of its activity after a 3-day incubation, versus 41% for the native enzyme; the immobilized papain retains 57% of its initial activity after a 7-day incubation, and the difference between the immobilized and native forms grows after 7 days. Therefore, the synthesized copolymers are promising materials for cysteine protease immobilization, prolonging their catalytic activity. | 8,258.6 | 2022-06-01T00:00:00.000 | [
"Biology"
] |
Quantum loops in radiative decays of the $a_1$ and $b_1$ axial-vector mesons
A previous model where the low-lying axial-vector mesons are dynamically generated, implementing unitarity in coupled channels in the vector-pseudoscalar ($VP$) meson interaction, is applied to evaluate the decay widths of the $a_1(1260)$ and $b_1(1235)$ axial-vector mesons into $\pi\gamma$. Unlike the case of the $a_1$, the $b_1$ radiative decay is systematically underestimated at tree level. In this work we evaluate for the first time the loop contribution coming from an initial $VP$ vertex. Despite their large superficial degree of divergence, the relevant loops can be shown to converge by using arguments of gauge invariance. The partial decay widths obtained agree very well with the experimental values within uncertainties, and show that the loop contribution is crucial in the $b_1$ case and also important for the $a_1$ case.
Introduction
The unitary extensions of chiral perturbation theory (χPT) have made it possible to extend the range of energies where hadron interactions can be studied. At the same time, they have shown that many meson and baryon resonances are dynamically generated and can be interpreted as quasibound states of pairs of hadrons in coupled channels [1]. A very well studied case is the interaction of the octet of pseudoscalar mesons [2][3][4][5][6], from which a nonet of scalar mesons is generated. Much less studied is the interaction of vector mesons with pseudoscalar mesons, where two independent works [7,8] have shown that the axial-vector mesons can be generated dynamically. This novel idea should be confronted with experiment to test the accuracy of its predictions; some of these predictions have already been tested in Ref. [9]. Contrary to other pictures, like quark models, where external sources couple to the quarks, in the dynamically generated picture one assumes that the largest weight of the wave function is due to the two-meson cloud, and consequently the coupling of external sources proceeds via the coupling to the meson components. One interesting test which brings light to this issue is the radiative decay of the resonances. This is the purpose of the present work, where we concentrate on the radiative decay of the b_1^+ and a_1^+ axial vectors into π^+γ. The a_1^+ radiative decay has been studied in different contexts; for instance, vector meson dominance is used in [10,11], relating the radiative decay to the ρπ decay of the a_1^+. Chiral Lagrangians with vector meson dominance (VMD) are also used in [12] to obtain the radiative width of a_1^+ → π^+γ. An SU(3)-symmetric Lagrangian is used in [13] to account for the strong decays of the axial-vector mesons, and by means of VMD the amplitude for a_1^+ → π^+γ is studied and related to the one of [12]. A common feature of these works is that the b_1^+ → π^+γ reaction is not discussed, and its evaluation in [13], using VMD along the same lines as a_1^+ → π^+γ, gives a decay rate substantially smaller than experiment. The b_1^+ → π^+γ decay is also neglected in the analysis of [11], citing the small rates obtained.
The rates of a_1^+ → π^+γ and b_1^+ → π^+γ are also evaluated in [14], using quark models for a_1 → πρ and b_1 → πω and VMD to relate these amplitudes to the radiative decay. It is emphasized there that, because of the factor 1/3 of the ωγ coupling relative to that of ργ, there is a reduction factor of 1/9 for the radiative decay b_1^+ → π^+γ compared to a_1^+ → π^+γ, resulting in a ratio of these two rates in contradiction with experiment (the same argument is found in [13] as responsible for the small rate of the b_1^+ → π^+γ decay).
In the present work we shall also use the tree-level VMD amplitudes but, in addition, the nature of the axial-vector mesons as dynamically generated resonances provides a strong coupling to $K^*\bar{K}$ and $\bar{K}^*K$, and the subsequent loops with these intermediate states, with the photon emitted from these constituents, should be considered. We show that the loops are very important, particularly for the b_1^+ → π^+γ decay, and that the simultaneous consideration of the tree-level VMD amplitudes and the loop contributions leads to a good description of both radiative decays.
We shall also show some technical details involving loops with vector mesons. Using arguments of gauge invariance and the Feynman parametrization, one can prove that the loops involving one vector meson and two pseudoscalar mesons are finite, in spite of their large degree of superficial divergence. This was found in [15][16][17][18] for the loops involved in the radiative decay of the φ containing three pseudoscalar mesons.
Formalism
In Ref. [8], most of the low-lying axial-vector mesons were dynamically generated from the s-wave interaction of the octet of vector mesons with the octet of pseudoscalar mesons, using the techniques of the chiral unitary theory. With the only input being a chiral Lagrangian for vector and pseudoscalar (VP) mesons and the implementation of unitarity in coupled channels, these resonances show up as poles in the second Riemann sheet of the unitarized scattering amplitudes. By evaluating the residues of the scattering amplitudes at the pole positions, the couplings of the dynamically generated axial-vector resonances to the different VP channels can be obtained. Using these couplings, we found nice agreement with the experimental VP partial decay rates, despite the fact that no parameters were fitted to experimental data for the axial-vector mesons.
In view of the dominant contribution of the VP channels to the building up and decay of the axial-vector resonances, our starting point for studying the radiative decay of the b_1 and a_1 is the transition of these resonances into the allowed VP channels, attaching the photon to the relevant meson lines and vertices. The first kind of mechanisms considered are the tree-level vector meson dominance (VMD) contributions, shown in Fig. 1a). Furthermore, the radiative decay can also proceed through loops of the VP pair with the photon emitted from either the pseudoscalar or the vector-meson leg, Fig. 1b) and c). A diagram with the photon directly emitted from the VPP vertex is needed to ensure gauge invariance, but we will explain later on (after Eq. (26)) that, using arguments of gauge invariance, we do not need to evaluate it directly. On the other hand, another kind of loop, containing the VPγ and VVP vertices, is also possible; however, such loops involve two abnormal intrinsic-parity vertices, and hence their contribution should be rather small compared to those already considered. This is indeed the case for the analogous loops present in the radiative decay of the φ meson, as found in [19]. The intermediate VP states possible in the loops are those used in Ref. [8] to build up the axial-vector mesons: for the b_1, $\frac{1}{\sqrt{2}}(K^*\bar{K} + \bar{K}^*K)$, φπ, ωπ, and ρη; for the a_1, $\frac{1}{\sqrt{2}}(K^*\bar{K} - \bar{K}^*K)$ and ρπ. Note, however, that the coupling of φπ, ωπ, and ρη to the final pion violates G-parity, and hence these channels do not contribute to the b_1 radiative decay. Thus, only the diagrams in Fig. 1 must be evaluated.
Let us start with the evaluation of the tree-level contributions. For the Vγ vertex we use the VMD amplitude, with λ_V = 1, 1/3, −√2/3 for ρ, ω, and φ, respectively; F_V = 156 ± 5 MeV [19]; M_V the vector-meson mass; ε_V and ε the vector-meson and photon polarization vectors, respectively; and e taken positive.
The axial-vector meson coupling to VP can be expressed as in Ref. [8], where ε_A is the axial-vector meson polarization vector. The couplings g_{AVP} are obtained in Ref. [8] by evaluating the residues at the poles of the VP unitarized scattering amplitudes and are given in Table VII of that reference. Note that in Ref. [8] the couplings are given in the isospin basis and for states of given G-parity; hence the appropriate projection onto the charge basis has to be done. In Ref. [8] no theoretical errors were quoted for these AVP couplings. However, for the purpose of evaluating the theoretical uncertainty in the present calculations, we have estimated the uncertainties in these couplings in the following way. For the b_1 case, we considered the change in the couplings induced by a reasonable uncertainty of 10% in the only free parameter of the model, the subtraction constant a ∼ −1.85 (see Ref. [8] for details); we consider further uncertainties from changing f, as explained after Eq. (7). For the a_1 case, the mass obtained in Ref. [8] was 1011 MeV, somewhat below the nominal PDG mass of 1230 MeV [20]. (Note, however, that the total width quoted by the PDG is 250-600 MeV, which gives an idea of the uncertainty in the mass.) In Ref. [8] the mass was obtained with a = −1.85. If we use a = −1.1 and f = f_K, we obtain a mass of 1080 MeV, closer to the nominal one, and it is not easy to obtain a larger mass. In this case the coupling to ρπ, the dominant channel, increases by ∼25%. From this we get an idea of the uncertainties in the a_1 couplings. This leads to the radiative decay amplitudes for the tree-level diagrams (Fig. 1a)). In the evaluation of the loops, an apparent problem arises given the large superficial divergence due to the loop-momentum dependence of the vertices and the q^μq^ν/M_V^2 terms of the vector-meson propagators. However, we will explain in detail how one can circumvent this problem by invoking gauge invariance and using a suitable Feynman parametrization of the loop integrals.
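The explicit coupling expression did not survive in this copy. It presumably has the scalar-product form used throughout the paper (cf. the remark after Eq. (26) that these vertices are of the ε′ · ε type), so a plausible reconstruction, up to phase conventions, is

$$t_{A \to VP} = g_{AVP}\, \epsilon_A \cdot \epsilon_V .$$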
Since the only external momenta available are P (the axial-vector meson momentum) and k (the photon momentum), the general expression of the amplitude can be written as a sum of the independent tensor structures built from g^{μν}, P, and k. Note that, due to the Lorentz conditions ε_{Aμ}P^μ = 0 and ε_ν k^ν = 0, all the terms in Eq. (4) vanish except for the a and d terms. On the other hand, gauge invariance implies that T^{μν}k_ν = 0, from which one obtains a relation between a and d. This is valid in any reference frame; however, in the axial-vector meson rest frame, and taking the Coulomb gauge for the photon, only the a term survives in Eq. (4), since the three-momentum P vanishes and ε^0 = 0. This means that, in the end, we only need the a coefficient for the evaluation of the process. However, the a coefficient can be obtained from the d term thanks to Eq. (6). The advantage of doing this is that few mechanisms contribute to the d term and, for dimensional reasons, the number of powers of the loop momentum in the numerator is reduced, as will be clearly manifest in the discussion below.
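The tensor structures referred to as Eqs. (4)-(6) are not legible in this copy. A standard reconstruction consistent with the surrounding discussion is

$$T^{\mu\nu} = a\,g^{\mu\nu} + b\,P^{\mu}P^{\nu} + c\,P^{\mu}k^{\nu} + d\,k^{\mu}P^{\nu} + e\,k^{\mu}k^{\nu},$$

for which the Lorentz conditions remove all but the a and d structures, while gauge invariance, $T^{\mu\nu}k_{\nu} = 0$ with $k^2 = 0$, yields

$$b = 0, \qquad a = -(P\cdot k)\,d ,$$

which would be the relation used as Eq. (6).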
Let us start by evaluating the diagram of Fig. 1b), with the photon emitted from the pseudoscalar leg, for the b_1^+ → π^+γ decay channel (the other channels are totally analogous). We will call this diagram type-b, in contrast to type-c, in which the photon attaches to the vector-meson leg and which will be evaluated later on (see Fig. 1).
For the evaluation of this diagram we also need the VPP and PPγ vertices. The VPP Lagrangian used is the standard one (see Ref. [21] for the normalizations used), where V^μ and P are the usual SU(3) matrices containing the vector and pseudoscalar mesons. In Eq. (7), ⟨...⟩ means the SU(3) trace and g = −M_V G_V/f², where M_V is the vector-meson mass, G_V = 55 ± 5 MeV [19], and f is the pion decay constant, which we vary from 93 MeV to 1.15 × 93 MeV to account for the uncertainty due to the use of f instead of f_K, which could actually enter some of the expressions. These uncertainties in the parameters, together with the uncertainties in the other couplings of the theory, will be taken into account later in the evaluation of the theoretical uncertainties of our results. The PPγ vertex is easily obtained from the lowest-order ChPT Lagrangian [22], where the photon field appears in the covariant derivative.
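The Lagrangian itself did not survive extraction; the standard form consistent with the normalization of Ref. [21] and with the definitions that follow would plausibly be

$$\mathcal{L}_{VPP} = -\,i\,g\,\langle V^{\mu}[P,\,\partial_{\mu}P]\rangle, \qquad g = -\,\frac{M_V G_V}{f^{2}},$$

where ⟨...⟩ denotes the SU(3) trace, as stated in the text.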
The amplitude for the type-b mechanism (see Fig. 1b)) for b_1^+ → π^+γ is given by Eqs. (9) and (10). In Eq. (9), g_{b_1 K^*K̄} is the coupling of the b_1 to the positive G-parity combination of $K^*\bar{K}$ and $\bar{K}^*K$, as defined in Ref. [8]. Looking at Eq. (10), one can see that, in the worst case, the loop integral as written is quadratically divergent. At this point we can take advantage of the fact that we only need to evaluate the contribution to the d term of Eq. (5), as explained above. In fact, the most divergent term, the one with (P − q)², does not contribute to that term. Indeed, splitting this factor, the first two terms of the right-hand side give an integral that does not depend explicitly on P, since the (P − q)² − m_{K*}² factor cancels the propagator in which P appears; therefore this integral cannot contribute to the d term. Hence, for the purpose of evaluating the k^μP^ν contribution, Eq. (10) can be rewritten with one power of q fewer than in its original form. Next we use the Feynman parametrization to evaluate this integral, and we will see that the contribution to the d term is convergent.
We use the Feynman parametrization identity of Eq. (14); setting the propagators accordingly and performing the change of variable of Eq. (16), we arrive at Eq. (17), in which all the terms that contribute to the d coefficient are finite. Hence, in the end, the evaluation of the type-b amplitude is completely finite, giving Eq. (19). In the derivation of Eq. (19) from Eq. (17) we have used the relation between the a and d coefficients given by Eq. (6). We have also used the symmetric-integration result of Ref. [23] and the fact that terms with odd powers of q′ vanish upon integration, by symmetry. It is also worth explaining a subtle cancellation that occurs between two logarithmically divergent pieces in the derivation of Eq. (19). In Eq. (17), apart from the terms that provide a finite contribution to the d term, already included in Eq. (19), there are two more terms that contribute to the d term and are logarithmically divergent. One of them goes as y k^μ q′^ν q′^α (k − P)_α; after the q′ integration this gives a term proportional to −y k^μ P^ν, since q′^ν q′^α yields a result proportional to g^{να}. The other goes as q′^μ (1 − x) P^ν q′^α (k − P)_α and gives a term proportional to (1 − x) k^μ P^ν after the q′ integration, with the same proportionality coefficient as in the first case. After the x and y integrations, these two terms give the same result with opposite signs; hence these two possible sources of divergent contributions to k^μ P^ν cancel exactly among themselves. Therefore the expression of the amplitude in Eq. (19) is totally finite. It is worth stressing again the power of the technique used here to evaluate the amplitude coming from the type-b loops: despite starting from a quadratically divergent loop, we have been able to remove all the divergences exactly.
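Neither the identity invoked as Eq. (14) nor the symmetric-integration result quoted from Ref. [23] is reproduced in this copy. Their standard forms for three propagators are, plausibly,

$$\frac{1}{abc} = 2\int_0^1 dx \int_0^x dy\; \frac{1}{\left[a + (b-a)x + (c-b)y\right]^{3}}, \qquad
\int d^4q'\; q'^{\mu}q'^{\nu}\, f(q'^2) = \frac{g^{\mu\nu}}{4}\int d^4q'\; q'^2 f(q'^2).$$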
At this point, it is worth noting that the numerical contribution of the term proportional to 1/m_{K*}² in Eq. (19) is about 5% of the other term. This term comes from the last factor of Eq. (17), which is essentially due to the p_V^μ p_V^ν/M_V² part of the vector-meson propagator. Hence, the 1/M_V² terms can be safely neglected in the evaluation of the type-c diagram. This approximation is expected to be very accurate since, advancing some results, the type-c diagrams turn out to be very small compared to the type-b ones, so the 1/M_V² piece is a small part of a diagram that contributes little to the radiative decay width. Nonetheless, we will include this uncertainty later in the theoretical error analysis. Now we evaluate the amplitude corresponding to the type-c diagram, Fig. 1c). In this case we also need the VVγ vertex, which we obtain by gauging the charged vector-meson kinetic term. After neglecting the 1/M_V² term of the vector-meson propagator for the reasons explained above, keeping only the terms contributing to k^μP^ν, performing the Feynman parametrization, and using the relation of Eq. (6), the final expression of the type-c amplitude is given by Eqs. (24) and (25). Despite the smallness of the terms coming from the 1/M_V² part of the propagator in the type-b mechanism, it is worth mentioning a technicality regarding the cancellation of divergences had we evaluated these terms in the type-c loop. If one keeps the 1/M_V² terms in the vector-meson propagators, one finds that the 1/M_V⁴ terms do not contribute to the d k^μP^ν structure, and a logarithmic divergence proportional to 1/M_V² remains. This divergence should be expected to cancel had one included suitable tadpoles canceling the off-shellness of the vector-meson momenta in the loops, in a similar way to what was shown in [24], where the factorization of the q² terms in the loop was justified. For the same reasons, this factorization was also used in [8]. Should one take this prescription here, the 1/M_V² terms would also be very small. In any case, we make a conservative estimate of the errors induced by treating these terms one way or the other: knowing that the contribution of the type-c loop diagram to the width is one order of magnitude smaller than that of the type-b one, and that the changes found for the type-b loop are of the order of 5%, a 10% error estimate for the radiative width from these considerations is safe.
Adding the type-b and type-c loops, we obtain Eq. (26). Another possible diagram, with the photon directly emitted from the VPP vertex, which is needed to ensure the gauge invariance of the set of diagrams, gives no contribution to the d coefficient, since the vertices involved are both of the type ε′ · ε, with ε′ either the vector or the axial-vector meson polarization vector. Therefore there is no k-momentum dependence in the vertices or in the propagators, and the integration cannot produce a contribution to the k^μP^ν structure.
Concerning the a_1^+ → π^+γ decay, the evaluation is totally analogous to the previous one. The amplitude from the $K^*\bar{K}$ loops, adding both type-b and type-c mechanisms, is given by Eq. (27), where one has to replace m_{b_1} by m_{a_1} in the evaluation of s and s′ in Eqs. (18) and (25). Note the relative minus sign in the terms with s′ of Eq. (27) with respect to Eq. (26). This is due to the fact that, as mentioned above, the b_1 couples to the positive G-parity combination $(K^*\bar{K} + \bar{K}^*K)$ while the a_1 couples to the negative G-parity combination $(K^*\bar{K} - \bar{K}^*K)$.
In the a_1 case there is also the possibility of having ρ and π in the loops. This mechanism gives Eq. (28), where one has to replace m_{b_1} by m_{a_1}, m_{K*} by m_ρ, and m_K by m_π in the evaluation of s and s′ in Eqs. (18) and (25).
The expression for the radiative decay width in the axial-vector meson rest frame is given by Eq. (29), where M_A stands for the mass of the decaying axial-vector meson and T is the sum of the amplitudes from the tree-level and loop mechanisms with the ε_A · ε factor removed. On the other hand, given the large width of the axial-vector mesons, particularly of the a_1, it is appropriate to fold the expression for the width with the mass distribution of the axial-vector resonance. Hence the final result is obtained from Eq. (30), where Θ is the step function, M_A and Γ_A are the nominal mass and total width of the axial-vector meson from the PDG [20], and s_A^{th} is the threshold for the dominant A decay channels. The errors quoted in the PDG for these magnitudes are taken into account in the error analysis. In Eq. (30), N is a normalization factor in the convolution integral, obtained from the same integral as in Eq. (30) with the width factor set to unity. Once the formalism and the different vertices have been exposed, we are in a position to address the possible contribution of mechanisms involving π-a_1 mixing. The mixing of axial-vector and pseudoscalar mesons (or of vectors and scalars) is possible through the longitudinal component of the spin-1 propagator, P^μP^ν/P² [25][26][27][28][29]. In the Appendix we show that the diagrams corresponding to the present problem involving this mixing vanish in our formalism. Table 1: Different contributions to the radiative decay widths (all units in keV).
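Equations (29) and (30) are garbled in this copy. For a two-body decay A → πγ, the rest-frame width has the generic form

$$\Gamma = \frac{k}{8\pi M_A^{2}}\,\overline{\sum}|T|^{2}, \qquad k = \frac{M_A^{2} - m_{\pi}^{2}}{2M_A},$$

and the folding with the resonance mass distribution is a standard spectral convolution,

$$\overline{\Gamma} = \frac{1}{N}\int ds\; \Gamma(\sqrt{s}\,)\left(-\frac{1}{\pi}\right)\mathrm{Im}\,\frac{1}{s - M_A^{2} + iM_A\Gamma_A}\;\Theta(s - s_A^{th});$$

both are reconstructions under these assumptions rather than verbatim restorations of the original equations.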
In Table 1 we show the different contributions to the partial decay widths coming from the different mechanisms considered in the calculation. The theoretical errors in the final results have been obtained by Monte Carlo sampling of the parameters of the model within their uncertainties, considering the uncertainties in the couplings discussed above. Note, however, that we have no freedom in the theory once the relevant parameters (actually a subtraction constant) are fixed. To these errors we add in quadrature the 10% from the arguments above concerning the 1/M_V² terms. From the results one can see that the tree-level contribution for the a_1 accounts for most of the decay width. However, for the b_1 the tree level by itself accounts for only about 1/3 of the experimental result, despite having two diagrams, φ and ω, contributing at tree level. The smallness of the tree-level contribution comes from the −√2/3 and 1/3 factors of the φ and ω couplings to the photon, in comparison to the factor 1 for the ρ case present in the a_1 decay, and also from the fact that the a_1ρπ coupling obtained with the chiral unitary model [8] is larger than the b_1φπ and b_1ωπ couplings.
Note also the constructive interference between the φ and ω diagrams, despite the different signs of the coefficients of their Vγ couplings. This is because the couplings of the b_1 to ρπ and ωπ also have opposite relative signs. These relative signs are a genuine, non-trivial prediction of the chiral unitary model of Ref. [8].
Regarding the loop contributions, the total loop results for the b_1 and a_1 decays have comparable absolute values. In the b_1 case, the loops increase the decay rate, which then accounts very well for the observed experimental result after interfering constructively with the tree-level mechanism. Note that the most important loop contribution comes from the type-b mechanisms (see Fig. 1b)). For the a_1 case in particular, the dominant loop contribution comes from the ρπ loops.
It is worth stressing the important role of the interferences among the different terms in producing the final result. The interferences depend essentially on the signs of the couplings and on the imaginary parts of the loop functions. The values and relative signs of the AVP couplings are non-trivial predictions of the chiral unitary model of Ref. [8]; hence, the agreement of our calculated radiative decay widths with experiment supports the model of Ref. [8] and, therefore, the dynamical nature of these axial-vector resonances.
Conclusions
We have studied here the radiative decays a_1^+ → π^+γ and b_1^+ → π^+γ, which had proved problematic in several previous approaches. The novelty that allowed us to obtain a simultaneously good description of both decays was the treatment of the a_1 and b_1 axial vectors as dynamically generated resonances within the context of unitarized chiral perturbation theory. Because of this, we found important loop contributions that were essential in reproducing the experimental values. Technically, it is particularly rewarding that, by invoking gauge invariance, the calculation is simplified and the relevant loops are shown to be convergent despite their large superficial degree of divergence. The nature of the resonances as quasibound states of pseudoscalar and vector mesons translated into a loop contribution that provided a substantial part of the radiative amplitudes, particularly for the b_1 radiative decay.
One might think that the loop contributions could have been considered without resorting to the concept of a dynamically generated resonance, by simply taking the couplings of the resonance to its decay channels. However, for the important case of the b_1, we found that the contribution of the ωπ loop vanishes and that the relevant contribution comes from $K\bar{K}^*$ and $K^*\bar{K}$, which is a closed decay channel (up to the widths of the states) and for which there is no useful experimental information. The chiral unitary approach provides such couplings directly, with definite signs, since these states are part of the building blocks of the resonances in the coupled-channel approach. Similarly, with a phenomenological Lagrangian, like that of Ref. [13], one could obtain such couplings, but these are based on SU(3) symmetry, which is actually broken when resonances are generated dynamically with a nonperturbative approach like that of Ref. [8]. One example of relevance to the present case is that, with the phenomenological Lagrangian, the b_1 → φπ coupling is forbidden, while in our case the nonperturbative treatment of the problem, involving many iterated loops, generates a finite coupling that is dominant in the tree-level contribution to b_1 → πγ (see Table 1).
The fact that we obtain a good description of the two radiative decay rates for the first time provides support for the idea of the axial-vector mesons as dynamically generated states within chiral dynamics. Other tests could follow as more, and more accurate, information on the axial-vector mesons becomes available, and the findings of the present work should serve to stimulate efforts in this direction.
5 Appendix: Mechanisms related to the mixing of axial-vector and pseudoscalar mesons
In addition to the terms discussed so far in this paper, we could have terms involving the mixing of axial-vector and pseudoscalar mesons [25][26][27][28][29] through the longitudinal component of the axial-vector resonance. In our case this occurs through a_1-π mixing, allowed by G-parity. The possible mechanisms involving this mixing in our scheme are shown in Fig. 2. However, we shall demonstrate here that they vanish in our formalism. A free massive spin-1 meson propagator can be written as
$$\frac{-g^{\mu\nu} + P^{\mu}P^{\nu}/M^{2}}{P^{2} - M^{2}} = \frac{-g^{\mu\nu} + P^{\mu}P^{\nu}/P^{2}}{P^{2} - M^{2}} + \frac{P^{\mu}P^{\nu}}{P^{2}M^{2}},$$
where in the second form a separation has been made into a transverse part, proportional to −g^{μν} + P^{μ}P^{ν}/P², and a longitudinal one, proportional to P^{μ}P^{ν}. Note that the pole of the particle appears only in the transverse part.
In our formalism, the axial-vector resonance is dynamically generated from the VP interaction and is associated with the poles of the scattering matrix. In Appendix B of Ref. [8] we performed explicitly the separation into transverse and longitudinal parts, with the result that the poles appear only in the transverse part of the amplitude. There we found the form of Eq. (32) for T, where P is the total momentum of the VP system and ε, ε′ are the polarizations of the two vector mesons. In Eq. (32), c is very small compared to b and of opposite sign, such that there are no poles in the longitudinal part. If we also consider a_1-π mixing, we have to add terms like those in Fig. 3 to our VP amplitude. The loop function appearing in Fig. 3 has the structure J(P²)P^μ. The sum of the terms in Fig. 3 renormalizes the longitudinal part of Eq. (32), which becomes
$$V\,\frac{c}{1 - c - \dfrac{\beta J^{2}(P^{2})}{P^{2} - m^{2}}}\,\frac{P^{\mu}P^{\nu}}{P^{2}} = V\,\frac{c\,(P^{2} - m^{2})}{(1 - c)(P^{2} - m^{2}) - \beta J^{2}(P^{2})}\,\frac{P^{\mu}P^{\nu}}{P^{2}},$$
where m is the pion mass. The amplitude then has the unphysical feature of exhibiting a pole related to the pion pole (close to m² if β is small). The way to remove this unphysical behaviour of the longitudinal part is to demand that J(P² = m²) = 0, a condition which also appears in other formalisms [28]. In other works [29] it is shown explicitly that the renormalized full vector-meson propagator contains only one pole, which does not show up in the longitudinal part.
The contribution of the mechanisms of Fig. 2 in our formalism would have to proceed through the a_1 pole of the amplitudes of Fig. 3. The diagram of Fig. 4a), with the photon emitted either from the pseudoscalar or from the vector in the loop, is proportional to J(P_π² = m²) and hence vanishes.
Figure 1: Feynman diagrams contributing to the radiative axial-vector meson decay.
Figure 3: Extra contributions to the VP → VP interaction involving the a_1-π mixing.
Figure 4: Mechanisms of Fig. 2 in the dynamical formalism | 7,179.2 | 2006-11-07T00:00:00.000 | [
"Physics"
] |
Raman sensitivity to crystal structure in InAs nanowires
We report a combined transmission electron microscopy and Raman spectroscopy study of InAs nanowires. We demonstrate that the temperature-dependent behavior of the optical phonon energies can be used to determine the relative wurtzite fraction in the InAs nanowires. Furthermore, we propose that the interfacial strain between the zincblende and wurtzite phases along the length of the wires manifests itself in the temperature evolution of the phonon linewidths. From these studies, temperature-dependent Raman measurements emerge as a non-invasive method to study polytypism in such nanowires.
Arsenide-based III-V semiconductors are usually found in the zinc blende (ZB) phase. However, it is now well established that, by controlling the growth parameters, it is possible to obtain nanowires (NWs) of this class of materials with the wurtzite (WZ) phase as one of the main crystalline structures. [1][2][3][4][5] In general, high-resolution transmission electron microscopy and electron diffraction measurements are used to detect the crystal structures of NWs. 3,4,5,6 In addition, Raman spectroscopy and imaging have been widely used to probe the structural evolution of InAs NWs fabricated under varying growth conditions. 1,7,8 However, no experiments to date have been able to extract the crucial information on the interfacial strain between the ZB and WZ phases along the length of the NW. Here we address this issue by analyzing the Raman linewidth of the phonon mode. In addition, we prescribe a technique to compare the relative WZ phase fraction in NWs using temperature-dependent Raman measurements. This approach offers an easy alternative to TEM measurements for comparing the polytypism of NWs.
Aligned InAs NWs are grown on InAs {111} substrates using the chemical beam epitaxy technique, varying the group III/V flux ratio to tune the WZ and ZB fractions of the crystal phases along their length. Sample NW1082 (NW1249) was grown at 425 ± 5 °C (430 ± 5 °C) with MO line pressures of 0.3 and 1.0 (2.0) Torr for TMIn and TBAs. The crystal structure of the NWs is studied using a Zeiss Libra 120 transmission electron microscope (TEM). Raman scattering measurements are carried out in back-scattering geometry using a micro-Raman spectrometer equipped with the 488 nm (2.55 eV) line of a 5 mW air-cooled Ar+ laser as the excitation light source, a spectrometer (model TRIAX550, JY), and a CCD detector. Raman spectra of all samples, over the temperature range between 120 K and 230 K, were recorded using a sample cell (Model Link-600S, JY). For the TEM (Raman) measurements the NWs were mechanically transferred from the substrate to 300-mesh carbon-film-coated copper TEM grids (a silicon substrate). In order to measure the length fractions of WZ and ZB in both samples in a comparable way, we collected dark-field TEM images of the NWs oriented along the [2-1-10] zone axis with the objective diaphragm placed as indicated by the white circle in Fig. 1(d). The diaphragm selects two WZ spots and one ZB spot belonging to one of the two possible twins of the ZB structure.
In this configuration the WZ and ZB structures show a strong and well-defined contrast difference, and a ZB segment appears brighter or darker than WZ depending on whether it belongs to the twin individual that gives the spots within the diaphragm or to the other individual. From this analysis (upper corner of Fig. 1), the WZ length fractions of NW1082 and NW1249 are measured to be 95% and 83%, respectively. Hence, the ratio of the WZ phase fractions of NW1082 and NW1249 is 1.14.
Representative Raman spectra of bulk InAs and vertically aligned as-grown InAs NWs (NW1082), recorded at 140 K in the z(xx)z̄ and z(xy)z̄ scattering geometries, are shown in Fig. 2(a) and (b). As mentioned earlier, information on both the ZB and WZ phases is present in the unresolved TO modes of the NWs. The variation of ω_TO over the temperature range between 120 K and 230 K for bulk InAs and the NWs is plotted in Fig. 2(c). The slopes are −0.0090, −0.0052 and −0.0057 cm−1/K for bulk InAs, NW1082 and NW1249, respectively.
The pure anharmonic contribution to the phonon self-energy, together with the thermal expansion of the solid, determines the change in phonon frequency ω(T) with temperature. [11][12][13] The temperature derivative of ω(T) in the higher temperature range (yet below the Debye temperature) is independent of temperature. 14 The variation of ω_TO with temperature for bulk InAs (black symbols in Fig. 2(c)) must be related to the anharmonicity and thermal properties of the ZB phase of InAs. As both phases are present in NW1082 and NW1249, for a particular NW sample with mixed phase the variation of ω_TO with temperature can be approximated as a phase-fraction-weighted combination, dω_TO/dT|_NW ≈ x (dω_TO/dT)_WZ + (1 − x)(dω_TO/dT)_ZB, where x is the WZ length fraction. We now proceed to explain how the interfacial strain between the two phases in InAs NWs manifests in the variation of the width of the LO phonon mode (Γ_LO) with temperature. The changes in Γ_LO with temperature for bulk InAs and the NWs, below the Debye temperature (255 K), are plotted in Fig. 3(a)−(c). The anharmonicity in the vibrational deformation potential leads to an increase in Γ_LO with temperature, as observed for bulk InAs (Fig. 3(a)). For NW1082 and NW1249 the variation of Γ_LO(T) with temperature is observed to be very different. Moreover, we find that the value of Γ_LO at 120 K is ~5 cm⁻¹ for bulk InAs, while the value at 120 K for NW1082 (6 cm⁻¹) is less than that for NW1249 (8 cm⁻¹), though both are higher than the bulk value. It is well known that the width of a Raman line is strongly related to the strain in the layer. 15 The larger value of Γ_LO in the NWs is, most likely, the manifestation of interfacial strain between the two phases along the NW. Having a larger number of interfaces, as seen in Fig. 1, the contribution of the interfacial strain between the two phases is expected to be larger in NW1249 than in NW1082. We believe that the relaxation of the strain with increasing temperature overshadows the effect of anharmonicity and leads to a negative slope of dΓ_LO/dT for NW1249.
It appears that the effect of anharmonicity nearly cancels the effect of strain relaxation with temperature for NW1082. At 250 K the width is observed to be the same as that of bulk InAs (~6 cm⁻¹) for both NWs. Beyond 250 K the variation of Γ_LO with T for the NWs and the bulk is observed to be very similar in nature. We note that, in the analysis of the ω_TO mode above used to estimate the relative WZ fraction, the shift of one TO component due to compressive strain in one phase nearly nullifies the opposite shift of the TO component due to tensile strain in the other phase. Recall that the observed TO peak is the convolution of the TO modes from the WZ and ZB phases, which could not be resolved. Thus, the variation of the unresolved mean ω_TO with temperature is dominated mostly by the effects of anharmonicity and thermal expansion, as discussed.
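As a numerical consistency check of the linear-mixing picture above, the ratio of WZ fractions can be estimated from the slope deviations of the NWs relative to the pure-ZB bulk value alone. The sketch below uses the slopes quoted in the text; the mixing model itself is the approximation stated above, not an independently established result:

```python
# Slopes of omega_TO vs T (cm^-1/K), from Fig. 2(c) as quoted in the text.
slope_zb_bulk = -0.0090   # bulk InAs, pure ZB reference
slope_nw1082 = -0.0052
slope_nw1249 = -0.0057

# Linear mixing: slope_NW - slope_ZB ≈ x * (slope_WZ - slope_ZB), with x the
# WZ length fraction, so the unknown WZ slope cancels in the fraction ratio.
ratio = (slope_nw1082 - slope_zb_bulk) / (slope_nw1249 - slope_zb_bulk)
print(f"x(NW1082)/x(NW1249) ≈ {ratio:.2f}")   # ≈ 1.15, vs 1.14 from dark-field TEM
```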
In conclusion, we have demonstrated that Raman scattering measurements, like TEM, can be used as a spectroscopic tool to estimate the relative WZ fraction in III-V semiconductor NWs with mixed phases. The manifestation of interfacial strain between the two phases in the crystal structure is revealed through analysis of the Raman line profile. | 1,712.2 | 2011-12-24T00:00:00.000 | [
"Physics",
"Chemistry"
] |
Analysis of Iterative Erasure Insertion and Decoding of FH/MFSK Systems without Channel State Information
We analyze the symbol measures for iterative erasure insertion and decoding of a Reed-Solomon coded SFH/MFSK system over jamming channels. In contrast to conventional erasure insertion schemes, iterative schemes do not require any preoptimized threshold or channel state information at the receiver. We confirm the performance improvement using a generalized minimum distance (GMD) decoding method with three different symbol measures. To analyze the performance, we propose a new analysis framework based on the "trapped-error" probability. From the analysis and simulation results, we show that ratio-based GMD decoding has the best performance among the one-dimensional iterative erasure insertion and decoding schemes.
Introduction
To mitigate the threat of jamming attacks, Reed-Solomon (RS) coded frequency-hopping spread-spectrum systems with frequency shift keying (FSK) have been widely applied to military communication systems [1]. Although these systems can reduce the effect of jamming attacks, some signals are corrupted when the jamming signal collides with the hopping frequency. To recover information from the contaminated signals, various channel codes have been applied with an interleaver [1]. In many previous studies [2][3][4][5][6][7][8][9][10][11], erasure insertion schemes with an error and erasure decoder have been used to increase system reliability under this kind of jamming. The main aim of erasure insertion is to erase the corrupted received symbols (or dwells) using a threshold test. In most cases, the receiver should select an optimal threshold based on the system settings and the channel state at the time. If the receiver knows perfect channel state information, the performance can be improved using soft-decision-based soft-decoding algorithms [12][13][14][15][16][17][18]. However, it is very difficult to estimate the channel state information at the receiver (CSIR) when the channel is jammed [7].
In this paper, we consider an erasure insertion scheme that does not require any CSIR. Using appropriate symbol measures, we can improve the performance using several well-known iterative decoding algorithms [19][20][21][22]. These algorithms require only the order of reliability of the received symbols, not an exact soft measure of the symbols. Specifically, we consider "output," "ratio," and "sum" measures using generalized minimum distance (GMD) decoding. Some other decoding algorithms, such as [23], can be applied after the iterative erasure insertion, but this is beyond our scope. The iterative erasure insertion and decoding scheme does not guarantee independent processing of the received symbols; therefore, we propose a new analysis framework in terms of the trapped-error probability. Finally, we confirm that the ratio measure works best and that its concatenated scheme with another measure improves the performance when the jamming signal is strong. In [8], the authors proposed another iterative erasure insertion and decoding scheme, but they considered a partial-band jamming channel and standard bounded-distance decoding. In this paper, we cover both partial-band and partial-time jamming, and we exploit GMD decoding rather than bounded-distance decoding. This paper is organized as follows. In Section 2, we describe the system model. In Section 3, we review several conventional erasure insertion schemes and describe the iterative erasure insertion and decoding schemes. In Section 4, the trapped-error probability of the measures is derived. In Section 5, we compare the trapped-error probabilities and confirm the performance using computer simulations. Finally, concluding remarks are given in Section 6.
System Model
In this paper, we consider an RS coded slow frequency-hopping M-ary FSK system over a partial-band (or partial-time) jamming channel. At the transmitter, K information symbols are encoded by a rate K/N RS encoder, where N is the number of codeword symbols. It is well known that (N, K) RS codes can recover the received word if e + 2v < N − K + 1 and bounded-distance decoding is applied, where e and v are the numbers of erasures and errors in the received word, respectively. The encoded codeword is represented by c = (c_0, c_1, ..., c_{N−1}), where c_i, i ∈ {0, ..., N − 1}, is the i-th codeword symbol. We assume that an M-ary FSK modulated symbol represents the single corresponding RS codeword symbol over GF(M), where M = 2^b and b is a positive integer. Furthermore, we assume N = M − 1, because the conventional RS codes over GF(M) have length N = M − 1.
The system model is shown in Figure 1. The jamming signal is inserted into the frequency domain with fraction (jamming duty ratio) ρ, which is the ratio of the jamming signal bandwidth to the spread-spectrum bandwidth. We assume that the partial-band jamming signal has a Gaussian distribution with zero mean and N_J/2 = σ_J² power spectral density in the jamming bandwidth. In the figure, a represents the channel coefficient of the transmitter-receiver channel. If the transmitter-receiver channel is modeled as an additive white Gaussian noise (AWGN) channel, then the channel coefficient is set to a = 1. The noise signal is modeled by a Gaussian distribution with zero mean and N_0/2 = σ² power spectral density over the entire frequency domain.
Throughout this paper, we assume an ideal interleaver. From this assumption, every RS codeword symbol is transmitted over a different frequency hop, and these symbols are independently interfered with by the jamming signal with probability ρ. This assumption (and our result) covers every case in which a received symbol is independently interfered with probability ρ, such as partial-band and partial-time jamming. Based on this assumption, we will discuss symbols that are independently jammed with probability ρ. Let s(t) = √(2E_s/T_s) cos(2π(f_h + f_i)t + θ) be a general form of the transmitted signal, where E_s, T_s, and f_h are the symbol energy, symbol duration, and the selected hopping frequency, respectively. The orthogonal frequency f_i = iΔf, for i ∈ {0, ..., M − 1}, is the selected frequency associated with the transmitted symbol, where Δf = 1/T_s. If the transmitted symbol is contaminated by a jamming signal, the received signal r(t) is expressed as r(t) = s(t) + J(t) + n(t), where J(t) and n(t) are the jamming signal and the Gaussian noise, respectively. A uniform random phase is denoted by θ.
The non-coherent MFSK demodulator calculates e_{i,k}, the detector output for the i-th symbol of the codeword and the k-th MFSK tone, as in [24]. The symbol index can be omitted if it is obvious from the context; thus, e_k denotes the value e_{i,k} in this case. After deinterleaving, all of the values e_{i,k}, for 0 ≤ i ≤ N − 1 and 0 ≤ k ≤ M − 1, are input to the "erasure insertion" block. This block erases several symbols according to a selected rule, which will be described in Section 3.
Overview of Erasure Insertion Schemes
3.1.1. Conventional Schemes. We first briefly introduce the three conventional erasure insertion schemes: the ratio threshold test (RTT), the output threshold test (OTT), and the maximum output-ratio threshold test (MO-RTT) [4][5][6][7]25]. Let max{·} and max₂{·} represent the first and second maximum values of a given set, respectively. For each i from 0 to N − 1, the i-th received symbol is erased by RTT if max₂{e_k}/max{e_k} > θ_R, where θ_R is a given threshold. OTT erases the symbol if max{e_k} > θ_O, for a given threshold θ_O. Sometimes the OTT rule is applied as max{e_k} < θ_O [6]; however, this reversed inequality is not considered in this paper. We note that the conventional RTT and OTT are one-dimensional erasure insertion schemes, because the decision is made using a single measure. On the other hand, MO-RTT is a two-dimensional erasure insertion scheme combining OTT and RTT. MO-RTT generally exhibits better performance than RTT or OTT with their optimal thresholds. Before we discuss the problems of the conventional schemes, we introduce a new threshold test scheme, the sum-based threshold test (STT). This test erases the symbol if ∑_{k=0}^{M−1} e_k > θ_S, where θ_S is a given threshold.
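For concreteness, the three one-shot threshold tests can be summarized as below. This is a minimal sketch, assuming `e` is the length-M vector of non-coherent detector outputs for one received symbol and the `theta_*` thresholds are the preoptimized values discussed next; the function names are illustrative, not from the cited papers:

```python
import numpy as np

def rtt_erase(e, theta_r):
    """RTT: erase if the second-largest tone output is close to the largest
    (a large ratio suggests a jammed dwell)."""
    s = np.sort(e)
    return s[-2] / s[-1] > theta_r

def ott_erase(e, theta_o):
    """OTT: erase if the peak tone output is suspiciously large."""
    return np.max(e) > theta_o

def stt_erase(e, theta_s):
    """STT: erase if the total received energy across all M tones is too large."""
    return np.sum(e) > theta_s
```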
Threshold test-based conventional schemes have used one-time erasure insertion and bounded-distance decoding with preoptimized thresholds. The optimum thresholds depend on the channel state; therefore, the receiver should know the channel state information at the time. Because of the difficulty of obtaining perfect CSIR over jamming channels, we consider the iterative erasure insertion scheme, which does not require any CSIR.
Iterative Erasure Insertion and Decoding Schemes.
To increase the error correction capability of RS codes, many iterative decoding algorithms have been proposed [19][20][21][22][23]. For an unjammed channel, we can estimate the channel state information using various techniques and thereafter apply various decoding algorithms. However, if a jamming signal exists and CSIR is not available, we must address the jamming attack without CSIR. To use the iterative decoding algorithms without CSIR, we will consider a simplified version of the GMD decoding algorithm [26] as an example. The main idea of GMD decoding is iterative decoding trials with erasures. During each iteration, GMD decoding erases the least reliable symbol and attempts an error and erasure decoding. In this paper, we use the symbol measures ratio (max₂{e_k}/max{e_k}), output (max{e_k}), and sum (∑_{k=0}^{M−1} e_k) for the GMD decoding algorithm. These schemes will be called ratio-based GMD (R-GMD), output-based GMD (O-GMD), and sum-based GMD (S-GMD) decoding, respectively. Note that the three measures are calculated without CSIR.
In Algorithm 1, the one-dimensional GMD algorithm is described using the three measures. Here, i is the index of the codeword symbol and k is the index of the detectors of the MFSK demodulator. Initially, the receiver decides ĉ_i = arg max_k {e_{i,k}} for each i from 0 to N − 1. At the beginning of the while loop, the decoder attempts to recover the transmitted word w in (1) from ĉ with error-only decoding. This step prevents performance degradation when there is no jamming signal. Additionally, we will not consider undetected errors of the decoded codeword, because they seldom occur [27]. If the decoder declares a decoding failure, it erases the most suspicious (most likely to have been jammed) symbol i_max and replaces it with an erasure, where i_max is determined by the selected decision rule. For each measure, i_max is determined over the unerased indices as follows: ratio measure, i_max = arg max_i {max₂{e_{i,k}}/max{e_{i,k}}}; output measure, i_max = arg max_i {max{e_{i,k}}}; sum measure, i_max = arg max_i {∑_{k=0}^{M−1} e_{i,k}}. After i_max is determined, it is added to E, the current set of erased symbol indices. Then, error and erasure decoding is applied. As we noted in Section 2, (N, K) RS codes can correct e erasures and v errors if e + 2v < N − K + 1 is satisfied. If the decoding is successful, the erasure insertion loop is terminated and the decoded codeword becomes the output ĉ. Otherwise, the algorithm continues to the second iteration of the while loop. In the second iteration and thereafter, all of the previously erased symbol indices (E) are maintained and i_max is determined over all of the unerased indices. This iteration runs until the decoding succeeds or the number of erased symbols equals the number of parity symbols, that is, |E| = N − K. The while loop in Algorithm 1 is described by the feedback loop in Figure 1. We note that this iterative erasure insertion and GMD decoding algorithm runs independently for each codeword and does not make comparisons with any preoptimized threshold. Furthermore, if a conventional erasure insertion scheme can recover the codeword, then the corresponding iterative scheme can also recover it.
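The loop of Algorithm 1 can be sketched as follows. The error-and-erasure RS decoder is taken as an external callable `decode_fn(hard, erasures)` returning a codeword on success and `None` on failure; this is a simplified illustration of the one-dimensional scheme described above, not the authors' exact implementation:

```python
import numpy as np

def iterative_gmd(E_out, decode_fn, n, k, measure="ratio"):
    """One-dimensional iterative erasure insertion + GMD decoding.
    E_out: (n, M) array of non-coherent demodulator outputs e_{i,k}."""
    hard = np.argmax(E_out, axis=1)              # initial hard decisions
    s = np.sort(E_out, axis=1)
    if measure == "ratio":
        m = s[:, -2] / s[:, -1]                  # max2/max per symbol
    elif measure == "output":
        m = s[:, -1]                             # max per symbol
    else:                                        # "sum"
        m = E_out.sum(axis=1)

    erased = set()
    while True:
        cw = decode_fn(hard, erased)             # 1st pass: error-only decoding
        if cw is not None:
            return cw                            # success
        if len(erased) >= n - k:
            return None                          # give up: all parity spent
        # erase the most suspicious unerased symbol (largest measure)
        i_max = max((i for i in range(n) if i not in erased), key=lambda i: m[i])
        erased.add(i_max)
```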
Two-Dimensional Schemes.
We introduced MO-RTT at the beginning of Section 3, which uses two-dimensional measures: the joint threshold tests of OTT and RTT. To exploit the diversity of multiple measures, various combinations of iterative schemes with different measures can be considered. In this paper, however, we focus only on the serial concatenation of R-GMD decoding and S-GMD decoding, called ratio-to-sum-based GMD (RS-GMD) decoding. First, the RS-GMD decoder runs R-GMD decoding. If R-GMD decoding fails to decode with |E| = N − K + 1, then the receiver runs S-GMD decoding again from the beginning. We will show the performance improvement of RS-GMD decoding compared with MO-RTT over various channel situations in Section 5. In Table 2, the conventional, iterative, and two-dimensional schemes are classified according to the four types of measures.
Trapped-Error Probability
For the conventional erasure insertion schemes, that is, RTT, OTT, and MO-RTT, the error and erasure probabilities are derived in [4][5][6]. We have included the error and erasure probabilities of STT in the Appendix. Calculation of the error and erasure probabilities of a given symbol cannot be applied to the analysis of the iterative schemes, because the erasure insertion for one symbol is not independent of that for all other symbols. For performance analysis of the iterative schemes, we propose a trapped-error probability analysis based on the three measures: "ratio," "output," and "sum." The trapped-error probability is a new and useful analysis framework for comparing the quality of each measure. Consider a received word ĉ of length N. Let m be the measure of one symbol of ĉ for one of the three measures: m = max₂{e_{i,k}}/max{e_{i,k}}, m = max{e_{i,k}}, and m = ∑_{k=0}^{M−1} e_{i,k} for the ratio, output, and sum measures, respectively. First, we sort the received symbols in descending order according to the m values calculated in one selected way (among the three kinds of measures). Let p denote the ratio of the window of interest to the length N. This ratio should be chosen in the range 0 < p ≤ (N − K)/N, because the (N, K) RS code can correct up to N − K erasures. Now, we count the number of erroneous symbols in the uppermost pN symbols. Let ν denote the total number of erroneous symbols in the received word ĉ of length N, and let ν_p denote the number of erroneous symbols among the uppermost pN positions. Then, the trapped-error probability Γ is defined as the ratio between ν_p and ν: Γ = ν_p/ν. For example, let us assume that there are N = 100 received symbols with a total of ν = 15 erroneous symbols, and let p = 0.1. We consider the first 10 (= pN) positions when all of the symbols are arranged in decreasing order of their corresponding measures. Let us assume that ν_p = 8, 1, and 5 erroneous symbols are placed among these 10 positions for the ratio, output, and sum measures, respectively.
Then, the trapped-error probabilities are Γ_R = 8/15, Γ_O = 1/15, and Γ_S = 5/15 for these respective measures. In this case, because the iterative schemes successively erase the symbols in decreasing order of each measure, the ratio measure works best for trapping (or identifying) the erroneous symbols, and the output measure works worst. For a given p, a larger trapped-error probability indicates better performance.
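The following sketch computes the empirical trapped-error probability for one received word, mirroring the toy example above; the array names are illustrative:

```python
import numpy as np

def trapped_error_prob(measures, is_error, p):
    """Fraction of erroneous symbols caught in the top p*N positions when
    symbols are ranked by decreasing measure (most suspicious first)."""
    n = len(measures)
    window = int(p * n)
    order = np.argsort(-measures)                  # descending by measure
    trapped = int(np.sum(is_error[order[:window]]))
    total = int(np.sum(is_error))
    return trapped / total if total else float("nan")

# Toy check mirroring the example: N = 100, 15 errors, p = 0.1. If 8 of the
# 15 errors land in the top 10 positions, Gamma = 8/15 ≈ 0.533.
```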
Without loss of generality, we assume that the all-zero codeword w = (0, ..., 0) is transmitted; that is, tone f_0 is selected for every symbol transmission. If a transmitted symbol is not hit by the jamming signal and the AWGN channel is assumed, the probability density functions (PDFs) of e_0 and e_k, k ∈ {1, ..., M − 1}, are derived as in [24]: the standard noncentral and central chi-square forms for a square-law noncoherent detector, with noise variance σ². For the jammed case, the same expressions hold with σ² replaced by σ_T², where σ_T² = σ² + σ_J² and I_0(·) is the modified Bessel function of order zero appearing in the PDF of e_0.
Recall that m is the measure of a symbol for one of the three measures. For simplicity, we will omit the subscript i: m = max₂{e_k}/max{e_k}, m = max{e_k}, or m = ∑_{k=0}^{M−1} e_k for the ratio, output, or sum measures, respectively. The CDF of m is simply denoted by F(m). Define Θ_0 as the event of a raw symbol error, that is, the event that there exists at least one k ∈ {1, ..., M − 1} satisfying e_0 < e_k. The complementary event of Θ_0 is denoted by Θ̄_0. Now, we consider the conditional CDFs F_{m|Θ}(m | Θ_0) and F_{m|Θ}(m | Θ̄_0), where Θ is the discrete event variable, Θ ∈ {Θ_0, Θ̄_0}. Then, F(m) = Pr[Θ_0] F_{m|Θ}(m | Θ_0) + Pr[Θ̄_0] F_{m|Θ}(m | Θ̄_0) (12), and similarly for the corresponding PDFs f(·).
Define m_p as the point that satisfies F(m_p) = 1 − p; m_p is uniquely determined because F(m) increases monotonically from 0 to 1. Then, the trapped-error probability Γ(m_p) can be defined as Γ(m_p) = 1 − F_{m|Θ}(m_p | Θ_0) (13). Now, we change (13) into a form involving Θ̄_0 instead of Θ_0, so that its computation can be performed easily. From (12), we observe that F_{m|Θ}(m | Θ_0) = (F(m) − Pr[Θ̄_0] F_{m|Θ}(m | Θ̄_0))/Pr[Θ_0] (14). Using (14), the trapped-error probability in (13) can be written as Γ(m_p) = 1 − ((1 − p) − Pr[Θ̄_0] F_{m|Θ}(m_p | Θ̄_0))/Pr[Θ_0] (15). Using the relation Pr[Θ_0] + Pr[Θ̄_0] = 1 and after some manipulation, we have Γ(m_p) = (p − Pr[Θ̄_0](1 − F_{m|Θ}(m_p | Θ̄_0)))/Pr[Θ_0] (16). Here, Pr[Θ_0] can be calculated as Pr[Θ_0] = ρ Pr[Θ_0 | jammed] + ρ̄ Pr[Θ_0 | unjammed] (17), where ρ̄ = 1 − ρ.
Let us assume AWGN and a partial-band jamming channel. Using a binomial expansion and some calculations [30], (17) becomes (18). Similarly, for a Rayleigh fading and partial-band jamming channel, (17) becomes (19). Next, we compute F_{m|Θ}(m | Θ̄_0) as in (20). Now, assume that we use the ratio measure. Then, the numerator of (20) becomes (21). For AWGN and Rayleigh fading channels with partial-band jamming, (21) becomes (22) and (23), respectively.
For the sum measure, the numerator of (20) is given as (27). Actually, (27) is the same as (A.4) in the Appendix, with the threshold set to m_p. Using the derived equations, we can calculate the trapped-error probabilities of the three measures.
Trapped-Error Probabilities.
In this subsection, we confirm the trapped-error probability analysis over AWGN and Rayleigh fading channels with partial-band jamming. Let us assume 4-FSK, ρ = 0.1, and E_s/N_0 = 5 dB for the AWGN channel and E_s/N_0 = 12 dB for the Rayleigh fading channel. For a higher order of modulation, such as the 32-FSK case that we consider later, (27) requires 32-dimensional integration; therefore, we select 4-FSK as an example. To calculate the probabilities Γ(m_p) for each measure, the input m_p should be obtained for the given set of system parameters.
For this, we further assume p = 0.1 and 0.2 for the AWGN and Rayleigh fading channels, respectively. As described in Section 4, we found the m_p values that satisfy F(m_p) = 1 − p using simulations. They are listed in Tables 1 and 3 for various E_s/E_J; R, O, and S refer to ratio, output, and sum, respectively. In the investigation, we transmitted 10^5 symbols for various channel states and calculated their ratio, output, and sum measures. We note that these trapped-error probabilities do not depend on a specific coding scheme, because these statistics are observed at the demodulator output without coding. Figures 2 and 3 show the trapped-error probabilities of the three measures obtained using simulations and from (16) with the threshold values given in Tables 1 and 3. Solid and dashed lines indicate the probabilities obtained by simulation and from (16), respectively. The cross, circle, and square marks represent the Γ(m_p) values of the ratio, output, and sum measures, respectively. These two figures confirm that the simulation is accurate enough to closely follow the theoretical result in (16). In particular, Γ_O in the analysis is almost the same as Γ_O in the simulation results. This result implies the correctness of our performance simulation.
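Because F(m_p) = 1 − p, the simulated threshold m_p is simply the empirical (1 − p)-quantile of the measure samples, e.g.:

```python
import numpy as np

def estimate_mp(measure_samples, p):
    """Empirical m_p satisfying F(m_p) = 1 - p, from Monte Carlo samples
    of the selected symbol measure (ratio, output, or sum)."""
    return np.quantile(measure_samples, 1.0 - p)
```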
In these figures, the trapped-error probability of the ratio measure increases as the SJR increases; therefore, we predict that R-GMD decoding will perform better than the other one-dimensional schemes at high SJR. Additionally, we predict that S-GMD decoding is better than O-GMD decoding in the middle SJR region. Recall that the higher the trapped-error probability, the better the performance. This observation also bears on the two-dimensional schemes, because the trapped-error probability affects the number of decoding trials. We discuss this in the following simulation results.
Performance over AWGN Channel.
Throughout the performance simulations, we consider 32-ary FSK modulation with a (31, 20) RS code over a field of size 32, an ideal interleaver, and a random hopping pattern. We first consider partial-band jamming (or partial-time jamming with the same ρ) and AWGN channels. In this case, a = 1 in Figure 1. For Figures 4-7, the channel noise is fixed at E_s/N_0 = 5 dB. Various signal-to-jamming ratios E_s/E_J have been considered for the simulations.
In Figure 4, the performance of the various erasure insertion schemes is plotted over the partial-band jamming channel with ρ = 0.1. The dashed line shows the baseline performance "Without EI," which does not use any erasure insertion scheme. The unmarked solid line represents the performance of MO-RTT, the conventional erasure insertion scheme. We note that MO-RTT uses preoptimized thresholds and CSIR. To obtain the optimum thresholds for MO-RTT, we optimized the thresholds for E_s/E_J = 0∼30 dB in 10 dB steps. Solid lines with circle, square, cross, plus, and diamond marks represent the performance of output-based, sum-based, ratio-based, ratio-to-output-based, and ratio-to-sum-based GMD decoding, respectively. In this figure, we also present the performance of log-likelihood ratio (LLR) based GMD decoding as the triangle-marked line. This is the performance of a system with perfect CSIR, such as the SNR, the SJR, and the existence of a jamming signal. This LLR-GMD decoding iteratively erases the least reliable symbols based on the LLR, as described in [19,22,26]. Because it is well known that MO-RTT exhibits better performance than OTT or RTT [5][6][7], the performances of OTT and RTT are omitted. Before discussing the simulation results, we note that RSO-GMD decoding, which serially exploits the three measures, has almost the same performance as RS-GMD decoding; therefore, we have also omitted it. From Figure 4, we make the following observations: (1) As expected, LLR-GMD decoding with CSIR shows the best performance. We focus on the performance gap between LLR-GMD decoding with CSIR and the other erasure insertion schemes without CSIR. To achieve WER = 10⁻⁴, LLR-GMD decoding requires E_s/E_J = 19 dB. Ratio-based GMD decoding and its two-dimensional schemes achieve the target WER at E_s/E_J = 21 dB, which is much closer than that of MO-RTT. Even if the jamming power is decreased to E_s/E_J = 25 dB, the performance of MO-RTT is still far from WER = 10⁻⁴. In the low SJR region, only the two two-dimensional iterative schemes can achieve WER = 10⁻² without CSIR. We note that the one-dimensional iterative schemes and MO-RTT require E_s/E_J > 9 dB to achieve WER = 10⁻².
(2) The performance of RS-GMD decoding approaches that of LLR-GMD decoding as the SJR increases. At E_s/E_J = 20 dB, LLR-GMD decoding with perfect CSIR has 35% lower WER than RS-GMD decoding. We note that this gap cannot be closed even if the jamming power is decreased.
(3) When the jamming power is decreased, that is, the channel approaches the unjammed channel, the performance of R-GMD and RS-GMD decoding is still better than the performance of "Without EI" or MO-RTT. This implies that the iterative erasure insertion and decoding schemes do not degrade the performance when there is no jamming signal. Because of this, we do not need to detect the presence of the jamming signal.
In Figure 5, the trapped-error probabilities of the ratio, output, and sum measures are presented, corresponding to Figure 4. To obtain the trapped-error probabilities, the SJR and SNR were scaled based on the code rate K/N = 20/31, which differs from Section 5.1. We used p = 0.3, and the other parameters are the same as those used in Figure 4. In Figure 5, Γ_R is much larger than Γ_O and Γ_S. The values of Γ_R explain the good performance of R-GMD decoding. We find that Γ_O falls below 1 − K/N ≈ 0.3 at E_s/E_J ≈ 10 dB. Any iterative scheme that has zero trapped-error probability at a given SJR has the same performance as "Without EI". This result explains why O-GMD decoding has a performance degradation region near 10 dB in Figure 4. Because the performance of O-GMD decoding rapidly approaches the performance of "Without EI", its performance is degraded as the SJR increases. We note that the performance of S-GMD decoding approaches the performance of "Without EI" slowly; therefore, there is no performance degradation region.
The performance gain of the iterative schemes in Figure 4 is not obtained without drawbacks: the iterative schemes require more decoding trials. To investigate how many decoding trials are required, we determine the average number of decoding iterations of the various iterative schemes via simulations. The results are shown in Figure 6. In fact, we simulated iterative schemes that erase one symbol during the first loop and then erase two symbols in each remaining loop, because this process does not decrease the performance, as described in [19]. In general, the average number of decoding trials decreases rapidly as the SJR increases. At high SJR, most of the received words are completely decoded by a single trial of error-only decoding, which holds because a higher SJR implies that the environment becomes less and less hostile. For low SJR, the average number of decoding trials of RS-GMD decoding is approximately 1.75 (at E_s/E_J = 0 dB), and it decreases as the SJR increases. This result indicates that the iterative erasure insertion and decoding scheme is practically implementable. We note that the trial number of O-GMD decoding increases at approximately E_s/E_J ≈ 10 dB, which is consistent with the performance degradation of O-GMD decoding in Figures 4 and 5. Because O-GMD decoding cannot accurately erase the erroneous symbols, the decoder has to attempt more decodings. Because of this result, we should use S-GMD decoding, instead of O-GMD decoding, in the two-dimensional scheme with R-GMD decoding. Now, recall the analysis results of Section 5.1. In Figure 2, we observed that the trapped-error probability of the ratio measure increases as the SJR increases, while that of the sum measure decreases. For the case in which the trapped-error probabilities cross at some SJR, if we want fewer decodings at the higher SJR, then we must run R-GMD decoding first in RS-GMD decoding; if we want fewer decodings at the lower SJR, then S-GMD decoding should be applied first. In Figures 4 and 6, because we are considering the SJR region in which Γ_R > Γ_S, we have run R-GMD decoding first in RS-GMD decoding. In Figure 7, the performances of MO-RTT and RS-GMD decoding are shown over a partial-band jamming and AWGN channel with various jamming duty ratios ρ, with fixed E_s/N_0 = 5 dB and E_s/E_J = 10 dB. For all values of ρ, RS-GMD decoding works much better than MO-RTT. We note that the two erasure insertion schemes are more efficient for smaller ρ.
Performance over Rayleigh Fading
Channel. Figure 8 shows the performance of the various erasure insertion schemes over a partial-band jamming and Rayleigh fading channel with ρ = 0.1 and E_s/N_0 = 12 dB. For each symbol transmission, the channel coefficient a in Figure 1 is realized by an i.i.d. Rayleigh distribution. We assume that the fading coefficients are not known to the receiver, except in the MO-RTT system.
In Figure 8, RS-GMD decoding exhibits better performance than MO-RTT, the same result as we found over the AWGN channel. For the same WER of 10⁻³, the jammer should spend 14 dB more power. The WERs of the ratio-based schemes (including the two-dimensional schemes) approach 10⁻⁴. Unlike the results for the AWGN channel, R-GMD decoding is dominant and the contribution of S-GMD decoding is marginal. Therefore, we conclude that R-GMD decoding is sufficient for Rayleigh fading channels. Additionally, we present the performance of LLR-GMD decoding with perfect CSIR. As discussed in Section 5.2, the performance of the ratio-based scheme approaches that of the LLR-based scheme, and the gap is maintained for every SJR.
In Figure 9, the trapped-error probabilities of the measures are displayed over a Rayleigh fading channel with E_s/N_0 = 12 dB. The other parameters are the same as those in Figure 5. In this figure, the trapped-error probability of the ratio measure is 1 in all SJR regions. In other words, R-GMD decoding is the best one-dimensional iterative scheme for the target system parameters. As we found in the other trapped-error probability results, the trapped-error probabilities of the sum and output measures decrease as the SJR increases. Where the trapped-error probabilities fall below 1 − K/N ≈ 0.3, that is, E_s/E_J = 10 dB and 16 dB for the output and sum measures, respectively, the performances of O-GMD and S-GMD decoding follow the performance of "Without EI". As discussed above, the rapidly decreasing trapped-error probability of the output measure causes the performance degradation of O-GMD decoding at approximately 10 dB in Figure 9.
Concluding Remarks
In this paper, we considered iterative erasure insertion and decoding schemes that do not require any preoptimized thresholds or any CSIR. Additionally, we proposed a new analysis framework for the ratio, output, and sum measures. From the simulation results, we confirmed that the ratio-based GMD decoding scheme and its two-dimensional schemes have the best performance. Using the trapped-error probability, the performance of the iterative erasure insertion and decoding schemes is explained.
Figure 1: System block diagram with erasure insertion component.
Figure 4: Performance of various erasure insertion schemes over partial-band jamming and AWGN channels with ρ = 0.1 and E_s/N_0 = 5 dB.
Figure 6: Average number of decoding iterations for various erasure insertion schemes over partial-band jamming and AWGN channels with ρ = 0.1 and E_s/N_0 = 5 dB.
Table 2: Class of erasure insertion schemes. | 6,484.8 | 2018-01-24T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Performance Analysis of a DF Energy Harvesting Full-Duplex Relaying Network with MRC and SC at the Receiver under the Impact of an Eavesdropper
Faculty of Electronics Technology, Industrial University of Ho Chi Minh City, Ho Chi Minh City, Vietnam; Faculty of Automobile Technology, Van Lang University, Ho Chi Minh City, Vietnam; Modeling Evolutionary Algorithms Simulation and Artificial Intelligence, Faculty of Electrical & Electronics Engineering, Ton Duc Thang University, Ho Chi Minh City, Vietnam; Wireless Communications Research Group, Faculty of Electrical and Electronics Engineering, Ton Duc Thang University, Ho Chi Minh City, Vietnam; Department of Computer Fundamentals, FPT University, Ho Chi Minh City, Vietnam
(i) We present an FD- and SWIPT-assisted relaying network operating in decode-and-forward (DF) mode under the presence of a direct link. In particular, an eavesdropper is able to overhear the information transmission from the source to the destination via a relay. Moreover, the FD-enabled relay node is able to harvest energy from the transmitter and use it to transfer signals to the receiver. Notably, the relay node can simultaneously receive information from the source and transmit it to the destination using the FD technique. (ii) We derive closed-form expressions of the intercept probability (IP) at the eavesdropper E and the outage probability (OP) at the destination D for the maximal ratio combining (MRC) and selection combining (SC) techniques. (iii) The correctness of the developed analysis is validated through Monte Carlo simulation. On one hand, we investigate the security perspective in terms of intercept probability. On the other hand, system reliability is also studied through outage probability. Consequently, the trade-off between IP and OP can provide many insightful and useful perspectives for system designers.
System Model
In Figure 1, we consider a relaying network where a relay R aids in conveying data from a transmitter S to a receiver D in the presence of one eavesdropper E. In particular, the eavesdropper tries to obtain the information from S and R by applying maximal ratio combining (MRC) and selection combining (SC) techniques. In Figure 2, the relay R can harvest energy from the source during αT. In the remaining time, (1 − α)T, the information transmission is executed. We assume that the channel between any two users follows block Rayleigh fading, where channel coefficients are unchanged during a time frame and change independently across time frames. Moreover, let us denote by h_XY, for XY ∈ {SR, RD, RR, SD, RE, SE}, the channel coefficient of the link between nodes X and Y. Because the channels are Rayleigh distributed, the channel gains such as |h_RD|² and |h_SD|² are exponential random variables (RVs) whose cumulative distribution function (CDF) and probability density function (PDF) are, respectively, F(x) = 1 − e^{−λx} and f(x) = λe^{−λx}, where λ is the rate parameter of the exponential distribution. The received signal at the relay can be expressed as in (2), where x_S is the energy symbol with E{|x_S|²} = P_S, x_R is the loopback interference due to full-duplex relaying and satisfies E{|x_R|²} = P_R, and E{·} denotes the expectation operation. n_R denotes the zero-mean additive white Gaussian noise (AWGN) with variance N_0.
In the first phase, the harvested energy at the relay can be computed as E_h = ηαT P_S|h_SR|² (3), where 0 < η ≤ 1 denotes the energy conversion efficiency. From (3), the average transmit power of the relay node can be obtained as P_R = κP_S|h_SR|² (4), where κ = ηα/(1 − α). Next, in the second phase, the eavesdropper E may intercept signals from both the relay R and the source S. Nevertheless, the source S also generates artificial noise to prevent E from overhearing the source information. Moreover, since the relay R and destination D are legitimate users, they are assumed to know the artificial noise created by S; consequently, they can cancel the artificial noise at the receiver circuit. Therefore, the received signals at E from the relay R and the source S can be, respectively, expressed as in (5), where n_E = n_E^I = n_E^II is the AWGN with variance N_0. Since we adopt the decode-and-forward (DF) protocol, the signal-to-interference-plus-noise ratios (SINRs) at the eavesdropper in the second phase follow from (5).
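The TSR energy-harvesting relation above is easy to state in code. This is a minimal sketch of the stated relations E_h = ηαT P_S|h_SR|² and P_R = κP_S|h_SR|²; the function name is illustrative:

```python
import numpy as np

def relay_tx_power(P_s, h_sr, eta=1.0, alpha=0.5):
    """Average relay transmit power under the TSR protocol: energy harvested
    over alpha*T is spent over (1 - alpha)*T, so P_R = kappa * P_S * |h_SR|^2
    with kappa = eta * alpha / (1 - alpha)."""
    kappa = eta * alpha / (1.0 - alpha)
    return kappa * P_s * np.abs(h_sr) ** 2
```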
As mentioned in the above discussion, the destination D can cancel the artificial noise from the source S. Consequently, the received signals at the destination from the relay R and the source S during the second phase can be expressed as in (7), where n_D = n_D^I = n_D^II is the AWGN with variance N_0.
Intercept Probability (IP) Analysis
The destination D will be intercepted if E can successfully wiretap the signal; that is, γ_E ≥ γ_th, where γ_th = 2^R − 1 and R is the target rate. Therefore, the IP of the system can be expressed as IP = Pr[γ_E ≥ γ_th].
Instantaneous End-to-End SNR at E Using the MRC
Technique. In this case, the end-to-end SNR at E from (6) is given by the sum of the two branch SINRs, γ_E^MRC = γ_RE + γ_SE. Then, the IP in (7) can be rewritten accordingly, where X and Y denote the resulting auxiliary random variables. In order to find the probability in (10), we have to find the CDF of X and the PDF of Y. The CDF of X can be calculated directly; by applying (Eq. 3.324.1, [26]), (11) can be evaluated in closed form. Next, the CDF of Y can be formulated, and the PDF of Y follows by differentiation. Applying (12) and (14), the IP in this case is obtained.
Instantaneous End-to-End SNR at E Using the SC
Technique. In this case, the end-to-end SNR at E is γ_E^SC = max(γ_RE, γ_SE). Then, the IP can be expressed accordingly. By applying (12) and (13), the closed-form expression of IP_SC follows.
Outage Probability (OP) Analysis
The OP can be defined by OP = Pr[γ_D < γ_th].
Instantaneous End-to-End SNR at D Using the MRC
Technique. From (2) and (7), the outage probability of the relay link can be computed as in (20). From (20), we can see that, to successfully receive data at the destination D, the system needs to decode correctly in both the first and second hops.
Equivalently, we can represent the end-to-end SINR of the relay path by the minimum of the per-hop SINRs, as in (21). By using the MRC technique, the received SINR at the destination is given by the sum of the relay-path SINR and the direct-link SINR, where Z = Ψ|h_SD|². The OP in this case can be expressed accordingly. From (21), F_T(t) can be calculated, with the auxiliary terms defined therein. Substituting (23) and (24) into (22), F_T(t) can be calculated mathematically in closed form as (25). By substituting (25) into (21), OP_MRC is obtained.
Instantaneous End-to-End SNR at D Using the SC
Technique. Similar to the MRC technique mentioned above, the overall SNR at D is given by the maximum of the relay-path SINR and the direct-link SINR. Hence, the OP can be calculated accordingly, where P_3 denotes the probability that the minimum of the per-hop SINRs falls below the threshold. Finally, substituting (28) and (29) into (27), the OP in this scenario can be expressed in closed form.
Simulation Results
The simulation results are given to validate the performance, that is, the IP and OP, of our proposed schemes under the maximal ratio combining (MRC) and selection combining (SC) techniques. The results are obtained by averaging over 10^5 Rayleigh channel realizations [27][28][29].
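As an illustration of the Monte Carlo procedure, the sketch below estimates IP and OP from generic per-branch SINRs, using only the combining rules stated in the text (MRC sums branch SINRs, SC takes their maximum, and DF limits the relay path by the weaker hop). The per-branch SINR draws are placeholders for the exact expressions (5)-(7), which are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
trials, R = 10**5, 1.0
gamma_th = 2**R - 1

# Placeholder per-branch SINRs (exponential, i.e. Rayleigh channel gains);
# in the actual system they follow (5)-(7) with the TSR relay power.
g_RE = rng.exponential(2.0, trials)   # eavesdropper: relay branch
g_SE = rng.exponential(1.0, trials)   # eavesdropper: source branch
g_SR = rng.exponential(3.0, trials)   # source -> relay hop
g_RD = rng.exponential(3.0, trials)   # relay -> destination hop
g_SD = rng.exponential(1.5, trials)   # direct link

ip_mrc = np.mean(g_RE + g_SE >= gamma_th)
ip_sc = np.mean(np.maximum(g_RE, g_SE) >= gamma_th)

relay_path = np.minimum(g_SR, g_RD)   # DF: the weaker hop dominates
op_mrc = np.mean(relay_path + g_SD < gamma_th)
op_sc = np.mean(np.maximum(relay_path, g_SD) < gamma_th)
print(f"IP: MRC {ip_mrc:.4f}, SC {ip_sc:.4f} | OP: MRC {op_mrc:.4f}, SC {op_sc:.4f}")
```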
In Figures 3 and 4, we investigate the IP and OP as functions of Ψ (dB), where γ_th = 1, α = 0.5, η = 1. One can observe from Figure 3 that as Ψ increases from −5 to 20 dB, the IP performance improves accordingly. The source's transmit power is proportional to the Ψ value, since Ψ is defined as the ratio between the source transmit power and the additive white Gaussian noise power. Thus, the higher the Ψ value, the better the SNR obtained at the eavesdropper. Furthermore, the IP of the MRC technique outperforms that of the SC technique. This is because the eavesdropper can overhear information from both source S and relay R using MRC, while only receiving signals from the source with the SC technique. In Figure 4, the outage performance of MRC is superior to that of the SC method. This is because the destination can combine signals from the relay and the source in MRC, whereas it only receives this information from the relay user in the SC method. As shown in Figures 3 and 4, the performances at both destination D and eavesdropper E can be continuously improved by increasing the transmit power. Thus, the designer should select a suitable value of Ψ in practice as a trade-off between the security and reliability of the system. Figures 5 and 6 show the IP and OP as functions of α for the time-switching relaying (TSR) protocol, where γ_th = 1, Ψ = 3 dB, η = 1. The value of α is crucial, since it influences both the harvested energy at the relay and the information transmission from the relay to the destination. The higher the value of α, the more energy the relay can harvest; however, there is less time for information transmission to the destination. Therefore, the OP attains its best value at the optimal point of α, after which the performance worsens. Notably, when the value of α is small, the eavesdropper has a low probability of intercepting the information. For instance, the IPs of MRC and SC are 0.073 and 0.0068, respectively, when α equals 0.05. When α is higher than the optimal value, both the outage performance and the system security are worse. This provides useful information for designing a practical system. In Figures 7 and 8, we investigate the IP and OP as functions of the rate threshold required to decode the signal successfully, where α = 0.85, Ψ = 5 dB, η = 1. As observed from Figures 7 and 8, as the rate threshold increases from 0.25 to 4 bps/Hz, the IP and OP performance degrades accordingly. This is expected, since a higher rate requirement means that the eavesdropper and destination need to achieve a higher transmission rate to decode the signal. However, the transmission rate is limited by many factors such as the channel gain and the time allocated for data transmission. One more interesting point is that the IP and OP performances of MRC and SC converge to saturation values as the rate threshold increases.
In Figures 9 and 10, we study the influence of the energy conversion efficiency on the network performance, that is, the IP and OP.
Conclusion
This paper investigated decode-and-forward (DF) full-duplex (FD) relaying networks under the presence of a direct link. Specifically, the relay node can harvest energy from the source and use it to transmit information to the destination. Based on the above discussions, we derived the closed-form expressions of the intercept probability (IP) and the outage probability (OP) for both the maximal ratio combining (MRC) and selection combining (SC) techniques at the receiver. The simulation results show the exactness of the mathematical results compared to the simulated ones. The IP and OP of the MRC technique show better performance in comparison to those of the SC technique. In particular, the system security is improved significantly when the time-switching factor value is small. This work can be extended to the case where the source and eavesdropper are equipped with multiple antennas.
Data Availability
No data were used in this paper. The authors proposed the system and simulated it in MATLAB.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 2,679.6 | 2021-06-26T00:00:00.000 | [
"Computer Science"
] |
Eigenstate thermalization hypothesis, time operator, and extremely quick relaxation of fidelity
The eigenstate thermalization hypothesis (ETH) asserts that for nonintegrable systems each energy eigenstate accurately gives microcanonical expectation values for a class of observables. As a mechanism for ETH to hold, we show that the energy eigenstates are superpositions of uncountably many quasi eigenstates of an operationally defined "time operator", which are thermal for thermodynamic isolated quantum many-body systems and approximately orthogonal in terms of the extremely short relaxation time of the fidelity. In this way, our scenario provides a theoretical explanation of ETH.
Introduction
Considerable research attention has been devoted to the foundation of statistical mechanics on the basis of the intrinsic thermal nature of individual pure states. The long-standing fundamental problems involve deriving the principle of equal weight and explaining the mechanism of irreversible thermalization in isolated quantum many-body systems [1][2][3][4][5].
In particular, typicality shows that a pure state sampled uniformly at random with respect to the Haar measure from an appropriate energy shell represents the microcanonical ensemble well [6][7][8][9], and provides a simple scenario to justify the principle of equal weight: fix a set of observables; then a majority of the pure states in the Hilbert space are similar to one another in terms of the expectation values. Thus, we may superpose them with an almost arbitrary weight, which includes the case of equal weight.
Several different approaches have been studied, including a restriction on the macroscopic observables [10,11], the general evaluation of relaxation times [12][13][14][15], the eigenstate thermalization hypothesis (ETH) [1,[16][17][18][19][20][21], and dynamical experiments in autonomous cold atomic systems [22][23][24]. Of these, we focus on the foundation of ETH in terms of the time-energy uncertainty, noting that the energy eigenstates are globally distributed in the basis of a suitably defined 'time operator', as detailed below (1) and in section 2. Before this, in the rest of this paragraph, we recall the basics of ETH. The ETH claims that each energy eigenstate represents the microcanonical ensemble well for nonintegrable systems, i.e. their expectation values for a class of observables agree well with the microcanonical averages. By requiring this property and a nondegeneracy condition, arbitrary initial pure states equilibrate in the sense of the long time average of expectation values of the fixed observables. The ETH has been discussed in terms of nonintegrability [1,25,26], partly because the relaxation property is considered to be sensitive to the presence of integrals of motion. On the other hand, [25][26][27] have indicated that most energy eigenstates of integrable systems are thermal. Such an intrinsic thermal nature, shared by most energy eigenstates of integrable systems, is often called weak ETH. In considering the observables of a small subsystem, the ETH resembles typicality, although there is still a possibility that the deviations from the microcanonical ensemble average of typical states and of energy eigenstates are quantitatively different. We numerically evaluate these deviations later in section 3.
Let us try to understand the mechanism of ETH in terms of typicality. Our starting point is to seek a relevant basis {|φ_ν⟩} that is thermal, and in which each energy eigenstate is a superposition of a sufficient number of orthonormal states: |E_n⟩ = ∑_ν c_ν |φ_ν⟩ (1), where d is the dimension of the energy shell and c_ν = O(1/√d).
In this paper, we present a scenario in which the ETH holds by explaining that the quasi eigenstates |Ψ(t)⟩ of the 'time operator' T̂ form the relevant basis, considering a thermodynamic system where |Ψ(t)⟩ thermalizes and stays in equilibrium. Note that the 'time operator' is constructed in (2) via a spectral decomposition, and is approximately canonically conjugate to the Hamiltonian up to a constant owing to a long time cutoff. However, a proper definition of a 'time operator' in general remains controversial. Instead of inquiring into the best definition, we explore a foundation of ETH by introducing the 'time operator' as in (2) together with its quasi eigenstates, which are approximately orthogonal through the extremely quick relaxation of the fidelity [12][13][14][15]. Note that such quasi orthogonality is analogous to that of coherent states, and is used in quantum non-demolition measurement [28,29]. In particular, we show that each energy eigenstate can be expressed as a superposition of many mutually almost orthogonal pure states that are considered thermal. Note that [20] quantified the degree of superposition with the use of the Shannon entropy, which is basis dependent and is maximized to guarantee ETH. Subsequently, [21] addressed the issue of specifying a class of observables, such as local and extensive quantities, that satisfy ETH in terms of bases mutually unbiased with respect to the Hamiltonian. Mutual unbiasedness can be regarded as a generalization of the concept of the canonical conjugate, which is significant for our argument, and thus [20,21] are related to the present study. The main difference is that in this article the quasi eigenstates of the 'time operator' are unbiased with respect to the Hamiltonian. However, we do not attempt to apply ETH to the 'time operator' itself; instead, we explain that the vast majority of the quasi eigenstates of the 'time operator' are regarded as thermal with the use of typicality [6][7][8][9], considering observables of a subsystem and the assumption of equilibration. Then, the energy eigenstates are regarded as thermal.
The remainder of this paper is organized as follows. In section 2, we express the energy eigenstates in terms of quasi eigenstates of the 'time operator', and explore their orthogonality and the thermal nature of the quasi eigenstates of the 'time operator'. In section 3, we numerically verify the approximate orthogonality and thermal nature of the quasi eigenstates and the ETH. Section 4 is devoted to a summary.
Time-evolved states
We consider the time-evolved states |Ψ(t)⟩ = ∑_n c_n e^{−iE_n t/ℏ} |E_n⟩, where the expansion (3) is slightly modified. We set c_n as real, since the phase factor at t = 0 can be absorbed into the definition of the energy eigenstates |E_n⟩. Let us formally define the 'time operator' via its spectral decomposition over the quasi eigenstates, T̂ ∝ ∫_0^T dt t |Ψ(t)⟩⟨Ψ(t)| (2), where we consider a large but finite time T (cf. the infinitesimally small cut-off of [30,31]). It is well known that a 'time operator' canonically conjugate to the Hamiltonian does not exist as an observable [32][33][34][35], partly because the Hamiltonian is bounded from below. On the contrary, the 'time operator' defined by (2) approximately satisfies the canonical commutation relation with the Hamiltonian up to a boundary constant, just as in the case of the 'phase operator' [34]; this can be shown using the time derivative of |Ψ(t)⟩⟨Ψ(t)| and a partial integration, where the boundary term vanishes as T → ∞ under the nondegeneracy assumption. In our case, the diagonal basis |Ψ(t)⟩ of (2) consists of approximate eigenstates of T̂, as the orthogonality holds with a time resolution τ, which is in marked contrast to the case of mechanical observables. We explain below how the time resolution τ is extremely short for thermodynamic systems.
We can express the energy eigenstates by the inverse Fourier transform: |E_n⟩ = (1/(c_n T)) ∫_0^T dt e^{iE_n t/ℏ} |Ψ(t)⟩ (5), for large T. Equation (5) shows that the energy eigenstates are superpositions of continuously many quasi eigenstates of the 'time operator', which approximately satisfy the orthogonality (8) [12]. In the next section, we numerically verify that the quasi eigenstate |Ψ(t)⟩ typically represents the microcanonical state well, and is thus a relevant basis for discussing the foundation of ETH. Here, we analytically explore the quasi orthogonality and the equilibrium nature of the basis |Ψ(t)⟩. First, we recall the calculation of the fidelity detailed in [12] (see also [14,15]). Using ⟨Ψ(t)|Ψ(t′)⟩ = ∑_n c_n² e^{−iE_n(t−t′)/ℏ}, the discrete sum is evaluated as an integral using the density of states Ω(E′), which renders the spectrum of the eigenenergies continuous, and the dynamics are supposed to be irreversible; at this stage, recurrence phenomena at extremely long times are omitted. Such a continuous approximation holds accurately, as shown in figure 1. We expand the density of states as ln Ω(E_0 + x) ≈ ln Ω(E_0) + βx − x²/(2C_V T²), where T here denotes the temperature and we set k_B = 1. We evaluate ΔE_eff from the condition that the absolute value of the first-order term βx is much larger than that of the second-order term for x = ΔE_eff, which yields C_V ≫ βΔE_eff. For a thermodynamic density of states, the energy width ΔE_eff is considered to be of the same order as 1/β. As the heat capacity is proportional to the system size, we can accurately calculate the integral up to first order in x.
For |t − t′| ≫ τ, the inner product (8) becomes considerably small [12], |⟨Ψ(t)|Ψ(t′)⟩| ≪ 1, which is compatible with our evaluation of τ. It is also well known that in the long-time regime the fidelity shows power-law decay by the Paley-Wiener theorem for the Fourier-Laplace transformation, while an exponential decay is observed over the time scale of interest to us.
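The quick decay of the fidelity is easy to reproduce in a toy model. Below is a minimal numerical sketch (not the spin model of section 3): a random state on a shell with a synthetic spectrum, for which |⟨Ψ(0)|Ψ(t)⟩| = |∑_n c_n² e^{−iE_n t}| drops from 1 to O(d^{−1/2}) on the time scale τ ~ ℏ/ΔE_eff; we set ℏ = 1:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4000                                    # shell dimension (toy value)
E = rng.normal(0.0, 1.0, d)                 # synthetic spectrum, width Delta_E = 1
c2 = rng.random(d)
c2 /= c2.sum()                              # weights |c_n|^2, normalized

ts = np.linspace(0.0, 10.0, 500)
fid = np.array([abs(np.sum(c2 * np.exp(-1j * E * t))) for t in ts])

# Fidelity decays ~exp(-Delta_E^2 t^2 / 2); estimate tau from the e^{-1/2} crossing.
tau = ts[np.argmax(fid < np.exp(-0.5))]
print(f"tau ≈ {tau:.2f} (expected ~1/Delta_E = 1), floor ≈ {fid[-1]:.3f} = O(d^-1/2)")
```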
We now discuss some properties of the 'time operator'. The projection operator |Ψ(t)⟩⟨Ψ(t)| in (9) can be used for a measurement of time as a projection onto |Ψ(t)⟩: given a state |Ψ(t)⟩ with unknown t, such a projection determines t with an accuracy τ. By repeating this thought experiment many times with randomly distributed t, we actually obtain the spectral fluctuation. The projection operator (9) also satisfies the completeness relation.
Numerical simulation
First, we explore the quasi orthogonality of the quasi eigenstates |Ψ(t)⟩. Then, we also verify the validity of ETH and investigate the thermal nature of the time-evolved states. Regarding the quasi orthogonality, further details of the calculation are given in [12]. For the sake of simplicity and concreteness, we first consider a one-dimensional spin system, taking as energy shells the subspaces spanned by the eigenstates |E_n⟩ with (a) 151 ≤ n ≤ 200 and (b) 251 ≤ n ≤ 300. We set the parameters to J = 1 and α = 1. We also explored various choices of the field γ_j, such as the uniform case γ_j = 0.5 (γ = 0 corresponds to the integrable case) and the randomly distributed case γ_j ∈ [0, Δ] with Δ = 0.5, 1. The eigenenergies were taken in increasing order, and the fidelity (red broken line) was calculated using equation (8), where we set ℏ = 1. Note that the relaxation time τ is quite general [12] and is of the same order as the Boltzmann time 2πβℏ for macroscopic systems [13], which is extremely short at room temperature, τ ~ 10⁻¹² s. Therefore, we conclude that the bases |Ψ(t)⟩ (t ≠ 0) in the expansion (5) are mutually orthogonal. We then verified the thermalization of the quasi eigenstates of T̂, i.e. that the basis state |Ψ(t)⟩ represents the microcanonical state well for most t ∈ [0, T]. For this purpose, it is necessary to calculate the expectation values of a class of observables for |Ψ(t)⟩ and compare them with those of the microcanonical ensemble. Theoretically, |Ψ(t)⟩ describes thermal equilibrium for most t according to typicality [6,7] and the unitarity of the time evolution. Numerically, we investigated the expectation values of arbitrary observables defined on the left-most m sites, A_m [8,9,16,25]. Thus, we calculate the Hilbert-Schmidt distance between the reduced density matrix ρ_m(t) = Tr_{N−m} |Ψ(t)⟩⟨Ψ(t)| and its microcanonical counterpart, where Tr_{N−m} stands for the partial trace over the right-most N − m sites.
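The quantities entering this comparison can be sketched compactly for a chain of N spin-1/2 sites; `psi` is a state vector of dimension 2^N, and the microcanonical reference ρ_m would be built by averaging over the eigenstates of the shell (assumed available from exact diagonalization):

```python
import numpy as np

def reduced_dm(psi, n_sites, m):
    """rho_m = Tr_{N-m} |psi><psi|: reduced density matrix of the left-most
    m sites of an N-site spin-1/2 chain."""
    psi = np.asarray(psi).reshape(2**m, 2**(n_sites - m))
    return psi @ psi.conj().T

def hs_distance(rho, sigma):
    """Hilbert-Schmidt distance ||rho - sigma||_2 between density matrices."""
    diff = rho - sigma
    return np.sqrt(np.trace(diff @ diff.conj().T).real)
```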
In figure 4, we show the dependence of Δ̂_r (blue curve) and Δ̂_t (red curve) on the subsystem size m for the case of a uniform magnetic field γ_j = 0.5, and for γ_j randomly sampled from [0, Δ] with Δ = 0.5, 1. The deviations Δ̂_r and Δ̂_t agree well with one another in all three cases. This means that ETH holds with the same accuracy as the thermal property of |Ψ(t)⟩ for these cases. The agreement may improve when the finiteness of the energy width becomes negligible in the calculation of Δ̂_t.
In the presence of strong spatial disorder, the ETH breaks down [37][38][39]. Exploring non-thermal cases, including many-body localization, possibly in more than one dimension, is an important future problem. | 2,816.8 | 2018-02-28T00:00:00.000 | [
"Physics"
] |
The 2.6 Å Structure of a Tulane Virus Variant with Minor Mutations Leading to Receptor Change
Human noroviruses (HuNoVs) are a major cause of acute gastroenteritis, contributing significantly to annual foodborne illness cases. However, studying these viruses has been challenging due to limitations in tissue culture techniques for over four decades. Tulane virus (TV) has emerged as a crucial surrogate for HuNoVs due to its close resemblance in amino acid composition and the availability of a robust cell culture system. Initially isolated from rhesus macaques in 2008, TV represents a novel Calicivirus belonging to the Recovirus genus. Its significance lies in sharing the same host cell receptor, histo-blood group antigen (HBGA), as HuNoVs. In this study, we introduce, through cryo-electron microscopy (cryo-EM), the structure of a specific TV variant (the 9-6-17 TV) that has notably lost its ability to bind to its receptor, B-type HBGA—a finding confirmed using an enzyme-linked immunosorbent assay (ELISA). These results offer a profound insight into the genetic modifications occurring in TV that are necessary for adaptation to cell culture environments. This research significantly contributes to advancing our understanding of the genetic changes that are pivotal to successful adaptation, shedding light on fundamental aspects of Calicivirus evolution.
Introduction
Human noroviruses (HuNoVs) have been a leading cause of severe gastroenteritis for many years and still cause many outbreaks all over the world, often with mutations that alter susceptible populations. The norovirus is a member of the Caliciviridae family, which comprises 11 genera: Bavovirus, Lagovirus, Minovirus, Nacovirus, Nebovirus, Norovirus, Recovirus, Salovirus, Valovirus and Vesivirus [1]. All members of the Caliciviridae family are nonenveloped viruses exhibiting icosahedral symmetry, with diameters ranging from 27 to 50 nm.
Among the genera in Caliciviridae, only Norovirus and Sapovirus have caused acute gastroenteritis outbreaks in humans. Noroviruses exhibit extensive genetic diversity, encompassing over 10 distinct genogroups. Among these, Genogroup II genotype 4 (GII.4) strains of HuNoV have been responsible for the majority of norovirus outbreaks since 2002, and novel GII.4 variants have continued to be discovered in recent outbreaks and sporadic cases [2]. Research progress on HuNoV was hampered by the lack of a robust cell culture system for more than four decades; only in 2016 was HuNoV successfully cultured, by Dr. Mary Estes and her team at Baylor College of Medicine, in stem cell-derived human enteroids with the addition of bile extract [3]. The other genera infect a broad range of animal hosts, spanning from domestic pigs to cats, rabbits, birds, fish and walruses.
Considering the lack of an efficient cell culture system for HuNoVs, Tulane virus (TV) [4] stands out as a valuable surrogate. TV is the prototype of the Recovirus genus. It is a non-segmented, positive-sense single-stranded RNA virus with the smallest genome in the Caliciviridae family, comprising 6.7 kb. Initially isolated from stool samples of rhesus macaques at the Tulane National Primate Research Center in 2008, TV was found to be cultivable in several monkey kidney cell lines. In comparison to murine norovirus (MNV), TV exhibits a closer genetic and structural affinity to the Norovirus genus. TV and HuNoV share a similar genome organization (Figure 1A), with three open reading frames: ORF1 encodes a nonstructural polyprotein that is further processed into several nonstructural proteins, including the RNA-dependent RNA polymerase (RdRp); ORF2 encodes the major capsid protein VP1; and ORF3 encodes the minor capsid protein VP2. After binding to its receptor, the VP2 protein of feline calicivirus was observed to assemble into a portal-like structure, potentially serving as a channel for genome translocation [5].
The structure of TV reveals the characteristic T = 3 icosahedral lattice comprising 90 dimers of the capsid protein VP1. Each icosahedral asymmetric unit consists of three subunits, A, B and C, with A and B forming an A/B dimer around the five-fold axes of symmetry and the C/C dimer around the two-fold axes. VP1 subunits further divide into two domains: the shell domain (S domain), constituting the inner spherical shell of the virus capsid, and the protruding domain (P domain), further divided into the P1 and P2 sub-domains. Notably, the P2 sub-domain is responsible for receptor and antibody binding. VP1, with 534 amino acids and a molecular weight of 57.8 kDa, exhibits high sequence identity across the full length of the capsid protein. Structurally, the capsid protein VP1 of TV, human norovirus (GII.4) and Norwalk virus (Genogroup I of Norovirus) adopts a similar fold in both the S and P domains (Figure 1B). Within the S domain, an eight-stranded jellyroll fold, a common motif in virus structures, is observed.
More importantly, TV has also been reported to interact with histo-blood group antigens (HBGAs) [6,7], which are known as the cellular receptor of HuNoVs. HBGAs are carbohydrates found on cell surfaces in most epithelial tissues and in secretions [8]. HBGAs are also utilized as mediators of infection by many other human pathogens [9], for instance rotavirus and some bacteria, such as Pseudomonas aeruginosa, Helicobacter pylori, and enterotoxigenic Escherichia coli (ETEC).
Previously, structural study of TV was hampered by low titers, such that only a few particles could be found under TEM. Therefore, an antibody-based affinity grid method was developed to enrich the virus particles on the EM grid [10,11], and a 2.5 Å resolution TV structure was determined from virus particles captured by the antibody on the grid. However, the virus structure alone does not answer the question of virus-host interactions. In the process of studying the virus-receptor complex, a new TV variant (the 9-6-17 strain) was derived in the lab, resulting from adaptation to cell culture. As confirmed by ELISA, it has lost its binding to the original TV host cell receptor, B-type HBGA. This presents a unique opportunity to study the crucial interactions of TV and receptor binding. Using sequence analysis, structural biology, and mutagenesis, we aim to elucidate the molecular basis of HBGA binding in TV and gain deeper insights into how HuNoVs exploit these receptors to establish infection. Ultimately, our findings may contribute to the development of effective vaccines and antiviral strategies against HuNoVs and other related caliciviruses.
Cell Culture and Purification of Tulane Virus
LLC-MK2 cells were cultured in M199 medium (Thermo Fisher Scientific, Waltham, MA, USA) supplemented with 10% fetal bovine serum, 100 U/mL penicillin, and 100 µg/mL streptomycin. When the 100 mL cultures reached a confluency of about 90%, they were inoculated with 300 µL of Tulane virus stock (3 × 10⁸ PFU/mL). After 48 h of incubation, the 100 mL infected culture was collected and used to infect 800 mL of confluent LLC-MK2 cell culture. The infection was carried out for 2 h at 37 °C, followed by replacement of the medium with M199 containing 2% FBS and subsequent incubation at 37 °C. After 48 h of incubation, adherent cells were scraped off and the culture was collected and centrifuged at 2000× g. The cell sediments were resuspended and subjected to three rounds of freeze-thaw cycles to break the cells and release the viruses. Centrifugation at 8000× g was performed to remove the cell debris. The supernatant was collected, combined and centrifuged at 150,000× g for 2 h with a Ti 50.2 rotor (Beckman, Brea, CA, USA). The supernatant was discarded, and the sediment was resuspended in PBS pH 7.4 overnight at 4 °C. It was then subjected to density gradient centrifugation (OptiPrep; 222,000× g for 4 h at 4 °C; SW-41 Ti swinging-bucket rotor) using a Beckman Coulter Optima L-90 ultracentrifuge (Beckman, USA). The gradients were fractionated by bottom puncture. SDS-PAGE electrophoresis was performed to identify the fractions containing the major capsid protein VP1 band. The fractions that contained virions were combined and concentrated with a 100 kDa cut-off centrifugal filter unit (Millipore, MA, USA). Finally, the virus sample was buffer-exchanged and stored in PBS at 4 °C.
Viral RNA Extraction and Genome Sequencing
The whole TV RNA genome was extracted with a QIAamp Viral RNA Mini Kit (Cat No./ID: 52904, QIAGEN, Germantown, MD, USA) according to the manufacturer's protocol. The cDNA library was constructed from the extracted viral RNA using an Illumina TruSeq Stranded Total RNA kit without ribo-depletion and was sequenced using a MiSeq (Illumina, CA, USA) at the Purdue Genomics Core Facility. The genome was assembled from the raw data using the SPAdes software v3.0.0. The wild-type TV sequence was obtained from GenBank: EU391643.1. The genome sequence of this 9-6-17 TV will be deposited in GenBank.
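For illustration, a hypothetical Python sketch of the substitution-listing step (toy sequences stand in for the assembled genome and the EU391643 reference; in practice the sequences would first be aligned, e.g., loaded from FASTA with Biopython):

```python
# Hypothetical sketch, not the authors' pipeline: list nucleotide
# substitutions between two pre-aligned, equal-length sequences.
def list_substitutions(ref, qry):
    assert len(ref) == len(qry), "sequences must be pre-aligned"
    return [(i + 1, r, q)                        # 1-based genome position
            for i, (r, q) in enumerate(zip(ref.upper(), qry.upper()))
            if r != q and "-" not in (r, q)]

ref = "GTGATCGTAACTAGC"                          # placeholder wild-type stretch
qry = "GTGATCGTGACTCGC"                          # placeholder variant stretch
for pos, r, q in list_substitutions(ref, qry):
    print(f"{r}{pos}{q}")                        # e.g., A9G
```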
Saliva-Based ELISA for Measurement of HBGA Binding
The saliva samples were treated by boiling for 10 min prior to the assay to denature potential antibodies that might interfere with the assay. The expression levels of the A, B, H-type 1, H-type 2, and Lewis antigens in the saliva were determined previously using anti-H type 1 (BG-4), anti-Leb (BG-6), and anti-Ley (BG-8) MAbs (Signet Laboratories Inc., Dedham, MA, USA), and anti-H type 2 (BCR 9031), anti-A (BCR 9010), and anti-B (BCRM 11007) MAbs (Accurate Chemical and Scientific Corporation, Westbury, NY, USA). To test HBGA binding by the TV strains, saliva samples diluted 1:1000 with PBS were coated onto plates at 4 °C overnight and then incubated with serially diluted TV preparations. The salivary HBGA-bound TVs were detected with mouse anti-TV serum (1:3500) and then with HRP-conjugated goat anti-mouse IgG (1:5000). The color signal was developed with TMB (3,3′,5,5′-tetramethylbenzidine), and the OD was read at a wavelength of 450 nm.
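Dose-response data of this kind are often summarized with a four-parameter logistic fit; the following sketch is an assumed analysis with placeholder A450 values, not the procedure used in this study:

```python
# Assumed analysis sketch: four-parameter logistic (4PL) fit of A450
# versus virus concentration; the OD values below are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

dilution = np.array([40, 80, 160, 320, 640, 1280], float)  # 1:n dilutions
conc = 1e9 / dilution                                      # PFU/mL, titer 1e9
a450 = np.array([2.1, 1.8, 1.3, 0.8, 0.4, 0.2])            # placeholder ODs

params, _ = curve_fit(four_pl, conc, a450,
                      p0=[0.1, 2.0, float(np.median(conc)), 1.0], maxfev=10000)
print(dict(zip(["bottom", "top", "EC50", "hill"], params)))
```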
Image Processing
The movies were aligned with MotionCor2 v1.0.5 [12] and binned to 1.5×; the dataset with DTT treatment was binned to 1.6×. Subsequently, CTF determination of dose-weighted micrographs was performed with CTFFIND4 [13]. The particles were picked via cisTEM [14] and imported into cryoSPARC [15] for 2D classification, ab initio reconstruction and homogeneous refinement with icosahedral symmetry. The particle numbers retained in each step are listed in Table S1. The cryoSPARC homogeneous refined maps were further refined with icosahedral symmetry in JSPR [16,17]. The viral RNA genome exhibits notably strong intensity. To mitigate the influence of the RNA density on the FSC calculation, a circular mask with a diameter of 210 Å was used to exclude the inner RNA density. Subsequently, a larger circular mask, 420 Å in diameter, was applied to envelop the entire virus before employing the trueFSC.py program in JSPR. This program generates an adaptive mask based on the mass of the viral protein shell. The local resolutions were evaluated using RELION/3.0 [18-22]. The maps were sharpened with EMAN2 [23] e2proc3d.py or DeepEMhancer [24].
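To illustrate the masked-FSC idea in code (a simplified stand-in for trueFSC.py, not the program itself; the voxel radii assume a hypothetical 1.5 Å pixel size):

```python
# Simplified sketch: spherical masking and Fourier shell correlation (FSC)
# between two half-maps held as 3D NumPy arrays.
import numpy as np

def spherical_mask(shape, radius_vox):
    grids = np.indices(shape) - (np.array(shape)[:, None, None, None] - 1) / 2.0
    return np.sqrt((grids ** 2).sum(axis=0)) <= radius_vox

def fsc(map1, map2, n_shells=50):
    f1 = np.fft.fftshift(np.fft.fftn(map1))
    f2 = np.fft.fftshift(np.fft.fftn(map2))
    grids = np.indices(map1.shape) - (np.array(map1.shape)[:, None, None, None] - 1) / 2.0
    r = np.sqrt((grids ** 2).sum(axis=0))
    edges = np.linspace(0, r.max(), n_shells + 1)
    curve = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        s = (r >= lo) & (r < hi)
        num = (f1[s] * np.conj(f2[s])).sum()
        den = np.sqrt((np.abs(f1[s]) ** 2).sum() * (np.abs(f2[s]) ** 2).sum())
        curve.append((num / den).real if den > 0 else 0.0)
    return np.array(curve)

# Usage (assumed 1.5 Å/voxel): 210 Å and 420 Å diameters = radii 70 / 140 voxels.
# half1, half2 = ...  # load half-maps, e.g., with the mrcfile package
# annulus = spherical_mask(half1.shape, 140) & ~spherical_mask(half1.shape, 70)
# print(fsc(half1 * annulus, half2 * annulus))
```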
Model Refinement
The PDB model of the wild-type TV structure (PDB: 8VG6) was fitted into the cryo-EM maps of the 9-6-17 TV with Chimera [25]. The asymmetric unit was cut out from the entire virus map and refined with phenix.real_space_refine [26] and Rosetta [27] via the GNU parallel system [28]. The model with the best model-map FSC score was selected for further analysis. All models and maps were visualized in Coot [29]. The data collection and model statistics are shown in Table S1.
A New TV Variant Has Lost Its Ability to Bind to the Type B HBGA Receptor
In the process of TV culture, we obtained a new TV variant that has lost the ability to bind to its receptor, HBGA. We named this variant the 9-6-17 TV. We purified the 9-6-17 TV and performed ELISA with a type B saliva sample (OH68) and a type O saliva sample (OH7), representing B-type and O-type HBGA. It has been shown previously that Tulane virus adheres to type B saliva samples [30]. The type O saliva sample was used as a negative control, since type O has been reported to be the phenotype with which TV does not interact. The different types of saliva samples were first applied to the plate, and serially diluted virus samples were added subsequently to measure binding. The absorbance signal should correspond to the amount of virus that binds to the saliva sample on the plate. Figure 2 demonstrates distinct A450 absorbance patterns for the wild-type TV and the 9-6-17 TV when exposed to the type B saliva sample. The wild-type TV exhibits a robust A450 signal with a clear dose-response relationship, displaying a decrease in absorbance as the virus concentration decreases. Conversely, the 9-6-17 TV presents a notably weak absorbance signal, regardless of the concentration of the virus sample. These findings indicate a significant divergence between the wild-type TV and the 9-6-17 TV in their binding to the type B saliva sample.
Sequence Analysis of the 9-6-17 TV Strain
To investigate the genetic differences between this new TV strain and the wild-type TV, we performed whole-genome sequencing of the extracted viral RNA from the purified virions. Compared to the wild-type TV (GenBank: EU391643), the first eight nucleotides at the 5′ end of the 9-6-17 TV genome are missing, and 31 nucleotide mutations are identified in the 9-6-17 TV, about 0.45% of the whole genome length (Table 1). Among the eighteen amino acid mutations, eight are located in the major capsid protein VP1, which is responsible for receptor binding in the infection process. The eight amino acid mutations in VP1 are N3S, N284H, F334V, A335E, A343T, S367K, I451M and R452C. A multi-sequence alignment of the 9-6-17 TV strain and the wild-type TV capsid protein VP1 is shown in Figure 3.
We next assessed which of these mutations are adaptive rather than random. We examined the VP1 sequences of ten other Recovirus strains and compared the amino acid identity at each VP1 mutation site (Table S2). Among the eight mutation sites, only A343T is a conservative substitution: nine of the homologous proteins have alanine at position 343, while one has threonine. At position 3 of VP1, six homologs in the same genus have the same amino acid, serine, as in the 9-6-17 TV, instead of the asparagine found in the wild-type TV. This suggests that the N3S mutation in the 9-6-17 TV is probably evolutionarily favored over asparagine at position 3, since serine is more frequently found in other Recovirus strains. The same reasoning applies to N284H and F334V. The N284H mutation also has a structural basis, whereby His284 can form a π-π stacking interaction with the tryptophan at position 287, as shown in Figure S1.
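As a sketch of how such a conservation tally can be automated (hypothetical code, with a toy alignment standing in for the ten Recovirus VP1 sequences):

```python
# Hypothetical sketch of the Table S2 tally: count which residues the
# aligned homologs carry at each mutation-site column.
from collections import Counter

def conservation_at_sites(aligned_seqs, sites):
    """aligned_seqs: equal-length strings; sites: 1-based column numbers."""
    return {pos: Counter(s[pos - 1] for s in aligned_seqs) for pos in sites}

homologs = ["SNDAV", "SNDAV", "NNDTV"]   # toy alignment (placeholder)
sites = [1, 3, 4]                        # placeholder column numbers
for pos, counts in conservation_at_sites(homologs, sites).items():
    print(pos, counts.most_common())
```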
At position 335, all ten Recovirus sequences have aspartic acid, while the 9-6-17 TV has a glutamic acid replacing the alanine of the wild-type TV at that position. Position 335 is at the tip of the P domain, near the dimer interface but not directly involved in it. The side chain at position 335 protrudes outward and does not interact with neighboring residues; this residue is therefore very likely to be involved in receptor binding. Since both aspartic acid and glutamic acid are negatively charged, a negatively charged residue at this position appears to be essential.
Among the eight mutations, only S367K and the linked pair I451M and R452C are unique to our 9-6-17 TV strain; they are not observed in any other Recoviruses. However, the S367K mutation is less likely to be a random mutation, because it requires two nucleotide changes, from codon AGT to AAA. The probability of two such random mutations occurring together should be minimal, although we cannot rule out this possibility.
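The codon argument can be made concrete with a few lines of Python (the per-site substitution probability below is an assumed illustrative value, not an estimate from our data):

```python
# S367K requires two of three codon positions to change (AGT -> AAA), so
# under a naive independent-mutation model it is ~1/p times less likely
# than a single-nucleotide substitution.
wild, variant = "AGT", "AAA"
changes = sum(a != b for a, b in zip(wild, variant))
print(f"{wild} -> {variant}: {changes} nucleotide changes")   # prints 2

p = 1e-4   # assumed per-site substitution probability (placeholder)
print(f"naive odds ratio single:double = {p / p**changes:.0f}:1")
```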
Structure Determination of the 9-6-17 TV
We have previously determined the structure of wild-type TV (PDB: 8VG6) to a 2.6 Å resolution using antibody-based affinity cryo-EM [10]. To investigate the structural variation in the 9-6-17 TV, we collected three cryo-EM datasets of the purified virus. The first dataset has inverted image contrast, whereby the virus particles are brighter than the background (Figure 4A). We reasoned that this was likely caused by residual iodixanol (OptiPrep) in the buffer that had not been fully dialyzed away. Iodixanol is the density gradient medium used in the final purification step. It has several iodine atoms in its structure (Figure 4B) that scatter electrons more strongly than the virus particles; the residual iodixanol in the buffer therefore created a darker background in the cryo-EM image. Despite the inverted contrast, the reconstruction was performed as normal, and the map contrast can be easily inverted by multiplying the map voxel values by −1 (Figure 4C). In the end, a 3.2 Å resolution map (Figure 4D) was generated and the atomic model was derived from it.
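The contrast inversion itself is a one-line operation on the map voxels; a minimal sketch using the third-party mrcfile package, with hypothetical file names:

```python
# Minimal sketch: invert cryo-EM map contrast by negating voxel values.
# File names are hypothetical placeholders.
import mrcfile

with mrcfile.open("tv_9617_map.mrc", permissive=True) as f:
    data = f.data.copy()

with mrcfile.new("tv_9617_map_inverted.mrc", overwrite=True) as f:
    f.set_data(-1.0 * data)   # multiply by -1 to flip the contrast
```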
Disulfide Bond from the R452C Mutation Stabilizes Dimer Interaction
When we examined the density map of the 9-6-17 TV, we found a density bridge connecting the two subunits of the A/B and C/C dimers at the position of Cys452 (Figure 5A). This density is consistent with a Cys452-Cys452 disulfide bond within all the dimers. To verify the dimerization of VP1 bridged by the disulfide bond, we performed SDS-PAGE with and without reducing reagents (DTT and beta-ME) in the loading buffer. A band at around 120 kDa, the mass of a VP1 dimer, was observed in the sample without DTT but not in the sample with DTT (Figure 5B). This indicates that the Cys452-Cys452 disulfide bond covalently links the A/B or C/C subunits in a dimer. Research has shown that a Cd2+ metal ion is bound between the P dimers via a conserved residue, His460, to stabilize the P dimer conformation in human norovirus GII.4 [35]. The His460-Cd2+-His460 interaction is in exactly the same position as the disulfide bond in TV, as shown in Figure 5C. This indicates an evolutionary preference for stabilizing P dimers at this central position in Caliciviruses. Two additional datasets were then collected to further explore the effects of the Cys452-Cys452 disulfide bond on the virus structure. To reduce this disulfide bond, we treated one sample with DTT; the other, without DTT, served as the control. These two datasets used the same batch of viruses and were collected in the same session, and the OptiPrep was largely dialyzed away for this sample. Both datasets show normal image contrast and 2D class averages (Figure 6A-C). The final reconstructed maps have an average local resolution of around 2.8 Å for the shell and only slightly lower resolution for the spikes. Our primary focus lies in the identification of mutation sites. In Figure 8, we labeled all the mutated residues within the refined 9-6-17 TV model. Notably, four of these mutations are situated on the top surface of the P2 domain, which serves as the interaction surface with host cell receptors. Additionally, two mutations (I451M and R452C) are positioned at the interface of the P dimer, contributing to its stabilization through the formation of a disulfide bond.
Elongated Extra Density in the Hydrophobic Pocket in the P Dimer
An elongated extra density (Figure 9A,B) was found at the interface of the two subunits in both the A/B dimers and the C/C dimers, near residues P425, V341 and M352. It has a volume of ~94 Å³ and a surface area of ~220 Å². The pocket around it is mostly composed of hydrophobic residues (F329, V341, I380, M382, P425), with only one charged residue, D342, and one polar residue, T432. Picornaviruses are known to have a pocket factor in the five-fold canyon that stabilizes the virus capsid; once the virus binds to its receptor, the pocket factor is dislodged, which induces genome release [36]. However, a pocket factor has not been reported in Caliciviruses. The pocket factor of human enterovirus 71 (EV71) was modeled as lauric acid. We tried to fit unmodified lauric acid (C12H24O2) into the extra density (Figure 9C,D). It matches the length of the extra density, but the extra density appears to have C2 symmetry, while lauric acid does not. Since this extra density exists in both the A/B (C2 symmetry not imposed) and C/C (C2 symmetry imposed) dimers, its C2 symmetry reflects the true shape of an unknown molecule rather than an artifact of image processing.
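One way to quantify the apparent C2 symmetry is to rotate the extracted blob by 180 degrees about the dimer axis and correlate it with itself; the following is an assumed analysis sketch, not part of our processing pipeline:

```python
# Assumed sketch: correlation of a density blob with its 180-degree rotation.
# Values near 1 indicate C2 symmetry about the chosen axis.
import numpy as np
from scipy.ndimage import rotate

def c2_correlation(blob, axes=(0, 1)):
    flipped = rotate(blob, 180.0, axes=axes, reshape=False)
    a, b = blob.ravel(), flipped.ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

blob = np.random.rand(32, 32, 32)   # placeholder for the extracted density
print(c2_correlation(blob))         # ~0 for random data, ~1 for C2 symmetry
```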
Discussion
Histo-blood group antigens (HBGAs) are intricate terminal carbohydrates found on cellular surfaces and are also secreted into various bodily fluids, including saliva and intestinal secretions. The synthesis of these carbohydrates originates from disaccharide precursors, which undergo a stepwise addition of monosaccharides through the activity of specific glycosyltransferases at defined locations, ultimately determining the distinct types of HBGAs. They are widely distributed on red blood cells and mucosal epithelial cells, playing a crucial role in human susceptibility to various virus infections, including HuNoVs and rotavirus. TV, a non-pathogenic surrogate for HuNoVs, has also been reported to utilize HBGAs as receptors.

Here, we introduce the novel 9-6-17 TV variant, showcasing distinct alterations from the wild-type TV. Previous studies demonstrated TV's recognition of B-type HBGAs and specific groups of A-type HBGAs. However, our findings reveal a remarkable loss of binding ability in the 9-6-17 TV variant toward its original receptor, B-type HBGA, resembling observations in certain norovirus strains that do not bind to HBGAs [37].

Intriguingly, we observed additional density in the hydrophobic pocket between the P dimers, at exactly the same location where murine norovirus binds the bile acids GCDCA (glycochenodeoxycholic acid) and LCA (lithocholic acid) [38]. While bile acids act as a cofactor for murine norovirus infection and are essential for human norovirus infection [3], TV, in contrast, does not require bile acid supplementation. This raises the prospect of an unidentified ligand in the hydrophobic pocket, warranting further exploration in future studies.

These outcomes suggest potential alternative receptors for both TV and HuNoVs. Despite strong evidence linking HBGAs to human norovirus infection, reports hint at the existence of other co-factors or receptors, at least for specific strains. Our study underscores the importance of specific amino acid residues in HBGA recognition, offering insights for developing broadly neutralizing antibodies or antiviral compounds targeting these interaction sites. Understanding the diversity of potential receptors used by Caliciviruses can inform the design of multivalent vaccines, providing broader protection against various viral strains and genotypes.

While our study presents a detailed analysis of the 9-6-17 TV variant, it highlights the need for further investigations. Future studies are needed to identify the ligand in the hydrophobic pocket and confirm its functional significance. Additionally, higher-resolution structural analysis without icosahedral symmetry could unveil the finer details of the ligand density.

In sum, our study elucidates the pivotal role of receptor adaptation in TV by presenting a novel TV variant that lacks binding to its receptor while remaining infectious, and by identifying critical residues responsible for the loss of receptor binding. These findings pave the way for ongoing research into alternative receptors and co-factors of human norovirus, contributing to the development of effective preventative and therapeutic strategies against it.
Figure 1. (A) Genome organization of Tulane virus. The arrows indicate the putative protease digestion sites. (B) Structure comparison of capsid protein VP1 of Tulane virus (PDB: 8VG6), human norovirus VLP (unpublished) and Norwalk virus (PDB: 1IHM). The P1, P2 and S domains of the TV VP1 single subunit are colored brown, cyan and coral, respectively.
Figure 2. Saliva-based ELISA for measurement of HBGA binding of TV and the growth curve of the 9-6-17 TV variant. One type B saliva sample (OH68) and one type O saliva sample as a negative control were used for testing the change in TV binding to the blood type B antigen. The titer of the 9-6-17 TV was 10⁹ PFU/mL. The initial concentration in the ELISA, starting at 1:40, was 2.5 × 10⁷ PFU/mL. The wild-type TV sample was from the PBS-dialyzed pool of wild-type TV peak fractions (F7 and F8) after CsCl density gradient centrifugation. The wild-type TV sample was prepared in 2016 and the virus titer was estimated to be 3 × 10⁸ PFU/mL based on the plaque assay. The virus inoculum was generated in 2015. The initial concentration in the ELISA, starting at 1:20, was 1.5 × 10⁷ PFU/mL.
Figure 3. Sequence alignment of VP1 of the wild-type TV and the 9-6-17 TV strains. The wild-type TV sequence was obtained from GenBank under accession number EU391643. The GenBank accession number of 9-6-17 TV is PP098449. The sequence alignment was performed using Clustal Omega [31-33] and displayed using ESPript 3.0 [34]. Identical residues are marked in red boxes. Residues with high similarity are highlighted with a yellow background, while those with low similarity are colored red.
Figure 4. The first dataset of 9-6-17 TV. (A) The representative micrograph shows the opposite contrast due to the iodixanol in the background. The scale bar is 200 nm. (B) The chemical structure of iodixanol (OptiPrep). (C) The central section of the reconstructed map before contrast inversion. (D) The FSC curve of the final reconstruction. The red dotted line is where the FSC equals 0.143.
Figure 5. The disulfide bond formed by Cys452-Cys452. (A) The density of the disulfide bond of Cys452 at the dimer interface. The contour level of the map for TV without DTT in ChimeraX is set to 2. The sulfur atoms are depicted in yellow, the electron density map is represented by a blue mesh, and the TV model is presented in purple. (B) The SDS-PAGE result of 9-6-17 TV with or without DTT treatment. Without reducing agents, the 9-6-17 TV shows a dimer band at around the 120 kDa position, while the sample with reducing agents does not. (C) Superimposition of the norovirus VLP (PDB: 7MRY) VP1 dimer (cyan) onto the TV VP1 dimer (purple). Note that the position of the Cys452-Cys452 disulfide bond aligns precisely with the location where His460 and Leu459 interact with the Cd2+ ion in the human norovirus VLP structure. The flattened circles represent the Cys452-Cys452 disulfide bond. The cyan residues belong to norovirus VP1.
Figure 7. The 9-6-17 TV datasets with and without DTT treatment. (A) The gold-standard Fourier shell correlation (FSC) curves of these two datasets. (B) The cross-section of the reconstructed map with DTT treatment shows the estimated local resolution. (C) The ribbon diagram of the full capsid shows the T = 3 icosahedral organization with subunit A (blue), subunit B (red) and subunit C (yellow). (D) Two segments, a.a. 43-44 and a.a. 189-191, were selected to present the quality of the electron density map. The electron density map is overlaid on the final refined structure of subunit A. The contour level of the DeepEMhancer-sharpened density map is 0.138.
Figure 8. The atomic model from the 9-6-17 TV DTT-treated map. Subunit A is colored blue, while subunit B is colored red. A refined model of the A/B dimer with seven of the eight mutation sites (all except N3S) is displayed and labeled. The insert on the right shows the 90-degree tilted view of the top of the P domain containing five mutation sites. The bottom insert shows a close-up view of the two mutations at the dimer interface.
Figure 9. Extra density in the hydrophobic pocket of the two dimers shown in top view (A) and side view (B). Lauric acid is fitted into the extra density: top view (C), side view (D).
"Biology",
"Medicine"
] |
Ribosomal small subunit domains radiate from a central core
The domain architecture of a large RNA can help explain and/or predict folding, function, biogenesis and evolution. We offer a formal and general definition of an RNA domain and use that definition to experimentally characterize the rRNA of the ribosomal small subunit. Here the rRNA comprising a domain is compact, with a self-contained system of molecular interactions. A given rRNA helix or stem-loop must be allocated uniquely to a single domain. Local changes such as mutations can give domain-wide effects. Helices within a domain have interdependent orientations, stabilities and interactions. With these criteria we identify a core domain (domain A) of small subunit rRNA. Domain A acts as a hub, linking the four peripheral domains and imposing orientational and positional restraints on the other domains. Experimental characterization of isolated domain A, and mutations and truncations of it, by methods including selective 2′OH acylation analyzed by primer extension and circular dichroism spectroscopy are consistent with our architectural model. The results support the utility of the concept of an RNA domain. Domain A, which exhibits structural similarity to tRNA, appears to be an essential core of the small ribosomal subunit.
Earlier architectural schemes allocated some helices to multiple domains 8,15,17. In our revised model, with domain A as a nexus, each helix is allocated to a single domain. Here we experimentally test predictions of this domain model.
This domain model has utility and explains some of the dynamical properties of the SSU. The spokes are relatively flexible, allowing the domains to move relative to each other during initiation and translocation 18,19. Helix 3 is the spoke linking domain A to the 5′ domain, helix 19 is the spoke linking the central domain, and helix 28 is the spoke linking the 3′ major domain. The 3′ end of domain A is the spoke linking the 3′ minor domain (Figs 1-3). Domain A incorporates the central pseudoknot (CPK) 20-23 and consists of helices 1, 2, 3, 19, 27, and 28 (Fig. 3).
Domain A imposes orientational and positional restraints on the other domains, which are depicted by arcs in Fig. 2. Helices 3 and 19 of domain A form one arc, and helices 27 and 28 form another. These two orthogonal arcs intersect within the central pseudoknot (Fig. 2b). The intersecting arcs position the four peripheral domains. Nucleotides at the 5′ end of the SSU rRNA (nucleotides 9 to 13) interact with both arcs and stabilize their relative orientation. The molecular interactions that stabilize the intersecting arcs relative to each other are illustrated in Fig. 2c,d. Universally conserved nucleotides are shown in Supplementary Fig. 6. Small changes in domain A are propagated into larger motions of the peripheral domains during translocation 19.
One goal here is to test this domain model. Therefore we isolated domain A from the rest of the SSU rRNA. We refer to isolated domain A as "domain A ISO" (Fig. 3). To form domain A ISO as a single RNA polymer, we linked the rRNA fragments together with three stem-loops (rGGCGUAAGCC) within helices 3, 19, and 28 (Fig. 3). The stem-loops replace the connections between domain A and the four peripheral domains. They are intended to render domain A independent of the surrounding RNA without influencing its structure, especially its tertiary structure. We characterized domain A ISO, and mutations and truncations of it, by methods including selective 2′-OH acylation analyzed by primer extension (SHAPE) and circular dichroism (CD) spectroscopy. In addition, we observe that the three-dimensional structure of domain A has analogy in other biological RNAs.
Results
Folding of domain A ISO. Domain A appears to satisfy the criteria of an RNA domain. Domain A ISO is characterized here by SHAPE reactivity and CD spectroscopy. We determine the effects of mutations and of added Mg2+. We compare the SHAPE reactivity of domain A ISO with that of the same rRNA elements within the intact SSU, previously published by Weeks and coworkers 24.

Figure 1. (a) Secondary structure of the T. thermophilus SSU rRNA colored by domains. Domain A is black, the 5′ domain is yellow, the central domain is red, the 3′ major domain is blue, and the 3′ minor domain is green. (b) Three-dimensional structure of the SSU rRNA (PDB ID 4V51) 6. The rRNA is represented as a ribbon, except for domain A, which is in space-filling representation. The domains in the three-dimensional representation are colored by the same scheme as in the secondary structure.
Three-dimensional and secondary structures can be probed with SHAPE. Paired nucleotides, in double-stranded regions, are less reactive to the SHAPE reagent than unpaired nucleotides in loops, bulges and single strands 25 . Nucleotides involved in tertiary and Mg 2+ interactions change reactivity upon the addition of Mg 2+ 26-29 . The data suggest that in the presence of Na + alone, domain A ISO forms helices 1, 2, 3, 19, 27 and 28 (Fig. 4a). For helices 1, 3 and 19, the duplex regions are unreactive and the loop regions are reactive. High reactivity of nucleotide C31 suggests a defect near the loop of helix 3. Helix 27 shows the same anomalous pattern of reactivity in domain A ISO as in the intact SSU ( Supplementary Fig. 8).
Helices 2 and 28 are anomalously reactive in domain A ISO, consistent with their anomalous reactivity in the intact SSU 24,30. Nucleotides involved in base triples in the intact SSU (G9, U20, and G22) show suppressed reactivity in domain A ISO. The 5′ terminus of domain A ISO (which is also the terminus of the SSU rRNA) shows elevated SHAPE reactivity, as expected of unstructured RNA. Similarly, the single-stranded nucleotides between stems 3 and 19 (A45, U46, U47) have higher reactivity than the flanking stems.
Mg2+ ions appear to stabilize domain A ISO and facilitate folding to the native state. Monovalent cations generally allow RNAs to form secondary structures and a subset of tertiary interactions; divalent cations are required for complete folding to the native state 31,32. Here we used CD spectroscopy along with SHAPE to characterize the effects of divalent cations (Fig. 5). The addition of Mg2+ to domain A ISO increases the intensity of the diagnostic CD band at 265 nm. The intensity increases over the range of [Mg2+] from 0 to 700 μM, after which it plateaus. These Mg2+ effects on domain A ISO are similar to those of well-characterized globular RNAs such as tRNA 33 and P4-P6 of the Tetrahymena group I ribozyme 34.
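Such a titration curve is commonly summarized by fitting a Hill-type binding isotherm to the 265 nm intensity; the sketch below uses placeholder values, not our measured data:

```python
# Assumed analysis sketch: Hill isotherm fit of CD intensity at 265 nm
# versus [Mg2+]; the data points are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def hill(mg, cd0, dcd, kd, n):
    return cd0 + dcd * mg**n / (kd**n + mg**n)

mg_uM = np.array([0, 50, 100, 200, 400, 700, 1000, 2000], float)
cd265 = np.array([1.0, 1.3, 1.6, 2.0, 2.5, 2.8, 2.85, 2.9])  # placeholder

popt, _ = curve_fit(hill, mg_uM, cd265, p0=[1.0, 2.0, 300.0, 1.5], maxfev=10000)
print(f"midpoint ~ {popt[2]:.0f} uM, Hill n ~ {popt[3]:.2f}")
```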
The CD results are consistent with the SHAPE reactivities. Mg2+ has subtle but widely distributed effects on the SHAPE reactivity of domain A ISO. Mg2+ is expected to influence the SHAPE reactivities of nucleotides that directly contact Mg2+ or are involved in Mg2+-dependent tertiary interactions. This pattern of Mg2+-dependent SHAPE reactivity has previously been observed for tRNA, RNase P, the P4-P6 domain of the Tetrahymena group I intron and domain III of the 23S rRNA 27-29,35,36. Upon the addition of Mg2+, nucleotides in domain A ISO show slight overall decreases in SHAPE reactivity, while some loop regions and bulges show increases (Fig. 4b and Supplementary Fig. 1). The reactivities of nucleotides A16 and C31 drop upon addition of Mg2+, suggesting that correct folding of helix 3 requires Mg2+. Based on the intact SSU, A16 is expected to interact directly with a Mg2+ ion in the native structure 6. Indeed, A16 shows the greatest change in SHAPE reactivity of any site in domain A ISO upon addition of Mg2+.
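The per-nucleotide comparison underlying this description amounts to a simple difference map; a minimal sketch with placeholder reactivities (the classification threshold is an assumption):

```python
# Sketch mirroring the Fig. 4b coloring: classify each nucleotide by the
# change in SHAPE reactivity upon Mg2+ addition.
import numpy as np

def classify_delta(react_na, react_mg, threshold=0.2):
    delta = np.asarray(react_mg) - np.asarray(react_na)
    labels = np.where(delta > threshold, "increase",
             np.where(delta < -threshold, "decrease", "no change"))
    return delta, labels

react_na = [0.9, 0.2, 0.15, 1.4]   # placeholder normalized reactivities (Na+)
react_mg = [0.4, 0.25, 0.1, 1.5]   # placeholder reactivities (Na+ and Mg2+)
for i, (d, lab) in enumerate(zip(*classify_delta(react_na, react_mg)), 1):
    print(f"nt {i}: delta = {d:+.2f} ({lab})")
```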
Helix 28 is an essential component of domain A.
The CPK 17 contains helices 1 and 2 ( Fig. 3). We anticipated that the structure and stability of the CPK, and of domain A ISO , should be dependent on helix 28, because it forms a continuous stack with helix 2 in the intact SSU ( Supplementary Fig. 4) and in our model of domain A ISO . If our model is correct, then helix 28 contributes globally to the stability of domain A ISO . Therefore, we have determined the effect of excision of helix 28 from domain A ISO .
Furthermore, it appears that base pairing is precluded between U14 and A16 in both the intact SSU 6,17 and in domain A ISO . These nucleotides are in a loop region in the native structure, and show a higher SHAPE reactivity than other sites in the CPK (Fig. 4a). However, when helix 28 is omitted from domain A ISO , U14 and A16 decrease in reactivity (Fig. 4d), suggesting non-native pairing interactions.
Domain-wide effects from the omission of helix 28 from domain A ISO are revealed by CD spectroscopy. Changes in the CD spectrum of domain A ISO upon addition of Mg 2+ are diminished by excision of helix 28. Figure 5a demonstrates that changes in CD spectra after addition of Mg 2+ are lessened by approximately 50% for domain A ISO lacking helix 28 compared to intact domain A ISO . The diagnostic 265 nm peak does not reach full intensity in the absence of helix 28 (Fig. 5, Supplementary Figs. 5 and 9). The combined SHAPE and CD data suggest that formation of the native folded state of domain A ISO is dependent on helix 28, supporting our domain model.
A single mutation of the central pseudoknot impacts the entire domain. Pleij 22 and Brink 23 demonstrated that a C18A mutation within the CPK inhibits translation by affecting subunit assembly. This mutation is expected to disrupt the C18-G102 base pair. We mutated C18 to A in domain A ISO. This mutation causes domain-wide effects on the structure. The C18A mutation lowers the general SHAPE reactivity of domain A ISO and causes specific changes in helix 2 (U20), helix 19 (C65), helix 27 (U90) and helix 28 (G107, A108) (Fig. 4c and Supplementary Fig. 3). In addition, the unusually high SHAPE reactivities of helix 27 are also affected (Supplementary Fig. 8).

Figure 4 caption (fragment): Green indicates no change; the coloring scheme is shown in the outbox. SHAPE data for mutant and truncated domain A ISO were acquired in the presence of both Na+ and Mg2+. Data were not obtained for the uncolored nucleotides. The primer-binding tail is omitted for clarity. The full sequence of the construct is shown in the Supplementary Information (Supplementary Fig. 7).
The C18A mutation affects the CD spectra of domain A ISO. The C18A mutation, like helix 28 excision, lessens the effect of Mg2+ on the intensity of the 265 nm band by 50% (Fig. 5). These results indicate that domain-wide effects can be incurred by changes in sequence even when the number of mutated nucleotides is small (the C18A mutation changes one nucleotide, while the helix 28 truncation removes ~30 nucleotides relative to intact domain A ISO). In sum, the data appear to support our domain model of the SSU rRNA.
The structure of domain A is conserved in all ribosomes. We superimposed SSU rRNAs from the bacterial and eukaryotic domains of life, including T. thermophilus, E. coli, S. cerevisiae, D. melanogaster, and H. sapiens (Fig. 6). The superimposed structures align closely (Table 1), consistent with a high degree of conservation of conformation. The greatest deviations are seen in the 5′ terminal region, which is single-stranded (Fig. 6). In addition, we have aligned sequences from 134 species from all three domains of life and calculated mutational Shannon entropies. For most of domain A, the sequences are universally conserved, with very low Shannon entropies. The sequences are most divergent in helix 3 and in the 5′ single-stranded end (Supplementary Fig. 6).
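Column-wise Shannon entropy over a multiple sequence alignment can be computed in a few lines; a toy sketch (the three-sequence alignment is a placeholder for the 134-species alignment):

```python
# Sketch: per-column Shannon entropy (in bits) of an RNA alignment.
import math
from collections import Counter

def column_entropies(aligned_seqs):
    """aligned_seqs: equal-length sequences; gaps ('-') are ignored."""
    out = []
    for j in range(len(aligned_seqs[0])):
        counts = Counter(s[j] for s in aligned_seqs if s[j] != "-")
        total = sum(counts.values())
        out.append(-sum((c / total) * math.log2(c / total)
                        for c in counts.values()))
    return out

seqs = ["GGCGU", "GGCGU", "GGAGU"]   # toy alignment (placeholder)
print(column_entropies(seqs))        # 0 for perfectly conserved columns
```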
Discussion
The SSU is a central assembly of all cellular life. The architecture of the SSU has implications for ribosomal function and evolution. Here, we use high-resolution structural information to propose a domain architecture of the SSU rRNA, and have constructed an experimental system to test predictions of the domain model.
We propose an SSU architecture in which four peripheral rRNA domains radiate from a central core, here called domain A (Fig. 1). The SSU is dendritic in structure, in contrast to the monolithic LSU. Domain A is an autonomous core at the structural and functional center of the SSU. Domain A, which includes the CPK, is a hub that connects to the peripheral SSU domains by helical spokes.
To help determine whether domain A meets the formal criteria of a domain, we evaluated domain A ISO, an experimental model of domain A. We investigated the Mg2+-dependence of the SHAPE reactivity and CD spectra of domain A ISO and several informative sequence variants. SHAPE and CD experiments suggest compact tertiary folding of the domain A ISO rRNA to a near-native state in the presence of Mg2+ ions. A C18A mutation or excision of helix 28 causes domain-wide effects. The results of the experiments described here support the integrity of domain A and our domain architecture of the SSU rRNA. The CPK is crucial for biogenesis of the SSU, for stability of the assembled subunits, and for initiation of translation 20-23.
Domain A exhibits certain similarities in structure with tRNA (Fig. 7 and Supplementary Video 1). Similarities in structure to tRNA have previously been observed in select elongation factors, viral RNAs and bacterial non-coding RNAs 39-43. The similarity of domain A with tRNA lies in the arrangement and local conformations of helices 1, 2, and 27. Helices 1 and 2 are coaxial and are at right angles to helix 27, giving an L-shaped structure. Helix 27 of domain A is a close approximation of the anticodon stem-loop. In this region domain A is very similar to valine tRNA, with correct positioning of the CAA anticodon. However, a significant difference between tRNA and domain A is seen when helix 27 is superimposed on the anticodon stem-loop: helices 1 and 2 are offset relative to the acceptor and T-stems of tRNA. Ramakrishnan previously noted a similar structural correspondence between the anticodon loop of tRNA and helix 6 of the SSU rRNA 44. The 5′ end of the SSU rRNA is a rough approximation of the tRNA amino acid acceptor stem, which is formed by the 3′ end of the tRNA. The relevant nucleotides of the SSU rRNA are universally conserved (Supplementary Fig. 6) and are involved in intersubunit bridge B2c via A-minor interactions 45,46. Where the CCA amino acid acceptor end of the tRNA comprises a 3′ terminus, the corresponding region of the domain A core rRNA contains a 5′ terminus. In sum, we propose a predictive model of SSU architecture by defining domain A as a hub connecting to the peripheral domains. We show that the domain concept is applicable and useful for understanding the SSU. Domain A plays a crucial role in SSU structure and function, forming a scaffold that links to each of the other SSU domains, and is an evolutionary ancestor of the SSU rRNA (9). Our results support and explain previous in vivo and in vitro observations on inhibition of protein synthesis by mutations in the CPK 22,23. It has been shown that the CPK helps direct biogenesis, folding and function of the SSU. Time-resolved hydroxyl radical footprinting shows that folding of the CPK occurs very early in subunit synthesis (10⁻⁴ to 10⁻² s⁻¹) 47. Our scheme explains these results in the context of domain A, which includes the CPK. Defects in domain A impact subunit association and ultimately inhibit translation. Our results explain, on a molecular level, the effects of these mutations, which cause domain-wide changes in domain A folding as revealed by CD and SHAPE. Slight orientational alterations in helices 27-28 and 3-19 (which form intersecting orthogonal arcs) affect the overall structure, stability and dynamics of the SSU. Therefore, domain A is a central player in the protein synthesis machinery in all kingdoms of life.
Methods
Chemical reagents and synthetic oligonucleotides. The chemical reagents used here are molecular biology grade or higher. DNA primers and oligonucleotides were purchased from Operon MWG. All aqueous solutions were prepared with deionized, distilled, nuclease-free water (HyClone, Thermo Scientific). For the experiments in the absence of divalent cations, nuclease-free water was treated with Chelex 100 chelating resin (Bio-Rad) and recovered with 0.2 μm Ultrafree-MC-GV centrifugal filters (Millipore). All experiments are reproducible and were repeated at least two times unless otherwise stated. The domain A ISO gene was cloned into the pUC19 vector using the EcoRI and HindIII restriction sites. The transformation used 5 μL of the ligation mix, which was added to 50 μL of DH5α cells using the heat-shock method. Plasmids obtained by minipreps were sequenced bidirectionally by Operon MWG.
Construction of the transcription vector for domain
Helix 28 was added with Q5 site-directed mutagenesis (NEB) using forward AAGCTTGGCGTAATCATGG and reverse TGTACAAGGGCCTTACGG primers. The C18A mutant was also made by Q5 site-directed mutagenesis, using forward AGAGTTTGATACTGGCTCAGG and reverse CCAACAACCCTATAGTGAG primers. For SHAPE experiments, a primer binding tail was added to the 3′ end by PCR using the reverse primer C AC CAAGCTTGAACCGGACCGAAGCCCGATTTGTGTACAAGGGCCTTACGGCCCCCCGTCAATTCC TTTGAGTTTCAGCCTTGC (5′ to 3′ ). The secondary structure of domain A ISO with the SHAPE tail is shown in the Supplementary Fig. 7.
Transcription and purification of the domain A ISO rRNA. The pUC19 plasmid containing the domain A ISO gene was digested with HindIII-HF (NEB) for 2 hours at 37 °C as described by the manufacturer. The reaction mixture was incubated at 80 °C for 20 minutes to deactivate the enzyme. The reaction was purified with SmartSpin nucleic acid purification columns (Denville Scientific Inc.) using DNA Clean & Concentrator Kit buffers (Zymo Research Corp.). Digested plasmid (400-1,000 ng) was used as a template for T7 RNA polymerase (NEB) transcription. The run-off transcription reaction was prepared according to the manufacturer's description (NEB T7 High Yield RNA Synthesis Kit). The reaction mixture was incubated for 16 hours at 37 °C. After incubation, 1 μL Turbo DNase (Ambion) was added to the reaction mixture, which was then incubated for 30 minutes at 37 °C. RNA was purified by ammonium acetate precipitation. Ultimately, 40 μL nuclease-free H2O was added to the dried pellet and the OD was measured with a Nanodrop (Thermo Scientific). RNA was further purified by G25 size-exclusion chromatography (illustra NAP-10, GE Healthcare).

RNA folding and SHAPE modification. RNA was folded in buffer containing sodium and MgCl2. The sample was folded by incubating for 20 min at 37 °C and was divided into two 18 μL solutions. To one solution, 2 μL of 800 mM benzoyl cyanide in anhydrous DMSO was added; to the other, 2 μL of pure DMSO was added as a negative background control. The reaction mixture was incubated for 2 min at room temperature. The modified RNA was purified using the Zymo RNA Clean and Concentrator Kit and eluted in 25 μL modified TE buffer (10 mM Tris, 0.1 mM EDTA). Primer annealing and extension reactions were as described above.
For capillary electrophoresis, 1.5 μL of the reverse transcription reaction mixture was mixed with 0.5 μL ROX-labeled DNA sizing ladder and 9 μL of HiDi Formamide (Applied Biosystems) in a 96-well plate. To denature the cDNA, the plate was incubated for 5 min at 95 °C. The mixture was resolved on a 3130 Genetic Analyzer (Applied Biosystems). Capillary electrophoresis data were processed using in-house MatLab scripts as described 28. First, data were aligned via standard peaks and the baseline was subtracted. Sequencing peaks were matched with SHAPE data peaks. The traces were integrated, processed with a signal decay correction, and then scaled and normalized.

Circular dichroism spectroscopy. A solution of 25 ng/mL RNA in 5 mM sodium cacodylate, pH 6.8, was titrated with either EDTA or Mg2+. The RNA was titrated first with the chelator, followed by back-titration with Mg2+, taking CD scans on a Jasco J-810 spectropolarimeter after each addition. Four CD spectra were collected and averaged, from 350 to 220 nm, with an integration time of 4 seconds, a bandwidth of 4 nm and a scan speed of 50 nm/min. The temperature was kept at 20 °C. RNA concentrations were kept constant for mutant and intact RNAs.

3D modeling and minimization. The domain A three-dimensional structure was modeled from the ribosome structure of Ramakrishnan (PDB ID: 4V51) 6. Nucleotides 1-29, 554-569, 881-929, and 1388-1396 were extracted from the crystal structure and capped by stem-loops containing three base pairs and a tetraloop, with the sequence GCCGUAAGGC. The 3D coordinates of the stem-loop were obtained from Hsiao et al. 29. The stem-loops were positioned as extensions of the domain A helices and connected to them by adding O3′-P bonds. The stem-loops, along with their two adjacent base pairs from domain A, were subjected to partial energy minimization while the rest of the structure was held fixed.

Energy minimization. Partial minimization of the re-ligated rRNAs was performed with Sybyl-X 1.2 software (Tripos International, St. Louis, MO, USA) with the AMBER FF99 force field, using an implicit solvent model with the distance-dependent dielectric function D(r) = 20r. The non-bonded cut-off distance was set to 12 Å. Each system was minimized by 1,000 steps of steepest descent followed by 5,000 steps of conjugate gradient minimization.
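The capillary electrophoresis processing pipeline described above (alignment, baseline subtraction, decay correction, normalization) was implemented in the original MatLab scripts 28. The sketch below is a minimal Python illustration of the same sequence of steps, not a translation of those scripts; the window size and decay constant are placeholder values.

```python
import numpy as np

def process_shape_trace(trace, baseline_window=50, decay_tau=2000.0):
    """Illustrative processing of a capillary electrophoresis trace:
    baseline subtraction, signal decay correction and 0-1 normalization.
    Window size and decay constant are placeholder assumptions."""
    trace = np.asarray(trace, dtype=float)
    # Baseline: rolling minimum over a local window, subtracted from the signal
    baseline = np.array([trace[max(0, i - baseline_window):i + baseline_window].min()
                         for i in range(len(trace))])
    corrected = trace - baseline
    # Signal decay correction: rescale by an assumed exponential decay profile
    corrected *= np.exp(np.arange(len(trace)) / decay_tau)
    # Scale so that reactivities fall on a comparable 0-1 range
    return corrected / corrected.max()

reactivity = process_shape_trace(np.random.rand(5000))  # placeholder data
```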
Figures and images. Figures of three-dimensional structures were prepared with PyMOL or Maxon Cinema 4D with the ePMV plugin 50. Secondary structures were obtained from the in-house RiboVision server 48. Labels were added in Adobe Illustrator or Adobe Photoshop.
"Biology"
] |
Speech Emotion Recognition Using Unsupervised Feature Selection Algorithms
The use of a combination of different speech features is a common practice to improve the accuracy of Speech Emotion Recognition (SER). Sometimes this leads to an abrupt increase in processing time, and some of these features contribute little to emotion recognition, often resulting in incorrect predictions that substantially decrease the accuracy of the SER system. Hence, there is a need to select an appropriate feature set that can contribute significantly to emotion recognition. This paper presents the use of the Feature Selection with Adaptive Structure Learning (FSASL) and Unsupervised Feature Selection with Ordinal Locality (UFSOL) algorithms for feature dimension reduction to improve SER performance with a reduced feature dimension. A novel Subset Feature Selection (SuFS) algorithm is proposed to further reduce the feature dimension and achieve comparable or better accuracy when used along with the FSASL and UFSOL algorithms. The 1582 INTERSPEECH 2010 Paralinguistic features, 20 Gammatone Cepstral Coefficients and a Support Vector Machine classifier with 10-Fold Cross-Validation and Hold-Out Validation are considered in this work. The EMO-DB and IEMOCAP databases are used to evaluate the performance of the proposed SER system in terms of classification accuracy and computational time. From the result analysis, it is evident that the proposed SER system outperforms existing ones.
Introduction
Speech Emotion Recognition (SER) is the method of detecting the emotional state of a speaker from a speech signal. Emotion recognition has gained considerable interest in human-computer interaction, and intensive research is ongoing in this field using various feature extraction techniques and machine learning algorithms. SER is used in applications such as call-center services, in-vehicle systems, diagnostic tools in medical services, storytelling and E-tutoring.
There are six archetypal emotions, namely anger, happiness, disgust, surprise, fear and sadness, with neutral often included as an additional class. In situations where only a person's speech signals are available, SER plays a prominent role [1], [2]. Speech features can be classified as Continuous, Voice Quality, Spectral and Nonlinear Teager Energy Operator (TEO) features. Figure 1 shows a categorical representation of some of these speech features. A significant challenge in SER is the identification of useful speech features that hold the emotional characteristics of a speech signal, and most SER research focuses on identifying an effective feature set. It is evident from the literature that feature fusion increases the classification accuracy of SER systems and has become common practice.
Even though the classification accuracy of the SER system increases due to feature fusion, fusion also increases the computational overhead on the classifier. This is because some features contribute strongly to emotion recognition, while others may not be useful at all. Feature selection methods simplify interpretation by the classification algorithms. These techniques largely eliminate the losses caused by the curse of dimensionality and reduce overfitting by improving the generalization of the model: using less redundant data leads to fewer incorrect predictions, increasing the accuracy of the SER system and enhancing prediction performance while decreasing the computational time and memory required. Hence, feature dimension reduction is an effective way to enhance the accuracy of the SER system. However, reducing the number of features risks an uncertain loss of information, which can lead to instability in the performance of the SER system. To overcome this drawback and to acquire optimal feature sets that improve SER accuracy, many feature selection techniques have been developed in machine learning. In feature selection, a subset of features is selected from the original feature set with respect to their relevance and redundancy. This improves prediction performance and reduces computational complexity and storage, providing faster and more cost-effective models [3]. In machine learning, a feature vector is the n-dimensional vector representing the features of a sample, and the space related to these vectors is the feature space. To decrease the dimensionality of the feature space, either feature selection or feature transformation methods can be used. In feature transformation, the original feature space is transformed into a different space with a distinct set of axes to reduce the dimensionality of the data.
In contrast, feature selection reduces the original feature space to a subspace without transformation. Some examples of feature selection methods are ReliefF, Fisher Score, Information Gain, Chi-Square, LASSO, etc. Feature selection techniques can be categorized based on the labelling of the data as supervised, unsupervised and semi-supervised. In supervised feature selection, the data is labelled for the feature evaluation process. If the dataset is huge, labelling the data is costly and tedious. Unsupervised feature selection can overcome these drawbacks of supervised approaches; it is more difficult since no labelled data is available, yet its results can be good even without prior knowledge. Based on the evaluation strategy, feature selection methods can be further classified into five types, i.e., filter, wrapper, embedded, hybrid and ensemble feature selection, as shown in Fig. 2.
Filter feature selection techniques use statistical analysis to assign each feature a score. The scores rank the features, which are then retained or removed from the original feature vector set accordingly. These filter techniques mostly analyse a single variable at a time, treating features as independent of each other. The most commonly used filter methods are the Chi-squared test [4], variance threshold [5], information gain, etc. A fast filter method, Fisher feature selection, is used in [6] with a decision SVM for SER. Wrapper feature selection techniques evaluate various combinations of feature subsets, treating subset selection as a search problem in which candidate subsets are compared with one another. A prediction process assigns a score to each feature subset based on its prediction accuracy. The search process can be systematic, stochastic or heuristic, such as a best-first search, random hill-climbing, or forward and backward passes that add and remove features. Genetic Algorithms, Recursive Feature Elimination (RFE) and Sequential Feature Selection (SFS) are some of the wrapper methods of feature selection. In [7], SFS and Sequential Floating Feature Selection (SFFS) are used for SER.
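As an illustration of the filter approach described above, the short Python sketch below scores features with a chi-squared test and retains the top-k; the synthetic data and the value of k are placeholders, not the configuration used in this paper.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

# Placeholder data: 100 utterances x 20 non-negative features, 4 emotion labels
X = np.abs(np.random.randn(100, 20))
y = np.random.randint(0, 4, size=100)

# Filter method: score each feature independently with chi-squared,
# then retain the k highest-scoring features
selector = SelectKBest(score_func=chi2, k=8)
X_reduced = selector.fit_transform(X, y)
print(X_reduced.shape)  # (100, 8)
```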
Embedded methods select the best features while creating the model during the learning process. Regularization techniques are the most commonly used embedded methods for feature selection: LASSO, FSASL, Ridge Regression, Elastic Net, etc. In [8], an L1-norm with multiple kernel learning is used as an embedded feature selection method for SER. The hybrid method is a combination of two or more feature selection methods (e.g., filter + wrapper). These methods try to acquire the benefits of both techniques by combining their corresponding strengths, achieving improved efficiency and prediction performance while decreasing computational complexity. The most widely used hybrid method combines filter and wrapper approaches.
The ensemble method constructs a collection of feature subgroups and produces an aggregate result from the group. The primary goal of this method is to tackle the unpredictability problems of most feature selection algorithms. It is based on various subsampling schemes in which one feature selection technique runs on many subsamples, and the resulting features are combined to attain a more stable subset. With this, for high-dimensional data, the feature selection performance no longer depends on any individual selected subset, thus attaining more flexibility and robustness.
In [9], a sparse representation based sparse partial least squares regression (SPLSR) feature selection method is used for SER. Apart from these feature selection techniques, feature transformation methods can also be used for feature dimension reduction in SER [10][11][12]. In [10], a semi-NMF feature transformation technique with a multiple kernel Gaussian process is used for feature dimension reduction. In [11], a supervised feature transformation based dimension reduction method, i.e., the modified supervised locally linear embedding (MSLLE) algorithm, is adopted for SER. In [12], principal component analysis (PCA) is used in SER to transform the high-dimensional feature space to a lower dimension.
In [13], unsupervised feature learning is carried out using k-means clustering, sparse Auto-Encoders (AE) and sparse restricted Boltzmann machines for feature mapping to obtain an optimal feature set for SER. Adversarial AEs and variational AEs can encode a high-dimensional feature vector into a lower dimension and can also reconstruct the original feature space; therefore, in [14], [15], they are used as feature dimension reduction techniques for SER. In [16], a new variant of feature extraction, a deep neural network based heterogeneous model consisting of an AE, a denoising AE and an improved shared-hidden-layer AE, is used to extract features from the speech signal. These layers also provide feature optimization to some extent, but to obtain better SER performance with the high-dimensional feature set, a fusion-level network with a support vector machine (SVM) classifier is used.
In this paper, an SER system is proposed using unsupervised feature selection algorithms with the Support Vector Machine (SVM) classifier using Linear and Radial Basis Function (RBF) kernels. The significant contributions of this work are: i) using the UFSOL and FSASL unsupervised feature selection algorithms, which have not yet been explored for SER; and ii) proposing a Subset Feature Selection (SuFS) algorithm that further improves the performance of the proposed SER system by selecting a subset of features after UFSOL and FSASL feature selection, using the 10-fold validation accuracy obtained with the UFSOL and FSASL algorithms as the decisive factor for feature selection.
The rest of the paper is structured as follows: Section 2 describes the proposed SER system with UFSOL, FSASL algorithms along with a novel Subset Feature Selection (SuFS) algorithm and Section 3 discusses the performance analysis of the proposed SER system followed by Section 4 with the conclusion and future scope of the proposed work.
Proposed Speech Emotion Recognition System using Unsupervised Feature Selection Algorithms
In the proposed SER system, after feature extraction, the unsupervised feature selection algorithms UFSOL and FSASL are used individually to select the most prominent features from the original feature set, as shown in Fig. 3.
Database
In the proposed work, the EMO-DB and IEMOCAP datasets are considered for SER analysis. EMO-DB, the German database [18], is widely used in SER analysis by many researchers. The emotional data was recorded in an anechoic chamber by five male and five female actors aged between 25 and 35. In total, 535 speech signals were recorded at 48 kHz covering the emotions Anger, Boredom, Disgust, Anxiety/Fear, Happiness, Sadness and Neutral; these were later down-sampled to 16 kHz. The Interactive Emotional Dyadic Motion Capture (IEMOCAP) database [19] is an acted, multimodal and multi-speaker database containing twelve hours of audio-visual data, including video, speech, text transcriptions and motion capture of the face. In this work, as in most SER studies, the speech data with the emotions anger, happiness, neutral and sadness is considered, with a total of 4490 utterances.
Pre-Processing
The speech signal is initially passed through a pre-emphasis filter to boost the energy in the higher frequencies, which are attenuated during speech production in the vocal tract [20].
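A pre-emphasis filter is typically realised as a first-order high-pass FIR filter, y[n] = x[n] − a·x[n−1]; the sketch below is a minimal illustration of this, where the coefficient a = 0.97 is a conventional choice rather than a value specified in this paper.

```python
import numpy as np

def pre_emphasis(signal, a=0.97):
    """First-order pre-emphasis: y[n] = x[n] - a * x[n-1]."""
    signal = np.asarray(signal, dtype=float)
    return np.append(signal[0], signal[1:] - a * signal[:-1])

boosted = pre_emphasis(np.sin(np.linspace(0, 10, 16000)))  # placeholder signal
```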
Feature Extraction
Feature extraction in speech emotion recognition is the process of extracting speech-specific features that carry emotion-relevant information [1]. To capture the emotional content of a speech signal, a particular set of features can be extracted by applying various signal processing techniques. In this work, the INTERSPEECH 2010 paralinguistic challenge feature set and Gammatone Cepstral Coefficients (GTCC) are used as features. The INTERSPEECH 2010 paralinguistic challenge set consists of 1582 features combining four groups of features [21].
The Munich open Speech and Music Interpretation by Large Space Extraction (openSMILE) toolkit [22] is utilized to extract the 1582 features for each speech signal. The configuration file 'IS10_paraling.conf' is used to obtain these features, which are shown along with their descriptions in Tab. 1.
The gammatone filter takes its name from its impulse response, which is the product of a Gamma distribution function and a sinusoidal tone centered at frequency f_c, computed as [23]:

g(t) = K t^(n−1) e^(−2πBt) cos(2π f_c t + φ),

where g(t) is the impulse response of the gammatone filter; K is the amplitude factor; n is the filter order; f_c is the central frequency in Hz; φ is the phase shift; and B determines the duration of the impulse response (B = 1.019 ERB(f_c)).
ERB is the equivalent rectangular bandwidth, i.e., ERB(f) = 24.7 + 0.108f. The center frequency f_c of each gammatone filter is equally spaced on the ERB scale between f_low and f_high. The fourth-order gammatone filter is similar to the human auditory model, so n = 4. Here, f_low = 62.5 Hz, f_high = 3400 Hz and the number of gammatone filters is N = 20. After obtaining the gammatone filter outputs, cepstral analysis is applied to them, yielding a total of 20 gammatone cepstral coefficients.
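One way to place center frequencies equally on the ERB scale is to convert f_low and f_high to the ERB-rate scale, space N points linearly there, and convert back. The sketch below does this using the Glasberg-Moore ERB-rate conversion, which is an assumption, since the paper only states that centers are equally spaced on the ERB scale; the impulse response follows the formula above.

```python
import numpy as np

def erb_rate(f):
    """Frequency (Hz) -> ERB-rate, Glasberg-Moore form (assumed)."""
    return 21.4 * np.log10(1 + 0.00437 * f)

def inverse_erb_rate(e):
    return (10 ** (e / 21.4) - 1) / 0.00437

def gammatone_centers(f_low=62.5, f_high=3400.0, n_filters=20):
    """Center frequencies equally spaced on the ERB scale."""
    erbs = np.linspace(erb_rate(f_low), erb_rate(f_high), n_filters)
    return inverse_erb_rate(erbs)

def gammatone_impulse(fc, fs=16000, duration=0.05, n=4, k=1.0, phi=0.0):
    """g(t) = K t^(n-1) e^(-2 pi B t) cos(2 pi fc t + phi), B = 1.019 ERB(fc)."""
    t = np.arange(int(duration * fs)) / fs
    b = 1.019 * (24.7 + 0.108 * fc)
    return k * t ** (n - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t + phi)

centers = gammatone_centers()
bank = np.stack([gammatone_impulse(fc) for fc in centers])  # 20-filter bank
```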
Unsupervised Feature Selection
The unsupervised feature selection algorithms UFSOL and FSASL, which have not yet been explored for SER, are used in this work. In addition, a novel Subset Feature Selection algorithm is built on the results obtained from the UFSOL and FSASL algorithms to further improve the performance of the SER system. The entire set of 1602 features is given to the feature selection algorithms to select the most prominent features, as shown in Fig. 3. The UFSOL and FSASL algorithms are discussed below.
Unsupervised Feature Selection with Ordinal Locality (UFSOL):
Consider X = [x_1, …, x_d] ∈ R^(m×d) as the initial feature matrix with d speech signals and m features. In general, regularized-regression feature selection is formulated as [24]

min_W ||W^T X − H||_F^2 + λ ||W||_{2,q},

where W ∈ R^(m×d_2) (d_2 < m) is a projection/feature selection matrix; the l_{2,q}-norm (q is typically set to 0 or 1) ensures row sparsity of W; and H is a label matrix in the case of supervised multi-class data. In this work, bi-orthogonal semi Nonnegative Matrix Factorization (NMF) is used to decompose H into two new matrices, i.e., H ≅ UV with V ≥ 0, VV^T = I and U^T U = I.
If the feature representation selected for an original sample x_i is y_i = W^T x_i, then Y = W^T X. According to the principle of ordinal locality preserving, given a triplet (x_i, x_u, x_v) composed of x_i and its neighbors x_u and x_v, the corresponding representations also form a triplet (y_i, y_u, y_v). Let the distance metric be denoted by dist(·,·). The feature selection preserves ordinal locality if the following condition holds: whenever dist(x_i, x_u) ≤ dist(x_i, x_v), then dist(y_i, y_u) ≤ dist(y_i, y_v). Based on this, finding the appropriate representation for each data point is equivalent to optimizing an ordinal-locality-preserving loss function over a collection of such triplets, where N_i is the set of indices of the k nearest neighbors of x_i and the squared Euclidean distance is used for each pairwise distance. The loss function of ordinal locality preserving can accordingly be written in an equivalent compact matrix form, in which scalar constants control the relative weight of the corresponding terms.
According to half-quadratic theory, an augmented objective F̂(W, U, V, R) with an auxiliary diagonal matrix R can be minimized by alternating updates: i) the diagonal elements of R are updated in parallel for fixed (W, U, V); ii) (U, V) is updated for fixed W by applying orthogonal semi-NMF on the projected data; and iii) the optimal W comprises the d_2 eigenvectors corresponding to the smallest eigenvalues of the resulting matrix. All the above steps are repeated until convergence, as summarized in Algorithm 1 (inputs: the number of nearest neighbors k and the parameters d_2, c and the regularization weights). W is the resultant feature selection matrix.
Feature Selection with Adaptive Structure Learning (FSASL):
In this algorithm, consider the feature set X ∈ R^(d×m), where d is the dimension of the speech files and m is the number of features. The parameters α, β, γ and µ are regularization parameters used to balance the sparsity and reconstruction error of the global and local structure learning. With the optimized data dimension denoted by c, the resultant optimized feature set lies in R^(d×c). The general objective that guides the FSASL method [25] jointly performs global (sparse reconstruction) and local (adaptive graph) structure learning, where X is the input feature set and x is a particular row of the data matrix.
Algorithm 2: FSASL Algorithm
Input: Input feature set as X m d ; d is the dimension of the speech files; m is the number of features.
Solution:
For each data sample x_q, all the data points {x_r}, r = 1, …, m, are considered as the neighborhood of x_q with probability P(q,r). Here S is the weight matrix of the data matrix, s is a particular row of the weight matrix, and Z is the feature selection and transformation matrix.
The optimization problem in (10) decomposes over its variables (S, P and Z(t)) into a set of sub-problems, each involving only a single variable, and is solved as follows: 1) Solve for S, keeping P and Z constant. For each q, update the q-th column of S by solving the corresponding sub-problem, where X' and x' are the transposes of X and x.
2) Solve for P, keeping S and Z constant. For each q, update the q-th column of P by solving the corresponding sub-problem, which can be rewritten in terms of p'(t), the q-th row of P.
3) Compute the overall graph Laplacian L by combining the Laplacians derived from S and P, where D_P is a diagonal matrix. The optimal solution for Z consists of the eigenvectors corresponding to the c smallest eigenvalues of the resulting generalized eigenproblem, where Λ is a diagonal matrix whose diagonal elements are the eigenvalues.
Output: Sort all d features according to ||z_q||_2 (q = 1, …, d) in descending order and select the top k ranked features.
The resultant Z is the feature selection matrix. Both the FSASL and UFSOL algorithms rearrange the original feature set according to the prominence ranks produced by the corresponding algorithm. The rearranged feature sets are then fed to the classifiers to perform emotion classification.
Subset Feature Selection (SuFS):
After the unsupervised feature selection, a novel Subset Feature Selection algorithm is applied on top of the UFSOL and FSASL results. Input: the accuracy vector a containing the accuracies obtained for increasing numbers of ranked features from the feature selection algorithm; the ranking vector r; and l, the number of features at which the highest accuracy is obtained using UFSOL or FSASL.
Solution:
1: Initialize the sub-rank vector sr with r(1) (since the first accuracy value a(1) is always > 0)
2: Initialize h = 2
   for g = 1:1:l−1
       if a(g+1) > a(g)
           sr(h) = r(g+1)
           h ← h + 1
       end
   end
3: for g = 1:1:len(sr)
       sf(g) = F(:, sr(g))
   end
Output: Subset of the original feature vector (sf)

The goal is to further reduce the dimension of the feature set without affecting the accuracy of the SER system, i.e., to obtain better accuracy with a reduced feature set. The proposed SuFS depends on the ranking vector (i.e., the prominence of the features) and on the validation accuracy obtained with the features selected by the UFSOL and FSASL algorithms. The ranking vector is ordered according to the d_2 smallest eigenvalues of the UFSOL algorithm and the d smallest eigenvalues of the FSASL algorithm. The SuFS algorithm is summarized in Algorithm 3. The SuFS algorithm is applied to the features selected by UFSOL and FSASL to obtain the sf feature vector. The subset of features, i.e., the features obtained from UFSOL-SuFS and FSASL-SuFS, are then given to the SVM classifier for both validation and testing.
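A minimal Python sketch of the SuFS selection rule above: it keeps the rank of every feature count at which the validation accuracy improves over the previous count, then extracts the corresponding columns. The arrays here are synthetic placeholders; acc, rank and F stand in for the accuracy vector a, the ranking vector r and the rearranged feature matrix.

```python
import numpy as np

def sufs(acc, rank, F):
    """Subset Feature Selection: keep ranks where accuracy increases.

    acc  -- validation accuracy when using the top-(g+1) ranked features
    rank -- feature indices ordered by prominence
    F    -- feature matrix (samples x features), columns in original order
    """
    sub_rank = [rank[0]]  # the first ranked feature is always kept
    for g in range(len(acc) - 1):
        if acc[g + 1] > acc[g]:       # accuracy improved when adding this feature
            sub_rank.append(rank[g + 1])
    return F[:, sub_rank]             # subset of the original feature vectors

# Placeholder example
acc = np.array([0.52, 0.55, 0.54, 0.58, 0.58])
rank = np.array([3, 0, 4, 1, 2])
F = np.random.randn(10, 5)
sf = sufs(acc, rank, F)   # keeps columns 3, 0 and 1
```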
Simulation Results and Discussion
In the proposed SER system, the 1602 INTERSPEECH Paralinguistic and GTCC features are extracted from the speech signal. This large set of features is fed to the UFSOL and FSASL algorithms for feature selection. The support vector machine (SVM) classification technique with Linear and Radial Basis Function (RBF) kernels is used for emotion classification under Hold-Out and 10-fold Cross-Validation. Initially, the speech signal database is divided into training and testing datasets; for hold-out validation, 80% of the dataset is used for training and 20% for testing. The k-fold cross-validation (here, k = 10) is a resampling method employed to evaluate machine learning models on a limited dataset. The dataset is randomly divided into k groups, or folds, of nearly equal size; one fold is used as a validation set, and the model is fit on the remaining k − 1 folds. Hence, the entire dataset is randomly split into 10 parts, of which 9 are used for training the classifier (SVM), and testing is carried out on the held-out tenth part. This process is repeated over 10 folds until the entire dataset has been used for both training and testing. The performance of the proposed SER system is evaluated using classification accuracy. All simulations are carried out on a computer with an Intel(R) Xeon(R) CPU E3-1220 v3 3.10 GHz 64-bit processor and 16 GB RAM. To select the initial set of prominent features that gives the highest accuracy, the feature selection matrices of both the UFSOL and FSASL algorithms are given to the SVM classifier, as shown in Fig. 3. Figures 4 and 5 show the variation of classification accuracy with the number of features using FSASL and UFSOL feature selection for EMO-DB and IEMOCAP.
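The validation scheme just described can be sketched with scikit-learn as below; the SVM settings and the random data are placeholders standing in for the paper's actual feature matrices and labels.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.svm import SVC

X = np.random.randn(535, 600)            # placeholder: selected features
y = np.random.randint(0, 7, size=535)    # placeholder: 7 EMO-DB emotion labels

# Hold-out validation: 80% train / 20% test
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
holdout_acc = SVC(kernel="linear").fit(X_tr, y_tr).score(X_te, y_te)

# 10-fold cross-validation
cv_acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=10).mean()
print(holdout_acc, cv_acc)
```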
For EMO-DB, FSASL gives the highest validation accuracy of 86% with 600 features, and UFSOL gives 85% with 500 features. For IEMOCAP, with 1250 features, the highest accuracies of 71.4% using FSASL and 72% using UFSOL are obtained.
It is evident from Figs. 4 and 5 that, even with the initially selected features from the UFSOL and FSASL algorithms, the SER accuracy does not consistently increase; hence further feature selection from the initially chosen features is still possible. Therefore, the SuFS algorithm is applied after UFSOL and FSASL feature selection to acquire better accuracy with fewer features. The initially selected features are fed to the SuFS algorithm to further reduce the number of features while maintaining the best performance. The most prominent features selected by SuFS are then fed to the SVM classifier with Linear and RBF kernels for emotion classification.
The performance of the proposed SER system with different feature selection algorithms is compared with the baseline SER system (without feature selection) using the SVM classifier with Linear and RBF kernels under hold-out validation and 10-fold cross-validation, as shown in Tab. 3 and 4 in terms of classification accuracy and validation (or testing) time. Tables 3 and 4 show the simulation results of the proposed SER system for the EMO-DB and IEMOCAP databases with hold-out validation and 10-fold cross-validation using the SVM classifier. From the results, it is clear that better performance is achieved with the Linear kernel for the EMO-DB database and with the RBF kernel for the IEMOCAP database.
Table 3 shows the hold-out validation results for the EMO-DB and IEMOCAP databases. For EMO-DB, the highest testing accuracy of 86% is obtained with the lowest computational times for training and testing, i.e., 0.165 and 0.032 seconds, using the FSASL-SuFS algorithm. Similarly, for the IEMOCAP database, the highest testing accuracy of 77.5% is obtained at the lowest computational times of 14 and 2.9 seconds for training and testing using the UFSOL-SuFS algorithm.
"Computer Science"
] |
Cost-optimal single qubit gate synthesis in the Clifford hierarchy
For universal quantum computation, a major challenge to overcome for practical implementation is the large amount of resources required for fault-tolerant quantum information processing. An important aspect is implementing arbitrary unitary operators built from logical gates within the quantum error correction code. A synthesis algorithm can be used to approximate any unitary gate up to arbitrary precision by assembling sequences of logical gates chosen from a small set of universal gates that are fault-tolerantly performable while encoded in a quantum error-correction code. However, current procedures do not yet support individual assignment of base gate costs and many do not support extended sets of universal base gates. We analysed cost-optimal sequences using an exhaustive search based on Dijkstra's pathfinding algorithm for the canonical Clifford+T set of base gates and compared them to when additionally including Z-rotations from higher orders of the Clifford hierarchy. Two approaches of assigning base gate costs were used. First, costs were reduced to T-counts by recursively applying a Z-rotation catalyst circuit. Second, costs were assigned as the average numbers of raw (i.e. physical level) magic states required to directly distil and implement the gates fault-tolerantly. We found that the average sequence cost decreases by up to 54±3% when using the Z-rotation catalyst circuit approach and by up to 33±2% when using the magic state distillation approach. In addition, we investigated observed limitations of certain assignments of base gate costs by developing an analytic model to estimate the proportion of sets of Z-rotation gates from higher orders of the Clifford hierarchy that are found within sequences approximating random target gates.
Introduction
Quantum computing has the potential to solve many real-world problems by using significantly fewer physical resources and less computation time than the best known classical algorithms. The quantum algorithms for these problems are implemented using deep quantum circuits. Thus to reliably implement these circuits, qubits within the devices require long coherence times and high-precision control. Current systems consist of physical qubits that are too noisy for large-scale computation. Error-correction schemes provide the ability to overcome this hurdle by entangling clusters of physical qubits in such a way that they collectively encode the information into more robust logical qubits. In principle, when physical qubits have error rates below the error threshold of the error-correction scheme, logical qubits within the code can be made arbitrarily robust using increasing numbers of qubits. A particular error-correction scheme with a relatively high physical error threshold of approximately 1% is the surface code, which is implemented over a nearest-neighbour two-dimensional physical layout, making it one of the most realistically implementable schemes [1][2][3][4]. In this work, we analyse the resource costs for gate synthesis, which is used to fault-tolerantly implement arbitrary unitary gates in error-correction codes.
The surface code, among other high-threshold codes, is limited to a small set of Clifford gates over logical qubits that can be performed with relative ease. A procedure called magic state distillation can be used to perform a wider range of non-Clifford gates fault-tolerantly, such as the T := R_z(π/4) gate (up to global phase), which cannot be produced using only Clifford gates [5,6]. Initially, raw magic states are surgically injected into the code and, with the aid of state distillation procedures, a number of raw magic states are consumed to produce a smaller number of more robust magic states. In principle, the procedures can be recursively applied to obtain states with arbitrarily low noise, although this requires large amounts of physical resources. These purified magic states can then be consumed to fault-tolerantly perform corresponding gates using quantum teleportation circuits. Distillation procedures only exist for a subset of gates; in order to implement arbitrary unitary gates, the Solovay-Kitaev (SK) theorem can be used. The SK theorem states that a universal set of n-qubit gates generates a group dense in SU(2^n) (Special Unitary), and the set fills SU(2^n) relatively quickly. Hence single-qubit base gates that form a universal set can be multiplied in sequence to approximate any single-qubit gate to arbitrary precision [7,8].
A frequently used set of single-qubit universal base gates for fault-tolerant quantum computation are the Clifford+T gates, where the Clifford gates are relatively cheap to apply while the T gate requires a considerable amount of resources due to the magic state distillation procedure. This set of gates, and how it can be used to synthesise arbitrary single-qubit gates, is a well-studied topic within the quantum compilation literature. Gate synthesis algorithms, besides brute-force [9], began with the Solovay-Kitaev algorithm [8,10]. It initially searches for a base sequence that roughly approximates a target gate and then uses a recursive strategy to append other base sequences in such a way that the new sequence approximates a gate that is closer to the target gate, with the distance reducing efficiently with the number of iterations. It is compatible with arbitrary single-qubit universal gate sets, provided that they include each gate's adjoint. The SK algorithm has room for optimisation with respect to the lengths of resulting gate sequences, since the recursive process generates strings of disjoint subsequences which are only individually optimised, rather than optimised over the entire sequence. In 2008, Matsumoto and Amano [11] developed a normal form for sequences of Clifford+T gates that produces unique elements in SU(2). Shortly after, Bocharov and Svore [12] introduced their canonical form, which extends the normal form by instead producing unique elements in PSU(2) (Projective Special Unitary), which more concisely describes the space of all physical single-qubit gates by ignoring global phase. This normal form can be used to enumerate length-optimal sequences of Clifford+T base gates which produce distinct gates, considerably reducing the size of the sequence configuration space for search algorithms (although still growing exponentially with respect to sequence length).
More recently, there has been significant progress on developing direct synthesis methods which are not based on search. For target single-qubit unitary gates that can be exactly produced by Clifford+T base gate sequences, a method was developed that optimally and efficiently finds these exact sequences directly [13]. This was later used as a subroutine in algorithms for optimal synthesis of arbitrary single-qubit Z-rotations [14,15]. Direct Clifford+T base gate synthesis methods for Z-rotations have since been generalised to Clifford+cyclotomic (Z-rotation by π/n) sets of base gates [16] and sets derived from totally definite quaternion algebras [17]. For arbitrary single-qubit rotations (not necessarily Z-rotations) there has been a number of other approaches developed, such as a randomised algorithm that uses the distribution of primes [18], asymptotically optimal synthesis using ancilla qubits [19], and probabilistic quantum circuits with fallback [20].
It is common within the quantum compilation literature for synthesis algorithms to optimise sequences based on minimising the total number of gates that require magic state injection. This measure is well-suited to the Clifford+T set of base gates which are standard for gate synthesis algorithms, since the T gate and its adjoint are the only gates with a significantly higher cost than the Clifford gates. However, procedures exist for performing alternative gates to the T gate that vary in implementation cost. Examples of such gates are found within the Clifford hierarchy, which is an infinite discrete set of gates that are universal and can be performed on certain error-correcting codes fault-tolerantly [21]. The resource cost of implementation typically varies between orders of the hierarchy. Thus to accurately cost optimise sequences from such sets of gates, the cost of each individual base gate should be considered. We investigate two different approaches for implementing Z-rotation gates from the Clifford hierarchy and calculating their resource costs. The first approach is based on a circuit that uses a catalyst Z-rotation state to implement two copies of its corresponding Z-rotation gate using a small number of T gates while retaining the initial Z-rotation state [22,23]. This circuit can enable the average resource costs of implementing Z-rotation gates from the Clifford hierarchy to be expressed as T-counts. Using this approach, costs could be calculated either by assuming that output gates are applied directly to target qubits or by assuming that all output gates are first applied to |+⟩ states to form intermediate magic states, which can then be consumed to implement the corresponding gates onto target qubits at any time. As an alternative to the Z-rotation catalyst circuit approach of gate implementation, the second approach is to use the average number of raw magic states required to directly distil and implement subsets of gates belonging to the Clifford hierarchy in surface codes. The distillation costs have already been calculated by Campbell and O'Gorman [24] for various levels of precision; the accumulated costs of distilling and then implementing the gates are found within their supplementary materials. Although other factors relating to physical resources are important to consider, such as qubit count, circuit depth, magic state distillation methods, and details of the error-correction implementation, the number of raw magic states can serve as a rough approximation to the cost of implementing fault-tolerant logical gates on surface codes.
We introduce an algorithm, based on Dijkstra's shortest path algorithm, that generates a database of all cost-optimal sequences below a chosen maximum sequence cost where each sequence produces distinct gates in PSU(2). The algorithm supports arbitrary universal sets of single-qubit base gates with individually assigned cost values. The database can then be searched to find a sequence approximating a specified target gate. We use this algorithm to compare the cost of cost-optimal gate synthesis between the canonical Clifford+T base gate set and various sets of base gates consisting of Clifford gates and Z-rotations from higher orders of the Clifford hierarchy. Each set of logical base gates is compared by calculating how the average gate sequence cost for approximating random target gates scales with respect to reaching target gate synthesis logical error rates. When including Z-rotation base gates from higher orders of the Clifford hierarchy with T-counts assigned using the Z-rotation catalyst approach, we find that the average cost-optimal sequence T-counts can potentially be reduced by over 50% when output gates are directly applied to target qubits and by over 30% when intermediate magic states are used. When using the alternative approach of assigning costs from direct magic state distillation, we find that by including Z-rotation logical base gates from the fourth order of the Clifford hierarchy, the average cost-optimal sequence costs can be reduced by 30%. These cost reductions indicate that a significant amount of resources could be saved by adapting current synthesis algorithms to include higher orders of the Clifford hierarchy and to optimise sequences with respect to individual gate costs.
In the cases when costs are assigned using the Z-rotation catalyst method via intermediate magic states or when assigned using direct magic state distillation, we observe that there is only a small improvement to the average costs of synthesis when Z-rotations of orders higher than four of the Clifford hierarchy are included as base gates. We investigate this behaviour by developing a model to estimate the proportion of Z-rotation base gates from specified orders of the Clifford hierarchy within sequences approximating random target gates, without needing to generate the database of sequences. The proportions calculated in this manner closely fit results obtained using the sequence generation algorithm to approximate uniformly distributed random target gates. The parameters of the calculation include the maximum sequence cost and separate logical base gate costs for each order of the Clifford hierarchy, which can readily be extended to specify costs for individual logical base gates.
Base Gates From The Clifford Hierarchy
The Clifford hierarchy is an infinite discrete set of gates that are universal for the purposes of quantum computation and can be fault-tolerantly performed on certain error-correcting codes. Each order of the hierarchy is defined as

C_k := {U : U P U† ∈ C_{k−1} for all P ∈ C_1}, for k > 1,

noting that C_1 = P is the set of Pauli gates, C_2 is the set of Clifford gates and C_3 includes, among others, the Pauli-basis rotations by π/4 such as the T gate. Higher-order gates typically correspond to finer angle rotations.

Figure 1: A Z-rotation catalyst circuit [22,23]. The rotations R_z(2πk/2^n) are elements of T_n (as shown in Eq. 2), where k is an odd integer and n is a natural number. The circuit utilises a |T_n⟩ state, a |T⟩ state, three T_3 gates and a T_{n−1} gate to perform two T_n gates on two separate qubits while retaining the original |T_n⟩ state. The output T_n gates can either be applied directly to target qubits, or |ψ_0⟩ and |ψ_1⟩ can first be set to |+⟩ states, so that the application of the T_n gates prepares two |T_n⟩ states which can then be used to implement T_n gates at any time and on any target qubit using teleportation circuits. However, this consumes on average an additional half a T_{n−1} gate for the implementation of each T_n gate. The two sets of grouped gates (outlined by dashed lines) correspond to logical-AND computation and uncomputation circuits, which together require a total T-count of only four to implement [23]. The circuit can be recursively applied until the R_z(2πk/2^{n−1}) gate position reduces down to a T_3 gate, which has a cost of 1. All costs are calculated by assuming that all target gates at each recursive level of the circuit are used at some point (i.e. that no output gates are wasted).
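To make the rotations R_z(2πk/2^n) from the Figure 1 caption concrete, the sketch below builds the Z-rotation matrices for odd k at a given order; it is an illustration of the definition, not code from the paper.

```python
import numpy as np

def rz(theta):
    """Single-qubit Z-rotation, Rz(theta) = diag(e^{-i theta/2}, e^{i theta/2})."""
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def t_set(l):
    """Z-rotations Rz(2*pi*k / 2^l) for odd k, i.e. the set T_l of order l."""
    return [rz(2 * np.pi * k / 2 ** l) for k in range(1, 2 ** l, 2)]

# The T gate (up to global phase) is the k = 1 element of T_3
T = t_set(3)[0]
print(np.round(T, 3))
```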
In this work, we compare sets of single-qubit universal logical base gates consisting of Clifford gates and Z-rotation gates from higher orders of the Clifford hierarchy. Although only higher-order Z-rotations are included, they can readily be converted to other gates in the same order of the Clifford hierarchy by multiplying gates from lower orders. In particular, by multiplying Clifford gates, other gates of the same order are generated for the same cost. For example Z·R_z(π/4) = R_z(5π/4) and H·R_z(π/4)·H = R_x(π/4) up to global phase, where H is the Hadamard gate and Z is the Pauli-Z gate. These sets of logical base gates are compared with respect to the optimal resource costs resulting from gate synthesis for random target gates. Each set of Z-rotation gates from order 3 ≤ l ≤ 7 of the Clifford hierarchy, denoted T_l, can be written as

T_l := {R_z(2πk/2^l) : k odd, 1 ≤ k ≤ 2^l − 1},

so that, for example, T_4 contains the rotations R_z(2πk/16) for k ∈ {1, 3, …, 13, 15}. The five sets of logical base gates used in our analysis are then constructed as

Set 1 := C_2 ∪ T_3 (Clifford+T), Set 2 := Set 1 ∪ T_4, Set 3 := Set 2 ∪ T_5, Set 4 := Set 3 ∪ T_6, and Set 5 := Set 4 ∪ T_7.

Calculating precise resource costs of implementing each gate fault-tolerantly is an extensive task that would need to consider a variety of factors such as qubit count, circuit depth, magic state distillation methods and details of the error-correction implementation. As an approximation for the cost of these logical gates we investigate two approaches of assigning costs to individual T_l gates, where gates from C_1 and C_2 are assumed to be free since they can be implemented in a relatively straightforward way.

Table 1: The average number of T gates (average T-count per base gate) required to implement a single-qubit Z-rotation gate from order l of the Clifford hierarchy T_l using the Z-rotation catalyst approach.

Table 2: The average raw magic state count required for distillation and implementation of corresponding logical base gates, obtained from the supplementary materials of [24]. Each column contains the cost of distilling and implementing a logical Z-rotation gate from order l of the Clifford hierarchy T_l to below a gate error rate µ, calculated using the diamond norm. The raw magic state physical-level error is assumed to be 0.1%.

The first approach associates the costs with the T-count, which is used as the standard metric for measuring the costs of gate sequences within the gate synthesis literature. This can be done by using the Z-rotation catalyst circuit shown in Fig. 1, which was introduced in [23] and presented in more detail in [22]. The circuit is similar to a synthillation parity-check circuit described in [25]. It utilises a |T_l⟩ state and a small number of T gates to perform two T_l gates on two different qubits while retaining the original |T_l⟩ state. Costs can be calculated by recursively applying this circuit, assuming that all output gates at each recursive level are resourced (i.e. that no output gates are wasted). We calculate the costs using the Z-rotation catalyst approach in two ways. The first assumes that output T_l gates are directly applied to target qubits. Since each run of the circuit consumes four T gates for the logical-AND circuits plus one T_{l−1} gate to output two T_l gates, the recurrence relation for the T-counts using this method can be obtained as

Cost[T_l] = (4 + Cost[T_{l−1}])/2,

where Cost[T_3] = 1. Solving this results in the average number of T gates required to implement a T_l gate being expressed as

Cost[T_l] = 4 − 3·2^{3−l},

which is enumerated in Table 1a for 3 ≤ l ≤ 7. The second method of calculating the T-count using the Z-rotation catalyst approach applies the T_l gates to |+⟩ states, creating corresponding intermediate |T_l⟩ states, which are then consumed to implement the gates via teleportation circuits. Since the teleportation additionally consumes half a T_{l−1} gate per T_l gate on average, the recurrence relation for these costs can be obtained as

Cost[T_l] = (4 + Cost[T_{l−1}])/2 + Cost[T_{l−1}]/2 = 2 + Cost[T_{l−1}],

where Cost[T_3] = 1, resulting in the expression

Cost[T_l] = 2l − 5,

which is enumerated in Table 1b for 3 ≤ l ≤ 7. This second method is more expensive since the teleportation circuit that consumes the |T_l⟩ state to implement the T_l gate requires a T_{l−1} correction gate to be applied 50% of the time. However, this method is more flexible in implementation since the outputted |T_l⟩ states can be used at any time to implement T_l gates onto any target qubits, enabling more options when instruction scheduling. A realistic employment of the Z-rotation catalyst approach would likely benefit from a combination of both direct application of T_l gates and application via their intermediate |T_l⟩ states. For the second approach of assigning resource costs, we use the average number of raw magic states needed to implement fault-tolerant T_l gates via direct magic state distillation procedures. Resource costs have already been calculated for Y-rotation gates R_y(2π/2^l) from the Clifford hierarchy by searching for optimal combinations of various distillation protocols with respect to target gate synthesis error rates [24]. For integer multiples R_y(2πk/2^l), the distillation protocols can be performed identically, hence they can be assigned the same cost. To follow convention, the Y-rotation gates are converted to Z-rotation gates with the same cost using the relation R_z(θ) = HS†R_y(θ)SH, since H and S := R_z(π/2) have zero cost due to being elements of C_2. These resource costs vary between orders of the Clifford hierarchy and are shown in Table 2.
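A small sketch computing both catalyst-based cost recurrences described above (direct application and via intermediate states); the printed values for l = 3..7 follow from the circuit accounting in the text, under the stated assumption that the logical-AND circuits cost four T gates per run.

```python
def catalyst_costs(l_max=7):
    """Average T-counts per T_l gate from the Z-rotation catalyst circuit.

    direct[l]: output gates applied directly to target qubits
    via_state[l]: gates applied through intermediate |T_l> states
    """
    direct = {3: 1.0}
    via_state = {3: 1.0}
    for l in range(4, l_max + 1):
        # two T_l outputs cost 4 T (logical-ANDs) plus one T_(l-1) gate
        direct[l] = (4 + direct[l - 1]) / 2
        # teleporting each |T_l> state adds half a T_(l-1) correction on average
        via_state[l] = (4 + via_state[l - 1]) / 2 + via_state[l - 1] / 2
    return direct, via_state

direct, via_state = catalyst_costs()
print(direct)     # {3: 1.0, 4: 2.5, 5: 3.25, 6: 3.625, 7: 3.8125}
print(via_state)  # {3: 1.0, 4: 3.0, 5: 5.0, 6: 7.0, 7: 9.0}
```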
Sequence Generation Algorithm
In this section, a sequence generation algorithm, based on Dijkstra's algorithm, is developed that generates a database of all cost-optimal single-qubit gate sequences below some maximum cost using arbitrary sets of universal base gates which have individually assigned cost values. We use this algorithm to help study the average cost of cost-optimal gate synthesis when including Z-rotation gates from higher orders of the Clifford hierarchy as base gates. Due to the flexibility of this algorithm, it could be used as a subroutine within other synthesis algorithms. For example, it could be used as the base approximation step within the SK algorithm, enabling the SK algorithm to consider individual base gate costs when synthesising target gates. The sequence generation algorithm explores the space of sequence configurations using a tree expansion as shown in Figure 2, where each node corresponds to a gate and each path from the root node to any other node corresponds to a sequence of gates. Let B_n be an element of PSU(2) corresponding to the base gate of node n in the sequence tree. A combined gate S_n of node n is calculated by multiplying all nodes within the branch from the root down to n, i.e. S_n := B_{n_0} · B_{n_1} ⋯ B_{n_k}, where n_i is the i-th node from the root node such that n_0 is the root and n_k is node n. The Lie algebra generator of S_n in the Pauli basis is of the form α_n X + β_n Y + γ_n Z with real coefficients and can be written as the vector (α_n, β_n, γ_n). Each vector represents a point in a ball of radius π/2 over the Pauli bases X, Y and Z. Thus each point within the ball is a geometrical location corresponding to a single-qubit gate.
The pseudocode for the algorithm is shown in Algorithm 1. It works by expanding nodes in a sequence tree (see Figure 2). All leaf (end) nodes of the sequence tree are stored in a minimum heap data structure which sorts the leaf nodes by their corresponding sequence cost in increasing order. This determines the order in which nodes are expanded. The tree begins as a single identity gate at the root node, which is added as the first element to the leaf node heap. At each iteration, the leaf node with the lowest sequence cost, i, is taken from the heap, which for the first iteration would be the identity gate node. The vector (α_i, β_i, γ_i) is calculated from the combined gate of the corresponding node's sequence. Before expanding a node in the sequence tree, we check whether another node with the same combined gate vector has already been expanded, using a hashset data structure. If the vector exists in the hashset, then the node is removed from the sequence tree and the algorithm proceeds to the next iteration. This repeats until a unique vector is found. When such a vector is found, it is added to the hashset for uniqueness checking in further iterations, and the corresponding node in the sequence tree is expanded by generating a child node for each base gate. Each of these child nodes is added to the leaf node heap. To save computation time, adding a child node to the sequence tree and the heap can be limited to when its corresponding vector is unique. Since vectors of sequences with lower costs are always added to the hashset before those with higher costs, the hashset can only contain vectors corresponding to sequences with the lowest cost among all sequences that produce equivalent combined gates. Thus, whenever a vector is successfully added to the hashset, the corresponding sequence must be cost-optimal. Each cost-optimal vector and sequence pair can be stored in a data structure such as a k-d tree, which can be used to approximate target gates by geometrically searching for nearest neighbours in the space of vectors.

Figure 2: An example of a sequence tree used to relate logical base gates, gate sequences and combined gates for the sequence generation algorithm. A node n corresponds to a single-qubit base gate B_n and the root node corresponds to the identity gate B_0 = I. A gate sequence corresponding to n is the sequence of logical base gates along the path from B_0 to B_n. A combined gate S_n is calculated by multiplying all logical base gates within the gate sequence in sequence order. In this example, B_1, B_2 and B_3 are logical base gates. In the sequence generation algorithm, the leaf node with the lowest sequence cost is expanded by adding a child node as a new leaf node for each gate in the set of logical base gates. All non-leaf nodes of the tree correspond to cost-optimal sequences, and they can be thought of as the cost-optimal sequence database generated by the algorithm. Although all leaf nodes are depicted at the same depth in the tree, this is not always the case: at any point during the sequence generation algorithm, a path of relatively expensive logical base gates may be much shorter than a path of relatively cheap gates.

Algorithm 1 Cost-optimal sequence generation
1: procedure GenerateSequences(baseGates, maxCost)
2:     sequenceDatabase ← new KdTree⟨Node⟩      ▷ To store the cost-optimal sequences geometrically
3:     sequenceTree ← new Tree⟨Node⟩            ▷ To relate nodes, sequences and combined gates
4:     sequenceTree.SetRoot(Identity gate)       ▷ Set the root node to the identity gate
5:     sortedLeafNodes ← new MinHeap⟨Node⟩      ▷ To order sequence tree leaf nodes based on sequence cost
6:     ...
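As a concrete illustration, the following is a minimal Python sketch of the loop described above, under several simplifying assumptions: gates are held as explicit 2×2 unitaries rather than tree nodes, sequences are plain tuples of gate names, duplicate combined gates are detected by rounding Pauli vectors to fixed precision (which glosses over the antipodal identification at the boundary of the π/2 ball), and the example base gates and cost values are illustrative only.

import heapq
import numpy as np

I2 = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]], dtype=complex)

def pauli_vector(U):
    # Map U in PSU(2) to its point (alpha, beta, gamma) in the radius-pi/2 ball.
    U = U / np.sqrt(np.linalg.det(U))            # normalise the determinant to 1
    if U.trace().real < 0:
        U = -U                                   # fix the global sign so theta lies in [0, pi/2]
    theta = np.arccos(np.clip(U.trace().real / 2, -1.0, 1.0))
    if theta < 1e-9:
        return (0.0, 0.0, 0.0)                   # the identity sits at the centre of the ball
    s = np.sin(theta)
    nx, ny, nz = -U[0, 1].imag / s, -U[0, 1].real / s, -U[0, 0].imag / s
    return tuple(theta * np.array([nx, ny, nz]))

def generate_sequences(base_gates, max_cost):
    # base_gates: list of (name, unitary, cost). Returns {vector: (cost, sequence)}.
    database, seen, counter = {}, set(), 0
    heap = [(0.0, counter, (), I2)]              # (cost, tie-breaker, gate names, combined gate)
    while heap:
        cost, _, seq, U = heapq.heappop(heap)
        if cost > max_cost:
            break                                # every remaining leaf is at least this expensive
        key = tuple(np.round(pauli_vector(U), 6))
        if key in seen:
            continue                             # a cheaper sequence already reached this gate
        seen.add(key)                            # first arrival is cost-optimal (Dijkstra property)
        database[key] = (cost, seq)
        for name, B, c in base_gates:            # expand the node: one child per base gate
            counter += 1
            heapq.heappush(heap, (cost + c, counter, seq + (name,), U @ B))
    return database

gates = [("H", H, 0.0), ("S", S, 0.0), ("T", T, 1.0), ("Tdg", T.conj().T, 1.0)]
db = generate_sequences(gates, max_cost=4.0)
print(len(db), "cost-optimal sequences generated")

Because the Clifford gates form a finite group in PSU(2), the zero-cost expansions terminate once every Clifford vector has been recorded, and the heap guarantees that the first sequence to reach a given vector is the cheapest.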
There is a notable further optimisation that could be implemented in Algorithm 1. During the procedure, all non-leaf nodes within the sequence tree correspond to cost-optimal sequences with unique combined gate vectors; that is, each path starting at the root node and ending at any non-leaf node is a shortest path to the sequence's unique combined gate. To see how this could be helpful, first assume that an existing sequence tree needs to grow to a new maximum cost, so that the leaf nodes need to expand multiple times along the same branch. Instead of searching through every combination of base gates as children for a leaf node, the sequence tree itself can be used as a sieve by iterating child nodes from the root that are known to be shortest paths. The tree already contains optimal paths up to a certain depth, so this information could be used to help avoid the tree branches expanding in directions that produce non-optimal paths to unique combined gates.
In Algorithm 1, cost-optimal sequences and their corresponding vectors are stored in a k-d tree which uses the Euclidean distance on the vectors to organise the data. Due to the periodic nature of the vectors, there is a small chance of failure in the k-d tree when searching for nearest neighbours to points close to the boundary. With computational overhead, the k-d tree may be modified to help overcome this [26], or a more appropriate data structure such as a vantage point tree [27,28] may be used instead. In general, further alternative data structures may be used such as the geometric nearest-neighbour access tree [29].
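As a sketch of the data-structure side, the snippet below stores the vectors from the database of the previous sketch in SciPy's cKDTree and retrieves all stored sequences within a chosen radius of a target vector. It uses the plain Euclidean distance on the vectors and deliberately ignores the boundary identification just discussed, so it inherits the small failure probability near the surface of the ball.

import numpy as np
from scipy.spatial import cKDTree

# db maps Pauli vectors to (cost, sequence), as produced by generate_sequences above.
points = np.array(list(db.keys()))
entries = list(db.values())
tree = cKDTree(points)

def nearest_sequences(target_vector, radius):
    # All (cost, sequence) pairs within `radius` of the target's vector, cheapest first.
    idx = tree.query_ball_point(target_vector, r=radius)
    return sorted(entries[i] for i in idx)

for cost, seq in nearest_sequences((0.1, 0.2, 0.3), 0.05)[:3]:
    print(cost, seq)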
Synthesis Results
Algorithm 1 was run using the sets of logical base gates described in Eq. 3, with costs assigned by the two approaches of implementing base gates; the values are shown in Tables 1 and 2. A database was generated in the form of a k-d tree of cost-optimal sequences up to some chosen maximum sequence cost. The sequences were organised in the k-d tree with respect to the vectors corresponding to their combined gates. For a given target gate G, gate synthesis was performed by searching for the lowest cost sequence among all nearest neighbours of G up to a chosen synthesis error (distance), ε, between their combined gates and G. The errors were computed using the trace distance defined in Eq. 8, where S is a combined gate and G is the target gate. If no such sequence existed, then the database was generated further to a higher cost and the process was repeated until a sequence was found. Incrementally generating the cost-optimal sequence database in this manner helps avoid over-generation.

Figure 4: Cost-optimal sequence costs averaged over 5000 random target gates with respect to target gate synthesis logical error rates ε; panel (a) shows sequences with logical base gate error below µ = 10^-5. The logical base gates used are specified in Eq. 3 with cost values (shown in Table 2) assigned as the average number of raw magic states required to distil and implement them to below a specified logical gate error. The synthesis logical errors are calculated using the trace distance (Eq. 8). Corresponding linear best fit values are shown in Table 4. The pattern of the data about the lines of best fit for each logical base gate set is similar between plots because, for each of the logical base gate errors, the ratios of the base gate cost values between orders of the Clifford hierarchy are similar, hence the cost-optimal sequences will be comparable.
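A minimal sketch of this search-and-grow loop is given below. Since Eq. 8 is not reproduced in this extract, the sketch assumes one common form of the trace-distance error for 2×2 unitaries, d(S, G) = sqrt((2 − |tr(S†G)|)/2); it also regenerates the database from scratch at each growth step (reusing generate_sequences from the earlier sketch) rather than resuming it, which a practical implementation would avoid.

import numpy as np

def trace_distance(S, G):
    # Assumed error measure standing in for Eq. 8: d = sqrt((2 - |tr(S^dag G)|) / 2).
    return np.sqrt(max(0.0, (2.0 - abs(np.trace(S.conj().T @ G))) / 2.0))

def combined(seq, base_gates):
    # Multiply the base gates of a sequence back into a single unitary.
    lookup = {name: B for name, B, _ in base_gates}
    U = np.eye(2, dtype=complex)
    for name in seq:
        U = U @ lookup[name]
    return U

def synthesise(G, base_gates, eps, start_cost=4.0, step=2.0):
    # Grow the database until some sequence approximates G to within eps,
    # then return the cheapest such (cost, sequence) pair.
    max_cost = start_cost
    while True:
        db = generate_sequences(base_gates, max_cost)
        hits = [(cost, seq) for cost, seq in db.values()
                if trace_distance(combined(seq, base_gates), G) <= eps]
        if hits:
            return min(hits)
        max_cost += step                 # database too small: generate further and retry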
For each set of base gates, with individual costs calculated for each approach of implementing them, gate synthesis was performed on 5000 random target gates sampled from a uniform distribution for a variety of synthesis error rates ε (calculated using Eq. 8 with respect to the sequences' combined gates). Cost-optimal sequence T-counts calculated using the Z-rotation catalyst circuit approach, for the two methods of assigning base gate costs, are plotted against synthesised target gate error rates for each set of base gates in Figure 3. The corresponding linear best fit values for each set of logical base gates and corresponding cost values are shown in Table 3. We can compare the scaling factors of the fits between different sets of logical base gates to estimate changes in average sequence costs as the synthesis error approaches zero. For the Z-rotation catalyst circuit method that assumes all output gates are directly applied to target qubits (as opposed to using intermediate magic states), we find cost savings relative to Set 1 of 34 ± 3%, 42 ± 2%, 49 ± 2%, and 54 ± 3% for Set 2, Set 3, Set 4, and Set 5 respectively, where uncertainties correspond to 95% confidence intervals. Data for a Set 6 that includes T_8 gates was also calculated; however, no noticeable improvement was found, with sequence cost values being almost identical to Set 5, resulting in a cost saving of 52 ± 3% relative to Set 1. For the Z-rotation catalyst circuit method that assumes all output gates are applied to |+⟩ states, forming intermediate magic states before consuming them to perform the corresponding Z-rotation gate, we find cost savings relative to Set 1 of 29 ± 3%, 31 ± 3%, 31 ± 4%, and 31 ± 4% for Set 2, Set 3, Set 4, and Set 5 respectively. These results show that if gate synthesis includes higher order Clifford hierarchy Z-rotation gates as base gates implemented using the Z-rotation catalyst approach, then a T-count saving of over 50% could potentially be achieved. Cost-optimal sequence raw magic state counts calculated using direct base gate distillation and implementation procedures are plotted against synthesised target error rates for each combination of base gates and cost values in Figure 4. Each of the four plots corresponds to a different resource cost of distilling and implementing the logical base gates, with corresponding logical errors µ = 10^-5, 10^-10, 10^-15 and 10^-20 calculated using the diamond norm. The corresponding linear best fit values for each set of logical base gates are shown in Table 4, and the corresponding cost values are shown in Table 2 (a physical error rate of 0.1% is assumed in all calculations). The pattern of the data about their lines of best fit for each base gate set is similar between plots. This is because, for each of the logical base gate errors, the ratios of the logical base gate cost values between orders of the Clifford hierarchy are similar, hence the cost-optimal sequences will be comparable. For logical base gate errors µ = 10^-5, 10^-10, 10^-15 and 10^-20, we find that Set 2 provides 23 ± 3%, 27 ± 2%, 30 ± 2% and 26 ± 3% reductions in scaling factor respectively compared to Set 1. For µ = 10^-10 and 10^-15, we find that Set 3 provides 30 ± 3% and 33 ± 2% reductions in scaling factor respectively compared to Set 1, which are both approximately a further 3% saving compared to Set 2. No further improvements are noticeable in our data for these assignments of cost values.
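For reference, the scaling factors being compared here come from linear fits of average sequence cost against the negative log-error; a minimal sketch of such a fit, using hypothetical data points in place of the paper's measurements, might look as follows.

import numpy as np

# Hypothetical averaged synthesis data: (epsilon, mean cost over random targets).
eps = np.array([1e-2, 1e-3, 1e-4, 1e-5, 1e-6])
mean_cost = np.array([18.0, 30.5, 43.2, 55.8, 68.1])

# Fit cost ~ a*log10(1/eps) + b; the slope a is the scaling factor compared between sets.
a, b = np.polyfit(np.log10(1.0 / eps), mean_cost, deg=1)
set1_a = 25.0                        # hypothetical Set 1 scaling factor for comparison
print(f"scaling factor {a:.2f}, saving vs Set 1: {1.0 - a / set1_a:.0%}")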
These results show that for any error-correction scheme with distillation costs assigned according to Table 2, using Set 2 (which includes T_4 as logical base gates) instead of the standard Set 1 reduces the average resource cost scaling factor with respect to the negative log-error of synthesis, log(ε^-1), by up to 30%. Additionally, Set 3 can provide up to a further 3% reduction when compared to Set 2. Each method of assigning individual base gate costs used in this work indicated that the resource requirements of synthesis algorithms may be considerably improved by including higher orders of the Clifford hierarchy as logical base gates and by optimising with respect to the individual costs of implementing them.

Table 3: Linear best fits with a confidence level of 95% for cost-optimal sequence costs averaged over random target logical gates with respect to the negative log-error, log(ε^-1), for target gate synthesis calculated using the trace distance (Eq. 8). The sequences are constructed using logical base gates with cost values assigned according to Table 1. The corresponding plots are shown in Figure 3.

Table 4: Linear best fits with a confidence level of 95% for cost-optimal sequence costs averaged over random target logical gates with respect to the negative log-error, log(ε^-1), for target gate synthesis calculated using the trace distance (Eq. 8). The sequences are constructed using logical base gates with cost values assigned according to Table 2. The corresponding plots are shown in Figure 4.
Modelling Gate Proportions
For the raw magic state approach of implementing base gates and the Z-rotation catalyst circuit method that uses intermediate magic states, the logical base gate sets Set 3, Set 4 and Set 5 (see Eq. 3) were shown to provide only marginal resource savings for gate synthesis when compared with Set 2 (see Figs. 3b and 4), even though these sets contain many more logical base gates. To investigate this behaviour, we develop a model in Appendix A for determining the proportion of sets of gates, among all T_l gates with l ≥ 3, within cost-optimal sequences approximating random target gates with specified gate costs. The proportions can provide insight into how the average sequence cost changes with respect to which T_l base gates are included as logical base gates and which cost values are assigned. For logical base gates with non-zero proportion within sequences approximating target gates, we expect that increasing their cost will decrease their recalculated proportion and increase the average cost of these sequences. Furthermore, for sets of logical base gates with relatively small proportions, the average sequence cost would only slightly increase if the set were excluded, compared to sets of base gates with larger proportions. The model estimates the average proportion, p_n, of T_n logical base gates among all T_l gates with l ≥ 3 within cost-optimal sequences approximating random target gates to within sufficiently small synthesis errors ε. The construction is based on a unique canonical form [16] for sequences of logical base gates, in which c and c′ are Clifford gates, H is the Hadamard gate, t_m is the m-th positioned Z-rotation gate from order three and above of the Clifford hierarchy, and M is the total number of t_m gates in the sequence. This canonical form has the property that arbitrary gate sequences with distinct combined gates, where the sequences can consist of logical base gates from the Clifford gates and Z-rotations from orders three and above of the Clifford hierarchy, can be reduced to distinct sequences of this form. The gate proportion for T_n, denoted p_n, can be calculated by averaging the T_n logical gate count over all possible sequences in this canonical form that are below a chosen maximum cost C (as detailed in Appendix A); in this calculation, c_j is the logical base gate implementation cost for T_j, k_l is the number of T_l gates within a particular sequence, |T_l| is the number of gates within T_l, and L is the order of the Clifford hierarchy up to which Z-rotation gates are included. This calculation outputs values closely matching proportion results obtained using the sequence generation algorithm for random target gates, as shown in Figure 5. Figure 5a shows the summed proportions of all T_4 gates among T_3 ∪ T_4 gates over a variety of T_4 cost values for sequences consisting of Set 2 logical base gates. Figure 5b shows the summed proportions of all T_5 gates among T_3 ∪ T_4 ∪ T_5 gates over a variety of T_5 cost values for sequences consisting of Set 3 logical base gates. The other logical base gate costs are assigned values according to their distillation and implementation cost with a maximum logical base gate error of µ = 10^-15, as shown in Table 2. These results suggest that increasing the logical base gate implementation cost of a set T_n drastically lowers the proportion of them found within the database of cost-optimal sequences.
Thus they become less effective at reducing the average cost-optimal sequence cost, since they are included within sequences less often. This is a far simpler calculation than actually performing gate synthesis for many random target gates. The gate set proportions appear to give an indication of how useful a gate subset is among the rest of the base gates. We suspect that, with some further research, this could be used to provide a quick approximation of how much the average synthesis cost reduces when including a base gate subset with specified cost values.

Figure 5: Logical base gate proportions from the sequence generation algorithm compared with those calculated using our model. The sequence generation algorithm outputs cost-optimal sequences approximating random target gates to within ε = 0.03 synthesis logical gate error under the trace distance (see Eq. 8), while the model outputs the proportion of a set of logical base gates within the space of all cost-optimal sequences below a maximum cost that produce distinct combined gates. Clifford gates are ignored in the calculations since they are assumed to have zero cost. Both plots show that the model data closely fit the corresponding results from the sequence generation algorithm. The data show that increasing the logical base gate distillation and implementation cost of a particular set T_n drastically lowers the proportion of them found within the generated cost-optimal sequences. Thus the set T_n with increased costs becomes less effective at reducing the average cost-optimal sequence costs, since they are found less frequently within the sequences. Logical base gate costs are assigned according to Table 2 with a logical base gate error of µ = 10^-15 calculated using the diamond norm. The red, green and blue vertical lines (ordered left to right) indicate the logical base gate distillation and implementation costs for T_3, T_4 and T_5 respectively. (a) The summed proportions of T_4 logical base gates among T_3 ∪ T_4 gates for cost-optimal sequences consisting of Set 2 logical base gates. Logical base gates from T_3 are fixed while the cost for T_4 gates varies. (b) The summed proportions of T_5 logical base gates among T_3 ∪ T_4 ∪ T_5 gates for cost-optimal sequences consisting of Set 3 logical base gates. Logical base gates from T_3 ∪ T_4 are fixed while the cost for T_5 gates varies.
Discussion
We investigated the cost of sequences produced by cost-optimal single-qubit gate synthesis using logical base gates from a combination of Clifford gates and Z-rotation gates from higher orders of the Clifford hierarchy. An algorithm, based on Dijkstra's algorithm, was used to generate a database of cost-optimal sequences from arbitrary single-qubit universal sets of logical base gates with individually assigned costs. As base gates, combinations of Clifford gates and Z-rotation gates from various orders of the Clifford hierarchy were used with two approaches of implementing them. The first uses a recursively applied Z-rotation catalyst circuit that utilises a temporary ancilla qubit, a small number of T gates, and a Z-rotation state to apply two Z-rotation gates of the same angle on two separate qubits while retaining the original Z-rotation state. We calculated average T-count costs for this approach using the following two methods: all output gates of the catalyst circuits are applied directly to target qubits; or each output gate is first applied to a |+⟩ state to form an intermediate magic state, which is then consumed to implement the corresponding gate via a teleportation circuit. The second approach of implementing base gates is through magic state distillation and implementation circuits, with costs assigned as the average number of raw magic states used to implement the gates in error-correction codes up to specified logical error rates. After assigning base gate costs using each method, gate synthesis was performed by finding nearest neighbours within the database of cost-optimal sequences in the Pauli vector space corresponding to combined gates of sequences.
Using the Z-rotation catalyst approach with directly applied output gates to assign gate costs, we found that including the higher order Clifford hierarchy Z-rotation gates along with the standard Clifford+T set of base gates reduced the synthesis cost compared to using the Clifford+T base gate set alone. The average cost-optimal sequence T-counts reduced by 34 ± 3%, 42 ± 2%, 49 ± 2%, and 54 ± 3% for the cumulative inclusion of the fourth, fifth, sixth, and seventh orders respectively. When using the same approach but with all output gates applied via intermediate magic states, the average cost-optimal sequence T-counts reduced by 29 ± 3%, 31 ± 3%, 31 ± 4%, and 31 ± 4% for the cumulative inclusion of the fourth, fifth, sixth, and seventh orders respectively. Each average T-count calculated using the catalyst circuit approach assumes that every output gate of all recursive levels of the circuit is used, so that no output gates are wasted. The procedure also assumes that there are sufficient numbers of ancilla qubits and Z-rotation catalyst states for smooth implementation of the gate sequences resulting from synthesis. A realistic employment of the approach would likely use a combination of direct application of output gates and the use of intermediate magic states. This is because direct application is cheaper with respect to T-count, whereas the intermediate magic states make the implementation more flexible, since they can be consumed at any time to implement the corresponding gate on any target qubit. Nevertheless, these results show that there is potential for the average T-count to decrease by over 50% when performing gate synthesis with higher order Clifford hierarchy Z-rotation base gates implemented using this approach, when compared to cost-optimal synthesis using only the Clifford+T base gate set.
By instead using the magic state distillation approach with base gate costs assigned as the number of raw magic states, we found that including the fourth order Z-rotation gates from the Clifford hierarchy along with the standard Clifford+T gate set decreased the average cost-optimal sequence costs by up to 30 ± 2%. We observe a reduction of up to 33 ± 2% when additionally including the Z-rotation gates from the fifth order. No noticeable improvement is observed when additionally including higher order Z-rotation base gates up to the seventh order. Although these savings are not quite as large as what may be possible with the Z-rotation catalyst approach, the magic state distillation approach does not require an accessible collection of Z-rotation catalyst states to be stored throughout the computation. The implementation circuit for the distilled Z-rotation magic state does require the application of a double-angled Z-rotation gate as a correction 50% of the time. However, this correction gate can ideally be generated as it is required, so that every possible angled rotation does not need to be stored in advance. Also, the number of raw magic states is only a rough approximation of the actual resource cost of implementation. A precise calculation would be an extensive task that considers a variety of factors such as qubit count, circuit depth, magic state distillation cost, and details of the error-correcting implementation.
We investigated the lack of further improvement found when including Z-rotation gates from higher than the fourth order of the Clifford hierarchy when using the direct magic state distillation approach and the Z-rotation catalyst circuit approach with output gates applied via intermediate magic states. A model was developed that estimates the proportion of logical base gates within sequences approximating random target gates. This model assumes that the Z-rotation gates from orders three and above of the Clifford hierarchy have equal proportions when assigned equal cost values, that is, the gate operations are equally useful for approximating random target gates for the purposes of gate synthesis. The proportion estimates were shown to closely fit the data obtained using the sequence generation algorithm on random target gates. This suggests that the lack of observed cost reduction when using higher order logical base gates is due to there being far fewer of them, at their assigned costs, within all cost-optimal sequences generated up to the chosen maximum sequence cost. Thus the frequency with which these base gates are used for synthesis of random target gates is low, leading to a low level of influence over the average resource costs overall. The model provides a simple method, without needing to generate the full database of sequences, for estimating these gate proportions with each order of the Clifford hierarchy assigned individual cost values.

Figure A.1: Logical base gate proportions for equal assigned costs. This plot indicates that the logical base gates are almost equally useful in approximating random target gates using cost-optimal gate synthesis.
The model's assumption of equal gate usefulness is supported by the data in Figure A.1. The figure shows that when each logical base gate is given equal cost, the sequence generation algorithm generates a database of gate sequences in which each gate has approximately the same proportion, with the proportions slowly decreasing for increasing order. We do not expect these proportions to change significantly for larger sequence costs (or smaller synthesis error thresholds ε), since the logical base gate proportions are approximately constant for sufficiently large maximum sequence costs. This can be seen in Fig. A.2 for the case of T_5 logical base gates from within Set 3, generated by the sequence generation algorithm for random target gates.

Assume we have a database of cost-optimal gate sequences that has been generated up to a chosen maximum cost, with individually assigned implementation costs for each set of logical base gates T_l where l ≥ 3. We will calculate the proportion of T_n gates among all sequences within the database. For simplicity, let logical gates from any set T_l for l ≥ 3 be called t gates. Using a unique canonical form [16] for sequences consisting of the Clifford gates and combinations of T_l gates, arbitrary gate sequences can be reduced to a form in which c and c′ are Clifford gates, t_m is the m-th positioned t gate in the sequence, and M is the t-count. For a particular sequence, let the number of t gates from T_l be denoted by k_l. It follows that each sequence consisting of gates from up to order L of the Clifford hierarchy satisfies

    sum_{l=3}^{L} c_l k_l ≤ C,    (12)

where c_l is the cost assigned to logical gates from T_l and C is the maximum cost of the database of gate sequences. It will be useful to denote the number of t gates from order l to L of the Clifford hierarchy as

    K_l := sum_{j=l}^{L} k_j,

noting that K_3 is the t-count M that appears in the canonical form. The aim is to calculate the proportion of T_n gates among all gates in sequences within the database. We begin by counting the total number of possible sequences that can be formed given a set of t gate counts {k_l} for l = 3, ..., L. The total number of possible sequences can then be summed by iterating through every combination of possible sets {k_l} that satisfy Eq. 12 with their assigned base gate costs. Once this expression is determined, it can be extended to calculate the number of T_n gates and the total number of gates, which can then be used to calculate the proportions. For sequences of t-count K_3, the number of permutations of k_l gates within K_3 gate locations is the binomial coefficient

    #Permutations(k_l, K_3) := K_3! / (k_l! (K_3 − k_l)!).

Let |T_l| be the number of distinct Z-rotation gates within order l of the Clifford hierarchy; for example, |T_3| = 2 since T_3 = {T, T†} (up to global phase). Then for each permutation, there are |T_l|^{k_l} unique combinations of assigned T_l logical base gates within the permutation. Thus, the total number of configurations for k_l gate locations with |T_l| variations in a sequence of t gate count K_3 is

    γ(k_l, |T_l|, K_3) := #Permutations(k_l, K_3) · |T_l|^{k_l}.

After assigning gates to k_l locations, there are K_3 − k_l locations remaining within the sequence.

Figure A.2: The proportion of T_5 logical base gates among T_3 ∪ T_4 ∪ T_5 gates calculated using the combinatorial model for all cost-optimal sequences below a maximum sequence cost that produce distinct combined gates. The logical base gate cost values are assigned according to Table 2 for a logical base gate error threshold of µ = 10^-15 under the diamond norm. This plot shows that the proportion of T_5 gates becomes approximately constant for sufficiently large maximum sequence costs.
The strategy from here is to iteratively count the total number of configurations from l = 3 to L by updating the number of remaining locations at each step, which updates as K_{l+1} = K_l − k_l. So for the second iteration, the number of configurations of k_{l+1} gates with |T_{l+1}| variations within the remaining locations K_{l+1} of a given configuration from the assigned k_l number of T_l gates is γ(k_{l+1}, |T_{l+1}|, K_{l+1}), leading to a total of γ(k_l, |T_l|, K_l) γ(k_{l+1}, |T_{l+1}|, K_{l+1}) configurations for k_l and k_{l+1} numbers of T_l and T_{l+1} gates respectively in sequences of t-count K_l. Thus the total number of configurations for a set of t gate counts k = {k_3, k_4, ..., k_L} in sequences of t-count K_3 (containing t gates up to order L of the Clifford hierarchy) is

    Γ(k) := prod_{l=3}^{L} γ(k_l, |T_l|, K_l).

To count the total number of sequences, we sum over all configurations for each assignment of k satisfying Eq. 12. We begin by determining the maximum allowable value for each k_l with respect to already specified lower order t gate counts {k_j} for j < l. The maximum possible value for k_3 is ⌊C/c_3⌋. Given a specified k_3, the maximum value for k_4 is ⌊(C − c_3 k_3)/c_4⌋. Continuing this pattern, given a set of t gate counts {k_3, k_4, ..., k_{l−1}}, the maximum value for k_l is

    ⌊(C − sum_{j=3}^{l−1} c_j k_j) / c_l⌋.

So now the total number of sequence configurations with logical base gate costs c and maximum sequence cost C can be calculated as the sum of Γ(k) over all such assignments of k. Since the number of T_l logical gates within a particular sequence is k_l, the total number of T_l gates within all possible sequences below the maximum cost C is calculated by multiplying each term in the summation by k_l; the total number of gates can be calculated in a similar way with a factor of K_3. Thus, the proportion of T_n gates can be calculated as the weighted sum

    p_n = sum_k k_n Γ(k) / sum_k K_3 Γ(k).
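The whole derivation reduces to a short enumeration. The sketch below implements the proportion calculation under the reconstructed formulas above; only the structure of the computation is taken from the text, while the example cost and size values are illustrative.

from math import comb

def proportions(costs, sizes, C):
    # costs[l] = c_l and sizes[l] = |T_l| for orders l = 3..L; returns p_n for each
    # order n: the fraction of T_n gates among all t gates in sequences of cost <= C.
    orders = sorted(costs)
    totals = {n: 0 for n in orders}      # accumulates sum_k k_n * Gamma(k)
    denom = 0                            # accumulates sum_k K_3 * Gamma(k)

    def recurse(i, ks, budget):
        nonlocal denom
        if i == len(orders):
            K = sum(ks)                  # K_3, the t-count of this assignment
            gamma, remaining = 1, K
            for l, k in zip(orders, ks):
                gamma *= comb(remaining, k) * sizes[l] ** k   # gamma(k_l, |T_l|, K_l)
                remaining -= k           # K_{l+1} = K_l - k_l
            for l, k in zip(orders, ks):
                totals[l] += k * gamma
            denom += K * gamma
            return
        l = orders[i]
        for k in range(int(budget // costs[l]) + 1):          # k_l <= (C - sum c_j k_j)/c_l
            recurse(i + 1, ks + [k], budget - k * costs[l])

    recurse(0, [], C)
    return {n: totals[n] / denom for n in orders}

# Example with |T_3| = 2 and an assumed |T_4| = 4, using hypothetical costs 15 and 30.
print(proportions({3: 15, 4: 30}, {3: 2, 4: 4}, C=120))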
"Physics",
"Computer Science"
] |
A Seven-Dimensional Building Information Model for the Improvement of Construction Efficiency
With the fast expansion of major cities in China, increasingly large-scale, complex, and tall buildings have been built to meet the growing commercial and living demand. However, the efficiency of project management and investment is not always satisfactory. To solve this problem, a seven-dimensional building information model (7D BIM) is developed. To do this, a 3D BIM is first developed, which consists of an architecture model, equipment model, steel framework model, other solid models, etc. Then, a 1D schedule management model and a 3D project management model (bidding management, enterprise quota management, and process management) are integrated into the 3D BIM, thus forming a 7D BIM for a complex project. By providing a clear 3D vision in modeling the construction process, the proposed 7D model can be applied to help engineers/project managers carry out clash detection, structure design, modification, equipment installation, 3D project management, and maintenance after construction. The performance of this model has been demonstrated through a case study of a complex project launched in China. The study shows that the implementation of the 7D BIM has achieved significant cost and time savings as well as project quality and work efficiency improvement.
Introduction
With the fast expansion of major cities in China, more and more large-scale, complex, and tall buildings have been built to meet the increasing commercial and living demand [1]. Challenges of these complex projects include poor geological conditions [1,2], environmental impact and regulations [3], poor process management, budget constraints, safety assessment, and future maintenance [4]. Stakeholders from different sectors are involved in the whole process of construction, including owners, project designers, construction companies, management teams, quality and safety regulation authorities, and contractors. The complexity of project construction and of cooperation among stakeholders makes the efficiency of project management unsatisfactory in most cases [5].
As is well known, BIM provides an effective platform for enhancing the collaboration among construction sectors, management teams, and owners by integrating related data and information for project participants [6,7]. This close cooperation may help improve the quality of design/construction [8], reduce repetition or rework [9], and promote efficiency and quality in construction and management [10,11], thus leading to cost reduction and time savings [12,13]. BIM also enables life cycle management of construction, including the processes of primary project assessment, scheduling, design, construction (equipment installation, budget control, process management), operation management, maintenance, modification, and demolition [14,15]. Thus, BIM plays an important role in achieving project success. Surveys show that BIM has been widely adopted in construction projects in the US [16], France and Germany [10], and the UK [17]. BIM is in the initial stage of development in China, facing the problems of a low level of BIM software awareness [18] and non-user-friendly formats [6]. Considering the challenges involved in traditional construction, the Chinese government introduced regulation articles in 2015 to adopt BIM as a new technological advancement in construction management, thus encouraging construction sectors to achieve success in a sustainable and highly efficient way [19]. Nevertheless, academic research and practical experience of higher-level BIM development and implementation in China remain inadequate.
Based on BIM technology, this study proposes a 7D BIM to resolve the difficulties of, and achieve success in, complex projects. The 7D BIM is built by integrating a traditional 3D BIM with a 1D schedule management model and a 3D project management model. The 7D BIM can be applied to help engineers/project managers carry out clash detection, structure design modification, equipment installation, 3D project management, schedule management, and maintenance after construction. The proposed 7D BIM provides a new perspective on BIM development by integrating a set of 3D project management models rather than a stepwise increase of model dimensions. In the 3D project management, several functions can be achieved simultaneously, including visualized bidding management, enterprise-level quota management, and life cycle process management. In addition, this study demonstrates that the assignment of the core positions of the BIM team in the project management team structure is key to the success of the 7D BIM implementation, which provides significant implications for future BIM management. Last but not least, this study shows that the integration of the Internet of Things (IoT) into the 7D BIM contributes greatly to facilitating information collection and communication in the project, which should be the future direction of BIM implementation. This paper is organized into the following sections: Section 2 presents the literature review on BIM modeling; Section 3 illustrates the development of a 7D BIM, with details on each dimension and its operation; Section 4 demonstrates the implementation of the proposed 7D BIM, where a complex project launched in China is analyzed; Section 5 discusses the characteristics, merits, limitations, and performance of the 7D BIM; and Section 6 presents the conclusion of this paper.
Literature Review
Along with the development of BIM technology and software, BIM applications have transformed from 3D-based applications to nD-based applications. In the early stage of 3D-based applications, 3D BIMs were mainly applied in the design and operation phase to simulate energy consumption, facility performance, evacuation procedures, operation management, and maintenance [20]. 3D models essentially facilitated collaboration between architects and structural engineers; thus, redesigns, processing, revisions, and changes were reduced [21]. Most academics and practitioners agreed that a 4D BIM relates to time (or planning or scheduling) [22]. Nevertheless, the applications of 4D BIMs were diverse. For instance, Zhang and Hu [23] defined 4 levels of 4D BIM: the first level was a simple combination of a 3D model and schedules; construction activities and resources (labor, materials, and machinery) were then imported into the second level; the third level was an extension to site entities; and the structural information of mechanical analysis was further augmented in the fourth level. The key benefit of the 4D BIM was its contribution to risk mitigation by improving team coordination [24]. This was proved by Sloot et al. [25], who adopted the 4D BIM to realize risk mitigation strategies by evaluating design and process, checking workflow clashes and task dependencies, and optimizing construction logistics. Su et al. [26] embedded geometric information, material properties, and a construction schedule in the 4D BIM. From an environmental perspective, the 4D BIM in Jupp's study [27] included the functions of construction planning, construction scheduling, production control, on-site management of safety, workspaces and waste, and environmental planning and management. Guerra et al. [28] further extended the 4D BIM to emphasize construction waste management, construction waste reuse and recycling, resource recovery, on-site reuse, and off-site recycling. Logistics planning and control at different hierarchical levels could also be carried out with the support of 4D BIMs [29].
Researchers and practitioners also had a consensus about the extension of the 5D BIM to cost [21,22,30]. The 5D BIM integrates all of the cost information data such as quantities, schedules, and prices, which is useful in the early design stage of a project as well as later during construction, when changes may occur [24]. The 5D BIM could provide real-time cost advice throughout the detailed design, construction, and operational stages, which helped place the project cost manager at the top of the "value chain" for project clients [30]. The application of the 5D BIM effectively improved the level of meticulous management in the construction stage, reduced project waste, and ensured construction quality [31]. However, beyond the fifth dimension, there appears to be a lack of agreement in BIMs. Sustainability, facility management, safety, health, energy, project life cycle, procurement, knowledge, and as-built operation were popular dimensions of 6D and 7D BIMs [21,22,30,32-37].
By developing and implementing nD BIMs, various functions and benefits could be achieved in different phases of construction projects. For instance, BIM could simplify the data collection process and support dynamic life cycle assessment [26]. BIM could significantly reduce the time needed for LCA applications due to the automatic and accurate generation of the bill of material quantities [24]. BIM could also be integrated with the Internet of Things (IoT), which enabled a continuous flow of real-time data and represented a wide range of valuable information [24]. The real-time data from IoT devices was a powerful paradigm for improving construction and operational efficiencies [33]. Zhai et al. [38] further demonstrated that the combination of BIM and IoT could overcome the barriers that hamper the possible functions of BIM, including inconvenient data collection, lack of automatic decision support, and incomplete information. By providing information needed for energy performance evaluation and sustainability assessment, BIMs enable integrated design, construction, and maintenance towards Net Zero Energy buildings [39]. BIM also improved the efficiency of facility management and project management performance by sharing and exchanging building information between different applications throughout the life cycle of the facilities [40-42]. The integration of BIM into project execution planning ensured greater control over the model, which helped prevent time and cost inefficiencies, facilitate the execution of relevant tasks, and make the whole process efficient, including design, construction, and operation [24,43,44]. All construction activities were involved in the proposed BIM, therefore supporting dynamic structural safety analysis and improving the project's safety performance [23].
In summary, various nD BIMs have been developed and applied in the construction industry to facilitate different functions and improve management efficiency in construction projects. However, 4D and 5D models are still the most popular and common BIMs. In addition, experience of 6D and 7D BIM development and application is inadequate, especially in complex projects. Thus, this study presents a 7D BIM to promote the efficiency of the construction process and reduce cost over the life cycle of a complex project.
Framework of 7D BIM.
In order to achieve the best function of the BIM, a BIM management mechanism is first established (depicted in Figure 1). At the primary level, the BIM team is set as the coordinator and the BIM manager is assigned to respond directly to owners' enquiries. Designers, general contractors, professional subcontractors, supervisors, consultants, and other participants are connected by the BIM platform for collaboration. The second level of collaboration lies in co-working on tasks in the construction process and maintenance. In this BIM management system, the general consultant of BIM can guide and coordinate the application of BIM technology in the whole process of the project.
Under the BIM management and collaboration mechanism, the framework of the 7D BIM can be established (presented in Figure 2). It contains a traditional 3D BIM, a 1D schedule management model, and a 3D project management model (including bidding management, enterprise quota management, and process management). The 3D BIM is developed by combining an architecture model, structure model, equipment model, steel framework model, and other solid models. By combining the 3D BIM with the 1D schedule management model, a 4D BIM is developed, which is consistent with previous studies. Afterwards, the 3D project management model is integrated with the 4D BIM, thus forming the 7D BIM.
3D BIM.
Consistent with other studies, the 3D BIM in this study is also a visual expression of engineering information. By applying the following Work Breakdown Structure (WBS) process, the task nodes of the engineering work based on the 3D BIM construction model can be defined, broken down, and structured: (1) Each task in the WBS should be described independently and should not be implemented repeatedly. (2) The state of progress and completion of each task in the WBS should be quantified and be consistent with the actual work. (3) The tasks in the WBS need to be gradually decomposed into subtasks, according to a certain relationship within the scope of the project.
The logical relationship among components of the 3D BIM, node tasks in the WBS, and construction resources is depicted in Figure 3. As can be seen, there are cross-correspondences between work tasks in the WBS and construction resources.
Thus, the WBS plays an important role in construction schedule management, resource supply, purchasing, and project implementation. The 3D BIM is built using the Autodesk Revit 2018 software, through which independent BIMs such as the architecture model, structure model, water supply model, heating system, electricity model, and general drawing model are developed according to their specialty. These professional models are then integrated into a single document for overall analysis and application. Software solutions including AutoCAD 2014 and Autodesk Navisworks 2018 are utilized in this process for the design work (consultation and exchange of project design drawings), clash detection, visualization, dynamic simulation, and other project evaluation (shown in Table 1). The software can be replaced by other tools if their interfaces are compatible with Revit.
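To illustrate how such a decomposition might be represented in software, the following is a minimal Python sketch of a WBS task node that links 3D BIM components to construction resources; the class name, fields, and identifiers are hypothetical and not taken from any particular BIM product.

from dataclasses import dataclass, field

@dataclass
class WBSTask:
    # One independently described, non-repeating task node (hypothetical schema).
    task_id: str
    description: str
    progress: float = 0.0                             # quantified completion state, 0..1
    components: list = field(default_factory=list)    # linked 3D BIM element IDs
    resources: dict = field(default_factory=dict)     # resource name -> required quantity
    subtasks: list = field(default_factory=list)      # further decomposition within scope

    def rollup_progress(self):
        # Aggregate progress from subtasks so each level stays consistent with actual work.
        if self.subtasks:
            self.progress = sum(t.rollup_progress() for t in self.subtasks) / len(self.subtasks)
        return self.progress

# Example: a structural task linked to model elements and its resource demand.
slab = WBSTask("S-02-01", "Pour level 2 slab",
               components=["ELEM-331045", "ELEM-331046"],
               resources={"concrete_m3": 85, "crew_hours": 64})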
Schedule Management Model.
Based on the simulation results, the 3D BIM can be integrated with the 1D schedule management process, by which the 4D BIM is built to provide a visual expression of the construction process. The integration process imports a project schedule (MS Project, Excel schedule) into the 3D BIM through the schedule management software embedded in the BIM system. The flow chart of this process is shown in Figure 4.
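As a sketch of this integration step, the snippet below attaches planned dates from an exported schedule file to WBS task objects (such as the WBSTask sketched above) and derives which model elements should be highlighted on a given day; the CSV column names are assumptions, since the exact export format of the schedule software is not specified here.

import csv
from datetime import date

def attach_schedule(tasks_by_id, schedule_csv):
    # Attach planned start/finish dates from a schedule export (assumed columns:
    # task_id, start, finish in ISO format) to the WBS tasks: the 3D-to-4D step.
    with open(schedule_csv, newline="") as f:
        for row in csv.DictReader(f):
            task = tasks_by_id.get(row["task_id"])
            if task is not None:
                task.planned_start = date.fromisoformat(row["start"])
                task.planned_finish = date.fromisoformat(row["finish"])

def active_components(tasks_by_id, on_day):
    # Model element IDs whose tasks are underway on a given day, for 4D visualisation.
    return [cid for t in tasks_by_id.values()
            if getattr(t, "planned_start", None) is not None
            and t.planned_start <= on_day <= t.planned_finish
            for cid in t.components]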
Multidimensional construction management can be realized in this 4D model. The 3D virtual construction, animation display, resource allocation (manpower, materials, mechanical equipment), cost, and safety information are integrated with the budget and working procedure. This provides engineers/owners with essential information for the corresponding fund preparation. Meanwhile, the simultaneous visualization function enables stakeholders to view and monitor project progress, capital plans, the WBS, machine plans, and other information anytime and anywhere. With real-time display of planned and actual progress in the 4D BIM, problems can be detected as early as possible and the operation of the project can be facilitated. Three main phases or dimensions are covered in the project management package: the bidding management model, the enterprise quota management model, and the process management model (shown in Figure 2). The functions and mechanisms of these models are discussed below.
Bidding Management Model.
Traditional bid preparation and evaluation require massive document work and intensive labor input. To improve the efficiency of the bidding process, this study proposes a bidding management model that integrates project cost and schedule with project information in the 3D BIM. With visualized business, technical, and price standards, the accuracy and professionalism of bidding can be significantly improved. The bidding management model and workflow chart are shown in Figure 5, and the features of its steps and components are presented below.
(1) Participants: Based on the project management process and the system investigated through the WBS in the 3D BIM, the bidding tasks for the general contractor, supervisors, and cost consultants of the project are specified. Those who have completed the bidding can resubmit the technical applications in BIM to other potential project participants. (2) Designing BIM bidding files: The content, format, and preparation methods for designing BIM bidding files are specified in the bidding model, including the 3D BIM, software, modeling content, and application scope. In addition, the modeling accuracy requirements for each design phase are also quantified, including 3D BIM viewpoints, BIM bidding tools, number of viewpoints, display content, remarks, roaming animation, and landmark models. (3) Basic price: In the bidding model, the basic engineering data of BIM can be investigated before the bidding process, thus increasing data accuracy and reducing the probability of unbalanced bidding.
In the bid evaluation procedure, the basic price of each bid can be checked, and problems relating to incorrect engineering quantities or unreasonable composite prices can be avoided.
Enterprise Quota Management Model.
The cost dimension (5D) of BIM is achieved by a quota management model in this study. It refers to the cost management of construction enterprises, where the cost of labor, materials, machinery, and other expenses can be calculated through pricing software. Luban Works (software) has been adopted in this study to summarize the project construction cost by applying the corresponding project norms and quantities.
The proposed quota management model is aimed at controlling cost based on the 3D BIM, the schedule management model, the bidding management model, and the process management model. The flow chart of the enterprise quota management model is shown in Figure 6.
Note that, during the construction process, some alterations may occur and lead to price fluctuations. For example, the outbreak of COVID-19 and the corresponding lockdown policy in early 2020 caused a shortage of labor and materials in most construction projects, which led to an increase in the price of raw materials and manpower. With the help of the BIM platform, price fluctuations can be reflected in the quota management model and reported to owners and BIM managers. The owners can then carry out a contingency plan to minimize the impact of such alterations, thus controlling the project cost in an efficient way.
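A minimal sketch of this reporting step is shown below: quantities taken off the model are rolled up against a unit-price table, so that a price fluctuation appears as a cost delta to report to the owners. All numbers are hypothetical.

# Resource quantities taken off the BIM model (hypothetical values).
quantities = {"concrete_m3": 850.0, "crew_hours": 6400.0, "rebar_t": 95.0}

baseline_prices = {"concrete_m3": 68.0, "crew_hours": 35.0, "rebar_t": 540.0}
updated_prices = {"concrete_m3": 74.5, "crew_hours": 41.0, "rebar_t": 590.0}

def rollup(qty, prices):
    # Quota-style cost rollup: quantity x current unit price, summed over resources.
    return sum(q * prices[name] for name, q in qty.items())

delta = rollup(quantities, updated_prices) - rollup(quantities, baseline_prices)
print(f"cost impact to report through the BIM platform: {delta:+,.2f}")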
Process Management Model.
This study establishes a process management model to achieve multiple functions of the integrated BIM. It is developed by establishing a BIM-based process database and then connecting the database with on-site management (Figure 7). The project working procedure database is developed by specifying a standard working procedure for each decomposed working unit (task in the WBS) with the help of the 3D visualization BIM. This means that every working procedure is predefined and modeled in the 3D BIM to secure its reliability, efficiency, and quality. Considering that special requirements may emerge during construction and on-site work, the working process database can be updated or renewed by adding modifications to existing standard procedures. Through this visualized approach, the construction personnel can understand the technical details of civil engineering, mechanical work, electrical work, decoration, etc. The development of the working process database in the BIM platform enhances on-site management by interrelating the standard working process with the corresponding BIM components, procedure management, cost control and resource management, etc. Verified against the standard working process, the project schedule and dynamic cost management can be simulated, controlled, and modified. In addition, the site layout and management can be deduced and optimized at the process level. This also helps project owners/managers analyze the application effect of the BIM process library at the end of the construction process.
Background of the Project.
A large-scale comprehensive modern project located in the western region of China, the Phase X project, has been taken as an example to demonstrate the performance of the proposed 7D BIM.
This project contains multifunctional facilities including offices, an international conference center, brand businesses, a star hotel, a cultural and entertainment center, and an industry park. The main structure of this complex building combines a slab-type building (5 stories, 20 meters high) and a tall tower over 200 meters high, covering a total construction area of 1.4 million square meters. The encountered difficulties include complex design, comprehensive working procedures, and high project implementation risk. Thus, traditional operation processes and technical measures cannot meet the requirements of this complex project. The 7D BIM is adopted to improve the life cycle management of the project so as to achieve the objectives of low cost, high efficiency, environmental friendliness, high quality, and safe and modern operation.
3D BIM and Its Application.
The first step of this project is to recruit a BIM manager and his team members, so that a unified BIM working team can be assembled to work for the owner. The main structure of this BIM management team is shown in Figure 8, in which the BIM manager supervises the BIM modeling team, BIM verification team, BIM application team, and BIM on-site consultant. Tasks and responsibilities for each member and subteam of the BIM management are specified. Then, the 3D BIM is developed through the collaboration flow chart shown in Figure 9, involving the project librarian, project technician, BIM modeling engineer, BIM audit engineer, and modeling technician. Based on this collaborative work, the 3D BIM platform of the project is established, as depicted in Figure 10. Within this 3D BIM, submodels can be built in the platform based on their specialties, such as the curtain wall model, the steel structure model, the overall construction model, the mechatronic model, the water supply and drainage model, the weak/strong-current model, the firefighting model, and the heating and cooling model (presented in Figure 11). All submodels built in the 3D BIM platform can be visualized, modified, and verified through clash detection and modeling, the procedure of which is presented in Figure 12. In addition, the accuracy of these submodels in the BIM platform can be enhanced by adding more detail for analysis and simulation; some examples of refined submodels are shown in Figure 13. This visualized process helps reduce project cost and shorten timelines. In the design stage, it is possible to create different scenarios in the 3D BIM by carrying out various building models, restoring design schemes, conducting real-time scheme comparison and intelligent analysis, and implementing modifications and other functions, thus improving the design efficiency. In the construction stage, the 3D BIM is connected with the BIM project management, so that construction progress, cost, and quality can be monitored. For example, clash detection can be carried out before construction, while the construction process can be visualized (Figure 14).
Schedule Management.
The 4D schedule management BIM for Phase X is developed by integrating the 3D BIM with the 1D schedule management model. In the 4D BIM, virtual construction, animation display, resource (manpower, materials, mechanical equipment) allocation, cost, and safety information involved in the construction process are deeply integrated with the budget and working procedure. As shown in Figure 15, the modeling progress, capital plan, and field progress of Phase X can be visualized simultaneously in the 4D BIM. Project progress, the capital plan, the WBS, the mechatronics plan, and other information can also be viewed, thus securing the construction schedule of this project. For better communication between all participants, the Internet of Things has been adopted in the schedule management model through the installation of sensors, monitoring devices, cameras, and advanced communication devices. With the help of the IoT, the on-site monitoring work can be viewed on cell phones, tablets, the BIM platform, and PCs anytime and anywhere. In addition, through advanced communication technology, real-time on-site construction control can be realized (Figure 15).
Bidding Management.
The bidding management model plotted in Figure 5 is adopted in this project to manage the bidding process. In the preparation stage, the BIM management team drafts BIM bidding documents for the architecture model and calls for potential designers to bid for this project. Bidding documents for other work such as construction, mechatronics installation, equipment, suppliers, and contractors are also drafted and specified in the BIM platform. In the bidding process, BIM-based bidding documents are prepared by potential bidders and submitted to the BIM bidding management system according to BIM standards and requirements. In the bid opening stage, the BIM bid documents submitted by the bidders are checked by the tenderer or the bidding agent. Then, these double-checked BIM bids are evaluated by BIM experts through the BIM-based bid evaluation system. With the advantage of BIM visualization, experts can review the integrity and accuracy of the model by 3D roaming (shown in Figure 16) among individual and professional components in BIM bid documents. In addition, with the visualized bidding management model, BIM teams can quickly and accurately compile the bill of quantities, thus providing reliable data support for bidding accounting and reducing the risk of unbalanced bidding. In the decision stage, the Bid Committee reviews the evaluation report on the BIM bid and bid documents (BIM, field layout, cost, procedure management, process management, suppliers, reputation, etc.), based on which the bid decision is finalized.
Enterprise Quota Management.
Considering the various factors affecting the level of the quota, an enterprise quota management model is developed based on the flow chart shown in Figure 6. A thorough investigation is carried out by the BIM team to make a scientific analysis of the enterprise's production and operation. By analyzing and calculating all aspects of consumption data, a reasonable standard for the quota is determined in the BIM system. In addition, the level and scope of the quota are balanced by adopting effective technical measures (e.g., marketing/historical investigation, surveys on similar projects, nearby construction, and suppliers) to ensure the implementation, inspection, evaluation, and accounting of the quota. The cost of labor, material, and machinery usage and other expenses can be calculated through Luban Works. As shown in Figure 17, the corresponding project norms and quantities can be calculated during the construction process. The total/final project construction cost can be controlled by the enterprise quota management model based on the 3D BIM, the schedule management model, the bidding management model, and the process management model.
Process Management.
The BIM-based process database and its connection/interaction with on-site management are developed according to the scheme in Figure 7. In the BIM-based process database, every working procedure is predefined and modeled in the 3D BIM. In consideration of the special requirements of tall building construction, a few new working processes and modifications to existing procedures are added to assist process management. For example, the automatic elevation system for scaffolding is taken into account in the design of the 3D structure model. Precast connections are made for each layer of the tall tower; thus, drilling and installing the scaffolding track on the main building can be avoided. The Internet of Things is integrated with the process management model by installing and embedding many advanced sensors/monitors/cameras, communication devices, and related products into the BIM platform. With the help of this advanced technology for information collection and real-time communication, BIM team members can check the work on site and visualize the BIM process, and the construction personnel can understand the technical details of civil engineering, mechanical work, electrical work, decoration, etc. (shown in Figure 18).
Overall Performance of the Developed 7D BIM.
The developed 7D BIM is implemented by the BIM team to assist the construction of the Phase X project. After receiving the first version of the designed construction drawings on August 10, 2015, the BIM consulting team organized and evaluated the drawing integrity and modeling conditions within 20 days, and feedback was sent to the client's technical director and the head of the Design Institute. The BIM team then started the BIM modeling work on September 9 and finished the professional BIMs in about 3 months. The precision of important component/equipment models reached LOD 400, while the important operation/maintenance models reached LOD 500, corresponding to the requirements of construction deepening design, building operation, and maintenance management.
Figure 13: The refinement of some 3D submodels: construction model, curtain wall model, steel structure model, and mechatronics model.
Figure 14: Clash detection and construction process modeling in the 3D BIM.
In the bidding process, stakeholders including construction companies, contractors, and equipment suppliers were called to submit their bids to a well-developed BIM bidding management system, within which models, quality, costs, processes, and procedures could be visualized and compared. More than 20 bidding activities were launched during the construction process. Each bidding round lasted around 40 days on average, achieving a 30% saving in bidding time compared with the traditional bidding process. In addition, as stated by the chief bidding expert, the decisions based on the BIM bidding system were more precise and comprehensive.
In the design stage, all the professional BIMs were virtually built. Based on the BIM modeling, defects in the design and construction drawings of the various specialties were detected. During the examination, 1858 defects were found, including 661 in the architecture and structure models and 1197 in the mechatronics model. Clash detection was also conducted, and around 58 clash points were found by the 3D BIM. Each collision point was shown in a 3D graphics display with its collision position, the information of the colliding pipelines, and the corresponding drawing position. Thanks to the defect investigation done in the design stage, many construction/installation problems were avoided during the construction process.
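For intuition, clash detection ultimately rests on geometric overlap tests; the sketch below shows the simplest such test, axis-aligned bounding-box overlap, with hypothetical component boxes (real BIM tools test exact geometry and report richer clash metadata).

```python
# Minimal sketch of the geometric check behind clash detection: the
# axis-aligned bounding boxes (AABBs) of two components overlap iff their
# extents overlap on every axis. The boxes below are illustrative.
Box = tuple  # (xmin, ymin, zmin, xmax, ymax, zmax)

def clashes(a: Box, b: Box) -> bool:
    return all(a[i] < b[i + 3] and b[i] < a[i + 3] for i in range(3))

duct = (0.0, 0.0, 3.0, 4.0, 0.4, 3.4)    # a ventilation duct
beam = (2.0, -1.0, 3.2, 2.3, 2.0, 3.8)   # a structural beam crossing it
print(clashes(duct, beam))  # True -> report position + pipeline info
```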
In the construction process, the 4D BIM procedure management model was applied to manage the WBS progress, capital plan, manpower, mechatronics plan, etc. With a real-time display of project progress, the planned and the actual progress (viewed through cell phones, PCs, a monitoring control room, etc.) were compared and visually monitored to control the project schedule. To enhance construction speed and quality, a progress management model was also implemented. Some special working processes were developed in the BIM process database, including the movable scaffolding system for construction, temporary elevations in the core-cylinder, tower cranes with different sizes and working radii, and the steel framework between the flat-type building and the tall tower. During the construction process, cost control was realized through the BIM-based enterprise quota management model. After a comprehensive survey of labor usage, raw material prices, and manufacturing in major Chinese construction enterprises, a dynamic supply-price-duration quota database was built in the BIM. Based on the owner's final project report, the application of the proposed 7D BIM helped the owner save around 19.8% of construction management fees, 35% of engineering design change costs, and 15.8% of bidding process costs. The total duration of the project was also shortened by around 7 months (10%), with good performance achieved in construction quality and cost control. After construction, operation and maintenance could be conducted based on the design and construction data integrated into a BIM operation system, improving management efficiency.
Figure 17: Material usage/price in the BIM system corresponding to a specific component/task, from BIM modeling to on-site construction.
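As a toy illustration of this planned-versus-actual monitoring, the sketch below flags WBS tasks that are behind schedule; the task names, dates, and the crude lag rule are hypothetical, not the project's actual data or the 4D BIM's algorithm.

```python
from datetime import date

# Illustrative sketch of the planned-vs-actual comparison the 4D model
# displays: each WBS task carries a planned finish date and a reported
# percent complete; tasks at risk are flagged. All values are hypothetical.
wbs = [
    {"task": "core tube L30-L35",  "planned_finish": date(2016, 5, 20),
     "pct_complete": 0.80},
    {"task": "curtain wall zone B", "planned_finish": date(2016, 5, 10),
     "pct_complete": 1.00},
]

today = date(2016, 5, 18)
for t in wbs:
    days_left = (t["planned_finish"] - today).days
    behind = t["pct_complete"] < 1.0 and days_left < 2  # crude lag rule
    print(t["task"], "-> BEHIND" if behind else "-> on track")
```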
Discussion
This study proposed a 7D BIM to serve as a comprehensive BIM platform for the construction and operation of a complex project launched in China. The 7D BIM is composed of a 3D BIM, a 1D schedule management model, and a 3D project management model (bidding management model, enterprise quota management model, and process management model). As discussed in the Literature Review, academics and practitioners have come to a consensus that 3D BIM concerns visualization, 4D adds time, and 5D extends to cost [21,22,30], while the definitions of the 6D and 7D BIMs lack agreement [34][35][36]. The 4D BIM developed in this study, which contains the 3D BIM and a 1D schedule management model, is consistent with previous studies. However, unlike the stepwise extension of the model to 7D in other studies [24,35,41], this study developed a 3D project management model to fulfil a set of functions including bidding management, enterprise quota management, and process management. The advantages of the extended 7D model in this study are as follows. Firstly, a bidding management model was developed to facilitate the bidding process through a visualized bidding function. Secondly, the emphasis on quota management in the 7D BIM enabled multiphase and life cycle cost control at the enterprise level, rather than focusing only on project construction cost as in the traditional 5D BIM [30]. In the enterprise quota management model, the costs of design, construction, operation, maintenance, and even enterprise management could all be monitored. Lastly, the integration of the process management model with the other models in the 7D BIM platform enhanced cost and schedule management, which are vital in complex and tall building construction. The success of this project validated the management structure that puts the BIM team and the BIM manager at the center of the construction and operation process (shown in Figure 1). A professional BIM team was in charge of the overall management of the 7D BIM to solve the difficulties encountered, including modification/refinement of design, supervision, bidding, clash detection, schedule control, cost control, construction process control, maintenance, and operation. In addition, the owner of this project fully supported the BIM manager and fulfilled the needs of the BIM team. These findings implied that, apart from developing nD BIMs, the implementation and management of the model are equally important in achieving project success. The manner and importance of the leadership of the BIM management team and the owner's trust in BIM application raised by Liu et al. [45] were demonstrated by this study. The experience of the project in this study also helped solve the problem stated by Herr and Fischer [15] that "most Chinese design and construction processes are highly linearized and characterized by a separation of professions with each AEC profession constructing proprietary BIMs." The Internet of Things made great contributions to facilitating communication and monitoring processes in this study.
The Internet of Things (IoT) was embedded in the 7D BIM, covering construction process modeling, project supervision, model comparison, process realization, construction process control, quality control, and on-site management. Project stakeholders, including owners, the BIM team, construction companies, contractors, supervision teams, and suppliers, could view and check the project progress, identify problems, tap potentials, meet demands, and model/refine the project plans through a unified BIM platform. Advanced communication technology played an important role in project information exchange, helping engineers, managers, and owners to view and supervise the construction process through pads, cell phones, the BIM system, and enterprise-level data centers. Thus, this study demonstrated that the integration of BIM and IoT facilitated the collection and exchange of information for project construction, operation, and maintenance, which should become the future direction of BIM usage [33].
There are also some limitations to the application of this comprehensive BIM. Firstly, given the vital and central role the BIM team undertakes, the price the owner pays for its expertise is high compared with ordinary projects. Secondly, integrating IoT into the BIM requires the investment in and installation of many advanced sensors/monitors/cameras, communication devices, and related products, which also incurs extra cost. Owners or BIM teams who have no access to IoT, or are unwilling to pay its price, may not be able to monitor their projects through this approach.
Thirdly, the working processes and construction procedures are based on the current Chinese construction market, where laborers, managers, and BIM team members work in three shifts during weekdays. This may not be suitable for some countries or regions. Lastly, the construction process, bidding management, cost control, and on-site supervision are also regulated by both owners and local authorities through legislation [46,47], which may differ in other regions.
Conclusion
This paper presents a 7D BIM and its application in a complex project launched in China. The proposed BIM system is composed of a traditional 3D BIM, a 1D schedule management model, and a 3D project management model. The management structure of this project was modified to place the BIM manager and BIM team at the top level. For modern BIM management and efficiency enhancement, the Internet of Things is adopted to provide a clear 3D vision for modeling the construction process, on-site management, and the interaction between field work and the related management teams/contractors through the 7D BIM platform. In addition, a project- and enterprise-level data center is established for life cycle project management.
The application of the 7D BIM has been demonstrated with good performance in the case study, e.g., cost savings, efficiency and quality improvement, and a shortened construction period. The integration of a 3D project management model with a traditional 3D BIM and a 1D schedule management model enables multiple project management functions, including clash detection, structure design, modification, equipment installation, process simulation, facilities management, project operation, and life cycle maintenance.
This paper contributes to the body of knowledge in the following ways. First, this study proposed an innovative way of developing a 7D BIM by integrating a 3D project management model into the traditional 4D BIM. A set of new functions was achieved by the 3D project management model, including a visualized bidding management model, an enterprise-level quota management model, and a life cycle process management model. Second, the core position of the BIM team in the project management structure was key to the success of the 7D BIM implementation.
This finding provides significant implications for future BIM management. Previous studies focused more on the technical aspects of BIM application, such as the software, format, dimensions, and functions of the model. The soft aspects of BIM implementation, including BIM management, BIM team authority, and owners' trust and support, should be highlighted in future studies. Last but not least, this study demonstrated that integrating IoT into the 7D BIM significantly facilitated information collection and communication in the project. Advanced technologies such as 4G and 5G telecommunications for data collection, information storage and exchange, and real-time communication have greatly changed the way of construction management. Although the installation of IoT and the adoption and implementation of the 7D BIM required extra costs, their contributions to overall project cost savings and time shortening should be highlighted and validated in future studies.
Data Availability
Some or all data, models, or codes generated or used during the study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare no conflicts of interest. | 8,572.2 | 2020-12-19T00:00:00.000 | [
"Engineering"
] |
Mathematical Modeling and Analysis of Drill String Longitudinal Vibration with Lateral Inertia Effect
A comparative analysis of the longitudinal vibration of the drill string during drilling, with and without consideration of lateral inertia, is proposed. In light of the actual working conditions, a mechanical model of drill string vibration is established on the basis of theoretical analysis. The longitudinal vibration equation of the drill string is derived from the Rayleigh-Love model and a one-dimensional viscoelastic model. Using the Laplace transform method and the relationships among the model parameters, solutions for the complex impedance at the bottom of the drill string are obtained, and the comparison results are analyzed to quantify the effect of lateral inertia on the longitudinal vibration characteristics. The research shows that the shorter the drill string, the greater its cross-sectional area, and the greater the damping coefficient of the bottomhole acting on the bottom of the drill string, the more evident the effect of lateral inertia on the dynamic stiffness of the drill string. The Poisson ratio of the drill string affects the results only when the lateral inertia effect is taken into account, and its influence is relatively small compared with the former three factors.
Introduction
In oil and gas drilling engineering, the drill string is a slender structure connecting downhole tools and the drilling rig. Its dynamic performance is one of the key factors affecting drilling efficiency and cost. The consequences of drill string vibration include the rock-breaking performance of the drill bit, the drilling ROP (rate of penetration), wellbore quality, and downhole tool fatigue [1][2][3][4]. According to the drilling conditions, drill string vibration can be divided into three forms, longitudinal, lateral, and torsional, of which longitudinal vibration is the main factor in drilling efficiency and safety. Longitudinal vibration makes the drill bit move up and down, which results in constant changes in the ROP, the service life of the drill bit cutting elements, and the drilling footage [5][6][7]. In addition, longitudinal vibration itself may cause fatigue fracture of the drill string [8].
Regarding solution methods for drill string vibration analysis with complex influence factors and boundary conditions, related research includes treating the drill string as an elastic body, obtaining vibration equations with the dynamic FEM (finite element method), and analyzing longitudinal vibration considering the coupling effects of wellbore friction or the hydrodynamic interaction of the drilling fluid [9][10][11][12][13][14]. However, in light of actual drilling conditions and well structure, for such an elongated rod, lateral inertia has an important influence on the longitudinal vibration of the drill string. In fact, the longitudinal vibration characteristics of the drill string cannot be fully captured if the lateral inertia effect is ignored [15]. Moreover, a search of the reference literature found no previous work on the lateral inertia effect on drill string longitudinal vibration. On this basis, the influence of the lateral inertia effect on drill string longitudinal vibration is discussed in this paper. According to the well structure and the working conditions of the drill string, longitudinal vibration models with and without lateral inertia are established. Moreover, a solution method for the key parameter, the impedance of the drill string at the bottomhole, is presented.
With the presented analysis models and solution equations, the influence of including or neglecting the lateral inertia effect on drill string longitudinal vibration is discussed.
Meanwhile, by comparing different calculation results, the influences of some key parameters on the vibration features are presented, such as the drill string length, inside and outside radii, damping coefficient, and Poisson's ratio. The research method and model can provide a reference for drill string vibration research. At the same time, the research results can provide theoretical and practical value for optimizing downhole tool design and the safety evaluation of drilling operations.
Analysis Method and Mathematical Model
Based on the real conditions of the drill string, related assumptions are proposed to simplify the study of drill string longitudinal vibration [16]. The following assumptions are made: the cross-sectional surface of the drill string is equivalently circular; the drill string axis coincides with the borehole axis; the drilling fluid is regarded as a viscoelastic layer that can be divided into several layers, and the drilling fluids inside and outside the drill string have identical characteristics; and the vibration velocity of the drill string is much larger than the flow rate of the drilling fluid. According to the drilling conditions, the drill string vibration analysis model is presented in Figure 1, wherein h_i denotes the location of the ith drill string layer, L is the total length of the drill string, R_i and r_i denote the outer and inner radii of the ith drill string segment, the shear stresses of the drilling fluid inside and outside the ith segment are denoted by tau_i^in and tau_i^out, k_i is the stiffness coefficient, and c_i is the damping coefficient of the ith drill string segment. Usually, in the absence of inner and outer mud pressure, this vibration reduces to the classical wave equation for longitudinal waves in a uniform rod.
Model of Drill String without Lateral Inertia Effect.
The deformation produced in a uniform drill string while it vibrates in a borehole is symmetric about its axis. To establish the longitudinal vibration model of the drill string without the lateral inertia effect, the drill string is regarded as a one-dimensional viscoelastic body and, using the laws of mass and momentum conservation, the vibration equation of the ith drill string segment is obtained, in which the right-hand side denotes the mud pressures, expressed in terms of the axial displacements, and the left-hand side denotes the inertia force and the shear stresses of the drilling fluid inside and outside the drill string. Here u_i denotes the longitudinal displacement of the ith drill string segment, a function of time t and coordinate x: u_i = u_i(x, t). The vibration velocity of the drill string and the flow rate of the drilling fluid are denoted v_d and v_f, respectively. Moreover, E_i denotes the elastic modulus of the ith drill string segment, rho_i is its density, and A_i is its cross-sectional area, A_i = pi(R_i^2 - r_i^2). Under real drilling conditions, v_f is much smaller than v_d and the movement of the drill string is always in one direction, which simplifies the fluid-interaction terms accordingly. The shear stresses of the drilling fluid inside and outside the ith segment, tau_i^in and tau_i^out, are expressed through the longitudinal shear stiffnesses of the drilling fluid at the positions inside and outside the ith segment. Applying the Laplace transform to the vibration equation, consistent with the assumption conditions, yields an ordinary differential equation in x, in which the Laplace transform with respect to time of u_i(x, t) is denoted U_i(x, s). With a wavenumber-like parameter lambda_i defined from the model coefficients, the general solution takes the form U_i(x, s) = C_1 e^(lambda_i x) + C_2 e^(-lambda_i x), where C_1 and C_2 are undetermined coefficients fixed by the boundary conditions. During oil and gas drilling, the vibration of the drill bit is caused partly by the uneven bottomhole. Assuming that the force of the bottomhole on the drill bit is F(x, t), by the transitivity of force, the force on the bottom of the drill string can also be denoted F(x, t), and the conditions at the bottom and top of the drill string can be written accordingly, with the Laplace transform of F(x, t) with respect to time denoted by its transformed counterpart. Substituting these boundary conditions into the transformed equation determines C_1 and C_2 and hence the displacement U_i(x, s). Moreover, setting s equal to the imaginary frequency makes the Laplace transform equivalent to the one-sided Fourier transform, so the frequency response of the displacement can be expressed as U_i(x, omega), and the complex impedance of the drill string is derived from it. The complex impedance at the drill string bottom is equal to the complex stiffness, which can be expressed in complex form: the real component denotes the dynamic stiffness, reflecting the drill string's ability to resist longitudinal deformation. In other words, when the drill string is pressed by the vibration force, the value of the dynamic stiffness is closely associated with the deformation of the drill string. The imaginary component denotes the dynamic damping, which reflects the energy dissipation of the drill string.
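For intuition, the following minimal numeric sketch evaluates the textbook driving-point dynamic stiffness K(omega) = E A k cot(kL) of an undamped, fixed-free uniform rod driven axially at its free end; this omits the paper's fluid shear and bottomhole damping terms, and the material values below are typical steel drill pipe numbers, not the paper's Table 1 data.

```python
import numpy as np

# Illustrative material/geometry assumptions (not the paper's exact values)
E = 210e9                      # elastic modulus of steel, Pa
rho = 7850.0                   # density, kg/m^3
R, r = 0.0635, 0.054           # outer/inner radius, m (typical drill pipe)
A = np.pi * (R**2 - r**2)      # cross-sectional area, m^2
c = np.sqrt(E / rho)           # longitudinal wave speed, m/s
L = 2000.0                     # drill string length, m

def dynamic_stiffness(f_hz: float, length: float) -> float:
    """Driving-point dynamic stiffness E*A*k*cot(k*L) of an undamped,
    fixed-free uniform rod driven axially at its free end."""
    k = 2 * np.pi * f_hz / c
    return E * A * k / np.tan(k * length)

for f in np.linspace(20, 30, 11):   # sweep the frequency band used below
    print(f"{f:5.1f} Hz  K = {dynamic_stiffness(f, L):+.3e} N/m")
```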
Model of Considering Lateral Inertia Effect.
Considering the influence of the lateral inertia effect on the drill string vibration characteristics under actual conditions, the vibration of the ith drill string segment can be described by the theory of the Rayleigh-Love rod, where nu_i denotes Poisson's ratio of the ith drill string segment. It is worth noticing that both coupling terms are proportional to Poisson's ratio; this is reasonable, since this ratio describes the radial contraction of a rod subjected to axial strain. According to the drilling field situation, the conditions v_f - v_d << 0 and v_d > 0 hold, and the equation can be simplified accordingly. To solve the equation, similar to the method used for the model without lateral inertia, and considering the initial and continuity conditions, the vibration equation of the drill string is solved by the Laplace transform, and the displacement expression of the drill string, with the lateral inertia effect included, is obtained.
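For reference, the standard textbook form of the Rayleigh-Love equation for a uniform rod is reproduced below; the paper's full equation additionally carries the drilling-fluid shear and mud-pressure terms. Here u is the axial displacement, nu is Poisson's ratio, and I_p is the polar moment of area of the cross section.

```latex
% Standard Rayleigh-Love longitudinal vibration equation for a uniform rod;
% the mixed-derivative term is the lateral-inertia correction, which
% vanishes when \nu = 0, recovering the classical wave equation.
E A \,\frac{\partial^2 u}{\partial x^2}
  \;=\;
\rho A \,\frac{\partial^2 u}{\partial t^2}
  \;-\;
\rho\,\nu^2 I_p \,\frac{\partial^4 u}{\partial x^2\,\partial t^2}
```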
Here, auxiliary symbols are defined whose expressions follow from the model parameters; with them, the displacement expression of the drill string is obtained, and the analytical solution of the complex impedance at the bottomhole of the drill string can be expressed in closed form.
Numerical Example and Analysis of Key Parameters Influence
To comparatively analyze the influence of design parameters on the characteristics of drill string longitudinal vibration, the complex impedance results are discussed using the analysis models established above, including the models with and without the lateral inertia effect. In the figures of calculation results, the horizontal axis is the vibration frequency, and the vertical axis is the dynamic stiffness at the bottom of the drill string. First of all, the material parameters of the drill string and drilling fluid, including the density, elastic modulus, stiffness coefficient, and longitudinal shear stiffness, are shown in Table 1.
The Drill String Length.
The length of the drill string also represents the well depth. At the same time, for the ratio of drill string length to diameter, a bigger ratio corresponds to poorer vibration stability of the drill string. The influence of the drill string length on its dynamic stiffness at the bottom is analyzed in Figure 2, with and without the lateral inertia effect taken into account. The numerical example parameters are shown in Table 2, including the drill string Poisson's ratio, length, radius, and damping coefficient; the drill string length values are 1800 m and 2000 m. In Figure 2, L denotes the length of the drill string in the model without the lateral inertia effect (m), and L_h denotes the length in the model with the lateral inertia effect (m). The results for the drill string dynamic stiffness are presented for frequencies of 20 Hz to 30 Hz, for different drill string lengths and for the models with and without the lateral inertia effect. The following conclusions can be drawn: with increasing frequency, the amplitude of the drill string dynamic stiffness shows an increasing trend for both lengths and both analysis models. Meanwhile, the influence of the lateral inertia effect on the drill string dynamic stiffness is almost negligible when the frequency is low or zero. However, as the frequency increases, for example above 25 Hz, the influence of the lateral inertia effect becomes more obvious. Comparing the models with and without lateral inertia, in the increasing period of the dynamic stiffness, the absolute value with lateral inertia is greater than that without it. Comparing different drill string lengths, the influence of the lateral inertia effect is bigger when the drill string is shorter.
The Inside and Outside Radius.
The inside and outside radii of the drill string are determined by its size specification.
The influence of radius changes on the dynamic stiffness is analyzed in Figure 3, for the models with and without the lateral inertia effect. As before, the numerical example parameters are shown in Table 3, including the different inside and outside radius values of the drill string. In Figures 3 and 4, R and r denote the radii in the model without the lateral inertia effect (m), and R_h and r_h denote the radii in the model with the lateral inertia effect (m). The drill string dynamic stiffness results are presented for frequencies of 20 Hz to 30 Hz, as shown in Figures 3 and 4, for different inside and outside radii and for the models with and without the lateral inertia effect. As before, the following conclusions can be drawn: with increasing frequency, the dynamic stiffness amplitude shows an increasing trend. Meanwhile, when the outside radius increases or the inside radius decreases, which is equivalent to an increase in cross-sectional area, the increasing trend of the dynamic stiffness becomes more notable. Also, comparing the models with and without lateral inertia, in the increasing period of the dynamic stiffness, the value with lateral inertia is greater than that without it. Meanwhile, the bigger the difference between the drill string's outside and inside radii, the more important the influence of the lateral inertia effect on the dynamic stiffness.
The Damping Coefficient.
The damping coefficient of the bottomhole acting on the drill string influences the drill string's longitudinal vibration, and the lateral inertia effect is directly related to the damping coefficient. According to drilling field conditions, the calculation parameters for analyzing the damping coefficient are shown in Table 4, taking the damping coefficient as 8000 and 10000 Ns/m. In Figure 5, c denotes the damping coefficient in the model without the lateral inertia effect (Ns/m), and c_h denotes the damping coefficient in the model with the lateral inertia effect (Ns/m). Figure 5 shows the influence of the damping coefficient on the drill string dynamic stiffness for frequencies of 30 Hz to 40 Hz. It can be observed that the increasing trend of the dynamic stiffness becomes more notable when the damping coefficient decreases and the cyclic frequency increases, for the models both with and without lateral inertia. Within a single cycle, closer to the crest or trough, the difference in drill string dynamic stiffness is bigger. When the lateral inertia effect is considered, the absolute value of the dynamic stiffness in the increasing period is bigger than that of the model without it. However, a bigger damping coefficient does not mean a more obvious influence on the drill string dynamic stiffness, which indicates that the damping of the bottomhole on the drill string can weaken the influence of the lateral inertia effect on the dynamic stiffness.
Poisson's Ratio.
Poisson's ratio is the lateral deformation coefficient of a material, reflecting its elastic constant of lateral deformation. Under static loading, the change in Poisson's ratio of the drill string is very small, and it is generally taken as nu = 0.25. However, under dynamic loading, the dynamic Poisson's ratio is commonly taken as nu = 0.1-0.3. According to the analysis above, the calculation parameters for analyzing Poisson's ratio are shown in Table 5, taking Poisson's ratio as 0.10 and 0.30. In Figure 6, nu denotes Poisson's ratio in the model without the lateral inertia effect, and nu_h denotes Poisson's ratio in the model with the lateral inertia effect. Figure 6 shows the influence of Poisson's ratio on the drill string dynamic stiffness in the frequency range of 30 Hz to 40 Hz. It can be seen that the increasing trend of the drill string dynamic stiffness becomes more obvious when Poisson's ratio and the cyclic frequency increase, for the models both with and without lateral inertia. However, unlike the other parameters, changing the drill string's Poisson's ratio affects only the results of the model that considers lateral inertia and has no influence on the results of the model without it, which indicates that the lateral inertia effect includes the effect of Poisson's ratio. Meanwhile, when the lateral inertia effect is considered, the absolute value of the dynamic stiffness in the increasing period increases with the drill string's Poisson's ratio.
Natural Frequency.
Assume that the length of the drill string is 3500 m, with the boundary conditions prescribed above, as well as the structural parameters of the drill string. The longitudinal vibration model of the drill string is analyzed under different conditions. The first three orders of natural frequency of the drill string's longitudinal vibration, with and without the lateral inertia effect, are shown in Table 6.
The calculation results illustrate that the natural frequency increases with the frequency order, and that without lateral inertia it is lower than in the case with the lateral inertia effect. When the inertia effect is considered, the lateral motion is coupled with the longitudinal vibration, which causes the difference in vibration. By adjusting the drilling parameters accordingly, drill string resonance can be avoided, making drilling optimization possible.
Conclusion
According to drilling field conditions, the analysis model of drill string longitudinal vibration is established with the lateral inertia effect, and the Laplace transform is used to solve for the complex impedance at the bottom of the drill string. Using the established equations and analyzing the results of the numerical examples, the influence of design parameters on the drill string dynamic stiffness is discussed, including the results of the models with and without the lateral inertia effect. Within the frequency range of drilling field conditions, the following conclusions can be drawn: (1) The cyclic amplitude of the drill string dynamic stiffness, whether the radial inertia effect is considered or not, increases with frequency and with the length of the drill string. Therefore, to resist longitudinal deformation and excessive vibration of the drill string, an appropriate drill string length should be considered in the well structure design. (2) Since the drill string dynamic stiffness increases with the cross-sectional area, and combining the relationship between sectional area and the inside and outside radii, the influence of sectional area on the drill string dynamic stiffness can provide a reference for the design of the BHA (bottomhole assembly).
(3) As the damping coefficient increases, the amplitude of the drill string dynamic stiffness decreases, which would lead to an increase in the vibration force.
These results coincide with actual drilling conditions. Moreover, they can provide basic data for the design of downhole friction reduction tools.
(4) Regarding the influence of Poisson's ratio on the drill string dynamic stiffness, the results differ somewhat from those of the other parameters. In the model without the lateral inertia effect, changing Poisson's ratio has no influence on the drill string dynamic stiffness. However, in the model that includes the lateral inertia effect, the influence of Poisson's ratio on the drill string dynamic stiffness cannot be ignored. Under drilling field conditions, these results are important for drilling dynamics analysis, especially in deep and ultradeep drilling or new wells, and form a basis for improving rock-breaking efficiency and ROP.
Figure 2: Influence of the drill string length on dynamic stiffness.
Figure 3: Influence of outside radius on dynamic stiffness.
Figure 4: Influence of inside radius on dynamic stiffness.
Figure 5: Influence of the damping coefficient on dynamic stiffness.
Figure 6: Influence of Poisson's ratio on dynamic stiffness.
Table 1: Material parameters of the drill string.
Table 2: Calculation parameters for the influence of drill string length.
Table 3: Calculation parameters for the influence of the inside and outside radii.
Table 4: Calculation parameters for the influence of the damping coefficient.
Table 5: Calculation parameters for the influence of Poisson's ratio.
Table 6: Natural frequencies of free vibration under different conditions. | 4,632.6 | 2016-03-28T00:00:00.000 | [
"Materials Science"
] |
Eggerthella timonensis sp. nov, a new species isolated from the stool sample of a pygmy female
Abstract Eggerthella timonensis strain Marseille‐P3135 is a new bacterial species isolated from the stool sample of a healthy 8‐year‐old pygmy female. This strain (LT598568) showed a 16S rRNA sequence similarity of 96.95% with its phylogenetically closest species with standing in nomenclature, Eggerthella lenta strain DSM 2243 (AF292375). This bacterium is a nonspore-forming, Gram‐positive, nonmotile rod with catalase but no oxidase activity. Its genome is 3,916,897 bp long with a G+C content of 65.17 mol%. Of the 3,371 predicted genes, 57 were RNAs and 3,314 were protein‐coding genes. Here, we report the main phenotypic, biochemical, and genotypic characteristics of E. timonensis strain Marseille‐P3135 (=CSUR P3135, =CCUG 70327); ti.mo.nen′sis, N.L. fem. adj., timonensis referring to La Timone, the name of the hospital in Marseille (France) where this work was performed. The strain is a nonmotile, Gram‐positive rod, unable to sporulate, oxidase negative, and catalase positive. It grows under anaerobic conditions between 25°C and 42°C, optimally at 37°C.
| INTRODUCTION
The human gut microbiota has drawn increasing attention with the advancement and development of new sequencing techniques (Gill et al., 2006; Ley, Turnbaugh, Klein, & Gordon, 2006; Ley et al., 2005). Yet, we face several limitations when using these techniques, especially regarding depth bias, incomplete databases, and obtaining raw material for further analysis (Greub, 2012). However, the ability to cultivate and isolate pure colonies remains mandatory for describing the human gut microbiota, hence the need to develop a technique that enhances the efficiency of these two factors (Lagier et al., 2015). For the human gut, stool samples are the best representatives of its microbiome, since 1 g of human stool may contain up to 10^11 to 10^12 bacteria (Raoult & Henrissat, 2014). Before the introduction of culturomics, only 688 bacteria and 2 archaea had been recognized in the human gut. Culturomics was developed with the purpose of optimizing the growth conditions of previously uncultured bacteria in order to fill the missing gaps in the human microbiome (Lagier et al., 2012a). In general, culturomics consists of culturing samples under 18 different conditions and isolating pure colonies for further identification using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) and 16S rRNA gene sequencing. Any unidentified colonies are subjected to 16S rRNA gene sequencing and a series of descriptive experiments targeting phenotypic, biochemical, and genomic characteristics at the same time (Lagier et al., 2012b; Seng et al., 2009). Using this methodology, we were able to isolate a new strain, Eggerthella timonensis, a member of the genus Eggerthella (Bilen, Cadoret, Daoud, Fournier, & Raoult, 2016). Eggerthella lenta, formerly known as Eubacterium lentum, is the type species of the genus Eggerthella and was first reported in 1935 by Arnold Eggerth (Eggerth, 1935; Kageyama, Benno, & Nakase, 1999; Moore, Cato, & Holdeman, 1971). The genus Eggerthella belongs to the phylum Actinobacteria, family Coriobacteriaceae, and is known for its ability to grow under anaerobic conditions (Kageyama et al., 1999). Moreover, Eggerthella species have been reported to colonize the human gut and have been correlated with several health problems such as anal abscess and ulcerative colitis (Lau et al., 2004a).
| Strain isolation
Before stool sample collection in Congo in 2015, approval was obtained from the ethics committee (09-022) of the Institut Hospitalo-Universitaire Méditerranée Infection (Marseille, France). The stool sample was collected from a healthy 8-year-old pygmy female in accordance with the Nagoya protocol. Stool samples were shipped from Congo to France in the specific protective medium C-Top Ae-Ana (Culture Top, Marseille, France) and stored at −80°C for further study and analysis.
Samples were inoculated into a blood culture bottle (BD BACTEC®, Plus Anaerobic/F Media, Le Pont de Claix, France) supplemented with 5% rumen fluid and 5% sheep blood at 37°C. Bacterial growth and isolation were assessed over 30 days on 5% sheep blood-enriched Columbia agar solid medium (bioMérieux, Marcy l'Etoile, France).
MALDI-TOF MS was used for colony identification. When the latter failed to identify tested colonies, 16S rRNA gene sequencing was used (Lagier et al., 2012b; Seng et al., 2009). On average, 10,000 colonies were tested for each stool sample.
| MALDI-TOF MS and 16S rRNA gene sequencing
Using an MSP 96 MALDI-TOF target plate, bacterial colonies were spotted and identified by MALDI-TOF MS using a Microflex LT spectrometer, as previously described (Seng et al., 2009). In case of MALDI-TOF identification failure due to the lack of a reference strain in the database, 16S rRNA sequencing was used for further analysis with the GeneAmp PCR System 2720 thermal cycler (Applied Biosystems, Foster City, CA, USA) and the ABI Prism 3130xl Genetic Analyzer capillary sequencer (Applied Biosystems) (Morel et al., 2015).
Sequences were assembled and edited using the CodonCode Aligner software (http://www.codoncode.com) and finally blasted against the online database of the National Center for Biotechnology Information (NCBI) (http://blast.ncbi.nlm.nih.gov.gate1.inist.fr/Blast.cgi). A sequence similarity of less than 98.65% with the closest species was used to define a new species, and less than 95% to define a new genus (Kim, Oh, Park, & Chun, 2014).
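The similarity thresholds just cited lend themselves to a one-line decision rule; the sketch below encodes them and reproduces the classification of the present strain (the thresholds come from the cited reference; the function itself is merely illustrative).

```python
# Threshold rule from Kim et al. (2014): <98.65% 16S identity to the closest
# named species suggests a new species, <95% a new genus.
def classify(identity_pct: float) -> str:
    if identity_pct < 95.0:
        return "candidate new genus"
    if identity_pct < 98.65:
        return "candidate new species"
    return "likely same species as closest match"

print(classify(96.95))  # strain Marseille-P3135 vs E. lenta -> new species
```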
| Phylogenetic analysis
The 16S rRNA sequences of the strain's closest species were obtained from the database of "The All-Species Living Tree" Project of Silva (LTPs121) ("The SILVA and 'All-Species Living Tree Project (LTP)' taxonomic frameworks," n.d.), aligned with the Muscle software, and phylogenetic inferences were made using FastTree with the approximately-maximum-likelihood method (Price, Dehal, & Arkin, 2009).
Moreover, the Shimodaira-Hasegawa test was adopted to compute the local support values shown on the nodes. Poor taxonomic reference strains were removed, along with duplicates, using Phylopattern (Gouret, Thompson, & Pontarotti, 2009). This pipeline was run using the DAGOBAH software (Gouret et al., 2011), which comprises Figenix (Gouret et al., 2005) libraries.
| Growth conditions
In order to obtain the optimal growth conditions, the strain was cultured under several conditions of temperature, atmosphere, pH, and salinity. First, the strain was cultured and incubated under aerobic, anaerobic, and microaerophilic conditions on 5% sheep blood-enriched Columbia agar (bioMérieux) at the following temperatures: 28°C, 37°C, 45°C, and 55°C. Bacterial growth under anaerobic and microaerophilic environments was tested using the GENbag anaer and GENbag microaer systems (Thermo Fisher Scientific, Basingstoke), respectively. Furthermore, salinity tolerance was tested by assessing growth at 37°C under anaerobic conditions using different NaCl concentrations (0, 5, 10, 50, 75, and 100 g/L NaCl). As well, the optimal pH for growth was evaluated by testing multiple pH values: 6, 6.5, 7, and 8.5.
| Morphological and biochemical assays
In order to biochemically describe strain Marseille-P3135, different API tests (ZYM, 20A, and 50CH; bioMérieux) were used. The sporulation ability of this bacterium was tested by exposing a bacterial suspension to a thermal shock at 80°C for 10 min and then culturing it on COS medium. Moreover, the motility of the strain was assessed using a DM1000 photonic microscope (Leica Microsystems, Nanterre, France) under a 100× objective lens. Also, a bacterial suspension was fixed with a solution of 2.5% glutaraldehyde in 0.1 mol/L cacodylate buffer for more than 1 hr at 4°C for observation under a Morgagni 268D (Philips) transmission electron microscope. Finally, Gram staining results and images were obtained with the DM1000 photonic microscope (Leica Microsystems) using a 100× oil-immersion objective lens.
| Fatty acid methyl ester (FAME) composition of strain Marseille-P3135
Cellular FAME analysis was performed using gas chromatography/mass spectrometry (GC/MS). Harvested from several culture plates, two samples were prepared with <1 mg of bacterial biomass per tube, and then FAME analysis and GC/MS were done as previously described.
| DNA extraction and genome sequencing
To extract the genomic DNA (gDNA) of strain Marseille-P3135, FastPrep BIO 101 (Qbiogene, Strasbourg, France) was used for a mechanical treatment with acid-washed beads (G4649-500 g Sigma).
Then, samples were incubated with lysozyme for 2.5 hr at 37°C, and the EZ1 biorobot (Qiagen) was used for DNA extraction according to the manufacturer's guidelines. Qubit was used for DNA quantification (69.3 ng/μl).
As for genome sequencing, MiSeq technology (Illumina Inc, San Diego, CA, USA) was used with mate-pair and paired-end methods, and the Nextera XT kit (Illumina) and Nextera Mate Pair kit (Illumina) were used for sample barcoding. The DNA of the strain was mixed with 11 paired-end projects and 11 mate-pair projects. Paired-end libraries were prepared using 1 ng of gDNA, which was fragmented and tagged. Twelve PCR amplification cycles completed the tag adapters and added dual-index barcodes. Subsequently, purification was done using AMPure XP beads (Beckman Coulter Inc, Fullerton, CA, USA), and library normalization was done as described in the Nextera XT protocol (Illumina) for pooling and sequencing on the MiSeq. A single 39-hr run in 2 × 250 bp was done for paired-end sequencing and cluster generation, and the library was loaded on two flowcells. A total of 6.5 and 4.3 Gb of information was obtained from cluster densities of 685 and 446 k/mm², with cluster qualities of 95.1% and 94.8% (12,615,000 and 8,234,000 passed-filter clusters). The index representation for strain Marseille-P3135 was 4.57% and 3.83%. The 576,647 and 315,481 paired-end reads were filtered based on read quality.
As for mate-pair library preparation, 1.5 μg of gDNA was used with the Nextera mate-pair Illumina protocol. Mate-pair junction adapters were used to tag the fragmented gDNA, which was assessed on an Agilent 2100 BioAnalyzer (passing filtered paired reads). The index representation of the studied strain was 8.53% and 9.24%, and the 867,401 and 511,563 paired reads were trimmed and then assembled.
Genome assembly, annotation, and comparison were made with the same pipeline as in our previous work (Elsawi et al., 2017).
| Strain Marseille-P3135 identification
After comparison of the 16S rRNA gene sequence of the present strain with those of other organisms, it was found to exhibit a sequence similarity of 96.95% with E. lenta (DSM 2243; AF292375), its phylogenetically closest species with standing in nomenclature (Figure 1).
The phylogenetic analysis clearly supports that the studied strain is a member of the Eggerthella genus. Having more than 1.3% sequence divergence with its closest species, we can suggest that the isolate represents a new species named E. timonensis (Bilen et al., 2016).
| Phenotypical and biochemical analysis of strain Marseille-P3135
The strain is a nonmotile Gram-positive rod, unable to sporulate, oxidase negative, and catalase positive. It grows under anaerobic conditions between 25°C and 42°C but optimally at 37°C. As for acidity tolerance, this strain was able to survive in media with pH ranging between 6 and 8.5 and could sustain only a 5 g/L NaCl concentration.
Colonies have a smooth appearance with a mean diameter of 0.5 mm. Examined traits using API20A, API50CH, and APIZYM are detailed in supplementary Table 1. A comparison of some biochemical features was done in Table 1 between the studied strain and the literature data of closely related species (Lau et al., 2004b;Würdemann et al., 2009).
Composed of two scaffolds, the genome of strain Marseille-P3135 is 3,916,897 bp long with a G+C content of 65.17 mol%. Of the 3,371 predicted genes, 57 were RNAs (2 genes are 23S rRNA, 2 genes are 5S rRNA, 2 genes are 16S rRNA, and 51 genes are tRNA genes) and 3,314 were protein-coding genes. Moreover, 2,524
Figure 2: Electron micrographs of Eggerthella timonensis strain Marseille-P3135, generated with a Morgagni 268D (Philips) transmission electron microscope operated at 80 keV. Scale bar = 200 nm.
Figure 3: Gel view comparing the mass spectrum of Eggerthella timonensis strain Marseille-P3135 to those of other species, displaying the raw spectra in a pseudo-gel arrangement. The x-axis represents the m/z value, and the left y-axis corresponds to the running spectrum number derived from subsequent spectra loading. Peak intensities are represented on a gray scale, and the correlation between peak color and intensity is shown on the right y-axis in arbitrary units. The species shown in this analysis are noted on the left (Table 3).
| Comparison of genome properties
The draft genome sequence of the present new species was compared with that of G. pamelaeae (FP929047), which is close to but outside the Eggerthella genus, and with that of E. lenta (ABTT00000000), the closest species and the only member of the genus for which a genome is available. The draft genome sequence of our strain is larger than those of G. pamelaeae and E. lenta (3.608 and 3.632 Mb, respectively). The G+C content is larger too (64% and 64.2%, respectively). The gene content is larger than those of G. pamelaeae and E. lenta (2,027 and 3,070, respectively). The distribution of predicted genes into functional classes according to the COGs of proteins is shown in Figure S3; it shows an identical profile for the three compared strains.
Subsequently, the DNA-DNA hybridization values between E. timonensis and other species with standing in nomenclature were 43.6 with E. lenta and 21.2 with G. pamelaeae (Table 5). Interestingly, these data show that the genome of the strain is closer to that of E. lenta and further from that of G. pamelaeae, supporting the hypothesis that strain Marseille-P3135 is a distinct species close to the species of the Eggerthella genus (Kim et al., 2014; Tindall, Rosselló-Móra, Busse, Ludwig, & Kämpfer, 2010; Wayne et al., 1987).
| CONCLUSION
In conclusion, culturomics helped us isolate a previously uncultured new species from the normal human gut flora and describe it using a taxonogenomics approach. Given its 16S rRNA gene sequence divergence higher than 1.3% with its phylogenetically closest species with standing in nomenclature, we propose the new species E. timonensis, type strain Marseille-P3135 (=CSUR P3135, =CCUG 70327).
| E. timonensis sp. nov. description
E. timonensis (ti.mo.nen′sis, N.L. fem. adj., timonensis referring to La Timone, the name of the hospital in Marseille, France, where this work was performed).
It is a nonmotile, Gram-positive rod, unable to sporulate, oxidase negative, and catalase positive. It grows under anaerobic conditions, optimally at 37°C. Colonies have a smooth appearance with a mean diameter of 0.5 mm. Moreover, cells had a length of 0.7-1.6 μm when observed under the electron microscope and an average diameter of 0.4 μm.
Table 1: Differential characteristics of Eggerthella timonensis strain Marseille-P3135 compared with Eggerthella lenta strain NCTC 11813 (Kageyama et al., 1999), Eggerthella sinensis (Lau et al., 2004b), and Gordonibacter pamelaeae strain 7-10-1-b (T) (Würdemann et al., 2009).
It is able to produce esterase C4, esterase lipase C8, acid phosphatase, and naphthol-AS-BI-phosphohydrolase. As well, it can
"Biology"
] |
Research on Feature Extraction and Chinese Translation Method of Internet-of-Things English Terminology
Introduction
Feature extraction and Chinese translation of Internet-of-Things English terms are the basis of most natural language processing [1][2][3][4][5]. The main task is to extract rich semantic information from unstructured text, which makes it more convenient for the computer to further calculate and process the text and meet more follow-up requirements [6][7][8][9][10][11][12]. NLP stands for natural language processing and is an important branch of deep learning; its main function is to extract the required information from a text data file and realize the correspondence between text and semantic information. For NLP-based tasks [13][14][15][16][17][18][19], under normal circumstances, text semantic feature extraction provides a solid foundation for text understanding, and the Chinese translation of English terms builds on semantic feature understanding: language conversion and correspondence are carried out on the basis of the extracted semantic features, which in turn inform the design of text comprehension methods. As far as the current application scope of NLP is concerned, feature extraction and Chinese translation methods [20][22][23][24][25][26][27][28][29] for Internet-of-Things English terms have great potential value.
Methods based on text feature extraction have a wide range of applications and serve different purposes in different scenarios. The method in this study is aimed at feature extraction for the English terminology of the Internet of Things; its target domain, the Internet of Things, can be regarded as a subset of the former. Text semantic feature extraction is the basis for realizing text understanding: the quality of semantic feature extraction directly affects the accuracy of the text semantic understanding model. Semantic feature extraction extracts the key semantic information in the text so that the computer can process natural text data quickly and without ambiguity. Specifically, the relationships among words are extracted by mapping the words in the text to an appropriate semantic feature space. Although there are many ways to approach these problems, serious issues remain. When text semantic feature extraction based on these methods is used for semantic understanding, words that appear semantically similar can be understood differently from different perspectives. This is because bag-of-words or word-vector feature extraction merely counts the frequency or probability distribution of words in the text and does not include contextual semantic information between words, so such semantic understanding methods cannot handle words whose meaning depends on context. With the advent of knowledge graphs and perceptrons, discretized and highly semantically concentrated texts are transformed into semantic representations that machines can understand and compute. Therefore, on the basis of traditional semantic feature extraction, a more effective semantic feature extraction method is designed so that each dimension of the extracted semantic features has a clear meaning. A labeled English text corpus is trained by deep learning; words are mapped to specific knowledge concepts, the semantic features of the words and their concepts in the text are extracted, and the contextual concept dependencies of the words in the text are mined. This method is used to solve the problems of text semantic feature extraction and sparse word semantic features.
Most current text semantic feature extraction methods mainly use neural network models to generate text representations [30][31][32][33][34][35][36][37][38][39][40][41][42][43][44][45]. Most of these models use the frequency or probability distribution of words in the text to represent English professional vocabulary in a semantic space and thereby construct a text semantic representation model. However, these methods face two problems in extracting features from Internet-of-Things English terminology. First, common vocabulary and the Internet-of-Things domain use the same words to express different meanings; that is, the same word can be ambiguous [46][47][48][49][50][51][52]. Second, generally speaking, English feature extraction and Chinese translation for the Internet of Things are two steps: extracting the Internet-of-Things English terms and converting them to Chinese [52][53][54][55][56][57][58]. Usually, two network models are used to realize this function; the resulting model structure is complex, and actual operation is difficult. To solve this problem, this study proposes an LSTM-based feature extraction and translation network for IoT English terminology, which can basically correctly extract and translate IoT English terminology.
This study proposes an LSTM-based feature extraction and Chinese translation network for IoT English terms that directly realizes feature extraction and Chinese translation in a single pass, avoiding a complicated intermediate design and transfer process. It can effectively ensure that the accuracy of IoT English term feature extraction meets the requirements, and the LSTM structure realizes time-series-based feature extraction and learning in the model.
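As a sketch of how such a single-pass network could be wired, the following PyTorch fragment couples a shared LSTM encoder with a term-tagging head and an LSTM translation decoder; all layer sizes, vocabularies, and the joint architecture itself are illustrative assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

class TermExtractTranslate(nn.Module):
    """Sketch of a joint model: a shared LSTM encoder, a tagging head that
    marks IoT-term tokens (e.g., BIO labels), and an LSTM decoder that emits
    the Chinese translation. All sizes and names are illustrative."""
    def __init__(self, src_vocab, tgt_vocab, emb=128, hid=256, n_tags=3):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.encoder = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.tagger = nn.Linear(2 * hid, n_tags)        # extraction head
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.decoder = nn.LSTM(emb, 2 * hid, batch_first=True)
        self.generator = nn.Linear(2 * hid, tgt_vocab)  # translation head

    def forward(self, src_ids, tgt_ids):
        enc_out, (h, c) = self.encoder(self.src_emb(src_ids))
        tag_logits = self.tagger(enc_out)               # per-token term labels
        # merge the two directions' final states to seed the decoder
        h0 = torch.cat([h[0], h[1]], dim=-1).unsqueeze(0)
        c0 = torch.cat([c[0], c[1]], dim=-1).unsqueeze(0)
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), (h0, c0))
        return tag_logits, self.generator(dec_out)

model = TermExtractTranslate(src_vocab=30000, tgt_vocab=20000)
src = torch.randint(0, 30000, (2, 12))   # toy English token ids
tgt = torch.randint(0, 20000, (2, 9))    # toy Chinese token ids
tags, trans = model(src, tgt)
print(tags.shape, trans.shape)           # (2, 12, 3), (2, 9, 20000)
```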
IoT English Terminology.
The Internet of Things is an emerging field of science and technology, and the professional vocabulary in this field has the characteristics of typical scientific and technological texts: the vocabulary used has strong computer-specific characteristics. Professional vocabulary and terminology in the Internet-of-Things area are becoming more and more complex. Difficult vocabulary, inconvenient reading and writing, difficult memorization, and a high repetition rate of abbreviations characterize Internet-of-Things English terminology. Abbreviations from the computer field are often used in the Internet of Things, such as IoT and NFC; however, these abbreviations may have multiple meanings. Usually, such words are difficult to understand correctly through translation software, whereas users with high computer expertise can understand their meaning correctly.
English Term Feature Extraction.
English term feature extraction is the basis of many natural language processing applications. Its main function is to extract rich semantic information about English terms from unstructured text so as to facilitate further computer processing and human understanding. English term feature extraction provides a solid foundation for understanding IoT English terms and builds rich text semantic features. Most recent English term feature extraction methods use neural network language models to generate textual representations of English terms. These models compute statistics on the frequency or probability distribution of English term words in the text and represent the words with their frequency or probability distributions in a semantic space to construct text semantic representation features. However, when these traditional text semantic feature representation models are used to understand text semantics, they are easily affected by context, and the vocabulary can be ambiguous.
Chinese Translation of Internet-of-Things Terms in English.
The Internet of Things is a branch of the computer profession, and a large part of Internet-of-Things English terminology is consistent with computer terminology or composed in a similar way. Therefore, some Internet-of-Things English terms can be translated by analogy with the translation of computer terms. Firstly, the reliability and accuracy of terms obtained in this way are relatively high, which ensures the internal consistency and practicability of the translated terms and basically meets the basic requirements for the use and translation of Internet-of-Things terms. Secondly, as a category of scientific and technological English, Internet-of-Things English terms should reflect the characteristics of scientific and technological English when translated; that is, the translated vocabulary should be professional and logically rigorous.
According to whether a standardized translation exists, the English terms of the Internet of Things can be roughly divided into two categories, standardized and unstandardized, each with its corresponding Chinese translation method. Already standardized Internet-of-Things English terms fall mainly into three categories: acronyms, compound words, and semi-technical words. For this type of IoT English terminology, the translation is essentially fixed and has been widely followed and used in the industry; the focus is to summarize the normative translations for this type to ensure the accuracy of the translation. For unstandardized IoT English terms, the translation situation is more complicated, and the user's IoT expertise, standardized translation methods, and academic discussion must be combined to jointly ensure the certainty, accuracy, and reliability of the translated IoT English terms.
Network Models
The long short-term memory network (LSTM) is an improved recurrent neural network in common use. It can not only handle the long-distance dependencies that plain recurrent neural networks cannot, but also mitigate the gradient vanishing and gradient explosion problems common in neural networks, which is very important when dealing with sequence data. This study adopts a network structure based on LSTM and CNN to realize feature extraction and Chinese translation of Internet-of-Things English terms. The purpose of building on a semantic network is to establish a connection between ambiguous IoT English term text and additional knowledge, that is, a knowledge base or semantic background knowledge. The knowledge base includes concepts, entities, and connections among entities; when the relational network is rich enough, a rich Internet-of-Things English term feature network can be formed. Usually, a text feature extraction network proceeds in three steps: word segmentation, part-of-speech tagging of academic words, and recognition of the words they belong to, with a new model used for disambiguation at each step. Since Google released the pretrained model BERT, NLP network models of this kind have been pretrained and fine-tuned to achieve excellent results on a variety of natural language processing tasks. The BERT network model requires unsupervised training on large-scale data, followed by fine-tuning on more specialized datasets for particular natural language processing tasks. The idea of the network model we propose is basically similar to that of BERT: it is trained on a large natural language processing dataset to obtain a pretrained network model and then fine-tuned on the specific small dataset of this study. On the one hand, this makes the model more suitable for the task of feature extraction and Chinese translation of Internet-of-Things English terms and ensures a better training effect; on the other hand, debugging on a small dataset effectively reduces the time and computing resources needed for model training.
LSTM Cell Structure.
The full name of LSTM is long short-term memory; it is a neural network with the ability to memorize both long- and short-term information. With the rise and development of deep learning, a systematic and complete LSTM framework has been formed, and it has been widely used in many fields. LSTM introduces a gating mechanism to control the circulation and loss of features, solving the long-term dependence problem of RNNs. This study uses the most basic LSTM network unit and does not consider its variants.
The core structure of LSTM is shown in Figure 1. The LSTM network structure in Figure 1 is a two-layer arrangement, and the diagram shows the data transmission direction across multiple LSTM units. An LSTM cell has three gates: a forget gate, an input gate, and an output gate. The final outputs of the LSTM cell are $h_t$ and $C_t$, and its inputs are $C_{t-1}$, $h_{t-1}$, and $x_t$:

$$f_t = \sigma\big(W_f \cdot [h_{t-1}, x_t] + b_f\big),$$

where $f_t$ is called the "forget gate" and determines which features of $C_{t-1}$ are used to calculate $C_t$. The sigmoid activation $\sigma$ outputs values in the interval $[0, 1]$. The elementwise product $\otimes$ is the most important gate mechanism of LSTM, combining $f_t$ with $C_{t-1}$:

$$\tilde{C}_t = \tanh\big(W_C \cdot [h_{t-1}, x_t] + b_C\big), \qquad i_t = \sigma\big(W_i \cdot [h_{t-1}, x_t] + b_i\big),$$

where $\tilde{C}_t$ represents the cell state update value, obtained from the input data $x_t$ and the hidden state $h_{t-1}$ through a neural network layer whose activation function is usually tanh. $i_t$ is called the input gate; its values lie in $[0, 1]$ and are likewise computed from $x_t$ and $h_{t-1}$ through the sigmoid activation. The cell state is then updated as

$$C_t = f_t \otimes C_{t-1} + i_t \otimes \tilde{C}_t.$$

Finally, to calculate the predicted value $y_t$ and generate the complete input of the next time slice, the hidden state $h_t$ must be computed. $h_t$ is obtained from the output gate $o_t$ and the cell state $C_t$, where $o_t$ is calculated in the same way as $f_t$ and $i_t$:

$$o_t = \sigma\big(W_o \cdot [h_{t-1}, x_t] + b_o\big), \qquad h_t = o_t \otimes \tanh(C_t).$$
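To make the gate equations above concrete, here is a minimal NumPy sketch of a single LSTM cell step; the weight shapes, random initialization, and chosen dimensions are illustrative assumptions, not details from the paper. Stacking this step along the time axis, feeding $h_t$ and $C_t$ back in, yields the unrolled structure discussed next.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step. W maps [h_prev; x_t] to the four gate
    pre-activations (forget, input, candidate, output), stacked."""
    z = np.concatenate([h_prev, x_t])
    f = sigmoid(W[0] @ z + b[0])         # forget gate f_t
    i = sigmoid(W[1] @ z + b[1])         # input gate i_t
    c_tilde = np.tanh(W[2] @ z + b[2])   # candidate cell state
    c = f * c_prev + i * c_tilde         # cell state update C_t
    o = sigmoid(W[3] @ z + b[3])         # output gate o_t
    h = o * np.tanh(c)                   # hidden state h_t
    return h, c

# Illustrative dimensions: input size 8, hidden size 5 (matching the
# hidden-layer size stated later in the paper).
rng = np.random.default_rng(0)
n_in, n_hid = 8, 5
W = rng.normal(size=(4, n_hid, n_hid + n_in)) * 0.1
b = np.zeros((4, n_hid))
h, c = lstm_step(rng.normal(size=n_in), np.zeros(n_hid), np.zeros(n_hid), W, b)
```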
LSTM-Based Network Model.
RNN, a time-series network, can store historical information, but it suffers from gradient vanishing when the sequence is too long. As a special form of RNN, LSTM can deal with this problem effectively. The network structure based on LSTM is shown in Figure 2. It includes an LSTM network with two hidden layers. At any single time step T, it behaves like an ordinary backpropagation neural network, but when unrolled along the time axis, the hidden-layer information trained at T = 1 is passed on to the next time step T = 2. The five rightward arrows in Figure 2 indicate that the state information of the hidden layer is transmitted along the time axis. The multiple time-series lines represent the two input values and the three output values of the LSTM structure, as described in Section 3.1.
There are many ways to understand text features, but the model generally comprises four parts: the input layer, the hidden layers, the output layer, and the time series. The main function of the input layer is to represent each word of the text or IoT English term vocabulary with the word vector of a pretrained model. The hidden layers continuously learn the characteristics of professional Internet-of-Things vocabulary through the established neural network structure and control the transmission and flow of the intermediate features. The output layer emits the vocabulary and relations according to the requirements of the model and the format of the output labels. The time-series part mainly handles the representation of words over time, focusing on learning the relationships between words.
Feature Extraction of Internet-of-Things English Terminology and Neural Network for Chinese Translation.
The neural network structure for feature extraction and Chinese translation of Internet-of-Things English terminology used in this study is shown in Figure 3. The input data have feature dimension x, the length of the vector after the vocabulary is encoded. The network has two hidden layers, and the feature dimension of each layer, that is, the number of neurons in the hidden layer, is 5. The designed neural network uses a bidirectional recurrent structure: when using LSTM, both the forward and the backward pass produce output features, so the output dimension of the bidirectional LSTM is twice the number of hidden-layer features. The input layer represents each word of the text and question with a pretrained word vector. The attention layer uses a bidirectional LSTM attention mechanism to process the time-series-based features. The decoding layer outputs vocabulary and relations and calculates output probabilities over the vocabulary and the input: the probability of each word being output at the current position is the sum of the probability of being selected from the vocabulary and the probability of being copied from the input. A CNN (ResNet-50) is used to extract the language features of the time series.
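The following PyTorch sketch mirrors the structure just described: pretrained-style embeddings, a two-layer bidirectional LSTM with hidden size 5, and a per-token output layer of twice the hidden width. The class name, vocabulary size, and label count are hypothetical, and the attention and copy mechanisms are omitted for brevity.

```python
import torch
import torch.nn as nn

class TermExtractorTranslator(nn.Module):
    """Sketch of the described architecture: word vectors feed a
    two-layer bidirectional LSTM (hidden size 5, as stated in the
    text); the output width is 2 * hidden because the forward and
    backward passes are concatenated."""
    def __init__(self, vocab_size, emb_dim=100, hidden=5, n_labels=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_labels)  # 2x: bidirectional

    def forward(self, token_ids):
        x = self.embed(token_ids)   # (batch, seq, emb_dim)
        h, _ = self.lstm(x)         # (batch, seq, 2 * hidden)
        return self.out(h)          # per-token label scores

model = TermExtractorTranslator(vocab_size=10000)
scores = model(torch.randint(0, 10000, (2, 12)))  # batch of 2, length 12
```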
The ResNet series adopts the basic bottleneck module, which improves feature learning by continuously reducing the spatial size of the network input while increasing the feature dimension. The LSTM-based neural network model does not depend on a specific framework; in this study, we use an LSTM-based encoding and decoding framework. The encoding framework is an overall model for feature extraction, and its main function is to solve the feature extraction task for Internet-of-Things English terms. Briefly, the feature extraction task for Internet-of-Things English terminology is essentially a multi-label classification problem and can be expressed in the form <sentence, relation label>. The task goal is, given a sentence containing Internet-of-Things English terms, to generate the labels of the specific relations in the sentence through the encoder-decoder model. The sentence is regarded as a given resource:

$$\text{source} = (w_1, w_2, \ldots, w_m), \qquad \text{relation} = (r_1, r_2, \ldots, r_n),$$

where $w_1, w_2, \ldots, w_m$ represent the word sequence contained in the current sentence and $r_1, r_2, \ldots, r_n$ represent the relation sequence. In the encoding part, the input sentence source is encoded; that is, the intermediate hidden semantic representation $E$ is obtained through a nonlinear transformation:

$$E = \mathrm{Encoder}(w_1, w_2, \ldots, w_m).$$

The decoding part selects the desired relations from the relation list according to the intermediate semantic representation $E$. The LSTM-based neural network model proposed in this study is mainly used for the task of feature extraction and Chinese translation of English terminology in the Internet of Things. It solves two problems. One is the statistical language model, which must compute a probability distribution over vocabulary or technical terms; the other is the word-vector representation addressed by the vector space model, that is, the problem of text representation. By adopting the continuous word vector assumption and smooth probability distribution model of previous work, and by modeling the probability distribution of words in the text sequence in a continuous space, the LSTM-based framework simultaneously obtains the word vectors and the probability distribution, thereby alleviating gradient vanishing and gradient explosion. Moreover, because of the continuous vector representation, the data-sparsity problem is alleviated to a certain extent. The main criterion for setting the number of hidden units is the prediction accuracy of the model; we tried different numbers of units, and a setting of 5 balances the accuracy and speed of the model.
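As a complement, this is a hedged sketch of the <sentence, relation label> encoder-decoder formulation described above, reduced to an LSTM encoder and a sigmoid multi-label decoder; all sizes and names are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class RelationLabeler(nn.Module):
    """Sketch of the <sentence, relation label> formulation: an LSTM
    encoder produces the hidden representation E, and a linear decoder
    scores each relation label; sigmoid outputs make it multi-label."""
    def __init__(self, vocab_size, n_relations, emb_dim=100, hidden=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, n_relations)

    def forward(self, token_ids):
        x = self.embed(token_ids)
        _, (h_n, _) = self.encoder(x)                 # E: final hidden state
        return torch.sigmoid(self.decoder(h_n[-1]))  # P(r_j | sentence)

model = RelationLabeler(vocab_size=10000, n_relations=6)
probs = model(torch.randint(0, 10000, (2, 12)))
```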
Dataset and Related Settings.
In the experiment, we use the Wikipedia corpus to train word vectors, and use the Twitter phrase text dataset and the established IoT English term dataset for training and testing. The results differ across experiment types mainly because the indicators corresponding to different characters differ, so this study designs a unified set of comparison indices. The precision rate P, recall rate R, and F1 value are used as the evaluation indicators of the model, and their calculation formulas are as follows:

$$P = \frac{T_p}{T_p + F_p}, \qquad R = \frac{T_p}{T_p + F_n}, \qquad F_1 = \frac{2PR}{P + R},$$

where $T_p$, $F_p$, and $F_n$ denote true positives, false positives, and false negatives, respectively. The extraction results of the network proposed in this paper are shown in Figure 4. The median of the word vectors in the extracted data is basically the same as for the original labels, and each IoT English term is extracted relatively accurately, essentially meeting the extraction requirements for English term words. To demonstrate the effectiveness of the proposed network model in learning the time-series features of Internet-of-Things English terms, we learned the word features with time series; the experimental results are shown in Figure 5. Here, A, B, C, and D represent four types of IoT professional terms: abbreviations, standard words, literal translations, and ellipses.
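For reference, these evaluation indices can be computed from raw counts as in the following small Python helper; the example counts are hypothetical.

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from raw counts."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Hypothetical counts for one category of IoT terms:
print(prf1(tp=87, fp=9, fn=14))
```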
These different types of IoT English terminology vocabulary are manually annotated, and the data input to the network is text containing these features. By comparing these words, we can comprehensively evaluate the actual performance of the model from four aspects: abbreviations, standard words, literal translations, and ellipses. The recall rate, F1 value, and accuracy P of the model are shown in panels A, B, and C of Figure 6; the observed variation mainly reflects the text data currently collected and sampled. These three parameters are mainly used to describe the performance of the network model. The x value in the figures represents the number of training iterations, that is, the continuous training process of the network model. The change process is mainly driven by the number of training iterations: the model adjusts and improves the weights of the entire network during continuous learning, so the learning effect keeps improving. Figure 7 shows the change in recall. On the whole, as the number of Internet-of-Things English term keywords increases, the recall of the model tends to increase, and beyond a point it decreases again. Artificially adding word-confidence information does provide useful signal: as the number of words and the lexical confidence information increase, the model's features improve up to a certain level and the network exhibits improved recall. Figure 8 shows the change of the F1 value of the model. It is mainly affected by the number of keywords in the English terminology of the IoT and the corresponding time-series length, which are the main variables under these conditions. The F1 value of the model clearly changes periodically, and the period is determined by the length of the time series the model processes. Figure 9 shows the variation of the accuracy of the model, which is mainly affected by the number of words in the IoT English terms and the corresponding sampling rate. To a certain extent, the prediction accuracy of the model can be effectively improved by increasing the sampling rate and the number of words; beyond a certain range, the performance of the model decreases again, so, generally speaking, a moderate sampling rate and word count should be maintained. Figure 10 shows the confusion matrix for model recognition, IoT English term feature extraction, and Chinese translation. The diagonal values represent the accuracy of recognition: the larger the value, the higher the accuracy. The matrix also reveals a recognition error in which word relationship 1 is recognized as 2. In the experiments, we mainly verify the actual prediction accuracy of the network model; therefore, we divide the classification levels into 5 categories: correct, similar, general, different, and wrong.
Corresponding to each category, we assign a quantitative numerical score; overall, the results show that the effect of our network model meets the requirements.
Summary
The Internet-of-Things English term representation model needs to convert English term text into a form that computers can process, while preserving to the greatest extent the semantic information and the time-series relationships between vocabularies in the English texts; English term keywords are then extracted and translated. This study proposes an LSTM-based neural network for feature extraction and Chinese translation of English terminology in the Internet of Things. The proposed method achieves reasonably accurate predictions that meet the basic requirements of feature extraction and Chinese translation of Internet-of-Things English terms, and there is still considerable room for improvement. In future work, we will address the remaining problems and design new methods, such as introducing common-sense knowledge and connecting various network models, so that feature extraction and Chinese translation of IoT English terminology develop in a more pragmatic and refined direction.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The author declares that there are no conflicts of interest or personal relationships that could have appeared to influence the work reported in this paper.
"Computer Science"
] |
Joint probabilistic-logical refinement of multiple protein feature predictors
Background Computational methods for the prediction of protein features from sequence are a long-standing focus of bioinformatics. A key observation is that several protein features are closely inter-related, that is, they are conditioned on each other. Researchers invested a lot of effort into designing predictors that exploit this fact. Most existing methods leverage inter-feature constraints by including known (or predicted) correlated features as inputs to the predictor, thus conditioning the result. Results By including correlated features as inputs, existing methods only rely on one side of the relation: the output feature is conditioned on the known input features. Here we show how to jointly improve the outputs of multiple correlated predictors by means of a probabilistic-logical consistency layer. The logical layer enforces a set of weighted first-order rules encoding biological constraints between the features, and improves the raw predictions so that they least violate the constraints. In particular, we show how to integrate three stand-alone predictors of correlated features: subcellular localization (Loctree [J Mol Biol 348:85–100, 2005]), disulfide bonding state (Disulfind [Nucleic Acids Res 34:W177–W181, 2006]), and metal bonding state (MetalDetector [Bioinformatics 24:2094–2095, 2008]), in a way that takes into account the respective strengths and weaknesses, and does not require any change to the predictors themselves. We also compare our methodology against two alternative refinement pipelines based on state-of-the-art sequential prediction methods. Conclusions The proposed framework is able to improve the performance of the underlying predictors by removing rule violations. We show that different predictors offer complementary advantages, and our method is able to integrate them using non-trivial constraints, generating more consistent predictions. In addition, our framework is fully general, and could in principle be applied to a vast array of heterogeneous predictions without requiring any change to the underlying software. On the other hand, the alternative strategies are more specific and tend to favor one task at the expense of the others, as shown by our experimental evaluation. The ultimate goal of our framework is to seamlessly integrate full prediction suites, such as Distill [BMC Bioinformatics 7:402, 2006] and PredictProtein [Nucleic Acids Res 32:W321–W326, 2004].
Computational predictors have been developed for inferring many diverse types of features; see e.g. Juncker et al. [1] for a review.
A key observation, often used to improve the prediction performance, is that several protein features are strongly correlated, i.e., they impose constraints on each other. For instance, information about solvent accessibility of a residue can help to establish whether the residue has a functional role in binding other proteins or substrates [2], whether it affects the structural stability of the chain [3], whether it is susceptible to mutations conferring resistance to drugs [4], whether it occurs within a flexible or disordered segment [5], etc. There are several other examples in the literature. Researchers have often exploited this observation by developing predictors that accept correlated features as additional inputs. This way, the output is conditioned on the known value of the input features, thus reducing the possible inconsistencies. It is often the case that the additional input features are themselves predicted. Highly complex prediction tasks like 3D protein structure prediction from sequence are typically addressed by splitting the problem into simpler subproblems (e.g., surface accessibility, secondary structure), whose predictions are integrated to produce the final output. Following this practice, multiple heterogeneous predictors have been integrated into suites (see e.g. Distill [6], SPACE [7] and PredictProtein [8]) providing predictions for a large set of protein features, from subcellular localization to secondary and tertiary structure to intrinsic disorder.
However, existing prediction architectures (with a few specific exceptions, e.g. [9] and [10]) are limited in that the output feature can't influence a possibly mis-predicted input feature. In other words, while feature relations establish a set of mutual constraints, all of which should simultaneously hold, current predictors are inherently one-way.
Motivated by this observation, we propose a novel framework for dealing with the integration and mutual improvement of correlated predicted features. The idea is to explicitly leverage all constraints, while accounting for the fact that both the inputs, i.e., the raw predictions, and the constraints themselves are not necessarily correct. The refinement is carried out by a probabilistic-logical consistency layer, which takes as inputs the raw predictions and a set of weighted rules encoding the biological constraints relating the features. To implement the refiner, we use Markov Logic Networks (MLN) [11], a statistical-relational learning method able to perform statistical inference on first-order logic objects. Markov logic makes it easy to define complex, rich first-order constraints, while the embedded probabilistic inference engine is able to seamlessly deal with potentially erroneous data and soft rules. We rely on an adaptation of MLN that allows including grounding-specific weights (grounding-specific Markov Logic Networks) [12], i.e. weights attached to specific instances of rules, corresponding in our setting to the raw predictions. The resulting refining layer is able to improve the raw predictions by removing inconsistencies and constraint violations.
Our method is very general. It is designed to be applicable, in principle, to any heterogeneous set of predictors, abstracting away from their differences (inference method, training dataset, performance metrics), without requiring any changes to the predictors themselves. The sole requirement is that the predictions be assigned a confidence or reliability score to drive the refinement process.
As an example application, we show how to apply our approach to the joint refinement of three highly related features predicted by the PredictProtein Suite [8]. The target features are subcellular localization, generated with Loctree [13]; disulfide bonding state, with Disulfind [14]; and metal bonding state, with MetalDetector [15].
We propose a few simple, easy to interpret rules, which represent biologically motivated constraints expressing the expected interactions between subcellular localization, disulfide and metal bonds.
The target features play a fundamental role in studying protein structure and function, and are correlated in a non-trivial manner. Most biological processes can only occur in predetermined compartments or organelles within the cell, making subcellular localization predictions an important factor for determining the biological function of uncharacterized proteins [13]; furthermore, co-localization is a necessary prerequisite for the occurrence of physical interactions between binding partners [16], to the point that lack thereof is a common means to identify and remove spurious links from experimentally determined protein-protein interaction networks. Disulfide bridges are the result of a post-translational modification consisting in the formation of a covalent bond between distinct cysteines either in the same or in different chains [17]. The geometry of disulfide bonds is fundamental for the stabilization of the folding process and the final three-dimensional structure by fixing the configuration of local clusters of hydrophobic residues; incorrect bond formation can lead to misfolding [18]. Furthermore, specific cleavage of disulfide bonds directly controls the function of certain soluble and cell-membrane proteins [19]. Finally, metal ions provide key catalytic, regulatory or structural features of proteins; about 50% of all proteins are estimated to be metalloproteins [20], intervening in many aspects of cell life.
Subcellular localization and disulfide bonding state are strongly correlated: a reducing subcellular environment makes it less likely for the protein to form disulfide bridges [21]. At the two extremes we find the cytosol, which is clearly reducing, and the extra-cellular environment for secreted proteins, which is oxidizing and does not hinder disulfide bonds, with the other compartments (nucleus, mitochondrion, etc.) exhibiting milder behaviors. Similarly, due to physicochemical and packing constraints, it is unlikely for a cysteine to link both another cysteine (or more than one) and a ligand; with few exceptions, cysteines are involved in at most one of these bonds [15]. This is the kind of prior knowledge we will use to carry out the refinement procedure. We note that all these constraints are not hard: they hold for a majority of proteins, but there are exceptions [21]. In the following, we will show that different predictors offer complementary advantages, and how our method is able to integrate them using non-trivial constraints, resulting in an overall improvement of prediction accuracy and consistency.
Overview of the proposed method
In this paper we propose a framework to jointly refine existing predictions according to known biological constraints. The goal is to produce novel, refined predictions from the existing ones, so as to minimize the inconsistencies, in a way that requires minimal training and no changes to the underlying predictors. The proposed system takes the raw predictions, which are assumed to be associated with a confidence score, and passes them through a probabilistic-logical consistency layer. The latter is composed of two parts: a knowledge base (KB) of biological constraints relating the features to be refined, encoded as weighted first-order logic formulae, which acts as an input to the second part of the method; and a probabilistic-logical inference engine, implemented by a grounding-specific Markov Logic Network (gs-MLN) [12]. For a graphical depiction of the proposed method see Figure 1.
An example will help to elucidate the refinement pipeline. For simplicity, let's assume that we are interested in refining only two features: subcellular localization and disulfide bonding state. The first step is to employ two arbitrary predictors to generate the raw predictions for a given protein P. Note that disulfide bonding state is a per-cysteine binary prediction, while subcellular localization is a per-protein n-ary prediction; both have an associated reliability score, which can be any real number. For a complete list of predicates used in this paper, see Table 1.
Let's assume that the predictions are as follows:

0.1  PredLoc(P, Nuc)
1.2  PredLoc(P, Cyt)
 …   PredLoc(P, Mit)
 …   PredLoc(P, Ext)
0.2  !PredDis(P, 11)
0.8  PredDis(P, 20)
0.6  PredDis(P, 26)

where ! stands for logical negation. The first four predicates encode the fact that protein P is predicted to reside in the nucleus with confidence 0.1, in the cytosol with confidence 1.2, etc. The remaining three predicates encode the predicted bonding state of three cysteines at positions 11, 20 and 26: the first cysteine is free with confidence 0.2, the remaining two are bound with confidence 0.8 and 0.6, respectively. In this particular example, the protein is assigned conflicting predictions, as the cytosolic environment is known to hinder the formation of disulfide bridges. We expect one of them to be wrong. Given the above logical description, our goal is to infer a new set of refined predictions, encoded as the predicates IsLoc(p,l) and IsDis(p,n). To perform the refinement, we establish a set of logical rules describing the constraints we want to be enforced, and feed it to the inference engine. For a list of rules, see Table 2.
First of all, we need to express the fact that the raw predictions should act as the primary source of information for the refined predictions. We accomplish this task using the input rules I1 and I2. These rules encode how the refined prediction predicates IsDis and IsLoc depend primarily on the raw predicates PredDis and PredLoc. The weight w is computed from the estimated reliability output by the predictor, and (roughly) determines how closely the refined predictions will resemble the raw ones.
Next we need to express the fact that a protein must belong to at least one cellular compartment, using rule L1, and, as normally assumed when performing subcellular localization prediction, that it cannot belong to more than one, using rule L2. In this example, and in the rest of the paper, we will restrict the possible localizations to the nucleus, the cytosol, the mitochondrion, and the extracellular space. The two above rules are assigned an infinite weight, meaning that they will hold with certainty in the refined predictions.
The last two rules used in this example are DL1 and DL2, which express the fact that the cytosol, mitochondrion and nucleus tend to hinder the formation of disulfide bridges, while the extracellular space does not. In this case, the weights associated with the rules are inferred from the training set, and reflect how much the rules hold in the data itself.
Once we specify the raw predictions and knowledge base, we feed them to the gs-MLN. The gs-MLN is then used to infer the set of refined predictions, that is, the IsLoc and IsDis predicates. The gs-MLN allows querying for the set of predictions that is both most similar to the raw predictions and at the same time violates the constraints the least, taking into account the confidences over the raw predictions and the constraints themselves. See the Methods section for details on how the computation is performed. In this example, the result of the computation is the following: IsLoc(P,Ext), IsDis(P,11), IsDis(P,20), IsDis(P,26). The protein is assigned to the second most likely subcellular localization, "extracellular", and the cysteine which was predicted as free with a low confidence is changed to disulfide bonded.
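To illustrate the trade-off the gs-MLN resolves in this example, the following deliberately tiny Python sketch enumerates all candidate refined assignments and scores them by summed weights. The rule weight W_DL, the sign convention on the disulfide weights, and the two unspecified localization confidences are invented for illustration, so the sketch is not expected to reproduce the paper's exact output (in which the low-confidence free cysteine is flipped to bonded, presumably under the full rule set); real MLN inference also does not enumerate worlds exhaustively.

```python
from itertools import product

LOCS = ["Nuc", "Cyt", "Mit", "Ext"]
# Raw prediction weights from the running example; Mit and Ext values
# are invented, since the paper elides them.
loc_w = {"Nuc": 0.1, "Cyt": 1.2, "Mit": 0.0, "Ext": 0.9}
dis_w = {11: -0.2, 20: 0.8, 26: 0.6}  # sign: predicted free (<0) vs bonded (>0)
W_DL = 1.5  # invented weight: reducing compartments hinder disulfide bonds

def score(loc, bonded):
    s = loc_w[loc]                                               # input rule I2
    s += sum(w if bonded[c] else -w for c, w in dis_w.items())   # input rule I1
    if loc in ("Nuc", "Cyt", "Mit"):                             # DL-style rules
        s -= W_DL * sum(bonded.values())
    return s

best = max(
    ((loc, dict(zip(dis_w, bits)))
     for loc in LOCS
     for bits in product([False, True], repeat=3)),
    key=lambda lb: score(*lb),
)
# With these illustrative weights the refiner moves the protein to Ext
# and keeps cysteine 11 free: ('Ext', {11: False, 20: True, 26: True}).
print(best)
```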
It is easy to see that this framework makes it possible to express very complicated rules between an arbitrary number of features, without particular restrictions on their type (binary, multi-label) and at different levels of detail (per-residue or per-protein). Furthermore, this approach minimizes the impact of overfitting: there is only one learned weight for each rule, and very few rules. To assess the performance of our refiner, we experiment with improving subcellular localization together with disulfide bonding state and metal bonding state. The knowledge base used for localization and disulfide bridges was introduced in this section. As for metals, the information is input using rule I3, and we model the interaction with disulfide bonds through rule DM, which states that the two types of bonds are mutually exclusive.
Related work
There is a vast body of work dedicated to the issue of information integration, and in particular to the exploitation of correlated protein features. In many cases, the proposed methods are limited to augmenting the inputs using correlated features (either true or predicted) as additional hints to the predictors. In this setting, a work closely related to ours is [22], in which Savojardo and colleagues propose a prediction method for disulfide bridges that explicitly leverages predicted subcellular localization [23]. As in the other cases, the authors implement a one-way approach, in which a predicted feature (localization) is employed to improve a related one (disulfide bonding state). The protein prediction suites briefly mentioned above (Distill [6], SPACE [7] and PredictProtein [8]) provide another clear example of one-way architectures. Prediction suites are built by stacking multiple predictors on top of each other, with each layer making use of the predictions computed by the lower parts of the stack. In this case, the main goal is the computation of higher-level features from simpler ones. Note however that the issue of two-way consistency is ignored: these architectures do not back-propagate the outputs of the upper layers to the bottom ones. On the other hand, our approach allows all predictions to be jointly improved by enforcing consistency in the refined outputs.
Another popular way to carry out the prediction of correlated features is multi-task learning. In this setting, one models each prediction task as a separate problem and trains all the predictors jointly. The main benefit comes from allowing information to be shared between the predictors during the training and inference stages. These methods can be grouped in two categories: iterative and collective.
Iterative methods exploit correlated predictions by reusing them as inputs to the algorithm, and iterating the training procedure until a stopping criterion is met. This approach can be found in, e.g. Yip et al. [10], which proposes a method to jointly predict protein-, domain-, and residue-level interactions between distinct proteins. Their proposal involves modeling the propensity of each protein, domain and residue to interact with other objects at the same level as a distinct regression task. After each iteration of the training/inference procedure, the most confident predictions at one level are propagated as additional training samples at the following level. This simple mechanism allows for information to bi-directionally flow between different tasks and levels. Another very relevant work is [9], in which Maes et al. jointly predict the state of five sequential protein features: secondary structure (in 3 and 8 states), solvent accessibility, disorder and structural alphabet. Also in this case, distinct predictors are run iteratively using the outputs at the previous time slice as additional inputs. Collective methods instead focus on building combinations of classifiers, e.g., neural network ensembles, using shared information in a single training iteration. As an example, [24] describes how to maximize the diversity between distinct neural networks with the aim of improving the overall accuracy. However most applications in biology focus on building ensembles of predictors for the same task, as is the case in Pollastri et al. [25] for secondary structure.
The main differences with our method are the following: (a) There exist a number of independently developed predictors for a plethora of correlated features. It would be clearly beneficial to refine their predictions in some way. Our goal is to be able to integrate them without requiring any change to the predictors themselves. The latter operation may be, in practice, infeasible, either because the source is unavailable, or because the cost of retraining after every change is unacceptably high. All of the methods presented here are designed for computing predictions from the ground up; our method is instead designed for this specific scenario. (b) Our method allows one to control the refinement process by including prior knowledge about the biological relationships affecting the features of interest; furthermore the language used to encode the knowledge base, first-order logic, is well defined and flexible. The other methods are more limited: any prior knowledge must be embedded implicitly in the learning algorithm itself. (c) The weights used by our algorithm are few, simple statistics of the data, and do not require any complex training. On the other hand, all the methods presented here rely on a training procedure, and have a higher risk of incurring in overfitting issues.
Data preparation
We assessed the performance of our framework on a representative subset of the Protein Data Bank [26], the 2010/06/16 release of PDBselect [27]. The full dataset includes 4,246 unique protein chains with less than 25% mutual sequence similarity.
Focusing only on proteins containing cysteines, we extracted the true disulfide bonding state using the DSSP software [28], and the true metal bonding state from the PDB structures using a contact distance threshold of 3 Å.
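A contact-based labelling of this kind could be reproduced, for instance, with Biopython's Bio.PDB module; the sketch below is an assumption-laden illustration (abbreviated metal element list, placeholder file path), not the actual extraction script used by the authors.

```python
from Bio.PDB import PDBParser, NeighborSearch

METALS = {"ZN", "FE", "CU", "MN", "MG", "CA", "NI", "CO"}  # abbreviated, illustrative list

def metal_bonded_cysteines(pdb_path, chain_id, cutoff=3.0):
    """Residue numbers of cysteines whose SG atom lies within `cutoff`
    angstroms (3 A, as in the paper) of an atom of a metal element."""
    structure = PDBParser(QUIET=True).get_structure("s", pdb_path)
    search = NeighborSearch(list(structure.get_atoms()))
    bonded = []
    for res in structure[0][chain_id]:
        if res.get_resname() == "CYS" and "SG" in res:
            close = search.search(res["SG"].coord, cutoff)
            if any(a.element and a.element.upper() in METALS for a in close):
                bonded.append(res.get_id()[1])
    return bonded

# Hypothetical usage:
# print(metal_bonded_cysteines("2k4d.pdb", "A"))
```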
Metals considered in this experiment are the same used for training MetalDetector, a total of 33 unique metal atoms and 75 molecular metals. See Passerini et al. [29] for more details.
Subcellular localization was recovered using the annotations in DBSubLoc [30] and UniProt [31]; we translated between PDB and UniProt IDs using the chain-level mapping described by Martin [32], dropping all proteins that could not be mapped. To increase the dataset coverage, we kept all those proteins whose true localization did not belong to any of the classes predicted by Loctree (which for animal proteins amount to cytosol, mitochondrion, nucleus and extracellular (secreted)), was ambiguous or missing, and marked their localization annotation as "missing". Loctree is also able to predict proteins in a fifth, composite class, termed "organelle", which includes the endoplasmic reticulum, Golgi apparatus, peroxisome, lysosome, and vacuole. The chemical environment within these organelles can be vastly different, so we opted for removing them from the dataset, for simplicity. Subcellular localization prediction requires different prediction methods for each kingdom. The preprocessing resulted in a total of 1184 animal proteins, and a statistically insignificant amount of plant and bacterial proteins; we discarded the latter two. Of the remaining proteins, 526 are annotated with a valid subcellular localization (i.e. not "missing"). The data includes 5275 cysteines, of which 2456 (46.6%) are half cysteines (i.e., involved in a disulfide bridge) and 458 (8.7%) bind metal atoms.
We also have two half cysteines that bind a metal (in protein 2K4D, chain A); we include them in the dataset as-is.
Evaluation procedure
Each experiment was evaluated using a standard 10-fold cross-validation procedure. For each fold, we computed the rule weights over the training set, and refined the remaining protein chains using those weights. The rule weights are defined as the log-odds of the probability that a given rule holds in the data; that is, if the estimated prediction reliability output by the predictor is r, the weight is defined as w = log(r/(1 − r)). Given the weights, we refine all the raw features of proteins in the test set. If the subcellular localization for a certain protein is marked as "missing", we use the predicted localization to perform the refinement. In this case, the refined localization is not used for computing the localization performance, and only the disulfide and metal bond refinements contribute to the fold results, in a semi-supervised fashion.
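Concretely, this weight computation amounts to a log-odds transform; the clipping constant in the sketch below is an implementation choice of ours, not something specified in the paper, and simply keeps the weight finite at r = 0 or r = 1.

```python
import math

def rule_weight(r, eps=1e-6):
    """Log-odds weight w = log(r / (1 - r)) for a rule (or prediction)
    that holds with empirical frequency / reliability r. Clipping r
    away from 0 and 1 is our own choice to avoid infinite weights."""
    r = min(max(r, eps), 1.0 - eps)
    return math.log(r / (1.0 - r))

print(rule_weight(0.9))   #  ~2.20 -> strongly enforced soft rule
print(rule_weight(0.5))   #   0.0  -> uninformative rule
print(rule_weight(0.2))   # ~-1.39 -> rule that mostly does not hold
```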
For binary classification (i.e., disulfide and metal bonding state prediction) let us denote by $T_p$, $T_n$, $F_p$ and $F_n$ the number of true positives, true negatives, false positives, and false negatives, respectively, and by $N$ the total number of instances (cysteines). We evaluate the performance of our refiner with the following standard measures:

$$Q = \frac{T_p + T_n}{N}, \qquad P = \frac{T_p}{T_p + F_p}, \qquad R = \frac{T_p}{T_p + F_n}, \qquad F_1 = \frac{2PR}{P + R}.$$

The accuracy Q, precision P and recall R are standard performance metrics. The F1 score is the harmonic mean of precision and recall, and is useful as an estimate balancing the contribution of the two complementary measures. We report the average and standard deviation of all the above measures taken over all folds of the cross-validation.
For multi-class classification (subcellular localization) we compute the confusion matrix M over all classes, where each element M ij counts the number of instances whose true class is i and were predicted in class j. The more instances lie on the diagonal of the confusion matrix, the better the predictor.
We note that, in general, it is difficult to guarantee that our test set does not overlap with the training set of the individual raw predictors. This may result in an artificial overestimate of the performance of the raw predictors. However, training in our model consists in estimating the rule weights from the raw predictions themselves. As a consequence, the results of our refiner may be underestimated when compared with the inflated baseline performance. We also note that, since our model requires estimating very few parameters, i.e., one weight per rule, it is less susceptible to overfitting than methods having many parameters which rely on a full-blown training procedure.
Raw predictions
We generate the predictions for subcellular localization, disulfide bridges, metal bonds and solvent accessibility using the respective predictors. All predictors were installed locally, using the packages available from the PredictProtein Debian package repository [33], and configured to use the default parameters. For all protein chains predicted in the "organelle" class, we marked the prediction as "missing", for the reasons mentioned above.
For Disulfind and MetalDetector, we converted the per-cysteine weighted binary predictions into two binary predicates for each cysteine, PredDis/3 and PredMet/3, using as prediction confidence w the SVM margin.
For Loctree, we output four PredLoc/3 predicates for each protein, one for each possible subcellular localization, and computed the confidence by using a continuous version of the Loctree-provided output-to-confidence mapping. The raw predictor performance can be found alongside the refiner performance in Tables 3, 4, 5, 6.
Alternative refinement pipelines
In order to assess the performance of our method, we carried out comparative experiments using two alternative refinement architectures. Both architectures are based on state-of-the-art sequential prediction methods, namely Hidden Markov Support Vector Machines (HMSVM) [34] and Bidirectional Recurrent Neural Networks (BRNN) [35]. Both methods can naturally perform classification over sequences, and have been successfully applied to several biological prediction tasks. The alternative architectures are framed as follows. The predictors are trained to learn a mapping between raw predictions and the ground truth, using the same kind of pre-processing as the MLN refiner. Cysteines belonging to a protein chain form a single example, and all cysteines in an example are refined concurrently. The input consists of all three raw predictions in both cases.
The two methods were chosen as to validate the behavior of more standard refinement pipelines relying on both hard and soft constraints. In the case of HMSVMs, the model outputs a single label for each residue: a cysteine can be either free, bound to another cysteine, or bound to a metal. This encoding acts as a hard constraint on the mutual exclusivity between the two labels. In the case of BRNNs, each cysteine is modeled by two independent outputs, so that all four configurations (free, disulfide bound, metal bound, or both) are possible. The BRNN is given the freedom to learn the (soft) mutual exclusivity constraint between the two features from the data itself.
Pure sequential prediction methods, like HMSVMs, are at the same time specialized for, and limited to, refining sequential features, in our case disulfide and metal bonding state. Therefore, we cannot use the HMSVM pipeline for localization refinement. As a result, the alternative pipeline is faced with a reduced, and easier, computational task. While BRNNs are also restricted to sequential features, more general recursive neural networks [36] can in principle model arbitrary network topologies. However, they cannot explicitly incorporate constraints between the outputs, which is crucial in order to gain mutual improvement between subcellular localization and bonding state predictions. As experimental results will show, these alternative approaches already fail to jointly improve sequential labeling tasks.
We performed a 10-fold inner cross-validation to estimate the model hyperparameters (regularization tradeoff for the HMSVM, learning rate for the neural network), using the same fold splits as the main experiment. The results can be found in Tables 3 through 6.
True subcellular localization
As a first experiment, we evaluate the effects of using the true subcellular localization to refine the remaining predictions, i.e., we supply the refiner with the correct IsLoc directly, while querying the IsDis and IsMet predicates. The experiment represents the ideal case of a perfect subcellular localization predictor, and we can afford to unconditionally trust its output.
The experiment is split in four parts of increasing complexity.
• In the 'Dis. + Met.' case we refine both IsDis and IsMet from the respective raw predictions, using only the DM rule (see Table 2) to coordinate disulfide and metal bonding states; the localization in this case is ignored. The experiment is designed to evaluate whether combining only disulfide and metal predictions is actually useful in our dataset.
• In the 'Dis. + Loc.' case we refine IsDis from the raw disulfide predictions and the true localization, using the DL1 and DL2 rules.

The results can be found in Table 3. Three trends are apparent in the results. First of all, we find subcellular localization to have a very strong influence on disulfide bonding state, as expected. In particular, in the 'Dis. + Loc.' case, which includes no metal predictions, the accuracy and F1 measure improve from 0.804 and 0.811 (raw) to 0.857 and 0.856 (refined), respectively. The change comes mainly from an increase in precision: the true subcellular localization helps reduce the number of false positives.
The interaction between metals and disulfide bonds is not as clear cut: in the 'Dis. + Met.' case, which includes no subcellular localization, the refined disulfide predictions slightly improve, in terms of F 1 measure, while the metal predictions slightly worsen. The latter case is mainly due to the drop in recall, from 0.827 to 0.739. This is to be http://www.biomedcentral.com/1471-2105/15/16 expected, as the natural scarcity of metal residues makes the metal prediction task harder (as can be seen observing the differential behavior of accuracy and F 1 measure). As a consequence the confidence output by MetalDetector is lower than the confidence output by Disulfind. In other words, in the case of conflicting raw predictions, the disulfide predictions usually dominate the metal predictions. Finally, in 'Dis. + Met. + Loc.' case, both disulfide and metal bonds improve using the true subcellular localization compared to the above settings. In particular, metal ligand prediction, while still slightly worse than the baseline (again, due to class unbalance, as mentioned above) sees a clear gain in recall (from 0.739 in the 'Dis. + Met.' case to 0.783). This is an effect of using localization: removing false disulfide positives leads to less spurious conflicts with the metals.
The two alternative pipelines behave similarly. They both manage to beat the Markov Logic Network on the easier of the two tasks, disulfide refinement, while performing worse on the metals. We note that the HMSVM and BRNN, contrary to our method, both have a chance to rebalance the raw metal predictions with respect to the disulfide predictions during the training stage, learning a distinct bias/weight for the inputs. Nevertheless, they still fail to improve upon our refined metals.
Predicted subcellular localization
This experiment is identical to the previous one, except we use predicted subcellular localization in place of the true one. Similarly to the previous section, we consider three sub-cases. In the 'Dis. + Loc.' case, we refine localization and disulfide bonding state, while in the 'Dis. + Met. + Loc.' case we refine all three predicted features together. The results can be found in Table 4. The 'Dis. + Met.' case is reported as well for ease of comparison.
Here we can see how our architecture can really help with the mutual integration of protein features. In general, we notice that refined disulfide bonds are enhanced by the integration of localization, even if less so than in the previous experiment. At the same time, localization also benefits from the interaction with disulfide bonds, as can be seen in the 'Dis. + Loc.' case. The biggest gain is obtained for the ExtraCellular and Nucleus classes, which are also the most numerous classes in the dataset: several protein chains are moved back to their correct class. The introduction of metals directly improves disulfide bonds and indirectly improves localization, even though its effect is relatively minor.
On the downside, refined metal predictions worsen in all cases. This is due, again, to the class unbalance caused by the small number of metal-binding residues found in the data, and to the difference between the confidences output by Disulfind and MetalDetector.
Surprisingly, the alternative pipelines are not as affected by the worsening of the localization information: their performance is on par with that obtained using the true localization. This is partly explained by the simpler task the alternative methods carry out, as it does not involve refinement of the raw localization itself. Using predicted localization, the alternative methods in fact manage to perform better than ours for metal refinement as well. In the following, we will show an improvement to our pipeline that addresses this issue.
Predicted subcellular localization with predictor reliability
The previous experiment shows that our refiner performs suboptimally on the metal refinement task due to class unbalance. A common way to alleviate this issue is to re-weight the classes according to some criterion. In our case, the positive metal residues are dominated by the negative ones, making the overall accuracy of MetalDetector higher than that of Disulfind. Our method naturally supports the re-weighting of predictors with different accuracy: the weight assigned to a Pred predicate can be strengthened or weakened depending on our estimate of the predictor accuracy.
To implement this strategy, we add an intermediate proxy predicate, weighted according to the actual predictor performance over the training set. The proxy predicate mediates the interaction between the raw prediction (the Pred predicate) and the refined prediction (the Is predicate). The actual proxy predicates are ProxyLoc, ProxyDis and ProxyMet, used by rules I1P to I3P, and PX1 to PX3. See Tables 1 and 2 for the details. The results can be found in Table 6. For completeness, we also include the proxy results for true subcellular localization in Table 5.
The proxy helps the MLN refiner: the refined metal predictions are on-par with the raw ones, while at the same time improving the disulfide bonds. The effects are especially clear when comparing the 'Dis. + Met.' cases of Tables 3 (true localization, no proxy) and 5 (true localization, with proxy), with F 1 scores changing from 0.833 and 0.713 for bridges and metals, respectively, to 0.838 and 0.739. We note that our method is the only one able to recover the same performance as MetalDetector while also improving the other two refined features. On the contrary, the alternative pipelines tend to favor one task (disulfide bridges) over the other, and fail in all cases to replicate the baseline performance.
The down-side is that localization refinement is slightly worse: the raw Nucleus predictions are less accurate than the Cytosol ones, leading to the Cytosol being assigned a higher proxy weight. Since both compartments prevent disulfide bonds, the MLN refiner tends to assign chains with no half cysteines to the latter.
Conclusions
In this paper we introduced a novel framework for the joint integration and refinement of multiple related protein features. The method works by resolving conflicts with respect to a set of user-provided, biologically motivated constraints relating the various features. The underlying inference engine, implemented as a grounding-specific Markov Logic Network [12], makes it possible to perform probabilistic reasoning over rich first-order logic rules. The designer has complete control over the refinement procedure, while the inference engine accounts for potential data noise and rule fallacy.
As an example, we demonstrate the usefulness of our framework on three distinct predicted features: subcellular localization, disulfide bonding state, and metal bonding state. Our refiner is able to improve the predictions by removing violations of the constraints, leading to more consistent results. In particular, we found that subcellular localization plays a central role in determining the state of potential disulfide bridges, confirming the observations of Savojardo et al. [22]. Our method however also allows us to improve subcellular localization in the process, helping to discriminate between chains residing in reducing and oxidizing cellular compartments, especially nuclear and secreted chains. We also found that disulfide predictions benefit from metal bonding information, although to a lesser extent, especially when used in conjunction with localization predictions. On the other hand metals, which are in direct competition with the more abundant disulfide bonds, are harder to refine. We presented a simple and natural re-weighting strategy to alleviate this issue.
The task would be further helped by better localization predictions, which tend to improve the distribution of disulfide bridges, as shown by the experiments with true subcellular localization.
We compared our refinement pipeline with two alternatives based on state-of-the-art sequential prediction methods, Hidden Markov Support Vector Machines and Bidirectional Recurrent Neural Networks. These methods have two fundamental advantages: they are run through a full-blown training procedure, and are only asked to refine the two sequential features, a task for which they are highly specialized. However, the results show that they tend to favor the easier task (disulfide bridges) over the other, struggling to achieve the same results as the baseline on the harder task (metals). On the contrary, our method is more general, and does not favor one task at the expense of the others.
Our framework is designed to be very general, with the goal of refining arbitrary sets of existing predictors for correlated features, such as Distill [6] and PredictProtein [8], for which re-training is difficult or infeasible. As a consequence, our framework does not require any change to the underlying predictors themselves, only requiring that they provide an estimated reliability for their predictions.
Predictors
Disulfind [14] is a web server for the prediction of disulfide bonding state and binding geometry from sequence alone. Like other tools for the same problem, Disulfind splits the task in two simpler sub-problems as follows. First an SVM binary classifier is employed to independently infer the bonding state of each cysteine in the input chain. The SVM is provided with both local and global information. Local information includes a window of position-specific conservations derived from multiple alignment, centered around each target cysteine. Global information represents global features of the whole chain, such as length, amino acid composition, and average cysteine conservation. Then a bidirectional recurrent neural network (BRNN) is used to collectively refine the possibly incorrect SVM predictions, assigning a revised binding probability to each cysteine.
Finally, the predictions are post-processed with a simple finite-state automaton to enforce an even number of cysteines predicted in the disulfide-bonded state. For the technical details, see Vullo et al. [37].
MetalDetector [29] is a metal bonding state classifier whose architecture is very similar to that of Disulfind: it is split into two stages, an SVM classifier producing local, independent per-residue predictions, followed by a collective refinement stage.

Loctree [13] is a multiclass subcellular localization predictor based on a binary decision tree of SVM nodes. The topology of the tree mimics the biological structure of the cellular protein sorting system. It is designed to predict the subcellular localization of proteins given only their sequence, and uses multiple input features: a multiple alignment step is performed against a local, reduced-redundancy database of UniProt proteins, and a stripped, specially tailored version of the Gene Ontology vocabulary is used to improve its performance. It also uses psort 3.0 [38]. The predictor incorporates three distinct topologies, one for each of the considered kingdoms: prokaryotes, eukaryotic plants (viridiplantae), and eukaryotic non-plants (metazoa).
First-order logic background
For the purpose of this paper, first-order logic formulae are used to construct a relational representation of the features of interest, their mutual constraints, and to perform probabilistic-logical reasoning on them. Some definitions are in order.
A formula can be constructed out of four syntactical classes: constants, which represent fixed objects in the domain (e.g., "PDB1A1IA"); variables, which are placeholders for constants (e.g., "protein"); functions, which map a tuple of objects to another object (not needed in our case); and predicates, which describe properties of objects (e.g., "IsDis(p,n)") or relations between objects. Constants and variables are terms, and so are functions applied to tuples of terms. If a term contains no variables, it is said to be ground.
Predicates are assigned a truth value (True or False) which specifies whether the property/relation is observed to hold or not. An atom is a predicate applied to a tuple of terms. A formula is recursively defined as being either an atom, or as a set of formulae combined through logical connectives (negation !, conjunction ∧, disjunction ∨, implication ⇒, and equivalence ⇔) or quantifiers (existential ∃ or universal ∀). A formula F containing a reference to a variable x can be used to build ∀x.F, which is true iff F is true for all possible values of x in the domain, and ∃x.F, which is true iff F is true for at least one value of x. A formula F whose variables have all been replaced by constants is called a grounding of F.
An interpretation or (possible) world is an assignment of truth values to all ground atoms. A collection of implicitly conjoined formulae KB = ⋀_i F_i is a knowledge base, and can be seen as a single big formula. Logical inference is the problem of determining whether a knowledge base KB entails a given formula Q, written KB ⊨ Q, which is equivalent to asking whether the formula Q is true in every interpretation (world) where KB is true.
Whenever any two formulae in a KB are in contradiction, the knowledge base admits no interpretation at all. This is an issue when reasoning over conflicting facts taken from unreliable information sources, as is often the case for biological information.
Grounding-specific Markov logic networks
A Markov Logic network (MLN) [11] is a method to define a probability distribution over all possible worlds (truth assignments) of a set of formulae, allowing reasoning over possibly wrong or conflicting facts.
A MLN consists of a finite domain of objects (constants) C and a knowledge base KB of logical rules. Each formula F_i in KB is associated with a real-valued weight w_i, representing the confidence we have in that rule. Weights close to zero mean that the formula is very uncertain, while weights of larger magnitude mean that it is likely to hold (if positive) or to be violated (if negative). In contrast to pure FOL, in Markov Logic the formulae in the KB are explicitly fallible; as a consequence, Markov Logic admits interpretations that do not satisfy all the constraints.
Instantiating all the formulae in KB using all possible combinations of constants in C leads to a grounding of the knowledge base. As an example, if C consists of three objects, a protein P and two cysteines at positions 4 and 19, and the knowledge base consists of the formula DM(p,n) = !(IsDis(p,n) ∧ IsMet(p,n)), then the grounding will be the set of ground formulae {DM(P,4), DM(P,19)}. A possible world is a truth assignment of the grounding of KB. Markov Logic defines a way to assign to each possible world a probability, determined by the weights of the formulae that it satisfies.
A MLN defines a joint probability distribution over the set of interpretations (i.e. truth assignments) of the grounding of KB. In the previous example, if the formula DM has a positive weight, then the assignment DM(P,4) ∧ DM(P,19) will be the most likely, while ! DM(P,4)∧! DM(P,19) will be the least likely, with the other possible worlds standing in between. In addition, if an assignment satisfies a formula with a negative weight, it becomes less likely.
Given a set of ground atoms x of known state, and a set of atoms y whose state we want to determine, we can define the conditional distribution generated by a MLN as follows:

P(y | x) = (1 / Z(x)) exp( Σ_i w_i n_i(x, y) )

Here n_i(x, y) counts how many groundings of formula F_i are satisfied by the world (x, y), and Z(x) is a normalization term. In other words, the above formula says that the probability of y being in a given state is proportional to the weighted number of formulae in KB that the interpretation (x, y) satisfies. We can query a MLN for the most likely state of the unknown predicates y given the known facts x by taking the truth assignment of y that maximizes the above conditional probability. See Richardson et al. [11] for a full-length description. An issue with standard Markov Logic is that distinct groundings of the same formula F_i are assigned the same weight w_i. This is not the case for our raw predictions, which are specific to each protein (e.g., subcellular localization) or to each residue within a protein (e.g., metal or disulfide bonding state).
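To make the above distribution concrete, the following minimal Python sketch (a toy example, not the authors' implementation; the weight value is hypothetical) enumerates every possible world of the DM example above and assigns it the MLN probability exp(w · n) / Z:

```python
import itertools, math

# Toy MLN with one formula DM = !(IsDis(p, n) & IsMet(p, n)) over
# protein "P" and cysteine positions 4 and 19.
POSITIONS = [4, 19]
W_DM = 1.5  # hypothetical per-formula weight

def n_satisfied(world):
    """Count groundings of DM satisfied by a world.

    `world` maps (predicate, position) -> bool for IsDis and IsMet.
    """
    return sum(
        not (world[("IsDis", n)] and world[("IsMet", n)])
        for n in POSITIONS
    )

# Enumerate all truth assignments of the four ground atoms.
atoms = [(p, n) for p in ("IsDis", "IsMet") for n in POSITIONS]
worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=len(atoms))]

scores = [math.exp(W_DM * n_satisfied(w)) for w in worlds]
z = sum(scores)  # normalization term Z
for world, s in zip(worlds, scores):
    print({f"{p}({n})": v for (p, n), v in world.items()}, "P =", round(s / z, 4))
```

Worlds that violate the mutual-exclusion constraint at a position satisfy fewer groundings and thus receive lower probability, exactly as described above.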
To overcome this issue, we make use of grounding-specific Markov Logic Networks (gs-MLNs), introduced in Lippi et al. [12], an extension that adds the ability to specify per-grounding weights. The idea is to substitute the fixed per-formula weight w with a new function ω that depends on the particular grounding. The conditional distribution is modified to be of the form:

P(y | x) = (1 / Z(x)) exp( Σ_i Σ_g ω(g, θ_i) )

Here the variable g ranges over all satisfied groundings of formula F_i, and the function ω evaluates the weight of the given grounding g according to a set of per-formula parameters θ_i.
"Biology",
"Computer Science"
] |
Optimizing the Sensor Placement for Foot Plantar Center of Pressure without Prior Knowledge Using Deep Reinforcement Learning
We study foot plantar sensor placement with a deep reinforcement learning algorithm, without using any prior knowledge of the foot anatomical area. To apply a reinforcement learning algorithm, we propose a sensor placement environment and reward system that aim to optimize the fit to the center of pressure (COP) trajectory during the self-selected speed running task. In this environment, the agent considers placing eight sensors within a 7 × 20 grid coordinate system, and the final pattern becomes the result of sensor placement. Our results show that this method (1) can generate a sensor placement with a low mean square error in fitting the ground truth COP trajectory, and (2) robustly discovers the optimal sensor placement within a very large number of combinations, more than 116 quadrillion. This method is also feasible for solving different tasks beyond the self-selected speed running task.
Introduction
Foot plantar pressure is defined as the distribution of force between the foot's sole and the support surface. Plantar pressure measurement systems have been used in several applications, such as sports performance analysis and injury prevention [1], gait monitoring [2], and biometrics [3]. In the literature, various sensor placement patterns, either adapted from the foot anatomical area or filled with mesh-like sensor arrays, were discussed by Razak et al. [4]. The mesh-like design approach increases measurement accuracy but also increases cost, and reducing the number of sensors while achieving acceptable accuracy is challenging. Usually, a sensor placement pattern is determined by a human expert. By contrast, this paper proposes a new design approach for plantar sensor placement based on plantar pressure data and a deep reinforcement learning (DRL) [5,6] algorithm. This approach uses the center of pressure (COP) trajectory to evaluate the sensor placement quality. Using this mechanism, we try to find new placement patterns that human knowledge has not yet discovered.
Reinforcement learning (RL) [7] is an algorithm that consists of an environment and an agent and trains the agent's policy through feedback from the environment. In many complex domains, reinforcement learning is the only feasible way to train a program to perform at high levels. Deep reinforcement learning (DRL) merges deep learning (DL) [8] and reinforcement learning algorithms. Deep learning is a branch of machine learning that uses artificial neural networks to extract information from high-dimensional data and has led to breakthroughs in computer vision [9][10][11][12] and speech recognition [13,14]. DRL uses a deep neural network as a function approximator [15], which not only allows the algorithm to extract information from high-dimensional data but also scales up its ability to solve more complex problems. Moreover, DRL has accomplished many achievements in modern machine learning, such as mastering the game of Go without human knowledge [16] and beating the world champions in a multiplayer real-time strategy game [17]. For the sensor placement problem, solving the huge number of placement combinations by brute force is not feasible, so we adopted DRL for this problem.
We have organized the rest of this paper in the following way. First, we describe the data collection for self-selected speed running plantar pressure videos and the preprocessing. Then, we propose the environment and the reward system for designing the sensor placement; this environment and reward system aim to optimize the sensor placement for COP accuracy and to suit DRL. Third, we briefly illustrate Soft Actor-Critic Discrete (SAC-Discrete) [18], a discrete version of the Soft Actor-Critic (SAC) RL algorithm [19], and apply it to the sensor placement task with some simple testing data that we created. Fourth, we utilize the Population-Based Training (PBT) [20] method to tune the hyperparameters of SAC-Discrete; applying this method enhances training stability and performance in our sensor placement task. Finally, we feed the plantar pressure videos to the sensor placement environment and then present the results and conclusions.
Experimental Protocol
Each subject runs for three minutes at a self-selected speed on the treadmill. The data logger is triggered by an external trigger button when the subject is comfortable with the treadmill's current speed. All subjects wear the same model of shoes in their proper size.
Self Selected Speed Plantar Pressure Video Collection
Plantar pressure video is recorded with the F-Scan [21] system by Tekscan, which measures plantar pressure with an insole pressure sensor array. This system contains a pair of resistive sensor [22] sheets placed on top of the insole, with double-sided tape applied to avoid the sensor sheets slipping during recording. The pressure range of this sensor sheet for this experiment is 1-150 psi (approximately 7-1000 kPa). The F-Scan system's recording software version is 7.50-07, and, before recording, the sensor sheet is calibrated by this software. The maximum sampling rate of this F-Scan hardware/software system is 750 Hz. In this experiment, we set the acquisition frequency to 100 Hz, so the recorded video's output frame rate is also 100 Hz. Since the plantar pressure video is obtained from the F-Scan software, which has performed the calibration, the video's spatial resolution is 21 × 60, and the unit of each pixel value is kPa. The F-Scan system starts to record the data when a subject is comfortable with the treadmill's current speed and finishes a record after three minutes. This experiment is illustrated in Figure 1.
Data Preprocessing
Plantar pressure videos collected from the F-Scan system are preprocessed to construct a data set; for each episode, the sensor placement environment randomly selects a plantar pressure video from this data set to calculate rewards. The preprocessing steps are as follows (a downsampling sketch is given after this list):

1. A gait cycle consists of the stance phase and the swing phase; during the swing phase, the F-Scan system does not receive any pressure information. Thus, we remove the swing phase within a three-minute plantar pressure video by splitting it into many stance-phase plantar pressure videos.

2. To reduce the number of stance-phase plantar pressure videos, we divide those videos into five equal groups along the time sequence and randomly choose one video from each group.

3. The stance-phase plantar pressure videos are cropped to remove the white border, i.e., any row or column that does not receive any pressure within the video.

4. After cropping, each video presents a different spatial resolution. Thus, we downsample each video to 7 × 20 using the pressure formula P = F/A.
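The following Python sketch shows our reading of step 4 (not the authors' code): each coarse cell's pressure is the total force inside it divided by its area, which for equal-sized pixels reduces to an area-weighted mean of pixel pressures. The output grid orientation (20 rows × 7 columns) is an assumption.

```python
import numpy as np

def downsample_frame(frame: np.ndarray, out_rows: int = 20, out_cols: int = 7) -> np.ndarray:
    """Downsample one cropped frame to the 7 x 20 grid via P = F / A."""
    in_rows, in_cols = frame.shape
    out = np.zeros((out_rows, out_cols))
    row_edges = np.linspace(0, in_rows, out_rows + 1)
    col_edges = np.linspace(0, in_cols, out_cols + 1)
    for i in range(out_rows):
        for j in range(out_cols):
            r0 = int(row_edges[i])
            r1 = max(r0 + 1, int(np.ceil(row_edges[i + 1])))
            c0 = int(col_edges[j])
            c1 = max(c0 + 1, int(np.ceil(col_edges[j + 1])))
            block = frame[r0:r1, c0:c1]
            out[i, j] = block.sum() / block.size  # total force / area (pixel units)
    return out

frame = np.random.rand(60, 21) * 100.0  # hypothetical cropped frame in kPa
print(downsample_frame(frame).shape)    # (20, 7)
```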
For each subject, this experiment collects two three-minute plantar pressure videos: one for the left foot and one for the right foot. After the preprocessing, the data collected from a subject produces ten 7 × 20 stance-phase videos, as Figure 2 shows. Since fifteen subjects joined this experiment, there are 150 stance-phase plantar pressure videos used in the sensor placement environment.

Figure 2. (a) The green and yellow videos represent the stance-phase plantar pressure videos; each video's total frame count depends on its stance-phase duration. The pink videos represent the stance-phase plantar pressure videos randomly selected from the five equal groups. (b) In steps three and four, one of the chosen videos is used as a demonstration; the image beside each video is its pixel-wise accumulated image, which is used to visualize the cropping and resampling processes. The purple video represents the cropped video, and the orange video is the final result, which has a 7 × 20 spatial resolution.
Sensor Placement Environment and Reward System
Reinforcement learning is an algorithm that consists of an agent and an environment. At each time step, the environment provides the state information to the agent, and the agent uses it to select an action. After the action has been taken, the environment updates its state and then provides the next state's information and reward to the agent. These interactions between the environment and the agent produce a series of state-action pairs. The length of this series depends on the environment's termination condition and could also be infinite. Using those state-action pairs, the RL algorithm reinforces the agent's policy to maximize the environment's accumulative reward. To optimize the sensor placement for the COP trajectory during the self-selected speed running task, we present a sensor placement environment and a reward system.
Sensor Placement Environment
At the initial state, the sensor placement environment receives a plantar pressure video, which will be used to calculate the reward, and provides an empty 7 × 20 board to the agent. Figure 3 shows the plantar pressure video. The agent owns eight sensors at the beginning, which can be placed on this empty board. At each time step, the agent places one of its sensors on the board. It does not matter whether a sensor is placed on a position where other sensors already exist; in other words, having multiple sensors on the same position is allowed. The terminating condition for the sensor placement environment is that the agent finishes placing all of its sensors. When the episode terminates, the agent receives the only reward given by the environment; that is, the agent receives zero reward until it reaches the terminal state. The agent's main objective is to place the sensors in the crucial positions to obtain the maximum reward at the end of the episode. Figure 4a illustrates the interaction between the agent and the environment, and Figure 4b shows the reward given by the environment.

Figure 4. (a) Boards are shown separately, and the number represents the sensor count at each position. The notation S_T represents the terminal state. S_3 demonstrates the situation where the agent selects a position at which another sensor already exists. (b) Because the reward and the next state are provided by the environment simultaneously, the first reward starts at S_1. Without reward redistribution, the agent only receives the reward in the terminal state. (c) The redistributed reward is the difference between the current and previous accumulative rewards. Since S_3 has the same masked positions as S_2, they get the same accumulative reward in this episode; the redistributed reward at S_3 is zero. It is also possible to get a negative reward, as shown at S_2.
Reward System
In reinforcement learning, planning a reward system is essential. The positive reward given by the environment encourages the agent to repeat actions that earn this reward. In the sensor placement environment scenario, it encourages the agent to arrange sensor positions that fit the COP trajectory, so as to get a higher reward at the terminal state. The environment calculates a reward using a plantar pressure video and the final sensor positions determined by the agent. For each episode, the environment can use a different plantar pressure video to calculate the reward; this means that, even if the agent places all of its sensors in the same positions in each episode, the reward can change. To calculate the reward, we first introduce the COP formula for frame n of a plantar pressure video:

COP_{n,x} = ( Σ_pixels pressure_n · coordinate_x ) / ( Σ_pixels pressure_n ),  COP_{n,y} = ( Σ_pixels pressure_n · coordinate_y ) / ( Σ_pixels pressure_n )  (1)

The COP trajectory is a series of points lying on the 2-D plane, and the length of this series is the video frame count. In this formula, n is the index of the frame number, pressure_n represents a pixel value within the n-th frame, and coordinate_x and coordinate_y are the relative position of that pixel. Now, we describe how to calculate the reward with the sensor positions given by the agent. The environment can use the sensor positions as a pixel-wise mask for the plantar video to get two different COP trajectories: one calculated from the original plantar pressure video, and another calculated from the masked plantar pressure video. Optimizing the sensor positions for fitting the COP trajectory can be achieved by minimizing the distance between the COP positions calculated from the original and the masked videos for each frame, as Figure 3 shows. Thus, the reward function is defined as:

reward = (1 / (N + 1)) Σ_{n=0}^{N} ( 1 − d_n / max_distance )^{0.4}, with d_n = sqrt( (ĈOP_{n,x} − COP_{n,x})² + (ĈOP_{n,y} − COP_{n,y})² )  (2)

where (ĈOP_{n,x}, ĈOP_{n,y}) denotes the COP position calculated from the plantar pressure video, (COP_{n,x}, COP_{n,y}) denotes the masked version, and N + 1 is the total frame count. The distance between the two COP positions is normalized to [0, 1] by dividing by the maximum distance. Using one minus the normalized distance as the reward, when the distance is zero the agent gets the maximum reward, which is one. The exponent 0.4 in this equation increases the precision for smaller distances and encourages the agent to get a better score. Finally, the reward is averaged over frames by summing the per-frame rewards and dividing by N + 1.
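A minimal Python sketch of these two formulas follows (our reconstruction, not the authors' code). It assumes the video is stored as a (frames, 20, 7) array and that the normalizing max_distance is the grid diagonal; the exact normalizer is not stated in this excerpt.

```python
import numpy as np

def cop_trajectory(video: np.ndarray) -> np.ndarray:
    """COP trajectory (Equation (1)) of a (frames, rows, cols) pressure video."""
    frames, rows, cols = video.shape
    ys, xs = np.mgrid[0:rows, 0:cols]
    total = video.reshape(frames, -1).sum(axis=1) + 1e-12  # avoid division by zero
    cop_x = (video * xs).reshape(frames, -1).sum(axis=1) / total
    cop_y = (video * ys).reshape(frames, -1).sum(axis=1) / total
    return np.stack([cop_x, cop_y], axis=1)

def reward(video: np.ndarray, mask: np.ndarray) -> float:
    """Reward (Equation (2)) for a binary sensor mask of shape (rows, cols)."""
    true_cop = cop_trajectory(video)
    masked_cop = cop_trajectory(video * mask)  # sensor positions as pixel-wise mask
    dist = np.linalg.norm(true_cop - masked_cop, axis=1)
    max_dist = np.hypot(video.shape[1] - 1, video.shape[2] - 1)  # assumed: grid diagonal
    return float(np.mean((1.0 - dist / max_dist) ** 0.4))
```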
Reward Redistribution
Training an agent in a delayed-reward environment is a challenging problem in RL. First, since the agent cannot immediately tell whether an action was good or bad, reinforcing its policy becomes harder. Second, it takes time to propagate the delayed reward back to earlier states, which means training takes much longer. To solve this problem in the sensor placement environment, we used the concept proposed in the RUDDER algorithm [23]. RUDDER's idea is to distribute the delayed reward to the actions that caused it, and can be implemented by the following steps:

1. Use a Long Short-Term Memory (LSTM) model to construct a sequence-to-sequence supervised learning task [24], with the series of state-action pairs as input and the delayed reward as label. The output sequence of this model can be treated as the accumulative reward at each state.

2. After this supervised learning is finished, the redistributed reward for each state is calculated by differencing the current and previous states' accumulative rewards.

3. Replace the original rewards with the redistributed rewards, then train the agent with any feasible RL algorithm.
Since the accumulative reward in the sensor placement environment can be calculated with Equation (2) for each state, we can skip the first step. The rewards after applying the RUDDER redistribution are shown in Figure 4c; a sketch of the redistribution step follows.
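The sketch below (not the authors' code; the example values are hypothetical) implements the differencing in step 2: the redistributed per-step reward is the difference of consecutive accumulative rewards, so the episode return is preserved.

```python
import numpy as np

def redistribute(accumulative: np.ndarray) -> np.ndarray:
    """RUDDER-style redistribution when G_t is computable at every state."""
    g = np.asarray(accumulative, dtype=float)
    prev = np.concatenate([[0.0], g[:-1]])  # accumulative reward before step 0 is 0
    return g - prev

# Example: accumulative reward after each of the eight placements.
g = np.array([0.10, 0.35, 0.35, 0.50, 0.62, 0.60, 0.74, 0.80])
r = redistribute(g)
print(r, r.sum())  # per-step rewards; the sum equals the final reward 0.80
```

Note that a placement that does not change the masked COP yields a zero redistributed reward, and a harmful placement yields a negative one, as illustrated in Figure 4c.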
Soft Actor-Critic Discrete
Various deep RL algorithms have been proposed in recent years, such as Asynchronous Advantage Actor-Critic (A3C) [25], Proximal Policy Optimization (PPO) [26], and Soft Actor-Critic (SAC) [19]. We chose the discrete version of Soft Actor-Critic (SAC-Discrete) [18] for the following reasons. First, the SAC-Discrete objective function optimizes the agent's policy while also maximizing its policy entropy; this objective increases training stability and encourages the agent to explore the environment. Second, SAC-Discrete is an off-policy RL algorithm; it increases data reusability and therefore reduces training time. In the sensor placement environment, different sensor placement patterns can earn the same reward at the end of an episode, and SAC-Discrete can discover all of those patterns. In this section, we first introduce notation, followed by the maximum entropy reinforcement framework, and finally the SAC-Discrete algorithm.
Notation
An RL problem can be mathematically formulated as a Markov Decision Process (MDP). An MDP P consists of a five-tuple P = (S, A, R, p, γ), where S is a set of states s (random variable S_t at time t), A is a set of actions a (random variable A_t at time t), and R is a set of rewards r (random variable R_{t+1} at time t). P has transition-reward distributions p(S_{t+1} = s′, R_{t+1} = r | S_t = s, A_t = a) conditioned on state-action pairs at time t. γ ∈ [0, 1] is a discount factor that ensures the MDP converges. We often equip an MDP P with a policy π. Given a policy π(a_t | s_t), ρ_π(s_t) denotes the state marginal, and ρ_π(s_t, a_t) the state-action marginal, of the trajectory distribution induced by π.
Maximum Entropy Reinforcement Framework
The maximum entropy reinforcement framework modifies the standard RL objective function ∑_t E_{(s_t,a_t)∼ρ_π}[γ^t r(s_t, a_t)]: it maximizes the expected sum of rewards while also maximizing the policy entropy,

π* = argmax_π ∑_{t=0}^{T} E_{(s_t,a_t)∼ρ_π}[ r(s_t, a_t) + α H(π(·|s_t)) ]

where π* is the optimal policy, T is the number of time steps, and H(π(·|s_t)) is the entropy of π at state s_t. The temperature parameter α is a hyperparameter in this equation, determining the relative importance of the entropy term versus the reward; thus, it can also be tuned during training. When α is close enough to 0, this equation falls back to the standard RL objective function.
To reinforce the policy, the RL algorithm alternates between policy evaluation and policy improvement. The discrete-action setting of soft policy iteration for the maximum entropy reinforcement framework is presented in [18]. First, the policy evaluation uses the soft state-value function

V(s_t) = π(s_t)^T [ Q(s_t) − α log π(s_t) ]

In the discrete action setting, the policy outputs a probability for each possible action, π ∈ [0, 1]^{|A|}, and Q(s_t) is the soft Q-function that outputs a Q-value for each action, Q(s_t): S → R^{|A|}; V(s_t) is defined as the dot product of the action probabilities and the Q-values with the entropy term. Then, the policy improvement is achieved by a policy gradient method [27] with objective function

J_π(φ) = E_{s_t∼D} [ π_φ(s_t)^T ( α log π_φ(s_t) − Q_θ(s_t) ) ]

The subscripts φ and θ represent the parameters of the policy neural network and the Q-function neural network, respectively. Training states s_t are sampled from a replay buffer D, since the maximum entropy reinforcement framework is an off-policy learning algorithm.
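The following PyTorch sketch shows how these two expressions can be computed for a batch of states (a minimal illustration with assumed tensor shapes, not the authors' code):

```python
import torch

def soft_value(probs: torch.Tensor, q: torch.Tensor, alpha: float) -> torch.Tensor:
    """Soft state value V(s) = pi^T (Q - alpha * log pi), per state in the batch."""
    log_probs = torch.log(probs.clamp_min(1e-8))
    return (probs * (q - alpha * log_probs)).sum(dim=1)

def policy_loss(probs: torch.Tensor, q: torch.Tensor, alpha: float) -> torch.Tensor:
    """Policy objective J_pi: expectation of pi^T (alpha * log pi - Q) over the batch.

    Q is treated as fixed (detached) when differentiating the policy.
    """
    log_probs = torch.log(probs.clamp_min(1e-8))
    return (probs * (alpha * log_probs - q.detach())).sum(dim=1).mean()

probs = torch.softmax(torch.randn(32, 140), dim=1)  # 7 x 20 = 140 board positions
q = torch.randn(32, 140)
print(soft_value(probs, q, alpha=4e-3).shape, policy_loss(probs, q, alpha=4e-3).item())
```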
SAC-Discrete Algorithm
SAC-Discrete uses the maximum entropy reinforcement framework to train an agent and uses a clipped double-Q trick to avoid Q-value overestimation [28]. We add a bar on top of a symbol to denote a target network; the target network is smoothly updated with Polyak averaging using a hyperparameter τ ∈ [0, 1]. SAC-Discrete is given by Algorithm 1.
(Algorithm 1, steps 13-15: update the Q-functions and the policy by one step of gradient descent each; the remaining steps follow [18].)

To apply SAC-Discrete, we need to design the policy and soft Q-function networks. Both networks take a 7 × 20 image as the state information and output each position's logit or Q-value, depending on the network type. Since both networks share the same input and output shapes, we used the same design structure, as Figure 5 shows; a sketch is given below.
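The following PyTorch sketch illustrates such a shared structure (assumed layer sizes, not the exact Figure 5 architecture): a small convolutional network mapping the 7 × 20 board to one output per grid position, usable as either the policy network (logits) or the soft Q-network (Q-values).

```python
import torch
import torch.nn as nn

class BoardNet(nn.Module):
    """Maps a (1, 20, 7) board image to 140 outputs, one per position."""
    def __init__(self) -> None:
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 20 * 7, 256), nn.ReLU(),
            nn.Linear(256, 140),  # one logit / Q-value per board position
        )

    def forward(self, board: torch.Tensor) -> torch.Tensor:
        return self.body(board)

net = BoardNet()
state = torch.zeros(1, 1, 20, 7)  # empty board at the start of an episode
print(net(state).shape)           # torch.Size([1, 140])
```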
Testing Sensor Placement Environment with Created Video
We created a testing video with a simple pattern, as shown in Figure 6a, in order to test the sensor placement environment. Meanwhile, we set up various temperature hyperparameters for this experiment. The temperature hyperparameter affects the final training reward and the convergence time. We tested ten temperatures from 1 × 10⁻³ to 10 × 10⁻³. When the temperature is too low, e.g., alpha equal to 1 × 10⁻³, the agent lacks exploration and training stability and performs the worst, as shown in Figure 6b. Using a higher temperature increases training stability; however, the convergence time increases as the temperature value increases, as shown in Figure 6c. The results showed that selecting a proper temperature hyperparameter, such as alpha equal to 4 × 10⁻³, is critical: it not only increases the training stability and final reward but also decreases the training time.

Figure 6. Illustration of the created testing video and episodic rewards. (a) The first and last frames are empty images without any pressure; a simple increase-then-decrease pattern generates the remaining frames. (b,c) Episode rewards using different temperature parameters from 1 × 10⁻³ to 10 × 10⁻³, filtered by a moving average filter with window size 1000.
Tuning Temperature with Population Based Training
To select a proper temperature hyperparameter, we utilized the Population Based Training (PBT) method [20]. This method combines the parallel-search and sequential-optimization approaches to hyperparameter tuning. First, the PBT method initializes the population with some agents that have various hyperparameters. After a training period, it exploits agents whose performance is in the top 20% of the population to replace the bottom 20%, while perturbing the hyperparameters to explore the hyperparameter space. Repeating this exploit-and-explore process tunes the hyperparameters. For a population P with N training models, the PBT method is given by Algorithm 2.
Applying the PBT method to the sensor placement task, we created a population of size 15 and only allowed the PBT method to optimize the temperature parameter. The temperature parameter is initialized with a log-scale uniform distribution between 1 × 10⁻¹ and 1 × 10⁻³. The functions invoked in the PBT method are described below (a sketch of the exploit-and-explore step follows this list):

- Step: each training iteration updates the networks by gradient descent with the Adam optimizer [29]; the learning rate is set to 3 × 10⁻⁴.
- Eval: we evaluate the current model by averaging the last 10 episodic rewards.
- Ready: a member of the population is considered ready to go through the exploit-and-explore process when the agent has elapsed 5 × 10⁵ agent steps since the last time it was ready.
- Exploit: first, we rank all the members of the population by the evaluation value. If the current member is in the bottom 20% of the population, we randomly sample another agent from the top 20% and copy its parameters and hyperparameters.
- Explore: we randomly perturb the hyperparameters by a factor of 0.8 or 1.2.
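A minimal Python sketch of the exploit-and-explore step follows (not the authors' code; the population record layout is an assumption):

```python
import random

def exploit_and_explore(population: list) -> None:
    """PBT exploit (copy from top 20%) and explore (perturb temperature).

    Each member is a dict with keys "eval" (mean of last 10 episodic
    rewards), "params" (network weights), and "alpha" (temperature).
    """
    ranked = sorted(population, key=lambda m: m["eval"], reverse=True)
    k = max(1, len(ranked) // 5)          # 20% of the population
    top, bottom = ranked[:k], ranked[-k:]
    for member in bottom:
        donor = random.choice(top)
        member["params"] = donor["params"]            # exploit: copy weights
        member["alpha"] = donor["alpha"]              # ...and hyperparameters
        member["alpha"] *= random.choice([0.8, 1.2])  # explore: perturb

population = [{"eval": random.random(),
               "params": None,
               "alpha": 10 ** random.uniform(-3, -1)} for _ in range(15)]
exploit_and_explore(population)
print(sorted(m["alpha"] for m in population))
```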
The whole training process runs for 10 M agent steps, which is 1.25 M episodes, since each episode takes eight agent steps until termination. All agents learned the optimal policy in the testing video's sensor placement environment, as Figure 7a shows; the maximum episodic reward that can be obtained in the sensor placement environment is 1. Moreover, the PBT method adjusts the hyperparameter during the training process, as shown in Figure 7b.
Results
To optimize the sensor placement for the foot plantar center of pressure without any prior knowledge, we proposed the sensor placement environment and solved it with the SAC-Discrete algorithm. The reward redistribution trick makes the training process feasible, as mentioned in Section 2.2.3, and the PBT method for tuning the temperature hyperparameter makes the training process more stable and better performing, as mentioned in Section 2.4.3. In the testing video task, this mechanism achieves the optimal sensor placement for the COP trajectory, as shown in Figure 7a; this experiment demonstrates the robustness of the training process.
For the self-selected speed running task, we fed 150 stance-phase plantar pressure videos to the sensor placement environment. The hyperparameter setup for SAC-Discrete in this experiment can be found in Appendix A, Table A1, and the PBT setup for tuning the temperature parameter in Appendix B, Table A2. We ran this experiment for 17 M agent steps, which is 2.125 M episodes. The best agent within the population obtains an average reward of 0.7986 over the final 1000 episodes, as Figure 8a shows. Rewards start to converge around 0.8 M episodes, and so does the temperature hyperparameter, as Figure 8b shows. The final designed sensor placement is presented in Figure 8c. The difference in the COP trajectory between the F-Scan system and the designed eight-sensor setting is shown in Figure 8d. We compared our designed eight-sensor setting with the placement design using the concept of WalkinSense [30], as Table 1 shows; Table 1 clearly shows that our method obtains a higher average reward. The sensor placement design for WalkinSense can be found in Appendix C, Figure A1.
Discussion
Although this study proposed a method that can find a sensor placement within a large number of combinations, we only applied it to finding an eight-sensor placement for self-selected speed running tasks. Applying this method to a different task amounts to replacing the plantar pressure videos from the self-selected speed running task with others. Since the objective of this optimization is to reduce the average COP distance for each video frame, this method puts more effort into the region where the COP is dense, as Figure 9a shows. This is why our method placed two sensors in the toe region, which also increases the accuracy in the toe-off phase, as Figure 9b,c show. Due to the small number of participants in this experiment, the sensor placement result may not be general enough. However, it shows that this method can be applied to more than one subject while performing better in COP trajectory accuracy. On the other hand, applying this method to only one subject can create a personalized custom sensor placement design. Using a different number of sensors can be studied in future work by increasing or decreasing the environment's number of sensors.
Conclusions
This paper presented a sensor placement environment, which can be used with SAC-Discrete, a deep RL algorithm, to find the optimal sensor positions for self-selected speed running tasks without any prior knowledge of the foot anatomical area. Furthermore, this work introduced a reward redistribution trick to make the training process feasible and used the PBT method to tune the temperature hyperparameter, making the training process more stable and better performing. The final sensor placement, determined by the best agent, achieved an average reward of 0.7986 within the environment. In summary, the sensor placement environment can find an excellent sensor placement for fitting the COP trajectory without any prior knowledge of the foot anatomical area, and the performance surpassed the human-designed sensor placement.

Appendix C. WalkinSense Sensor Placement

Figure A1. WalkinSense sensor placement.
"Computer Science"
] |
Adaptive Switched Linear Systems with Average Dwell Time to Compensate External Disturbances
Given a linear switched system composed of both Hurwitz stable and unstable subsystems, and subject to an additive unknown constant disturbance, a switching law with average dwell time between the Hurwitz stable and unstable subsystems is proposed, together with an adaptive algorithm to compensate the unknown additive disturbance. Using Lyapunov theory, exponential stability of a desired degree is assured for the plant state, as well as boundedness of the adaptive dynamics.
Introduction
There exist many practical problems where the available choice is to switch among a finite set of subsystems [1], [2], [3]. Also, in some practical problems [4], the only way to incorporate a logic-based decision system is by switching among a family of subsystems. Switched systems are of the variable structure or multi-modal class [5]. According to [6], a switched system can be viewed as a hybrid dynamical system composed of a family of continuous-time subsystems along with a switching law among them. In comparison to sliding mode systems (a type of variable structure system), an approach recognized as an efficient tool for designing robust systems, switching systems present chattering behavior. This behavior, in some situations, would not be considered acceptable [7]. Although there exist solutions to reduce this phenomenon [8] (including the well-known super-twisting algorithm [9]), dwell switching design can be viewed as an alternative tool for switching laws without chattering. Moreover, average dwell switching systems do not exhibit Zenoness and beating effects, which can appear in hybrid dynamical systems [10]. Adaptive control provides adaptation to adjust a system with parametric uncertainties [11]. Motivated by the adaptive control law presented in [12] to cope with an additive constant disturbance, we propose a switching law with average dwell time along with an adaptive algorithm to compensate it. This scheme is applied to linear switched systems consisting of both Hurwitz stable and unstable subsystems. Exponential stability of a desired degree is guaranteed for the states of the plant, and boundedness of the adaptive dynamics is assured too. Although in [6] an average dwell switching law is proposed for linear systems in the presence of a class of nonlinear additive perturbations satisfying a kind of Lipschitz condition, the constant additive case is not considered there. This class of constant additive perturbation arises in some engineering problems [12]. The remainder of this article is organized as follows. Section 2 gives the main theory and notation on switched systems with average dwell time. Section 3 states our main result. A numerical example supporting our theory is given in Section 4. Finally, Section 5 gives the conclusions.
Switching design with average dwell time
Consider the linear switched system with additive constant disturbance given by

ẋ(t) = A_{σ(t)} x(t) + d,  x(t_0) = x_0,  (1)

where x(t) ∈ R^n is the state, t_0 is the initial time, d ∈ R^n is the additive constant disturbance, and x_0 is the initial state. The switching signal σ(t): [t_0, ∞) → I_N = {1, ..., N} is a piecewise constant function of time (see Figure 1). The matrices A_i (i ∈ I_N) are assumed to be of appropriate dimension, and N > 1 is the number of subsystems. It is also assumed that the switched system (1) is composed of both Hurwitz stable and unstable subsystems.
The control objective is to find a switching law with average dwell time, along with a dynamic estimate φ̂(t) of d, such that the plant state goes to the equilibrium point exponentially as time goes to infinity, with a prescribed stability degree, while the estimate φ̂(t) remains bounded. Given a switching signal σ(t) and any t > τ ≥ 0, the number of switchings of σ(t) on the interval (τ, t), denoted by N_σ(τ, t), satisfies

N_σ(τ, t) ≤ N_0 + (t − τ)/τ_a  (2)

for given constants N_0 and τ_a. The constant N_0 is called the chatter bound, and τ_a is called the average dwell time [4], [6]. Basically, there may exist consecutive switchings separated by less than τ_a, but the average time interval between consecutive switchings is not less than τ_a [6]. Let us introduce the notation S_a[τ_a, N_0] to denote the set of all switching signals satisfying (2). Definition 1 [4], [6]: the state of the switched system (1) is said to be globally exponentially stable with stability degree λ > 0 if there is a constant κ > 0 such that ‖x(t)‖ ≤ κ e^{−λ(t−t_0)} ‖x(t_0)‖ for all t ≥ t_0. For the Hurwitz stable subsystems we take symmetric positive definite matrices P_i satisfying

A_i^T P_i + P_i A_i ≤ −2λ_s P_i,  (3)

and for the unstable subsystems,

A_i^T P_i + P_i A_i ≤ 2λ_u P_i,  (4)

with λ_s, λ_u > 0.
Main result
To begin with the main result, consider the following switching law.

(S_1) Switching law [6]: determine the switching signal σ(t) so that condition (5) holds for any given initial time t_0.

Theorem 1. Under the switching law (S_1), there is a finite constant τ_a* such that the state x(t) of the switched system (1) is globally exponentially stable with stability degree λ over S_a[τ_a, N_0], for any average dwell time τ_a ≥ τ_a* and any chatter bound N_0 ≥ 0, with φ̂(t) a solution of the adaptive law (6), where β is a given positive constant and the P_{σ(t)} are the corresponding solutions of the Lyapunov inequalities (3) and (4) applied over the switching law (S_1). Moreover, φ̂(t) remains bounded; that is, there exists a constant v such that ‖φ̂(t)‖ ≤ v for all t ≥ t_0. Furthermore, if the eigenvalue condition (7) holds, where λ_M(·) and λ_m(·) denote the largest and smallest eigenvalues of a symmetric matrix, then the explicit bound (8) follows.

Proof. Similar to the proof of Theorem 1 in [6], but using the Lyapunov function

V_i = x^T P_i x + (1/β) (φ̂ − d)^T (φ̂ − d).  (9)

A sketch is as follows.
1) Each V_i in (9) is continuous, and its time derivative along the solutions of the corresponding subsystem and of (6) satisfies the decay estimate (10).
2) There exist constant scalars α_2 ≥ α_1 > 0 such that α_1 ‖x‖² ≤ x^T P_i x ≤ α_2 ‖x‖² for all x ∈ R^n and all i ∈ I_N.
3) There exists a constant µ ≥ 1 such that V_i ≤ µ V_j for all x ∈ R^n and all i, j ∈ I_N. For a discussion of properties 2) and 3), see [6] (in fact, a value for µ is given by formula (7) of [6]).
So, for any piecewise constant switching signal σ(t) and any t > t_0, let t_1 < t_2 < ... < t_{N_σ(t_0,t)} be the switching points of σ(t) over the time interval (t_0, t), and suppose that the p-th subsystem is activated during [t_i, t_{i+1}) [6]. Then, for any ζ ∈ [t_i, t_{i+1}), inequality (11) yields an estimate of V(ζ) in terms of V(t_0). According to [6], and under the conditions of Theorem 1, this means that

V(t) ≤ e^{2α} e^{−2λ(t−t_0)} V(t_0),

where α = N_0 ln(µ)/2 with N_0 ≥ 0 an arbitrary value. This implies that the plant state x(t) converges exponentially with the prescribed degree. Using this fact, and from (11), we conclude that V̇(t) ≤ 0, meaning that φ̂(t) is bounded. This concludes the sketch of our proof. ◊
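The following Python sketch illustrates the overall scheme numerically. It is illustrative only: the subsystem matrices, the periodic schedule standing in for the switching law (S_1), and the gradient-type update φ̇ = β P_σ x acting on compensated dynamics ẋ = A_σ x + d − φ̂ (standing in for the adaptive law (6), which is not reproduced in this excerpt) are all assumptions, not the authors' exact construction.

```python
import numpy as np

A = [np.array([[-1.0, 0.0], [0.0, -2.0]]),   # Hurwitz stable subsystem
     np.array([[0.3, 1.0], [0.0, 0.2]])]     # unstable subsystem
P = [np.eye(2), np.eye(2)]                   # assumed Lyapunov matrices
d = np.array([1.0, -0.5])                    # unknown constant disturbance
beta, dt, T = 5.0, 1e-3, 20.0

x = np.array([2.0, -1.0])
phi = np.zeros(2)
for k in range(int(T / dt)):
    t = k * dt
    sigma = 1 if (t % 2.0) > 1.7 else 0      # unstable mode only 15% of the time
    x = x + dt * (A[sigma] @ x + d - phi)    # compensated plant (Euler step)
    phi = phi + dt * beta * (P[sigma] @ x)   # assumed adaptive update

print("final |x| =", np.linalg.norm(x), " disturbance estimate =", phi)
```

With the stable mode dominant, the state decays while the estimate φ̂ approaches d, in line with the theorem's claims.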
Conclusions
Essentially, in this paper we extended the results presented in [6]. This extension involves the inclusion of an additive constant perturbation in the switched system, instead of the additive nonlinear perturbation dealt with in [6]. According to the given numerical experiments, our proposal also works well in the slowly time-varying disturbance case.
"Mathematics"
] |
A mechanism for heating electrons in the magnetopause current layer and adjacent regions
Taking advantage of the string-of-pearls configuration of the five THEMIS spacecraft during the early phase of their mission, we analyze observations taken simultaneously in the magnetosheath, the magnetopause current layer, and the magnetosphere. We find that electron heating coincides with ultra-low-frequency waves. It seems unlikely that electrons are heated by these waves because the electron thermal velocity is much larger than the Alfvén velocity (V_a). In the short transverse scale (k⊥ρ_i ≫ 1) regime, however, short scale Alfvén waves (SSAWs) have parallel phase velocities much larger than V_a and are shown to interact, via Landau damping, with electrons, thereby heating them. The origin of these waves is also addressed. THEMIS data give evidence for sharp spatial gradients in the magnetopause current layer, where the highest amplitude waves have a large component δB perpendicular to the magnetopause and k azimuthal. We suggest that SSAWs are drift waves generated by temperature gradients in a high beta, large T_i/T_e magnetopause current layer; therefore these waves are called SSDAWs, where D stands for drift. SSDAWs have large k⊥ and therefore a large Doppler shift that can exceed their frequencies in the plasma frame. Because they have a small but finite parallel electric field and a magnetic component perpendicular to the magnetopause, they could play a key role in reconnecting magnetic field lines. The growth rate depends strongly on the scale of the gradients; it becomes very large when the scale of the electron temperature gradient gets below 400 km. Therefore SSDAWs are expected to limit the sharpness of the gradients, which might explain why Berchem and Russell (1982) found the average magnetopause current sheet thickness to be ∼400-1000 km (∼500 km in the near-equatorial region).
Introduction
Intense ULF electromagnetic waves (0.2-5 Hz) have been regularly observed by spacecraft crossing the magnetopause and adjacent layers (Anderson et al., 1982; Rezeau et al., 1986). To determine the nature of these fluctuations, Rezeau et al. (1993) made correlations between the two ISEE spacecraft and showed that the short-lived, intense electromagnetic fluctuations observed near the magnetopause (∼few nT, corresponding to δB/B up to 1-15%) correspond to non-linear kinetic Alfvén wave (KAW) structures moving along the magnetopause. A similar conclusion was drawn by Stasiewicz et al. (2000), who investigated the electric and magnetic signatures of these waves and suggested that they correspond to kinetic Alfvén waves affected by a large Doppler shift. Using THEMIS data taken near the magnetopause, Chaston et al. (2008) estimated the perpendicular wavelengths; they showed that the wavelengths are very small (k⊥ρ_i > 1) and that these Alfvénic fluctuations can transport magnetosheath plasma through the magnetopause at about the Bohm rate. There is as yet no consensus about the process that generates KAWs in the magnetopause layers. Stasiewicz et al. (2000) suggested that these fluctuations
are coupled via drift effects to strong density gradients developing at the magnetopause. Belmont and Rezeau (2001) proposed a mechanism where KAWs are produced by mode conversion, in the density gradients of the magnetopause and adjacent layers, of fast modes carried by the solar wind. Chaston et al. (2008) suggested that KAWs are one of the by-products of magnetic reconnection.
The AMPTE-UK spacecraft has regularly observed heated electrons at the magnetopause (Mp) and adjacent layers. According to Hall et al. (1991), this heating could be due to wave activity in the ULF range. In some of the events, heated electrons were found to be counter-streaming in parallel and antiparallel directions. Thus both electron heating and enhanced wave activity are known to occur at the Mp and adjacent layers, but the potential relation between them remains to be investigated. As indicated above, data analysis suggests that the observed fluctuations correspond to Alfvén waves. Yet Alfvén waves are expected to move at parallel phase velocities on the order of the Alfvén velocity (V_a ∼ 100 km s⁻¹), which is much lower than the electron thermal velocity. Thus, a resonant interaction between electrons and Alfvén waves is not expected to occur, and electrons are not (a priori) expected to be heated by the waves. Recently, however, Howes et al. (2008) have run kinetic simulations showing that the phase velocity of KAWs gets much larger than V_a when k⊥ρ_i ≫ 1. In this regime KAWs can efficiently interact with electrons, for instance via Landau damping. Using interferometric methods, Sahraoui et al. (2008) analyzed the k-spectra of magnetic fluctuations measured in the solar wind by the four Cluster spacecraft. They also interpreted the observed fluctuations as small scale (k⊥ρ_i ≫ 1) KAWs, and showed that they can have scales so small that they can interact via Landau or cyclotron damping with solar wind electrons. In their interpretation, small-scale KAWs are produced by a cascade (down to the electron scale) of large scale Alfvén waves generated near the Sun. In the magnetopause current layer (MpCL), however, strong perpendicular gradients will tend to inhibit the evolution towards a fully developed cascade.
Here we investigate a direct generation mechanism via a linear instability driven by steep gradients. After a short description of the instruments and the context (Sect. 2), we give evidence for the link between intense waves observed near the MpCL and electron heating (Sect. 3). The characteristics of the observed electromagnetic fluctuations are described in Sect. 4. In Sect. 5 we seek to identify the free energy source that drives the observed waves unstable. Conclusions are presented in Sect. 6.
Instrumentation and context
Here we use data from four instruments on board the five THEMIS spacecraft (Tha, Thb, Thc, Thd, and The). Descriptions of these instruments and early results can be found in Angelopoulos et al. (2008), Auster et al. (2008), Bonnell et al. (2008), McFadden et al. (2008), Roux et al. (2008), and Le Contel et al. (2008). Unfortunately, the commissioning of EFI, the Electric Field Instrument, was not finished when the event we selected occurred; only Thc E-field data could be obtained. Electric field measurements being available on Thc, the spacecraft potential could be calculated and used to remove photoelectron contamination for this spacecraft. Thus the electron temperature T_e was measured during the whole period on Thc. On Thb and Thd, however, the lack of E-field measurements made it impossible to remove photoelectron effects while these spacecraft were in the low density, hot magnetospheric plasma. Yet when the density was large, as in the MpCL or in the magnetosheath (MSh), T_e could be measured. Thc electric and magnetic filter bank data (Cully et al., 2008) are available in six logarithmically spaced frequency bands from 0.1 Hz to 4 kHz, with one measurement every 4 s.
20 May 2007 was selected because on that date the five spacecraft were near the dayside magnetopause and were in a string-of-pearls configuration. It was therefore possible to measure simultaneously the properties of the plasma (i) outside the magnetopause, in the adjacent magnetosheath (MSh), (ii) inside the magnetopause current layer (MpCL), and (iii) inside the adjacent magnetosphere (MSp). While crossing the magnetopause and adjacent regions, the five spacecraft have recorded the signatures of a flux transfer event (FTE). This FTE and the overall characteristics are described by Sibeck et al. (2008). Here it is sufficient to keep in mind that the five spacecraft were essentially aligned along the X_GSE direction and located in the afternoon sector, as shown on Fig. 1.
Because Tha and The were very close and give similar results, we only display data from Tha and skip data from The; the same is true for Thc and Thb. Thus in Figs. 2 and 3 we display data from Tha (located in the adjacent MSh), Thd (remaining for a long time in the MpCL), and Thb (MSp). As electric data are only available on Thc, we use Thc (Fig. 5) to estimate the δE/δB ratio and to identify electron temperature gradients.
Electron heating
- Thb spent most of the interval in the MSp; in the region bracketed by pink vertical lines (22:02:00-22:02:30), we observe electrons with lower energies than in the adjacent MSp, but with much larger fluxes, together with the magnetic signature of the FTE mentioned earlier.

- Before 21:59:15 Thd is also located in the MSp. It penetrates into the MpCL and returns to the MSp after 22:04:00. Yet Thd remained much longer (about 5 min) in the MpCL than Thb, and gathered data not only during the crossing of the FTE but also in the MpCL. In the corresponding region (also bracketed by pink vertical lines), the average magnetic field measured on board Thd is weak and the fluctuations are intense, particularly after crossing the FTE. Note that the energy distribution of the electron flux is the same as in the bracketed region of Thb. Inside the bracketed regions of Thb and Thd (the MpCL), T_e ≈ 60 eV.

- Before 22:01:45 and after 22:03:15 UT, Tha was located in the MSh, as indicated by a large flux of very low-energy electrons with a temperature of ∼35 eV (see Fig. 5) and confirmed by a negative B_z. As the FTE moved along the magnetopause (Mp), the MpCL thickened and Tha got closer to it (see the model by Sibeck et al., 2008). This close approach corresponds to the region bracketed in pink, in which the electron spectrogram gives evidence for (i) a very low flux of energetic (magnetospheric) electrons and (ii) an increase in the temperature of low-energy (magnetosheath) electrons. Their temperature (∼60 eV) is almost twice that of the adjacent free MSh; they are identified as heated MSh electrons. Hence electrons are efficiently heated as they approach/penetrate the MpCL. Electrons observed by Tha, Thb, and Thd in the pink bracketed regions are very similar in terms of fluxes, temperatures, and densities (see Fig. 3). These densities are the same as in the MSh but the temperatures are enhanced, suggesting that electrons observed in the MpCL at Thb and Thd are heated MSh electrons, as confirmed by the data shown in Fig. 3 for densities and Fig. 5 for temperatures.
Note that the three bracketed regions, in which high fluxes of heated electrons are observed, coincide with enhanced wave intensities (typically in the frequency range 0.2-5 Hz), which leads us to suspect that these intense waves (δB ∼ 1 nT, corresponding to δB/B ∼ 10%) heat the electrons.
Polarization
It is convenient to use the same coordinate system for the five spacecraft whether they are far from or close to the magnetopause. The TPN coordinate system used here is defined geometrically; it has two axes tangent to a paraboloid (with the symmetry of revolution) passing through the spacecraft and through the subsolar point defined from the Tsyganenko 89 model, taken for the Kp of the day of the event. A representation of the TPN is given in Fig. 4. Unlike the MVA frame, the TPN is not influenced by the crossing of the FTE. To make a meaningful comparison between MVA and TPN, we determined the MVA frame from the analysis of FGM data for the period preceding the FTE, 21:50 to 22:02 UT (with a 12 s average), for Thd, which remains longer in the MpCL than its companions. The matrix from MVA to GSE and the matrix from MVA to TPN are given in Table 1. Table 2 (top) shows the results of MVA carried out on the wave magnetic component measured by the fluxgate magnetometer aboard Thd between 22:02:45 and 22:03:30. This period was chosen because the averaged dc magnetic field (Bdc) does not vary too much; it is therefore possible to relate the directions of variance to the direction of Bdc and thus determine the polarization. During this period the plasma velocity is accelerated; this acceleration corresponds to the largest wave intensities. The maximum variance of the waves is along N, the intermediate variance is along P, and the minimum variance is along T. For the sake of verification, MVA has also been applied to search coil data filtered between 0.8 and 4 Hz, the maximum frequency covered by the instrument during this event. The normal (Nw) obtained by MVA applied to both instruments is almost the same, as can be seen from Table 2 (bottom). The waves propagate along the magnetopause in the azimuthal direction.
Thus the fluctuations measured by Thd have a large component normal to the Mp, and they propagate essentially in the azimuthal direction. Between 22:02:45 and 22:03:30, Bdc is essentially along P.
Nature of the waves
As shown in the previous subsection, the dominant component of the wave is along N (maximum variance), with a smaller (compressional) component along P, which is close to the direction of Bdc from 22:02:45 to 22:03:30. The wave vector k (minimum variance) is tangent to the magnetopause, essentially in the azimuthal direction, and therefore almost perpendicular to Bdc. Thus we conclude that k⊥ ≫ k//. In such a situation a compressional/magnetosonic wave would have δB // Bdc, while we find that the dominant δB (maximum variance) is perpendicular to Bdc, which suggests that the observed waves are Alfvénic. How then can we explain the interaction with electrons? A resonant interaction between electrons and a classical Alfvén wave is not expected, as discussed in the Introduction. Electric field data were already available on Thc, however. Thc filterbank data are shown in Fig. 6. Panel (a) gives the dc magnetic field, and panel (b) shows the δE/δB ratio for different frequencies ranging from 3 to 192 Hz, together with V_a. We find that δE/δB ≫ V_a at all frequencies over the entire period; at 3 Hz, (δE/δB)/V_a is at least 5, and much larger at higher frequencies (in fact smaller scales, as will be explained later). For an Alfvén wave δE/δB ∼ ω/k//; therefore δE/δB ≫ V_a implies ω/k// ≫ V_a. In addition to data from Thc, Howes et al. (2008) ran gyrokinetic simulations showing that the phase velocity of kinetic Alfvén waves (KAWs) gets much larger than V_a when k⊥ ≫ k// and k⊥ρ_i ≫ 1. In this regime short (transverse) scale Alfvén waves (SSAWs) have a finite E// and can therefore interact efficiently with electrons via Landau damping, as will be discussed later. Sahraoui et al. (2008) analyzed magnetic field fluctuations measured in the solar wind by the Cluster spacecraft. They interpreted fluctuations/waves observed in the solar wind as Doppler-shifted SSAWs and gave evidence for a change in the slope of the spectrum at frequencies corresponding to the electron scale, which confirms that small transverse scale fluctuations (k⊥ρ_i ≫ 1) can be damped via a collisionless process involving electrons, namely Landau or cyclotron damping. Given the large amplitude (∼1 nT) of the waves observed in the MpCL, a turbulent cascade could occur and explain the generation of SSAWs. It is not clear, however, that a turbulent cascade can develop in a spatially limited region such as the MpCL.
In Sect. 5 it will be shown that the sharp gradients observed at the MpCL can generate SSAWs through a linear instability, which does not preclude a non-linear cascade from occurring if the wave amplitude gets large enough to overcome the effect of the inhomogeneity. SSAWs observed by Sahraoui et al. (2008) in the solar wind are strongly Doppler shifted. In the case of the present observations the ion velocity V_d is comparable to the ion thermal velocity, but the Doppler shift is still large because k⊥ρ_i ≫ 1. Therefore the Doppler shift normalized to the proton gyrofrequency can be larger than unity:

k⊥ V_d / Ω_i = (k⊥ ρ_i) (V_d / V_i) ≫ 1

Thus the Doppler shift is likely to largely exceed the frequency in the plasma frame (see discussion section); a worked estimate is given below. Let us now investigate the effect of SSAWs on electrons.
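As a quick numerical check (our arithmetic, using only quantities quoted in the text: V_d comparable to V_i, and the range k⊥ρ_i ∼ 7-90 used in the next subsection), the normalized Doppler shift evaluates as:

```latex
\[
\frac{k_\perp V_d}{\Omega_i}
  = k_\perp \rho_i \, \frac{V_d}{V_i}
  \approx k_\perp \rho_i
  \sim 7\text{--}90 \;\gg\; 1 .
\]
```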
Heating of particles by waves
In Fig. 3 the duration of the density ramp crossed while Thd penetrates the MpCL or exits it (at ∼21:59 and ∼22:04) is ∼12 s. The relative velocity between Thd and the magnetopause, measured along the normal (V_N), is ∼60 km s⁻¹, as estimated from Fig. 6. Hence the thickness of the density jump associated with the magnetopause is estimated to be L_N ∼ 720 km, which corresponds to about 7 ion Larmor radii of 400 eV ions. Given that we consider short wavelengths (k⊥ρ_i ≫ 1), we can, to first approximation, ignore the effect of the inhomogeneity and calculate the damping rate using the WHAMP software (Rönnmark, 1982). The role of the inhomogeneity will be further investigated in Sect. 5. Figure 7 shows the results of WHAMP for T_i = 400 eV, T_e = 35 eV, N = 4 cm⁻³ (and hence β_p ∼ 6, β being the ratio of the kinetic to the magnetic pressure), as estimated from the data, for 1 < k⊥ρ_i < 100, 0.05 < k//ρ_i < 0.15, and B_0 = 10 nT. The real part of the dispersion relation obtained from WHAMP fits the expression given by Howes et al. (2008):

ω = k// V_a sqrt( 1 + (k⊥ρ_i)² / (β_i + 2/(1 + T_e/T_i)) )

As expected, the imaginary part corresponds to damping, as illustrated in Fig. 7; the damping rate (red curve) becomes large as k⊥ρ_i increases. From the dispersion relation we find that ω/k// ∼ V_i (where V_i is the ion thermal velocity) is obtained for k⊥ρ_i ∼ 7, which corresponds to a very small ion Landau damping rate, according to Fig. 7. On the other hand, ω/k// ∼ V_e (V_e is the electron thermal velocity) is obtained for k⊥ρ_i ∼ 90, where the damping rate is very large. This rough comparison shows that ion Landau damping is weak while electron Landau damping is large, which explains why electrons are much more efficiently heated by SSAWs than ions.
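As a cross-check of the two resonance values quoted above (our arithmetic, assuming the dispersion expression as written, with β_i ≈ 6 and T_e/T_i = 35/400):

```latex
\[
\frac{\omega}{k_\parallel V_a}
  = \sqrt{\,1 + \frac{(k_\perp\rho_i)^2}{\beta_i + 2/(1+T_e/T_i)}\,},
\qquad
\beta_i + \frac{2}{1+T_e/T_i} \approx 6 + 1.8 \approx 7.8 .
\]
\[
k_\perp\rho_i = 7 \;\Rightarrow\; \frac{\omega}{k_\parallel} \approx 2.7\,V_a \approx V_i
  \quad (V_i/V_a = \sqrt{\beta_i} \approx 2.4),
\qquad
k_\perp\rho_i = 90 \;\Rightarrow\; \frac{\omega}{k_\parallel} \approx 32\,V_a \approx V_e
  \quad (V_e/V_a \approx 31).
\]
```

Both thresholds are consistent with the k⊥ρ_i ∼ 7 and ∼90 values inferred from the WHAMP results.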
The electron Landau damping tends to increase T_e∥, the parallel electron temperature. Figure 8 gives evidence for an enhanced T_e∥ as intense waves are observed, and it shows the electron distribution measured by Thd in the MpCL. As the instrument is in the fast-survey mode, a full 3-D distribution needs ∼100 s to be built. Yet the selected period does bracket the period in which intense waves are observed by Thd. Figure 8 shows an increase in the parallel temperature as compared to T_e⊥ and to T_e∥ in the MSh, which confirms that electrons are primarily heated by waves in the parallel (and anti-parallel) directions.
Drift wave instabilities
No relationship was found between temperature anisotropies and enhanced wave activity. On the other hand, the wave intensity increased as the spacecraft approached the magnetopause. The electromagnetic energy E_w is plotted in Fig. 6, panel (c). Figure 6, panels (c) and (d), gives evidence for the link between sharp T_e gradients (Fig. 6d) and wave energy bursts (Fig. 6c). As expected, E_w maximizes between 22:01:30 and 22:02:30, while Thc is inside the FTE. Secondary maxima (∼21:59:45 and ∼22:00:00, 22:02:50 and 22:03:45) are observed outside the FTE; they correspond to sharp gradients in T_e, as pointed out by arrows in Fig. 6c and d. Weak gradients in T_e are not associated with peaks in E_w. Thus, the peaks in wave energy observed by Thc correspond to sharp gradients in T_e or to the FTE itself. It is therefore tempting to relate wave generation to drift effects associated with gradients. A wide variety of drift wave instabilities exists (Mikhailovskii, 1992). Driven by observations that give evidence of large magnetic components and large β, we consider electromagnetic instabilities in a high-β plasma. Hasegawa (1971) showed that magnetosonic waves can be destabilized in high-β plasmas in the presence of a density gradient and of a sufficient number of cold electrons. Because magnetosonic waves can have large ω/k_∥, they could be Landau damped by electrons and heat them. Yet a magnetosonic wave with k_⊥ ≫ k_∥ should have δB along B, which does not fit the present observations. In a high-β plasma, instabilities driven by strong temperature gradients can develop; see the discussion by Aydemir et al. (1971) on drift instabilities in a high-β plasma.
As pointed out above, THEMIS observations suggest that the observed waves are Alfvénic, propagate azimuthally, and have k_⊥ ≫ k_∥ and k_⊥ρ_i ≫ 1. This situation was considered by Mikhailovskii (1992, p. 100-102), who described the effect of temperature gradients on the dispersion relation of drift (kinetic) Alfvén waves with short perpendicular wavelengths. Let us analyze the dispersion relation of short scale drift Alfvén waves (SSDAWs) and determine the instability threshold. Mikhailovskii (1992) obtained the real and imaginary parts of the dispersion relation of SSDAWs in a high-β plasma for k_⊥ρ_i ≫ 1 and a small k_∥ρ_i. The key parameters are β; T_i/T_e; L_N, the scale of the density gradient; the ratios between the gradients in T_i and T_e and the density gradient (η_i = ∇T_i/∇N and η_e = ∇T_e/∇N); and α = ∇B/∇N, the ratio between the magnetic field and the density gradients. For simplicity we assume that electrons and ions are isotropic during at least the initial stage of the instability. The conservation of the total pressure is assumed, which provides a relation between these parameters.
For the sake of simplification, Mikhailovskii took η_i = η_e. With this assumption he found that an instability develops for η < −1, that is, for a temperature gradient steeper than the density gradient and in the opposite direction, as expected for the MSh/MSp interface. Given the large ratio between ion and electron Larmor radii, we expect steeper gradients in the electron temperature than in the ion temperature. Thus we have solved numerically the dispersion relation given by Mikhailovskii (1992) without assuming η_i = η_e. We restrict the analysis to parameters fitting the present observations, namely temperature and density gradients in opposite directions (η_i and η_e < 0) and α < 0 (the latter comes from the conservation of the total pressure). The real part of the dispersion relation is given by Mikhailovskii (1992, p. 32, formula 2.67); it has only two roots, and we select the one that corresponds to ω = ω*_pe in the limit where the second drift term is dominant. Similarly, the imaginary part is classically determined by γ = −iD⁽¹⁾/(∂D⁽⁰⁾/∂ω), where D⁽¹⁾ is given by Mikhailovskii (1992, p. 32, formula 2.68). The real and imaginary parts of the frequency are computed numerically for parameters fitted to the data, namely T_i ∼ 400 eV, T_e ∼ 35 eV, β = 6, B_0 = 10 nT, N = 4×10⁶ m⁻³, L_N = 10⁶ m, as functions of k_⊥ρ_i and for 0.05 < k_∥ρ_i < 0.125. We limit ourselves to ∇T_e, ∇T_i, and ∇B inward (earthward), and ∇N outward (hence α < 0). The corresponding results are presented in Fig. 9. For small values of k_∥ (k_∥ρ_i = 0.05) the SSDAW is found to be unstable for −1 < η_i < 0 and η_e < −2.5. For a larger value of k_∥ρ_i there is still an instability but the threshold is more negative; for instance, when k_∥ρ_i = 0.1 the instability requires η_e < (η_e)_min ≈ −3, as indicated by the top RHS panel in Fig. 9.
Frequencies are typically lower than or comparable to the proton gyrofrequency. Thus the electron mode is unstable for an electron temperature gradient steeper than the ion temperature gradient and oriented in a direction opposite to the density gradient. Given the large ratio between ion and electron Larmor radii, such a situation is expected to occur at the MP.
Discussion
The growth rate of drift waves is very sensitive to the scale of the gradients: for η_e > −2.5, waves are damped, whatever k_∥ρ_i, while for η_e < −2.5 large growth rates are obtained, especially for the smaller values of k_∥ρ_i. For example, if one divides the scales of these gradients by a factor of 2, the growth rate will be multiplied by a larger factor (> 2). The corresponding fast growth will lead to a non-linear smoothing of the temperature profile. We therefore expect the drift wave to limit the sharpness of the profile to values on the order of 1000 km for L_N, and 2.5 times less for L_Te (L_Te ∼ 400 km). It turns out that this scale (400 km) is comparable to the ion Larmor radius of the dominant (400 eV) ions, ρ_i ∼ 200 km. These estimates suggest that the MpCL thickness should have a lower bound of ∼400-1000 km. Thus, spatial diffusion will tend to establish an equilibrium MpCL thickness. This does not prevent Landau damping from occurring, however. As long as the linear growth rate of the drift instability (γ_D) exceeds the modulus of the Landau damping (|γ_L|), the wave will grow until non-linear effects stabilize the drift instability. Assuming that quasi-linear theory applies, the spatial diffusion coefficient D_xx is proportional to E². For a steady state, the wave energy balance gives an equilibrium electric field E²_c proportional to (γ_D − |γ_L|)/(k_⊥ρ_i)². Note that γ_D grows faster with k_⊥ρ_i than |γ_L|; thus, for a sufficiently large k_⊥ρ_i, γ_D will always exceed |γ_L|. The most unstable waves have k_⊥ρ_i ≫ 1, which is consistent with the assumptions made to calculate the Landau damping (homogeneity) and with the local approximation for the drift-wave growth rate calculations (λ_⊥ ≪ L_Te and L_Ne). We did not consider the possibility that electrons (and ions) bounce along (reconnected) field lines. This is valid as long as a normal component has not yet developed; otherwise, damping would be related to a bounce effect instead of the classical Landau damping. Similarly, curvature effects should be taken into account in the calculation of the drift wave growth rate.
It is interesting to evaluate the Doppler shift for the parameters used above. For k_⊥ρ_i = 50 and k_∥ρ_i = 0.1, we find from the dispersion relation (1) that ω/Ω_H ≈ 0.72, which corresponds to a maximum Doppler shift

$$\Delta\omega_{\max} = k_\perp V_{d,\max} \approx k_\perp\rho_i\,\Omega_H \approx 50\,\Omega_H,$$

where we have assumed that V_{d,max} ≈ V_i, an assumption consistent with Fig. 5, panel (e). Thus the Doppler shift can be up to 70 times ω in the plasma frame for the parameters given above, which means that the wave instruments are measuring a k spectrum and are therefore probing spatial scales. This is why the "spectrum" extension is so broad.
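Spelled out (our arithmetic), the ratio of the maximum Doppler shift to the plasma-frame frequency is

$$\frac{\Delta\omega_{\max}}{\omega} \approx \frac{50\,\Omega_H}{0.72\,\Omega_H} \approx 70.$$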
Conclusions
Intense electromagnetic waves observed in the magnetopause current layer and adjacent regions have been identified as short transverse scale Alfvén waves with parallel phase velocities much larger than V_a. We have shown that these waves can be driven unstable by a sharp earthward gradient in the electron temperature; we refer to them as short (transverse) scale drift Alfvén waves (SSDAWs). SSDAWs have k_⊥ ≫ k_∥, hence large parallel phase velocities and a small but finite E_∥, and were shown to be Landau damped by electrons. This damping is a likely explanation for the observed preferential heating of electrons in the presence of intense waves. SSDAWs have a dominant magnetic component normal to the MP and propagate azimuthally. They can break the frozen-in condition and initiate turbulent magnetic reconnection. SSDAWs grow at the expense of gradients; they tend to reduce the sharpness of these gradients. Very large growth rates are obtained when these gradients become sufficiently steep. For instance, when ∇T_e, ∇T_i and ∇B are earthward and ∇N sunward (as expected for the MpCL), a large growth rate is found for an electron temperature gradient steeper than the density gradient. When L_N ∼ 10³ km and L_Te ∼ 4×10² km, for example, SSDAWs are strongly unstable, and can therefore produce fast spatial diffusion that levels out the gradients. This might explain why Berchem and Russell (1982) found an average magnetopause current layer thickness of ∼500 km. Note that most of the magnetopause crossings used by Berchem and Russell (1982) in their statistics correspond to large angles (> 30°) between the magnetic fields inside and outside the magnetopause, as is the case in the present paper.
Figure 10, which summarizes our results, is a sketch describing wave generation and the effect of these waves on electron heating.
Fig. 1.
Figure 2 shows the 3 components of the magnetic field measured by the fluxgate magnetometer, magnetic fluctuations from the search-coil, and an electron spectrogram (up to about 20 keV), from Thb, Thd and Tha. Below we describe the data gathered by each spacecraft. During most of the time interval Thb (top 3 panels) was located in the MSp, as indicated by the positive B_z component and by a modest flux of energetic (few keV) electrons. Inside the region bracketed by pink lines …
Fig. 2.
Figure 3 is in the same format as Fig. 2, but for ions. Ion densities and temperatures are plotted for Thb, Tha and Thd, together with the magnetic field and an ion spectrogram. The weak fluxes of energetic ions and the low densities (∼0.1-0.2 cm⁻³) observed on both sides of the bracketed regions of Thb and Thd confirm that they were in the MSp before and after crossing the MpCL. As for electrons, enhanced fluxes of ions are found inside the pink bracketed periods by Thb and Thd. On Tha, however, the ion spectrogram observed during the bracketed period is almost the same as in the adjacent MSh on both sides of the bracketed region. The last panel of Fig. 3 confirms that the ion temperature (T_i) only weakly …
Fig. 3. Composite showing (i) the magnetic field, (ii) the ion density, (iii) the ion temperature, and (iv) an ion spectrogram, for Thb, Thd, and Tha, respectively. As Tha approaches the MpCL, the ion temperature variation remains small (see last panels). The ion temperatures at Thb and Thd in the MpCL (between vertical lines) are almost the same as the ion temperature measured by Tha in the free MSh. Note that the ion density inside the bracketed regions (which correspond to the MpCL) of Thb and Thd is about the same as in the MSh (Tha).
Fig. 4. Sketch showing the TPN (tangent, perpendicular, normal) coordinate system. T and P are tangent, and N is normal, to a paraboloid passing through the spacecraft and through the subsolar point, defined from the Tsyganenko 89 model taken for the Kp of the day of the event.
Fig. 6, panel (d), shows T_e from Tha. Electric field measurements were not yet available on Tha. Yet because Tha was mostly in the MSh and MpCL (dense plasmas), T_e could be measured safely. We observe that T_e ∼ 35 eV in the MSh and T_e ∼ 60 eV in the MpCL. Furthermore, T_e(Thc) ≈ T_e(Tha) ≈ 60 eV in the MpCL, which confirms that magnetosheath electrons are heated in the MpCL. Figure 6, panel (e), displays an electron spectrogram from Thc. It shows how photoelectrons are removed, allowing a precise estimate of T_e in the various regions (see Fig. 6, panel d).
Fig. 5. Similar to Fig. 3, but TPN coordinates are used to display the magnetic field and the velocity. Notice that B_N and V_N are small on both sides of the event, as expected. The ion velocity is maximum on both sides of the FTE. Here red is for T, green for P, and blue for N, the normal component.
Fig. 6. This figure shows mainly data from Thc. From top to bottom: (a) the 3 components of the magnetic field, (b) δE/δB for different frequencies (3 Hz in red, 12 in green, 48 in cyan and 192 Hz in black) and V_a (dotted line), (c) E_w, the electromagnetic energy, (d) T_e for Thc (green) and Tha (magenta). The bottom panel shows a spectrogram from Thc; the spacecraft moves back and forth between the MSp and the MpCL. The thin vertical lines and superposed arrows point out the coincidences between sharp gradients in T_e and wave bursts.
Fig. 8. Iso-contours of the electron phase space density from Thd. At medium energies (green), evidence is given for T_e∥ > T_e⊥. Disregard the central region (red), which is influenced by photoelectrons.
λ_De is the Debye length. For ω*_pe and ω*_pi null (no drift) we recover the dispersion relation of small transverse scale Alfvén waves (SSAWs). The first two terms describe the coupling with the drift. For a large β (here β ∼ 6) the second term is generally larger than the first. Neglecting the last term we get two obvious solutions: ω = ω*_pi and ω = ω*_pe.
Fig. 10. Sketch summarizing our results. From left to right: appropriate gradients generate drift waves with small transverse scales. In turn, these SSDAWs heat electrons.
Table 1.
The normal component of the TPN coordinate system almost coincides with the normal obtained via MVA. This normal is close to being in the XY_gse plane. The T component (of TPN) is close to the M component (of LMN), whereas P (of TPN) is close to L (of LMN). Figure 5 shows magnetic fields and flow velocities in the TPN coordinate system, along with ion densities. In the MSp, B_T is positive but B_P is negative, as expected. Conversely, B_T is negative and B_P positive in the MSh. Note also that B_N fluctuates around zero in the MpCL (Thd). On both sides of the event (in the MSp), the B_N component is almost null for Thb and Thd, which illustrates the value of the TPN coordinates for describing magnetopause crossings. Because Thd spent a relatively long time in the MpCL and crossed the FTE, it is interesting to compare the ion velocity in both cases. We find maxima of the ion drift velocity |V_d| well before and well after the FTE. During these two periods |V_d| reaches 350 km s⁻¹, well above MSh values (∼250 km s⁻¹). Table 2 (top) …
Table 1. Top: directions (in GSE) of the axes of variance obtained from MVA applied to flux-gate data during the Mp crossing. Bottom: directions (in the MVA frame) of the TPN axes (see Fig. 4 and text).
Table 2. Top: directions of the axes of variance obtained from MVA applied to Thd flux-gate data filtered between 0.8 and 2 Hz (the maximum frequency covered during this event). Bottom: same for search-coil data filtered between 0.8 and 4 Hz. Results are given in TPN coordinates (see Fig. 4 and text). L_w, M_w and N_w are the LMN components of the waves.
"Physics",
"Environmental Science"
] |
The simplest wormhole in Rastall and k-essence theories
The geometry of the Ellis-Bronnikov wormhole is implemented in the Rastall and k-essence theories of gravity with a self-interacting scalar field. The form of the scalar field potential is determined in both cases. A stability analysis with respect to spherically symmetric time-dependent perturbations is carried out, and it shows that in k-essence theory the wormhole is unstable, like the original version of this geometry supported by a massless phantom scalar field in general relativity. In Rastall's theory, it turns out that a perturbative approach reveals the same inconsistency that was found previously for black hole solutions: time-dependent perturbations of the static configuration prove to be excluded by the equations of motion, and the wormhole is, in this sense, stable under spherical perturbations.
Introduction
Black holes and wormholes are remarkable predictions of General Relativity (GR). The detection of gravitational waves emitted by mergers of compact objects [1] and the recent image of a supermassive object at the center of the galaxy M87 [2] have brought black holes to the status of astrophysical objects whose existence in nature leaves little doubt. On the other hand, wormholes remain a hypothetical prediction of GR. In its simplest configuration, a wormhole is composed of two asymptotically flat Minkowskian space-times connected by a kind of tunnel. The two flat asymptotic regions are usually considered as different universes connected by a throat. One of the problematic aspects of wormhole configurations is the necessity of negative energy, at least in the vicinity of the throat, for them to exist. Negative energy, which implies violation of the standard energy conditions, brings two main problems: the configuration can be unstable, and the throat may not be traversable, in the sense that tidal forces may be huge, so that possibly only pointlike objects can cross it from one universe to the other, except in some special cases. For a pedagogical description of wormhole properties, see Ref. [3].
The Ellis-Bronnikov (EB) wormhole [4,5] is one of the simplest solutions of GR leading to a structure of two flat asymptotic regions connected by a throat. As matter content, the EB wormhole solution uses a free massless scalar field with negative energy. Such a field is usually called a phantom scalar field. The configuration is, as could be expected, unstable due to the repulsive nature of the scalar field; see, e.g., [6,7] and references therein. Studies of static, spherically symmetric configurations in the presence of scalar fields have a long history; see [8,9] for the first seminal works along these lines. In parallel, there has been much effort to obtain wormhole solutions which, besides being traversable, would be stable and would not require exotic matter. However, it is hard to fulfill these requirements in the context of GR, and even in its extensions, for a simple reason: in order to cross the throat, coming from one region and arriving in the other, the geodesics must first converge and later diverge, and this property requires repulsive properties of matter, which should thus violate at least some of the standard energy conditions. Still, in the framework of GR there is, on the one hand, an example [10] of a stable wormhole supported by some kind of phantom matter, and, on the other hand, there are examples of phantom-free rotating cylindrically symmetric wormholes whose stability properties are yet unknown [11,12].
It is well known that a given metric may be a solution of the field equations of different theories of gravity or even in a single theory with different matter sources. An example is [10] where the EB wormhole in GR is supported by a particular kind of phantom perfect fluid instead of a scalar field as in [4,5]. In the case of different theories, the matter content should naturally depend on the theory under consideration. In this paper we explore the EB wormhole metric in two different theories. The first one is Rastall's theory of gravity [13] that abandons one of the cornerstones of GR, the usual conservation law for matter fields. The second one is the k-essence theory [14] which modifies the matter sector by introducing non-canonical forms for the kinetic term of a scalar field. The k-essence proposal may be connected with some fundamental theories inspired by quantum gravity. In both cases our goal is to verify if it is possible to avoid the usual difficulties in wormhole construction and to obtain stable solutions.
Previously, both Rastall and k-essence theories with a self-interacting scalar field have been studied in attempts to obtain static, spherically symmetric black hole solutions [15,16]. The solutions turned out to be quite exotic, mainly due to the asymptotic properties at infinity. A stability analysis has shown that those k-essence solutions were unstable [17]. However, surprisingly, the perturbation analysis of the Rastall solutions was shown to be inconsistent, and the stability issue remained unclear [18]. It has been speculated that this property of the Rastall solutions is connected with the absence of a Lagrangian formulation of this theory. A curious aspect of these k-essence and Rastall solutions is that they share some duality properties, in spite of quite different structures of the theories themselves [19].
Here we show that the EB wormhole metric can be a solution of both Rastall and k-essence theories under the condition that the potential describing the self-interaction of the scalar field is nonzero. We determine the form of this potential in each case. In the k-essence theory we use a power-law expression of the kinetic term, as in Ref. [16]. We perform a perturbation analysis of these solutions using a gauge-invariant approach, and we find that the k-essence solution is unstable. Unlike that, in Rastall gravity the inconsistency found previously for black hole solutions re-appears here, and no time-dependent spherically symmetric perturbations can exist. Thus the EB metric in this framework may be said to be stable under such perturbations, but the existence of nonperturbative time-dependent solutions cannot be excluded, to say nothing of possible instabilities under less symmetric perturbations.
The paper is organized as follows. In Section 2, some general expressions to be used in the calculations are set out. In Section 3, the EB wormhole solution in GR is reproduced for comparison. In Section 4, the corresponding wormhole solution and the stability issue are presented for Rastall gravity. A similar analysis is carried out in k-essence theory in Section 5. In Section 6 we present our conclusions.
General relations
The goal of the present section is to give some general relations that will be used in the rest of the paper. We assume spherical symmetry but not necessarily staticity. This allows us to easily consider a static configuration, which we will call the background, and linear perturbations around it. Spherical symmetry can be described by a metric of the form

$$ds^2 = e^{2\gamma(x,t)}\,dt^2 - e^{2\alpha(x,t)}\,dx^2 - e^{2\beta(x,t)}\,d\Omega^2, \qquad (1)$$

where dΩ² is the metric on a unit 2-sphere. If the configuration, besides being spherically symmetric, is also static, the metric coefficients α, β and γ depend only on the radial coordinate x. There is freedom to reparametrize the radial coordinate, and a particular choice can be made by postulating a condition connecting the coefficients α, β and γ. For the metric (1), the components of the Ricci tensor and the expression for the d'Alembertian operator acting on a scalar field are given by the standard expressions, in which dots denote ∂/∂t and primes ∂/∂x. In the case of a static space-time, all time derivatives disappear. However, in the study of small time-dependent perturbations around a given static solution, at linear order the terms linear in time derivatives become relevant.
In what follows we will discuss wormhole configurations in GR, in Rastall's theory of gravity in the presence of a scalar field, and in k-essence theories. In all these cases, the gravitational field equations can be written as the Einstein equations

$$G^\nu_\mu \equiv R^\nu_\mu - \tfrac12\delta^\nu_\mu R = -T^\nu_\mu \qquad (4)$$

with appropriate stress-energy tensors T^ν_µ, or alternatively

$$R^\nu_\mu = -\Big(T^\nu_\mu - \tfrac12\delta^\nu_\mu T\Big), \qquad (5)$$

where we use units in which (in the usual notations) c = 8πG = 1. These expressions are also valid in Rastall gravity, under a suitable redefinition of the stress-energy tensor.
3 Wormhole solution in GR with a free scalar field
The (anti-)Fisher solution and the simplest wormhole
Let us begin by recalling a derivation of the Ellis-Bronnikov wormhole solution in the context of GR. The equations in the presence of a free massless scalar field φ are given by

$$G^\nu_\mu = -\epsilon\Big(\phi_{,\mu}\phi^{,\nu} - \tfrac12\delta^\nu_\mu\,\phi_{,\sigma}\phi^{,\sigma}\Big), \qquad \Box\phi = 0, \qquad (6),(7)$$

where the parameter ǫ indicates whether the scalar field is of ordinary (canonical) type (ǫ = 1) or of phantom type (ǫ = −1). The Einstein equations rewritten in the form (5) read

$$R^\nu_\mu = -\epsilon\,\phi_{,\mu}\phi^{,\nu}. \qquad (8)$$

Let us consider the static metric (1) and a scalar field φ = φ(x). The set of equations (6) and (7) is then most conveniently solved using the harmonic coordinate condition α = 2β + γ [5] (under which we denote the radial coordinate by u). Indeed, under this condition, the scalar field equation (7) and two independent equations among (8) (specifically, R⁰₀ = 0 and R⁰₀ + R²₂ = 0) take the form

$$\phi'' = 0, \qquad \gamma'' = 0, \qquad \beta'' + \gamma'' = e^{2(\beta+\gamma)} \qquad (9)$$

(the prime stands here for d/du). All of them are immediately integrated, giving

$$\gamma = -mu, \qquad \phi = Cu, \qquad m, C = \mathrm{const} \qquad (10)$$

(where two other integration constants are suppressed by choosing the scale of t and the zero point of φ), and

$$[(\beta+\gamma)']^2 = e^{2(\beta+\gamma)} + k^2\,\mathrm{sign}\,k, \qquad k = \mathrm{const}, \qquad (11)$$

where one more integration constant is suppressed by choosing the zero point of u. The solution of (11) depends on the sign of k:

$$e^{-(\beta+\gamma)} = \begin{cases} k^{-1}\sinh(ku), & k > 0,\\ u, & k = 0,\\ k^{-1}\sin(ku), & k < 0, \end{cases} \qquad (12)$$

which can be jointly written as e^{−(β+γ)} = s(k, u). Lastly, substituting (10) and (11) into the ¹₁ component of Eqs. (6) (which is an integral of the other components), we obtain a relation between the integration constants:

$$k^2\,\mathrm{sign}\,k = m^2 + \tfrac12\epsilon\,C^2. \qquad (14)$$

The metric takes the form

$$ds^2 = e^{-2mu}\,dt^2 - \frac{e^{2mu}}{s^2(k,u)}\left[\frac{du^2}{s^2(k,u)} + d\Omega^2\right]. \qquad (15)$$

The constants m and C have the meaning of the Schwarzschild mass and the scalar charge, respectively. The coordinate u is defined (without loss of generality) at u > 0, and u = 0 corresponds to spatial infinity (since there r(u) ≡ e^β → ∞), at which the metric is asymptotically flat. Equations (10), (14) and (15) give a joint representation of all static, spherically symmetric solutions to Eqs. (6), (7): Fisher's solution [8] of 1948 (repeatedly rediscovered afterwards), corresponding to ǫ = 1 (hence k > 0), and all three branches of the solution for ǫ = −1 [9], according to the sign of k (sometimes called anti-Fisher solutions). Detailed descriptions of the corresponding geometries can be found, e.g., in [5,6,20,21]. Note that the instability of Fisher's solution under small radial perturbations was shown in [22], and that of anti-Fisher solutions in [6,7].
Our interest here is in wormhole solutions, which form the branch ǫ = −1, k < 0: in this case, we have two flat spatial infinities, at u = 0 and u = π/|k|. The solution looks more transparent after a radial coordinate transformation u = u(x) to the so-called quasiglobal coordinate x, corresponding to the "gauge" α + γ = 0 in terms of the metric (1). The simplest configuration is obtained in the case of zero mass, m = 0:

$$ds^2 = dt^2 - dx^2 - (x^2 + b^2)\,d\Omega^2, \qquad b = |k|, \qquad (19)$$
$$\phi = \sqrt{2}\,\arctan(x/b). \qquad (20)$$

It is this solution that is called the Ellis wormhole [4], or the Ellis-Bronnikov (EB) wormhole, since this and more general scalar-vacuum and scalar-electrovacuum configurations were obtained and discussed in [5]. In terms of the metric (1), we have in (19)

$$\alpha = \gamma = 0, \qquad e^{2\beta} = x^2 + b^2. \qquad (21)$$
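As a quick cross-check of the constants (our arithmetic): setting m = 0 and ǫ = −1 in (14) gives

$$k^2\,\mathrm{sign}\,k = -\tfrac12 C^2 \;\Longrightarrow\; C = \sqrt{2}\,|k| = \sqrt{2}\,b,$$

so for the massless wormhole the scalar charge is fixed by the throat radius alone, which agrees with the normalization of φ in (20), since φ' = √2 b/(x² + b²).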
Ellis wormhole instability in GR
Consider now linear time-dependent spherically symmetric perturbations of the EB wormhole, described by additions δα, δβ, δγ and δφ to the corresponding static (background) quantities, characterized by some smallness parameter ε. Following [6,21,22], we choose the perturbation gauge δβ = 0. The perturbation equations following from (7) and (6) at order O(ε) are written with an arbitrary radial coordinate x in the background static metric, but with a particular choice (δβ = 0) of the perturbation gauge, fixing the reference frame in the perturbed space-time. Equation (26) can be integrated in t, giving a relation (27) that involves an arbitrary function ξ(x); we put ξ(x) ≡ 0 since only time-dependent perturbations are of interest. For the Ellis wormhole solution (19), (20), such that γ = α = 0 and ǫ = −1, the remaining equations simplify. Subtracting equations (29) and (30) and using (27), we obtain the relation (31). Knowing δα and δγ', or equivalently using (31) directly, we can eliminate the metric perturbations from the scalar field equation, which results in the master equation (33). Assuming the time dependence of δφ to be a single spectral mode, δφ ∝ e^{iωt}, and eliminating δφ' by the substitution δφ = e^{−β}y(x), we arrive at the Schrödinger-like equation (36), which coincides with the master equation found in [6] in the special case where α = γ = 0 and no scalar field potential is present. The explicit form of its effective potential follows from our expressions for φ and β in the Ellis wormhole solution. The stability analysis requires imposing boundary conditions. In our case, for x → ±∞, it is reasonable to require δφ → 0, or y = o(|x|). We can note that, asymptotically, Eq. (36) has solutions in terms of Bessel functions. If ω is imaginary (describing an instability), the solutions become combinations of the modified Bessel functions K_ν and I_ν. The function K_ν tends to zero at large |x|; therefore, correct boundary conditions with an imaginary ω are compatible with an instability. On the other hand, the positive nature of the effective potential V_eff(x) in Eq. (36) (the expression in brackets) seems to exclude "energy levels" with ω² < 0. However, this argument cannot be applied directly because of a pole of this effective potential near the wormhole throat x = 0, V_eff ≈ 2/x². This potential can be regularized by an appropriate Darboux transformation, as described in [6,7].
The regularized potential turns out to contain a sufficiently deep well, leading to the existence of an unstable perturbation mode related to an evolving throat radius. The same result was previously obtained by a numerical study [23], which proved that an Ellis wormhole can either collapse to a black hole or inflate, depending on the sign of the initial perturbation.
The gauge condition we are using, δβ = 0, seems to prevent considering perturbations connected with a changing throat radius. But a more thorough investigation shows [6,7] that the unknown δφ in the master equation Eq. (33) is actually a gauge-invariant quantity. Indeed, a perturbation gauge may be described as a small shift of the radial coordinate; it can be directly verified that quantities like β'δφ − φ'δβ do not change under such coordinate shifts and are thus gauge-invariant, as are their products with any background quantities, for example 1/β'. It follows that δφ in our consideration is the specific form of the gauge-invariant quantity ψ = δφ − φ'δβ/β' in the gauge δβ = 0. The other functions involved in (33) are combinations of background quantities; therefore we can safely replace δφ there with ψ and conclude that the whole master equation is gauge-invariant. It can thus be used for considering any perturbations, including those with an evolving throat radius. The gauge invariance issue is presented in more detail in [6,7,21], and its analogue for perturbations in cosmology is discussed in [24]. In our further considerations we obtain gauge-invariant master equations for spherical perturbations in a similar way.
Wormholes in Rastall gravity
In Rastall's theory, if the source of gravity is a scalar field φ with a self-interaction potential V(φ), the field equations can be written as in [15,18], with a constant parameter a of the theory characterizing the departure from the usual conservation law; at its special value a = 1 we return to GR. The effective stress-energy tensor of the scalar field then has the form of a scalar-field source in which the potential enters through the combination W(φ) = (3 − 2a)V(φ), and the modified Einstein equations can be rewritten accordingly. For the static metric (1) and φ = φ(x), the Rastall equations reduce to the set (44)-(47), where W_φ = dW/dφ. These equations become identical to the GR equations with a massless scalar field if we impose a suitable condition on W, taking into account that W_φ = W'/φ'. Then all solutions for α, β, γ and φ' are the same as in GR. But a new element in Rastall gravity is that one needs a nonzero potential in order to create these solutions. For any given special solution, the potential can be determined from Eq. (47) or from any of the equations (44)-(46). In particular, for the Ellis wormhole solution (19), (20), the potential is found explicitly, Eq. (48). Thus we have the simplest Ellis wormhole solution in Rastall gravity for any value of the Rastall parameter a.
Wormhole stability in Rastall gravity
To obtain the linear perturbation equations, we consider Eqs. (40) and (42), using the expressions (2) for the Ricci tensor components, the gauge δβ = 0, and the potential (48) as a function of φ, and then specialize to our simplest case, γ = α = 0 and ǫ = −1. In [18] it has been shown that the stability problem for Rastall theory in static, spherically symmetric configurations is inconsistent unless all perturbations are zero. It turns out that here we come across the same problem, as could be expected in view of those results. Indeed, from Eq. (58) we obtain, as previously in GR, an expression (59) for the metric perturbation; another relation follows from the difference of (55) and (56). On the other hand, from Eqs. (57) and (59) one obtains the relation (61), in which we have separated the terms contained in (59) from the others. The expressions (59) and (61) coincide only if a = 1, that is, when Rastall theory reduces to GR, or if the quantity in brackets in (61) vanishes. In the second case, we can find explicitly the behavior of δφ, Eq. (62). It is easy to see that, according to Eqs. (54) and (59), in the solution (62) the only possibility is φ₁(t) = 0. Hence there is no perturbation at the linear level, the same result as in [18]. Quite similarly to [18], this implies the absence of perturbations in all orders of smallness.
5 Wormholes in k-essence theories
Static wormholes
Let us consider the theory defined by a Lagrangian density of k-essence type, in which the scalar field enters through a function f(X) of the kinetic term X = η φ_{,µ}φ^{,µ} (η = ±1). The scalar field equation has the usual k-essence form, where the subscripts X and φ denote derivatives with respect to the corresponding variables. The Einstein equations have the form (4) with the k-essence stress-energy tensor, and can equivalently be written in the form (5). Let us now consider static, spherically symmetric space-times with the metric (1) and φ = φ(x), and choose the power-law function f(X) = X^n, as announced in the Introduction. To avoid the possibility of complex values of f(X), we must then fix η = −1, so that X = e^{−2α}φ'² > 0. The resulting equations of motion are Eqs. (70)-(73). If we assume that the Ellis wormhole is a solution to Eqs. (70)-(73), we substitute there the expressions (21) and find that the sum and difference of (71) and (72) lead to relations from which it follows that ǫ = −1; this is natural for a wormhole solution, which must violate the Null Energy Condition, so that T⁰₀ − T¹₁ < 0. As a result, we obtain explicit expressions (75) and (76) for φ' and the potential V. Substituting the expression for φ' into (70) to find V_φ, one can verify that the latter coincides with (76), thus confirming the correctness of the solution. One can integrate φ' given by (75); the result involves a hypergeometric function. It is not simple to obtain a closed expression for V(φ) (to do that, we would have to invert the hypergeometric function). However, V(φ) is well defined since φ' > 0 at all x. Also, at some special values of n the hypergeometric function reduces to simpler expressions. Thus the Ellis wormhole solution is consistent with k-essence theory with a potential.
Instability of the k-essence solution
The perturbation equations in the gauge δβ = 0, under the condition α = γ = 0 (but with nonzero perturbations of these functions), can be written as Eqs. (78)-(82). From Eq. (82) one obtains the relation (83). Using this result and combining Eqs. (79) and (80), we obtain Eq. (84). This expression is consistent with (61) if n = a = 1 and ǫ = −1.
Using Eq. (81), the relation (83) and the background equations, we find again Eq. (84). Hence, unlike the Rastall case, the k-essence perturbation analysis is consistent, quite similarly to the results of [17,18].
In addition, we can obtain an expression for δα' by combining (81) with the difference of (79) and (80). This expression coincides with the one obtained by directly differentiating (83), which verifies the correctness of the model and of the calculations. Now, to obtain the master equation for δφ, we use the previous results and insert into them (78), along with relations that follow from the background equations. This yields the master equation (89). In general, the analysis of Eq. (89) is quite complicated. Let us begin with a simple example, a particular case where it is possible to prove the instability explicitly: let us fix n = 1/2. This case has been investigated in [16,18] in a search for black hole solutions and a study of their stability. In fact, there are some exotic types of black holes, but they are unstable. Now we consider the same problem for the Ellis wormhole.
With n = 1/2, Eq. (89) greatly simplifies and is easily integrated; its general solution (91) contains two arbitrary functions K₁(x) and K₂(x), with the K₁ term growing exponentially with time. This evidently demonstrates the instability of the background configuration whenever K₁ ≠ 0. If n = 1, we return to the situation in GR. If n < 1/2, Eq. (89) loses its hyperbolic nature, and the system is hydrodynamically unstable for the same reason as described in [17] and other papers.
Of more interest is the situation n > 1/2, in which Eq. (89) has a wave nature. It is then reasonable to get rid of the term containing δφ' by a suitable substitution δφ → y, after which the equation acquires a wave form; assuming a single spectral mode, y ∝ e^{iωt} (so that ÿ = −ω²y), we arrive at Eq. (94). It is the Schrödinger-like equation usually appearing in stability studies, for which the corresponding boundary-value problem should be solved in order to obtain stability conclusions. For perturbations of wormholes with phantom scalar fields, the effective potentials V_eff(x) contain a singularity at the throat, which can be regularized with proper Darboux transformations [6,7,21] under the condition that V_eff(x) = 2/x² + O(1) near the throat (where x is the "tortoise" coordinate in the wormhole space-time, and x = 0 is the throat). This condition is generally satisfied for wormholes supported by phantom scalar fields with arbitrary potentials [21]. Surprisingly, this condition also holds for the effective potential V_eff(x) in our equation (94) for wormholes in k-essence theory, so that the stability problem can be solved along the lines of [6,7,21]. This requires a separate study, to be performed in the near future.
Conclusion
The Ellis-Bronnikov solution represents the simplest analytical wormhole solution that can be obtained in GR. It consists of two asymptotically flat regions connected by a throat. This wormhole requires a massless, minimally coupled phantom scalar field: this means that all space, not only the throat, is filled with a field having negative energy density. In spite of being a very elegant and simple solution, the EB wormhole has a major drawback: it is unstable under linear perturbations. To find a simple wormhole solution like the EB one that does not require phantom fields and/or is stable is a challenge, even if some extensions of GR are employed.

It is well known that the same metric can be a solution of different gravitational theories. We have exploited this fact in order to investigate the conditions under which the EB wormhole metric can be a solution in the context of two extended gravity theories, Rastall gravity and k-essence. Rastall gravity is a more radical departure from GR, since it modifies the usual expression for the conservation law. In some sense, Rastall gravity may be recast in a structure similar to GR in which the expression for the energy-momentum tensor must be modified. Unlike that, k-essence is essentially a modification of the matter sector, keeping a Lagrangian formulation, by generalizing the usual kinetic expression. Both theories have applications, for example, in cosmology [14,25,26] and black hole physics [15,16].

We have shown that the EB metric can be a static, spherically symmetric solution in both Rastall and k-essence theories. To achieve that, a potential must be added in both cases, implying that, as opposed to GR, a self-interacting scalar field is required. The next step was to investigate the stability of these solutions in the Rastall and k-essence cases. In Rastall gravity we face the same feature that was already found for black hole solutions: the usual perturbative approach leads to inconsistencies, forcing one to set all fluctuations around the background solution to zero. Perhaps this curious property is connected with the absence of a Lagrangian formulation. In k-essence theory, we have shown that the wormhole is unstable with respect to linear perturbations at least for the parameter n in the range n ≤ 1/2. Two remarks must be added concerning these results of the perturbation analysis. First, only the simplest version of a wormhole metric has been investigated. This restriction is motivated by technical reasons, since more complex configurations lead to very cumbersome expressions for the perturbations, even if a master equation can be obtained. Very probably, a numerical investigation may be necessary, which may imply new technical challenges. However, this remark mainly concerns the k-essence case. In Rastall gravity the situation may be more involved, since we can expect that the inconsistency found here in the perturbative approach will remain, so a nonperturbative approach must be implemented. We hope to address these problems in future studies.
Program. The research of K.B. was also funded by the Ministry of Science and Higher Education of the Russian Federation, Project "Fundamental properties of elementary particles and cosmology" N 0723-2020-0041, and by RFBR Project 19-02-00346.
"Physics"
] |
GROMEX: A Scalable and Versatile Fast Multipole Method for Biomolecular Simulation
Introduction
The majority of cellular function is carried out by biological nanomachines made of proteins. Ranging from transporters to enzymes, from motor to signalling proteins, conformational transitions are frequently at the core of protein function, which renders a detailed understanding of the involved dynamics indispensable. Experimentally, atomistic dynamics on submillisecond timescales are notoriously difficult to access, making computer simulations the method of choice. Molecular dynamics (MD) simulations of biomolecular systems are nowadays routinely used to study the mechanisms underlying biological function in atomic detail. Examples reach from membrane channels [28], microtubules [20], and whole ribosomes [4] to subcellular organelles [43]. Recently, the first MD simulation of an entire gene was reported, comprising about a billion atoms [21].
Apart from system size, the scope of such simulations is limited by model accuracy and simulation length. In particular, the accurate treatment of electrostatic interactions is essential to properly describe a biomolecule's functional motions. However, these interactions are numerically challenging for two reasons.
First, their long-range character (the potential drops off slowly, as 1/r with distance r) renders traditional cut-off schemes prone to artifacts, such that grid-based Ewald summation methods were introduced to provide an accurate solution in 3D periodic boundaries. The current standard is the Particle Mesh Ewald (PME) method, which makes use of fast Fourier transforms (FFTs) and scales as N·log N with the number of charges N [11]. However, when parallelizing PME over many compute nodes, the algorithm's communication requirements become more limiting than the scaling with respect to N. Because of the involved FFTs, parallel PME requires multiple all-to-all communication steps per time step, in which the number of messages sent between p processes scales with p² [29]. For the PME algorithm included in the highly efficient, open source MD package GROMACS [42], much effort has been made to reduce the all-to-all bottleneck as much as possible, e.g. by partitioning the parallel computer into long-range and short-range processors, which reduces the number of messages involved in the all-to-all communication [17]. Despite these efforts, however, even for multimillion atom MD systems on modern hardware, performance levels off beyond several thousand cores due to the inherent parallelization limitations of PME [30,42,45].
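To put the quadratic message count into perspective, a back-of-the-envelope count (our arithmetic, not a measured figure): each all-to-all among p processes involves on the order of p(p−1) point-to-point messages,

$$N_\mathrm{msg} \sim p(p-1) \approx p^2, \qquad p = 128 \Rightarrow \sim 1.6\times10^4, \qquad p = 2048 \Rightarrow \sim 4\times10^6,$$

and several such exchanges are needed for every MD time step, i.e. within every millisecond or less of wall-clock time.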
The second challenge is the tight and non-local coupling between the electrostatic potential and the location of charges on the protein, in particular of titratable/protonatable groups that adapt their total charge, and potentially also their charge distribution, to their current electrostatic environment. Hence, all protonation states are closely coupled, depend on pH, and therefore the protonation/deprotonation dynamics needs to be taken into account during the simulation. Whereas most MD simulations employ fixed protonation states for each titratable group, several dynamical schemes have been introduced [8,13,14,23,33,37] that use a protonation coordinate λ to distinguish the protonated from the deprotonated state. Here, we follow and expand the λ-dynamics approach of Brooks et al. [27] and treat λ as an additional degree of freedom in the Hamiltonian with mass m_λ. Each protonatable group is associated with its own λ "particle" that adopts continuous values in the interval [0, 1], where the end points around λ = 0 and λ = 1 correspond to the physical protonated or deprotonated states. A barrier potential with its maximum at λ ≈ 0.5 serves two purposes: (1) it reduces the time spent in unphysical states, and (2) it allows to tune for optimal sampling of the λ coordinate by adjusting its height [8,9]. Current λ-dynamics simulations with GROMACS are, however, limited to small system sizes with a small number n_λ of protonatable groups [7-9], as the existing, PME-based implementation (see www.mpibpc.mpg.de/grubmueller/constpH) needs an extra PME mesh evaluation per λ group and suffers from the PME parallelization problem. While these extra PME evaluations can be overcome for the case where only the charges differ between the states, for the most general case of chemical alterations this is not possible.
Without the PME parallelization limitations, a significantly higher number of compute nodes could be utilized, so that both larger and more realistic biomolecular systems would become accessible. The Fast Multipole Method (FMM) [15] by construction parallelizes much better than PME. Beyond that, the FMM can compute and communicate the additional multipole expansions that are required for the local charge alternatives of the λ groups with far less overhead compared to PME. This makes the communicated volume (extra multipole components) somewhat larger, but no global communication steps are involved as in PME, where the global communication volume grows linearly with n_λ and quadratically with p. We also considered other methods that, like FMM, scale linearly with the number of charges, e.g. multigrid methods. We decided in favor of FMM because it showed better energy conservation and higher performance in a comparison study [2].
We will now introduce λ-dynamics methods and related work to motivate the special requirements they place on the electrostatics solver. Then follows an overview of our FMM-based solver and the design decisions reflecting the specific needs of MD simulation. We will describe several of the algorithmical and hardware-exploiting features of the implementation, such as error control, automatic performance tuning, the lightweight tasking engine, and the CUDA-based GPU implementation.
Chemical Variability and Protonation Dynamics
Classical MD simulations employ a Hamiltonian H that includes potential terms modeling the bonded interactions between pairs of atoms, the bond angle interactions between bonded atoms, and the van der Waals and Coulomb interactions between all pairs of atoms. For conventional, force field based MD simulations, the chemistry of the molecules is fixed during a simulation, because chemical changes are not described by established biomolecular force fields. Exceptions are alchemical transformations [36,38,46,47], where the system is either driven from a state A described by Hamiltonian H_A to a slightly different state B (with H_B) via a λ parameter that increases linearly with time, or where A/B chimeric states are simulated at several fixed λ values between λ = 0 and λ = 1, as e.g. in thermodynamic integration [24]. The A → B transition is described by a combined, λ-dependent Hamiltonian

$$H(\lambda) = (1-\lambda)\,H_A + \lambda\,H_B. \qquad (1)$$

In these simulations, which usually aim at determining the free energy difference between the A and B states, the value of λ is an input parameter. In contrast, with λ-dynamics [16,25,27], the λ parameter is treated as an additional degree of freedom with mass m, whose 1D coordinate λ and velocity λ̇ evolve dynamically during the simulation. Whereas in a normal MD simulation all protonation states are fixed, with λ-dynamics the pH value is fixed instead, and the protonation state of a titratable group changes back and forth during the simulation in response to its local electrostatic environment [23,39]. If two states (or forms) A and B are involved in the chemical transition, the corresponding Hamiltonian expands to

$$H(\lambda) = (1-\lambda)\,H_A + \lambda\,H_B + \tfrac{m}{2}\dot\lambda^2 + V_\mathrm{bias}(\lambda), \qquad (2)$$

with a bias potential V_bias that is calibrated to reflect the (experimentally determined) free energy difference between the A and B states and that optionally controls other properties relating to the A ⇌ B transitions [8]. With the potential energy part V of the Hamiltonian, the force acting on the λ particle is

$$F_\lambda = -\frac{\partial V(\lambda)}{\partial\lambda}. \qquad (3)$$

If coupled to the protonated and deprotonated forms of an amino acid side chain, e.g., λ-dynamics enables dynamic protonation and deprotonation of this side chain in the simulation (see Fig. 1 for an example), accurately reacting to the electrostatic environment of the side chain. More generally, alchemical transformations beyond protons are also possible, as well as transformations involving more than just two forms A and B. Equation 2 shows the Hamiltonian for the simplest case of a single protonatable group with two forms A and B, but we have extended the framework to multiple protonatable groups, using one λ_i parameter for each chemical form [7-9].
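To make Eqs. (2)-(3) concrete, here is a minimal, self-contained C++ sketch of how a single λ particle could be propagated alongside the regular MD step. All names, the quartic double-well form of V_bias, and the demo numbers are illustrative assumptions, not the GROMEX implementation; in the real code the endpoint potentials V_A and V_B come from the energy evaluation of the two chemical forms.

```cpp
#include <cstdio>

// Illustrative bias: double well with minima at 0 and 1 and a barrier of
// height h at lambda = 0.5 (hypothetical functional form for this sketch).
double biasPotential(double lam, double h)  { double u = lam * (1.0 - lam); return 16.0 * h * u * u; }
double biasDerivative(double lam, double h) { double u = lam * (1.0 - lam); return 32.0 * h * u * (1.0 - 2.0 * lam); }

// One kick-drift step for the lambda "particle" of Eq. (2).
// vA, vB: potential energies of the pure A/B forms at the current coordinates.
// From Eq. (3) with V(lambda) = (1 - lambda) vA + lambda vB + V_bias(lambda):
// F_lambda = -(vB - vA) - dV_bias/dlambda.
void lambdaStep(double& lam, double& vel, double vA, double vB,
                double mass, double dt, double barrier)
{
    const double force = -(vB - vA) - biasDerivative(lam, barrier);
    vel += force / mass * dt;  // kick (a thermostat would also act here)
    lam += vel * dt;           // drift
}

int main()
{
    double lam = 0.1, vel = 0.0;
    const double mass = 20.0, dt = 2e-3, barrier = 5.0;  // arbitrary demo units
    for (int step = 0; step < 1000; ++step) {
        // Toy endpoint energies: form B favored by 3 units (stand-in for MD energies).
        lambdaStep(lam, vel, /*vA=*/0.0, /*vB=*/-3.0, mass, dt, barrier);
    }
    std::printf("final lambda = %.3f\n", lam);
}
```

Note that the quartic bias also grows outside [0, 1], so it simultaneously confines λ to the physical interval, which is one of the bias-potential tasks discussed below.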
Variants of λ-Dynamics and the Bias Potential
The key aim of λ-dynamics methods is to allow for dynamic protonation, but there are three areas in which the implementations differ from each other. These are the coordinate system used for λ, the type of the applied bias potential, and how λ is coupled to the alchemical transition. Before we discuss the different choices, let us define two terms used in the context of chemical variability and protonation. We use the term site for a part of a molecule that can interconvert between two or more chemically different states, e.g. the protonated and deprotonated forms of an amino acid. Additionally, we call each of the chemically different states of a site a form. For instance, a protonatable group is a site with at least two forms A and B, a protonated form A and a deprotonated form B.

Fig. 1 Simplified sketch of a protein (right, grey) in solution (blue) with several protonatable sites (ball-and-stick representations), of which a histidine (top left) and a carboxyl group (bottom left) are highlighted. The histidine site contains four forms (two neutral, two charged), whereas the carboxyl group contains three forms (two neutral, one negatively charged). In λ-dynamics, the lambdas control how much of each form contributes to a site. Atom color coding: carbons black, hydrogens/protons white, oxygens red, nitrogens blue
The Coordinate System for λ
Based on the coordinate system in which λ lives (or on the dynamical variables used to express λ), we consider three variants of λ-dynamics, listed in Table 1. The linear variant is conceptually most straightforward, but it definitely needs a bias potential to constrain λ to the interval [0, 1]. The circular coordinate system for λ used in the hypersphere variant automatically constrains the range of λ values to the desired interval; however, one needs to properly correct for the associated circle entropy [8]. The Nexp variant implicitly fulfils the constraints on the N_forms individual lambdas (Eq. 4) for sites that are allowed to transition between N_forms different forms (N_forms = 2 in the case of simple protonation), such that no additional constraint solver for the λ_i is needed.
The Bias Potential
The bias potential V bias (λ) that acts on λ fulfils one or more of the following tasks.
1. If needed, it limits the accessible values of λ to the interval [0, 1], whereas slight fluctuations outside that interval may be desirable (Fig. 2a).
2. It cancels out any unwanted barrier at intermediate λ values (Fig. 2b).
3. It takes care that the resulting λ values cluster around 0 or 1, suppressing values between about 0.2 and 0.8 (Fig. 2c).
4. It regulates the depth and width of the minima at 0 and 1, such that the resulting λ distribution fits the experimental free energy difference between the protonated and deprotonated forms (Fig. 2c + d).
5. It allows to tune for optimal sampling of the λ space by adjusting the barrier height at λ = 0.5 (Fig. 2c).

Taken together, the various contributions to the barrier potential might look like the example given in Fig. 3 for a particular λ in a simulation.
How λ Controls the Transition Between States
The λ parameter can either be coupled to the transition itself between two forms (as in [8,9]), in which case λ = 0 corresponds to form A and λ = 1 to form B. Alternatively, each form gets assigned its own λ_α, with α ∈ {A, B}, as a weight parameter. In the latter case one needs extra constraints on the weights, similar to

$$\sum_\alpha \lambda_\alpha = 1, \qquad (4)$$

such that only one of the physical forms A or B is fully present at a time. For the examples mentioned so far, with just two forms, both approaches are equivalent, and one would rather choose the first one, because it involves only one λ and needs no extra constraints. If, however, a site can adopt more than two chemically different forms, the weight approach can become more convenient, as it allows to treat sites with any number N_forms of forms (using one λ parameter per form, i.e. N_forms of them). Further, it does not require that the number of forms is a power of two (N_forms = 2^{N_λ}), as in the transition approach.
Keeping the System Neutral with Buffer Sites
In periodic boundary conditions, as typically used in MD simulations, the electrostatic energy is only defined for systems with zero net charge. Therefore, if the charge of the MD system changes due to λ-mediated (de)protonation events, system neutrality has to be preserved. With PME, any net charge can be artificially removed by setting the respective Fourier mode's coefficient to zero, so that a value for the electrostatic energy can be computed also in these cases. However, it is merely the energy of a similar system with a neutralizing background charge added. Severe simulation artifacts have been reported as side effects of this approach [19].
As an alternative, a charge buffer can be used that balances the net charge of the simulation system arising from the fluctuating charges of the protonatable sites [9,48]. A reduced number n_buffer of buffer sites, each with a fractional charge |q| ≤ 1e (e.g. via H₂O ⇌ H₃O⁺), was found to be sufficient to neutralize the N_sites protonatable groups of a protein, with n_buffer ≪ N_sites. The total charge of these buffer ions is coupled to the system's net charge with a holonomic constraint [9]. The buffer sites should be placed sufficiently far from each other, such that their direct electrostatic interaction through the shielding solvent is negligible.
A Modern FMM Implementation in C++ Tailored to MD Simulation
High performance computing (HPC) biomolecular simulations differ from other scientific applications by their comparatively small particle numbers and by their extremely high iteration rates. With GROMACS, when approaching the scaling limit, the number of particles per CPU core typically lies in the order of a few hundred, whereas the wall-clock time required for computing one time step lies in the range of a millisecond or less [42]. In MD simulations with λ-dynamics, the additional challenge arises to efficiently calculate the energy and forces from a Hamiltonian similar to Eq. 2, but for N protonatable sites. In addition to the Coulomb forces on the regular charged particles, the electrostatic solver has to compute the forces on the N λ particles as well [8], via F_{λ_i} = −∂V/∂λ_i. Accordingly, with λ-dynamics, for each of the λ_i's, the energies of the pure (i.e., λ_i = 0 and λ_i = 1) states have to be evaluated while keeping all other lambdas at their actual fractional values.
The aforementioned requirements of biomolecular electrostatics have driven several design decisions in our C++ FMM, which is a completely new reimplementation of the Fortran ScaFaCoS FMM [5]. Although several other FMM implementations exist [1,50], none of them is prepared to compute the potential terms needed for biomolecular simulations with λ-dynamics.
Although our FMM is tailored for use with GROMACS, it can also serve as an electrostatics solver for other applications, as it comes as a separate library in a distinct Git repository. On the GROMACS side, we provide the necessary modifications such that FMM can be chosen instead of PME at run time. Apart from that, GROMACS calls our FMM library via an interface that can also be used by other codes. The development of this library follows three principles. First, the building blocks (i.e., data structures) used in the FMM support each level of the hierarchical parallelism available on today's hardware. Second, the library provides different implementations of the involved FMM operators depending on the underlying hardware. Third, the library optionally supports λ-dynamics via an additional interface.
The FMM in a Nutshell
The FMM approximates, and thereby speeds up, the computation of the Coulomb potential V_C = (1/(4πε₀)) Σ_{i<j} q_i q_j / r_ij for a system of N charges. For that purpose, the FMM divides the simulation box into eight smaller boxes (depth d = 1), which are subsequently subdivided into eight smaller boxes again (d = 2), and again (d = 3, 4, ...). The depth d refers to the number of subdivisions. On the lowermost level, i.e., for the smallest boxes (largest d), all interactions between neighboring boxes are calculated directly (these are called the near-field interactions). Interactions with boxes further away are approximated by a multipole expansion of order p (these are called the far-field interactions). A comprehensive description of the FMM algorithm is beyond the scope of this text; however, we briefly describe the basic workflow and the different operators used in the six FMM stages, as these will be referred to in the following sections. For a detailed overview of the FMM, see [22]; for an introduction to our C++ FMM implementation, see [12].
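For reference, a direct implementation evaluates this pairwise sum in O(N^2) operations, which is the cost the FMM avoids. The sketch below omits the 1/(4πε₀) prefactor (Gaussian-like units) and is illustrative only.

#include <cmath>
#include <cstddef>
#include <vector>

struct Particle { double x, y, z, q; };

// Plain O(N^2) evaluation of the pairwise Coulomb energy that the FMM approximates.
double directCoulombEnergy(const std::vector<Particle>& ps)
{
    double e = 0.0;
    for (std::size_t i = 0; i < ps.size(); ++i)
        for (std::size_t j = i + 1; j < ps.size(); ++j) {
            double dx = ps[i].x - ps[j].x;
            double dy = ps[i].y - ps[j].y;
            double dz = ps[i].z - ps[j].z;
            e += ps[i].q * ps[j].q / std::sqrt(dx * dx + dy * dy + dz * dz);
        }
    return e;  // the FMM reduces this quadratic pair sum to linear cost with controlled error
}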
FMM Workflow
The FMM algorithm consists of six different stages, five of which are required for the farfield (FF) and one for the nearfield (NF) (Fig. 4). After setting up the FMM parameters, tree depth (d) and multipole order (p), the following workflow is executed; a schematic code sketch of the stage sequence is given below.
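To make the ordering of these stages concrete, the sketch below shows a schematic driver. All type and function names are placeholders chosen for illustration, not the actual API of the library, and the stage bodies are left empty.

struct Octree { int depth; /* boxes, particles, multipole and local moments ... */ };

void particlesToMultipoles(Octree&) {}       // P2M: expand charges on the lowest level
void multipoleToMultipole(Octree&, int) {}   // M2M: shift multipoles to the parent level
void multipoleToLocal(Octree&, int) {}       // M2L: transform remote multipoles to local moments
void localToLocal(Octree&, int) {}           // L2L: shift local moments to the child level
void localToParticles(Octree&) {}            // L2P: far-field potentials, forces, energies
void particleToParticle(Octree&) {}          // P2P: direct near-field interactions

void fmmStep(Octree& tree)
{
    particlesToMultipoles(tree);
    for (int l = tree.depth; l > 1; --l)
        multipoleToMultipole(tree, l);       // upward pass towards the root
    for (int l = 2; l <= tree.depth; ++l)
        multipoleToLocal(tree, l);           // far-field transformation on every level
    for (int l = 2; l < tree.depth; ++l)
        localToLocal(tree, l);               // downward pass towards the leaves
    localToParticles(tree);
    particleToParticle(tree);
}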
Features of Our FMM Implementation
Our FMM implementation includes special algorithmic features as well as features that help to optimally exploit the underlying hardware. The algorithmic features are:
• Support for open and for 1D, 2D, and 3D periodic boundary conditions for cubic boxes.
• Support for λ-dynamics (Sect. 2).
• Communication-avoiding algorithms for internode communication via MPI (Fig. 9).
• Automatic tuning of the FMM parameters d and p to provide automatic error control and runtime minimization [6], based on a user-provided energy error threshold E (Fig. 10).
• Adjustable tuning to reduce or avoid energy drift (Fig. 11).
Intra-Core Parallelism
A large fraction of today's HPC peak performance stems from the increasing width of SIMD vector units. However, even modern compilers cannot generate fully vectorized code unless the data structures and dependencies are very simple. Generic algorithms like FFTs or basic linear algebra can be accelerated by using third-party libraries and tools specifically tuned and optimized for a multitude of different hardware configurations. Unfortunately, the FMM data structures are not trivially vectorizable and require careful design. Therefore, we developed a performance-portable SIMD layer for non-standard data structures and dependencies in C++.
Using only C++11 language features without third-party libraries allows fine-tuning the abstraction layer for the non-trivial data structures and achieving better utilization. Compile-time loop-unrolling and tunable stacking are used to increase out-of-order execution and instruction-level parallelism. Such optimizations depend heavily on the targeted hardware and must not be part of the algorithmic layer of the code. Therefore, the SIMD layer serves as an abstraction layer that hides such hardware specifics and helps to increase code readability and maintainability. The requested SIMD width (1×, 2×, ..., 16×) and type (float, double) is selected at compile time. The overhead costs and performance results are shown in Fig. 5. The baseline plot (blue) shows the costs of the M2L operation (float) without any vectorization enabled. All other plots show the costs of the M2L operation (float) with 16-fold vectorization (AVX-512). Since the runtime of the M2L operation is limited by the loads of the M2L operator, we try to amortize these costs by utilizing multiple (2× ... 6×) SIMDized multipole coefficient matrices together with a single operator via unrolling (stacking). As can be seen in Fig. 5, when unrolling the multipole coefficient matrices 2× (red), we reach the minimal computation time and the expected 16-fold speedup. Additional unroll factors (3× ... 6×) will not improve performance due to register spilling. To reach optimal performance, it is required to reuse (cache) the M2L operator for around 300 (or more) of these steps.
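The snippet below illustrates the general idea of compile-time unrolling with C++11-only features. It is a generic toy helper, not the actual SIMD layer or its interface.

#include <cstddef>

// Recursive template that expands the loop body N times at compile time.
template <std::size_t I, std::size_t N>
struct Unroll {
    template <typename F>
    static void apply(F&& f) {
        f(I);                        // body for iteration I, index known at compile time
        Unroll<I + 1, N>::apply(f);  // continue with the next iteration
    }
};

template <std::size_t N>
struct Unroll<N, N> {
    template <typename F>
    static void apply(F&&) {}        // end of recursion
};

// Usage: accumulate eight partial products without a runtime loop counter,
// giving the compiler full freedom to schedule the independent multiplies.
double dot8(const double* a, const double* b)
{
    double sum = 0.0;
    Unroll<0, 8>::apply([&](std::size_t i) { sum += a[i] * b[i]; });
    return sum;
}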
Intra-Node and Inter-Node Parallelism
To overcome the scaling bottlenecks of a pragma-based loop-level parallelization (see Fig. 4), our FMM employs a lightweight tasking framework purely based on C++. Being independent of third-party tasking libraries and compiler extensions allows resources to be utilized better, since algorithm-specific behavior and data flow can be taken into account. Two distinct design features are a type-driven priority scheduler and a static dataflow dispatcher. The scheduler is capable of prioritizing tasks depending on their type at compile time. Hence, it is possible to prioritize vertical operations (like M2M and L2L) in the tree. This reduces the runtime twofold. First, it reduces the scheduling overhead at runtime by avoiding costly virtual function calls. Second, since the execution of the critical path is prioritized, the scheduler ensures that a sufficient amount of independent parallelism gets generated. The dataflow dispatcher defines the dependencies between tasks (a data flow graph), also at compile time (see Fig. 6). Together with loadbalancing and workstealing strategies, even a non-trivial FMM data flow can be executed. For compute-bound problems, this design shows virtually no overhead. However, in MD we are interested in smaller particle systems with only a few hundred particles per compute node. Hence, we have to take even more hardware constraints into account. Performance penalties due to the memory hierarchy (NUMA) and the cost of accessing memory in a shared fashion via locks introduce additional overhead. Therefore, we also extended our tasking framework with NUMA-aware memory allocations, workstealing, and scalable Mellor-Crummey Scott (MCS) locks [35] to enhance the parallel scalability over many threads, as shown in Fig. 7.
Fig. 6 (caption): The data flow of the FMM still consists of six stages; however, synchronization now happens on a fine-grained level and not only after each full stage is completed. This allows overlapping parts that exhibit poor parallelization with parts that show a high degree of parallelism. The dependencies of such a data flow graph can be evaluated and even prioritized at compile time.
Fig. 8 (caption): Initial internode FMM benchmark for 1,000,000 particles, multipole order p = 3 and tree depth d = 5, with one MPI rank per compute node of the JURECA cluster.
In the future, we will extend our tasking framework so that tasks can also be offloaded to local accelerators like GPUs, if available on the node.
For node-to-node communication via MPI, the aforementioned concepts do not work well (see Fig. 8), since loadbalancing or workstealing would create large overheads due to the large number of small messages. To avoid or reduce the efficiency losses caused by the latency that comes with each message, we employ a communication-avoiding parallelization scheme [10]. Nodes do not communicate with each other separately, but form groups in order to reduce the total number of messages. At the same time, the message size can be increased. Depending on the total number of nodes involved, the group size parameter can be tuned for performance (see Fig. 9).
Fig. 9 (caption): Left: Intranode FMM parallelization, efficiency of different threading implementations. Near-field interaction of 114,537 particles in double precision on up to 28 cores on a single node with two 14-core Intel Xeon E5-2695 v3 CPUs. Single-precision computation as well as other threading schemes (std::thread, boost::thread, OpenMP) showed similarly excellent scaling behavior. The plot has been normalized to the maximum turbo-mode frequency, which varies with the number of active cores (3.3-2.8 GHz for scalar operation, 3.0-2.6 GHz for SIMD operation). Right: Internode parallelization, strong-scaling efficiency of a communication-avoiding, replication-based workload distribution scheme [10]. Near-field interaction of 114,537 particles on up to 65,536 Blue Gene/Q cores using replication factor c. In the initial replication phase, only the c nodes within a group communicate. Afterwards, communication is restricted to all pairs of p/c groups. For 65,536 cores, i.e., only 1-2 particles per core initially, a maximum parallel efficiency of 84% (22 ms runtime) is reached for c = 64, and the maximal replication factor c = 256 yields an efficiency of 73%, whereas a classical particle distribution (c = 1) would require a runtime exceeding 1 min due to communication latency.
Algorithmic Interface
Choosing the optimal FMM parameters in terms of accuracy and performance is difficult, if not impossible, to do manually, as they also depend on the charge distribution itself. A naive choice of tree depth d and multipole order p might either lead to wasting FLOPs or to results that are not accurate enough. Therefore, d and p are automatically tuned depending on the underlying hardware and on a provided energy tolerance E (absolute or relative acceptable error in the Coulombic energy). The corresponding parameter set {d, p} is computed such that the accuracy is met at minimal computational cost (Fig. 10) [6].
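Schematically, the parameter selection can be thought of as a search over candidate {d, p} pairs that keeps only those meeting the requested tolerance and returns the cheapest one. The error and cost models in the sketch below are crude placeholders, not the models from [6].

#include <cmath>
#include <limits>

struct FmmParams { int d; int p; };

// Placeholder error model: truncation error shrinks roughly exponentially with p
// and grows with the number of boxes (illustrative only).
double estimatedEnergyError(int d, int p) {
    return std::pow(0.5, p) * std::pow(2.0, d);
}

// Placeholder cost model: near field ~ 1/8^d, far field ~ 8^d * p^4 (illustrative only).
double estimatedRuntime(int d, int p) {
    double boxes = std::pow(8.0, d);
    return 1.0e9 / boxes + boxes * std::pow(static_cast<double>(p), 4.0);
}

FmmParams tuneParameters(double deltaE, int dMax = 6, int pMax = 30)
{
    FmmParams best = {1, pMax};
    double bestTime = std::numeric_limits<double>::max();
    for (int d = 1; d <= dMax; ++d)
        for (int p = 1; p <= pMax; ++p) {
            if (estimatedEnergyError(d, p) > deltaE) continue;  // accuracy requirement not met
            double t = estimatedRuntime(d, p);
            if (t < bestTime) { bestTime = t; best.d = d; best.p = p; }
        }
    return best;  // parameter set meeting the tolerance at minimal modelled runtime
}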
Besides tuning the accuracy to achieve a certain acceptable error in the Coulombic energy for each time step, the FMM can additionally be tuned to reduce the energy drift over time. Whereas multipole orders of about ten yield a drift of the total energy over time comparable to that of a typical simulation with PME, the drift with FMM can be reduced to much lower levels if desired (Fig. 11).
Fig. 10 (caption): Depending on a maximum relative or absolute energy tolerance E, the automatic runtime minimization provides the optimal set of FMM input parameters {d, p}. A lower requested error in energy results in an increased multipole order p (magenta). Since the computational complexity of the farfield operators M2M, M2L, and L2L scales with p^3 or even p^4 (depending on the implementation used), the tree depth d is reduced accordingly to achieve a minimal runtime (green). With fractional depths [49], as used here, the runtime can be optimized even further than with integer depths.
CUDA Implementation of the FMM for GPUs
A growing number of HPC clusters incorporate accelerators like GPUs to deliver a large part of the FLOPS. GROMACS, too, is evolving towards offloading more and more tasks to the GPU, for reasons of both performance and cost-efficiency [31,32].
For system sizes that are typical for biomolecular simulations, FMM performance critically depends on the M2L and P2P operators. For multipole orders of about eight and larger, their execution times dominate the overall FMM runtime (Fig. 12).
Hence, these operators need to be parallelized very efficiently on the GPU. At the same time, all remaining operators need to be implemented on the GPU as well to avoid memory traffic between device (GPU) and host (CPU) that would otherwise become necessary. This traffic would introduce a substantial overhead, as a complete MD time step may take just a few milliseconds to execute.
Our encapsulated GPU FMM implementation takes particle positions and charges as input and returns the electrostatic forces on the particles as output. Memory transfers between host and device are performed only at these two points in the calculation step. The particle positions and charges are split into different CUDA streams that allow for asynchronous memory transfer to the host. The memory transfer is overlapped with the computation of the spatial affiliation of the octree boxes.
Fig. 11 (caption): Observed drift of the total energy for different electrostatics settings. Left: evolution of the total energy for PME with order 4, mesh distance 0.113 nm, and ewald-rtol set to 10^-5 (black line), as well as for FMM with different multipole orders p at depth d = 3 (see legend in the right panel). The test system is a double-precision simulation at T ≈ 300 K in periodic boundaries of 40 Na+ and 40 Cl- ions solvated in a (4.07 nm)^3 box containing extended simple point charge (SPC/E) water molecules [3], comprising 6740 atoms altogether. Time step of 2 fs, cutoffs at 0.9 nm, pair list updated every ten steps. Right: black squares show the drift with PME for different Verlet buffer sizes for the water/ions system using 4 × 4 cluster pair lists [41]. For comparison, the green line shows the same for pure SPC/E water (without ions), taken from Ref. [34]. The influence of different multipole orders p on the drift is shown for a fixed buffer size of 8.3 Å. The GROMACS default Verlet buffer settings yield a drift of ≈ 8 × 10^-5 kJ/mol/ps per atom for these MD systems, corresponding to the first data point on the left (black square/green circle).
In contrast to the CPU FMM, which utilizes O(p^3) far-field operators (M2M, M2L, L2L), the GPU version is based on the O(p^4) operator variant. The O(p^3) operators require fewer multiplications to calculate the result, but they introduce additional, highly irregular data structures to rotate the moments. Since the performance of the GPU FMM at small multipole orders is not limited by the number of floating point operations (Fig. 12) but rather by scattered memory access patterns, we use the O(p^4) operators for the GPU implementation.
We will now outline our CUDA implementation of the operations needed in the various stages of the FMM (Figs. 4, 5, and 6), which starts by building the multipoles on the lowest level with the P2M operator.
Fig. 12 (caption): Colored bars show detailed timings for the various parts of a single FMM step on a GTX 1080Ti GPU for a 103,000-particle system using depth d = 3. For comparison, the total execution time for d = 3 on an RTX 2080 GPU is shown as a brown line, whereas the black line shows timings for d = 4 on a GTX 1080Ti GPU. CUDA parallelization is used in each FMM stage, leaving the CPU mostly idle.
P2M: Particle to Multipole
The P2M operation is described in detail elsewhere [44]. The large number of registers required and the recursive nature of this stage limit the efficiency of the GPU parallelization. The operation is, however, executed independently for each particle, and the requested multipole expansion is obtained by summing atomically into common expansion points. The result is precomputed locally using shared memory or intra-warp communication to reduce the global memory traffic when storing the multipole moments. The multipole moments ω, local moments μ, and the far-field operators A, M, and C are stored as triangular-shaped matrices, with M ∈ K^(2p×2p), where p is the multipole order.
To map the triangular matrices efficiently to contiguous memory, their elements are stored as 1D arrays of complex values, and the (l, m) indices are calculated on the fly when accessing the data. For optimal performance, different stages of the FMM require different memory access patterns. Therefore, the data structures are stored redundantly in a Structure of Arrays (SoA) and an Array of Structures (AoS) version.
The P2M operator writes to AoS, whereas the far-field operators use SoA. A copy kernel, negligible in runtime, does the copying from one structure to the other.
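A minimal sketch of such an on-the-fly (l, m) to linear index mapping for a triangular coefficient matrix is shown below. The actual layout and class design in the library may differ.

#include <complex>
#include <cstddef>
#include <vector>

// Row-wise storage of the lower triangle with 0 <= m <= l <= p:
// the first l rows contribute l*(l+1)/2 elements, m is the offset within row l.
inline std::size_t triIndex(std::size_t l, std::size_t m) {
    return l * (l + 1) / 2 + m;
}

struct Multipole {
    std::size_t order;                        // multipole order p
    std::vector<std::complex<double>> coeff;  // (p+1)(p+2)/2 coefficients in one 1D array

    explicit Multipole(std::size_t p)
        : order(p), coeff((p + 1) * (p + 2) / 2) {}

    std::complex<double>& operator()(std::size_t l, std::size_t m) {
        return coeff[triIndex(l, m)];         // index computed on the fly, no 2D storage
    }
};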
M2M: Multipole to Multipole
The M2M operation, which shifts the multipole expansions of the child boxes to their parents, is executed on all boxes within the tree, except for the root node, which has no parent box. The complexity of this operation is O(p^4); one M2M operation has the form ω_lm(a') = Σ_{j,k} A_{jk}^{lm}(a − a') · ω_jk(a), where A is the M2M operator and a and a' are the two different expansion center vectors. The operation performs O(p^2) dot products between ω and a part of the operator A. These operations need to be executed for all boxes in the octree, excluding the box on level 0, i.e., the root node. The kernels are executed level-wise on each depth, synchronizing between the levels. Each computation of the target ω_lm for a distinct (l, m) pair is performed in a different CUDA block of the kernel, with threads within a block accessing different boxes sharing the same operator. The operator can be efficiently preloaded into CUDA shared memory and is accessed for different ω_lm residing in different octree boxes. Each single reduction step is performed sequentially by each thread. This has the advantage that the partial products are stored locally in registers, reducing the global memory traffic since only O(p^2) elements are written to global memory. It also reduces the atomic accesses, since the results from eight distinct multipoles are written into one common target multipole.
M2L: Multipole to Local
The M2L operator works similarly to M2M, but it requires many more transformations, as each source ω is transformed to 189 target μ boxes. The group of boxes to which a particular ω is transformed is called the interaction set. It contains all child boxes of the direct neighbor boxes of the parent of the source ω. The M2L operation has the form μ_lm(r) = Σ_{j,k} M_{jk}^{lm}(r − a) · ω_jk(a), where r and a are the different expansion centers. The operation differs only slightly from M2M in the access pattern but has the same O(p^4) complexity. As the M2L runtime is crucial for the overall FMM performance, we have implemented several parallelization schemes. Which scheme is the fastest depends on the tree depth and the multipole order. The most efficient implementation is based on presorted lists containing interaction box pointers. The lists are presorted so that the symmetry of the operator M can be exploited. In M, the orthogonal operator elements differ only by their sign. Harnessing this minimizes the number of multiplications and global memory accesses and allows the number of spawned CUDA blocks to be reduced from 189 to 54. However, it introduces additional overhead in the logic to change signs and to compute additional target μ box positions, so the performance speedup is smaller than 189/54. The kernel is spawned similarly to the M2M kernel, performing one dot product per CUDA block and preloading the operator M into shared memory. The sign changing is done with the help of an additional bitset provided for each operator. Three different parallelization approaches are compared in Fig. 13. Considering the hardware performance bottlenecks of this stage, the limitations differ strongly between the particular implementations. The naive M2L kernel is clearly bandwidth-limited and achieves nearly 500 GB/s for multipole orders larger than ten. This is higher than the theoretical memory throughput of the tested GPU, which is 480 GB/s, due to caching effects. The cache utilization is nearly 100%, achieving 3500 GB/s. However, the performance of this kernel can be enhanced further by moving towards a more compute-bound regime. With the dynamical approach, the performance is mainly limited by the cost of spawning additional kernels. This can be clearly seen from the flat curve shape for multipole orders smaller than 13 in Fig. 13. The hardware utilization of the symmetrical kernel is depicted in Fig. 14. The performance of this kernel depends on the multipole order p, since p^2 is a CUDA grid-size parameter [40]. Values p < 7 lead to underutilization of the underlying hardware; however, they are mostly not of practical relevance. For larger values, the performance is operations-bound, achieving about 80% of the possible compute utilization.
Fig. 13 (caption): Comparison of three different parallelization schemes (naive, dynamic, symmetric) for the M2L operator, which is the most compute-intensive part of the FMM algorithm. The naive implementation (red) directly maps the operator loops to CUDA blocks; it beats the other schemes only for orders p < 2. Dynamic parallelization (blue) is a CUDA-specific approach that dynamically spawns thread groups from the kernels. The symmetric scheme (magenta) represents the FMM tree via presorted interaction lists and also exploits the symmetry of the M2L operator.
Fig. 14 (caption): Hardware utilization of the symmetrical M2L kernel of the GPU FMM.
L2L: Local to Local
The L2L operation is executed for each box in the octree, shifting the local moments from the root of the tree down to the leaves, opposite in direction to M2M. Although the implementation is nearly identical, it achieves slightly better performance than M2M because the number of atomic memory accesses is reduced due to the tree traversal direction. For the L2L operator, the result is written into eight target boxes, whereas M2M gathers information from eight source boxes into one.
L2P: Local to Particles
The calculation of the potentials at the particle positions x_i requires evaluating Φ(x_i) = Σ_{l,m} μ_lm · ω̃_lm^i, where ω̃_lm^i is a chargeless multipole moment of the particle at position x_i and N_box is the number of particles in the box. The complexity of each operation is O(p^2). This stage is similar to P2M, since the chargeless moments need to be evaluated for each particle using the same routine with a charge of q = 1. The performance is limited by the register requirements, but, as in the P2M stage, it runs concurrently for each particle and is overlapped with the asynchronous memory transfer from device to host.
P2P: Particle to Particle
The FMM computes direct Coulomb interactions only for particles in the leaves of the octree and between particles in boxes that are direct neighbors. These interactions can be computed for each pair of atoms directly by starting one thread for each target particle in the box that sequentially loops over all source particles. An alternative way that better fits the GPU hardware is to compute these interactions for pairs of clusters of M and N particles, with M × N = 32, the CUDA warp size, as laid out in [41]. The forces acting on the sources and on the targets are calculated simultaneously. The interactions are computed in parallel between all needed box-box pairs in the octree. The resulting speedup of computing all atomic interactions between pairs of clusters, instead of using simpler but longer loops over pairs of atoms, is shown in Fig. 15. The P2P kernels are clearly compute-bound. The exact performance evaluation of the kernel can be found in [41].
GPU FMM with λ-Dynamics Support
In addition to the regular Coulomb interactions, with λ-dynamics, extra energy terms for all forms of all λ sites need to be evaluated such that the forces on the λ particles can be derived. The resulting additional operations exhibit a very unstructured pattern that varies depending on the distribution of the particles associated with the λ sites. Such a pattern can be described by multiple sparse FMM octrees that additionally interact with each other. The sparsity that emerges from the relatively small size of the λ sites necessitates a different parallelization than for a standard FMM. To support λ-dynamics efficiently, all stages of the algorithm were adapted. In particular, the most compute-intensive shifting (M2M, L2L) and transformation (M2L) operations need a different parallelization than that of the normal FMM to run efficiently for a sparse octree. Figure 16 shows the runtime of the CUDA-parallelized λ-FMM as a function of the system size, whereas Fig. 17 shows the overhead associated with λ-dynamics. The overhead that emerges from the addition of λ sites to the simulation system scales linearly with the number of additional sites, with a factor of about 10^-3 per site. This shows that the FMM tree structure fits the λ-dynamics requirements particularly well, providing the flexibility to compute the highly unstructured, additional particle-particle interactions. Note that our λ-FMM kernels still have the potential for more optimizations (at the moment they achieve only about 60% of the efficiency of the regular FMM kernels), such that for the final optimized implementation we expect the costs for the additional sites to be even smaller than what is shown in Fig. 17.
Fig. 16 (caption): Absolute runtime of the λ-FMM CUDA implementation. For this example, we use one λ site per 4000 particles, as estimated from the hen egg white lysozyme model system for constant-pH simulation. Each form of a λ site contains ten particles. The tests were run on a GTX 1080Ti GPU.
Fig. 17 (caption): As Fig. 16, but now showing the relative costs of adding λ-dynamics functionality to the regular GPU FMM.
Conclusions and Outlook
All-atom, explicit-solvent biomolecular simulations with λ-dynamics are still limited to comparatively small simulation systems (<100,000 particles) and/or short timescales [7,9,18]. To ultimately allow for a realistic (e.g., constant-pH) treatment of large biomolecular systems on long timescales, we are developing an efficient FMM that computes the long-range Coulomb interactions, including local charge alternatives for a large number of sites, with just a small overhead compared to the case without λ-dynamics.
Our FMM library is a modern, C++11-based implementation tailored towards the specific requirements of biomolecular simulation, which are a comparatively small number of particles per compute core and a very short wall-clock time per iteration. The presented implementation offers near-optimal performance on various SIMD architectures, provides an efficient CUDA version for GPUs, and makes use of fractional tree depths for optimal performance. In addition to supporting chemical variability via λ-dynamics, it has several more unique features, such as rigorous error control and, based upon that, automatic performance optimization at runtime. The energy drift resulting from errors in the FMM calculation can be reduced to virtually zero with a newly developed scheme that adapts the multipole expansion order p locally and on the fly in response to the requested maximum energy error. With fixed p, using multipole orders 10-14 yields drifts that are smaller than those observed for typical simulations with PME. We expect the FMM to be useful also for normal MD simulations, as a drop-in PME replacement for extreme scaling scenarios where PME reaches its scaling limit.
The GPU version of our FMM will implicitly use the same parallelization framework as the CPU version. In fact, GPUs will be treated as one of several resources a node offers (in addition to CPUs), to which tasks can be scheduled. As our GPU implementation is not a monolithic module, it can be used to calculate individual parts of the FMM, like the near-field contribution or the M2L operations of one of the local boxes only, in a fine-grained manner. How much work is offloaded to local GPUs will depend on the node specifications and on how much GPU and CPU processing power is available.
The λ-dynamics module allows choosing between three different variants of λ-dynamics. The dynamics and equilibrium distributions of the lambdas can be flexibly tuned by a barrier potential, whereas buffer sites ensure system neutrality in periodic boundary conditions. Compared to a regular FMM calculation without local charge alternatives, the GPU-FMM with λ-dynamics is only a factor of two slower, even for a large (500,000 atom) simulation system with more than 100 protonatable sites.
Although some infrastructure that is needed for out-of-the-box constant-pH simulations in GROMACS still has to be implemented, with the λ-dynamics and FMM modules the most important building blocks are in place and performing well. The next steps will be to carry out realistic tests with the new λ-dynamics implementation and to compare thoroughly to known results from older studies, before advancing to the larger, more complex simulation systems that have now become feasible.
Fig. 5 (caption): M2L operation benchmark for vectorized data structures with multipole order p = 10 on an Intel Xeon Phi 7250F CPU for the float type with 16× SIMD (AVX-512). The benchmark shows the performance of different SIMD/unrolling combinations; e.g., the red curve (SIMD stacking 16 × 2) utilizes 16-fold vectorization together with twofold unrolling. For a sufficient number (around 300) of vectorized operations, a 16-fold improvement is measured for the re-designed data structures.
Fig. 15 (caption): Speedup of calculating the P2P direct interactions in chunks of M × N = 32 (i.e., for cluster pairs of size M and N) compared to computing them for all atomic pairs (i.e., for "clusters" of size M = N = 1). All needed FMM box-box interactions are taken into account.
Table 1 (caption): Three variants of λ-dynamics are considered.
FMM workflow (cf. Fig. 4):
1. P2M: Expand particles into spherical multipole moments ω_lm up to order p on the lowest level for each box in the FMM tree. Multipole moments for particles in the same box can be summed into a multipole expansion representing the whole box.
2. M2M: Translate the multipole expansion of each box to its parent box inside the tree. Again, multipole expansions with the same box center can be summed up. The translation up the tree is repeated until the root node is reached.
3. M2L: Transform remote multipole moments ω_lm into local moments μ_lm for each box on every level. Only a limited number of interactions for each box on each level is performed to achieve linear scaling.
4. L2L: Translate local moments μ_lm starting from the root node down towards the leaf nodes. Local moments within the same box are summed.
5. L2P: After reaching the leaf nodes, the farfield contributions to the potentials Φ_FF, forces F_FF, and energy E_FF are computed.
6. P2P: Interactions between particles within each box and its direct neighbors are computed directly, resulting in the nearfield contributions to the potentials Φ_NF, forces F_NF, and energy E_NF.
Fig. 4 (caption): The classical (sequential) FMM workflow consists of six stages. Only the nearfield (P2P) can be computed completely independently of all other stages. Each farfield stage (P2M, M2M, etc.) depends on the former stage and exhibits different amounts of parallelism. Especially the distribution of multipole and local moments in the tree provides limited parallelism in classical loop-based parallelization schemes. | 10,215.8 | 2020-01-01T00:00:00.000 | [
"Computer Science",
"Chemistry",
"Biology"
] |
Sustainable Mortar Made with Local Clay Bricks and Glass Waste Exposed to Elevated Temperatures
The present study assessed the replacement of fine aggregate in mortar with sustainable local materials, namely clay bricks and glass, and included 168 specimens (cubes and prisms). Seven mixtures were cast for this work: one control mix (R1) with 100% natural sand, whereas mixes R2 to R5 replaced 10% and 20% of the natural sand with waste clay bricks and waste glass, separately and respectively. Mix R6 included a 20% replacement of sand with combined waste materials (10% waste clay bricks with 10% waste glass). Mix R7 used the same replacement percentage as mix R6 but with 1% polypropylene fibers added by volume. The samples were placed in an electrical oven for one hour at 200, 400, and 600 °C, then cooled to room temperature to be tested and compared with samples at the normal temperature of 24 °C. The tests adopted included flow, density, weight loss, compressive strength, flexural strength, and water absorption. The results at the different temperatures were discussed and several findings identified. The flexural strength at 400 °C showed an improvement of 56% for 20% waste clay brick and 69% for 10% waste glass, and all combination mixes showed higher strength than the control.
Introduction
One of the challenges facing humanity in general, and researchers in particular, is pollution due to the accumulation of construction demolition waste and by-product materials from manufacturing. The use of such recyclable materials in construction is considered eco-friendly and appropriate for reducing serious problems such as non-biodegradability and the accumulation of waste, and for protecting natural resources from depletion. At the same time, it opens the field for researchers to explore the desirable characteristics of the resulting products as well as their limitations [1][2][3][4][5].
Waste produced by the construction and building industry, including clay bricks and demolished concrete, has increased internationally. The United States produced about 170 million tons of such materials in 2003, while Canada produced 15.5 million tons in 2004 and 17.3 million tons in 2008 [6]. Numerous studies have investigated the possibility of using waste clay bricks in mortars and have obtained promising results [7][8]. Bektas et al. (2009) [9] experimentally studied the impact of recycled clay bricks replacing part of the natural sand on the properties of mortar. The results indicated that the mortar flowability was reduced as the replacement percentage increased, but there was only a limited effect on the compressive strength. The characteristics and microstructure of mortars made with waste red clay bricks were studied in 2013 [10]. That research showed that the compressive strength of the studied samples could be increased from 30 MPa to 50 MPa by optimizing the type and concentration of the alkali activator, while reusing this notable waste material at the same time. The study of Dang (2019) [11] considered different parameters, such as different replacement ratios, the particle size of the recycled clay brick aggregate (RCBA), and additional water content, to understand their effect on the flowability, flexural strength, and compressive strength of mortars. The study extended to analyzing the microscopic morphology of the RCBA and showed a decline in its physical properties due to the porous structure and the mechanical crushing treatment. There has been interest over the years in exploring the performance of recycled aggregate under high temperatures. Khalaf et al. (2004) [12] investigated the properties of concrete cast with crushed clay bricks and subjected to high temperatures. The experimental program involved crushed clay bricks as coarse aggregate (of two different strengths) and natural granite aggregate. The results pointed out that the performance of clay brick concrete was rather good, or perhaps even better than that of granite concrete. Concrete made with clay brick industry waste exhibited satisfactory mechanical properties at normal temperature and better fire resistance than concrete made with river aggregates [13].
Glass waste is categorized as a non-biodegradable residual solid; an alternative solution to avoid its accumulation in landfills is to use it in construction, at least to mitigate this serious environmental problem [2]. In the United States in 2015, glass generation was 11.5 million tons, of which approximately 7 million tons of waste glass went to landfills in that year alone [14]. Numerous studies have assessed the reuse of waste glass powder (WGP) as a partial substitute for cement. Tan and Du (2013) investigated the suitability of mortar containing waste glass (as a partial and full replacement of fine aggregate) for construction purposes. The flowability was reduced, but there was an increase in the mechanical properties [15]. Özkan et al. (2008) [16] investigated many features of mortars containing waste glass, one of them being the resistance to high temperature. Positive results were observed for all samples subjected to the different high temperatures, particularly at 300 °C. Generally, concretes made with a 10% replacement of fine waste glass aggregate, coarse waste glass aggregate, or fine and coarse glass aggregate had better fresh- and hardened-state characteristics at normal (ambient) and high temperatures than those with normal aggregate [17]. Rashad (2014) reported that the fresh and dry density of concrete decreased with increasing content of waste glass aggregate. The water absorption was reduced relative to the mixture without glass due to the negligible water absorption of the recycled glass. The residual compressive strength increased when 10% of glass was incorporated as fine aggregate in concrete exposed to 700 °C [14]. Guo et al. (2015) studied the mechanical properties of SCAM (self-compacting architectural mortar) containing RG (recycled glass) at room temperature and when exposed to high temperatures. Increasing the RG content harms the compressive strength at room temperature, while beneficial effects on the mechanical properties were observed for SCAM at 800 °C [18].
The behavior of mortar containing recycled materials such as clay bricks or glass at high temperature is not yet well understood, especially for mortar containing local waste materials (Iraqi clay brick and glass), so more research is needed in this field. This work focuses on utilizing sustainable local materials (clay bricks and glass) as a partial replacement for sand of up to 20%. One of the most important difficulties faced in this research was grinding the recycled materials (clay bricks and waste glass) to make them as fine as natural sand, especially the glass, which grinds easily because it is a brittle material. The specimens were exposed to temperatures of 24, 200, 400, and 600 °C. Flow, compressive strength, flexural strength, density, weight loss, and water absorption were tested to evaluate the effect of elevated temperature on the mortar containing recycled materials.
Materials and Methods
Different materials were employed in the present study for the mix designs used to produce mortar. These materials included Portland cement, natural sand, recycled (sustainable) materials, namely clay bricks and glass, as a partial replacement for the natural sand, a superplasticizer, and polypropylene fibers. Type II Portland cement (CEM II/A-L 42.5R), supplied from a local source called Karasta, was used in this work and complied with Iraqi Specification No. 5 [19], as presented in Table 1. Natural fine aggregate in the saturated surface dry (SSD) condition with a maximum size of 1.18 mm was used. Master Glenium 54 (superplasticizer) from BASF, conforming to ASTM C494 Type F [20], was used to improve the workability of the fresh mortar.
Waste clay brick fragments resulting from building construction were used. The clay brick, from the local factory (ASO), complies with Iraqi Specification No. 25. Waste glass was collected from broken bottles; the recycled glass goes through several stages before use: washing, cleaning, and then drying. The recycled materials (clay bricks and glass) were manually crushed and sieved to a size range of 1.18 to 0.15 mm for use as a sand replacement. The waste materials were used in the saturated surface dry (SSD) condition; for that purpose, the recycled clay bricks and glass were soaked in water for 12 hours and the surface was then dried. The sieve analysis of the natural fine aggregate (sand) and the recycled materials is presented in Table 2 and Figure 1, according to Iraqi Standard No. 45 [21]. The cement mortar was reinforced using polypropylene fibers from Sika, whose properties are presented in Table 3. The cement/sand ratio was 1:2.75 and the water/cement ratio 0.35, while the dosage of Glenium 54 (superplasticizer) was 2.5% by cement weight; all these proportions were fixed for all mixtures. Seven mixtures were cast in this work: the control mix (R1) with 100% natural sand, whereas mixes R2 to R5 had 10% and 20% of the natural sand replaced by the waste (recycled) materials, clay bricks and glass, separately and respectively. Mix R6 involved a 20% replacement of sand with combined waste materials (10% waste clay bricks with 10% waste glass). Mix R7 included the same replacement percentage as mix R6 but with 1% polypropylene fibers added by volume. Table 4 gives all details of the mixes. A standard mechanical mixer complying with ASTM C305 [22] was used for mixing the mortar. The mixing procedure included the following steps: all materials were placed in a bowl for dry mixing at a low speed of 140 rpm for two minutes; the water and Master Glenium 54 superplasticizer (previously mixed together) were added to the dry materials and mixed for one minute at low speed; the mixer was then stopped for a rest period of one minute; afterward, all ingredients were mixed at a high speed of 285 rpm for four minutes. For the mixture containing fibers, mixing continued for four and a half minutes more (half a minute for gradually adding the fibers and four minutes of mixing at 285 rpm). After mixing, the fresh mixture was used for flow testing; thereafter, the mortar was cast into cubes and prisms (standard molds) and vibrated. Within 24 hours the molds were removed, and the specimens were stored in water tanks for 28 days at room temperature. The samples were then placed in the electrical oven for one hour at 200, 400, and 600 °C, cooled to room temperature, and used for the different tests, with the outcomes compared with samples at the normal temperature of 24 °C. Figure 2 shows the research methodology. Several trial mixes were discarded, especially at high replacement percentages of recycled material (30% clay bricks and above), due to the difficulty of mixing, where the slump was zero and the specimens had a high percentage of voids that negatively affected the mechanical properties. Figure 3 illustrates the specimens in air and in the oven, the slump test, and the flexural strength test. The flow test of the mortar was conducted according to ASTM C1437 [23]. Figure 4 and Table 5 present the effect of the waste materials on the flow of the mortar. The control mix showed the highest flow relative to the other mixes.
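As a quick worked example of the stated proportions (assuming, as the fixed ratios imply, that the sand replacement is by weight; the article does not state this explicitly), the constituent quantities per 1 kg of cement for mix R6 are:
fine aggregate total = 2.75 × 1 kg = 2.75 kg
natural sand (80%) = 0.80 × 2.75 kg = 2.20 kg
waste clay brick (10%) = 0.10 × 2.75 kg = 0.275 kg
waste glass (10%) = 0.10 × 2.75 kg = 0.275 kg
water (w/c = 0.35) = 0.35 × 1 kg = 0.35 kg
Glenium 54 (2.5% of cement weight) = 0.025 × 1 kg = 0.025 kg
The polypropylene fibers in mix R7 are dosed at 1% by volume, so their mass depends on the batch volume and fiber density and is not included in this per-weight tally.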
The flow was reduced by 7% for the 10% waste clay brick replacement and by 27% for 20%, compared with the control mix. It is apparent that the flow of fresh mortar containing waste clay brick tended to decrease with increasing replacement percentage. This flow behavior depends on the raw material and manufacturing process, in addition to the irregular shape of the waste aggregate, which is often angular and of high water absorption, even with pre-soaking or the addition of extra water [14]. Utilizing waste glass as a partial replacement of sand decreased the flow by 10% and 18% for 10% and 20% replacement, respectively. The tendency of the flow to decrease as the waste glass ratio increases can be attributed to the poor geometry of the waste glass, which reduces the fluidity of the fresh mortar, and to the reduction of the fineness modulus [24]. The mix containing both waste clay bricks and waste glass showed a reduction of 13%. Polypropylene fiber has a negative impact on the flow: the largest reduction in flow was given by mix R7 (41%) relative to the control mix (R1), and the flow of R7 was 32% lower than that of the same mix without fiber (R6). This behavior may be because the plasticity of the fresh mortar is restricted by the volume added by the fibers, reducing the flow [25]. Table 5 and Figure 5 present the fresh density of the mortar; the density of the reference mix (R1) was 2293 kg/m³. Utilizing waste clay bricks lowered the density by 8% for 10% replacement and 3.2% for 20% replacement compared with the control mix. The density of the waste clay bricks is lower than that of natural sand; on the other hand, with increasing replacement percentage, the density of the fresh mortar becomes comparable to the control mix due to pore filling by the physical effect and pozzolanic activity [26]. The fresh density of the mortar decreased by 5.2% and 5.8% for 10% and 20% replacement of natural sand with waste glass, respectively, which may be attributed to the lower specific gravity of the waste glass, approximately 14.8% lower than that of sand [24]. The density of the combination mixes, containing more than one type of recycled material, depends on the materials used. Mix R6 showed a density 4.9% lower than the control mix (R1). Adding polypropylene fiber very slightly increased the mortar density relative to the same mix without fibers: the micro-voids reduce the orientation and volume of the hydrated calcium products by constraining the growth of the crystalline calcium hydrates, thanks to the fibers, so the interfacial transition zone was denser than in the same mix without polypropylene fibers [27].
Weight Loss
The weight loss is the difference between the weight of the sample before and after exposure to heating. Table 5 and Figure 6 illustrate the effect of the different temperatures (200, 400, and 600 °C) on the weight of the mortar. Weight loss, in general, increased with rising temperature because the mechanical properties change due to the evaporation of water from the C-S-H [27]. The results indicated that the largest reduction occurred at 600 °C for all mixes compared with the other temperatures. At 200 °C, the 20% waste clay brick replacement showed the minimum weight loss of 4.4% compared with the same replacement percentage at 24 °C, while with increasing temperature the weight loss increased for both 10% and 20% replacement. The weight loss begins with the loss of free water in the cement paste and of absorbed water in the recycled aggregate; with increasing temperature, the chemically bound water present in the calcium silicate hydrate resulting from the pozzolanic reaction of the waste clay brick aggregate starts to dehydrate [28]. The 20% waste glass replacement showed the lowest weight loss, 5.4, 7.5, and 6.6% lower than the control mixes at 200, 400, and 600 °C, respectively, while the 10% waste glass replacement showed weight losses 6.6, 7.7, and 8.8% lower at 200, 400, and 600 °C, respectively, compared with the normal mixes at the same temperatures. The weight loss in the mixtures containing waste glass aggregate is due to the loss of free water in the capillary pores, while at temperatures above 400 °C it is due to the dehydration of the calcium silicate hydrate [29]. The combination mix of waste clay bricks with waste glass replacing natural sand showed weight reductions of 6.2, 8.9, and 10% at 200, 400, and 600 °C, respectively. Adding polypropylene fibers to the mixture does not affect the weight loss, as there is no relationship between the water evaporating from the mortar and the fibers [30]. Table 5 and Figures 7 to 10 present the compressive strength results. At room temperature (24 °C), replacing natural sand with 20% recycled clay bricks, and the mixture with 10% waste clay bricks plus 10% waste glass, improved the compressive strength by 2% and 3.5%, respectively, compared with the control mix, while the reduction was 15.4% and 25% for 10% and 20% replacement of natural sand with waste glass, respectively. The decrease in compressive strength may be caused by the lack of adhesive strength between the cement paste and the aggregate surface (waste glass), because the smooth surface of the glass weakens the bonding. Adding polypropylene fiber (R7) lowered the compressive strength by 5.4% compared with the same mixture without fibers (R6). This reduction in compressive strength is attributed to a lack of bonding between the cement paste and the fibers, which could help propagate micro-cracks [31]. At 200 °C, the reference mixture (R1) showed a 13% reduction in compressive strength, which could be due to dehydration of the mortar through the loss of free water, adversely affecting the mechanical properties. For waste clay bricks replacing fine aggregate, the compressive strength was reduced by 11.2% and 9.3% for 10% and 20%, respectively. Most of the damage in the mortar at 200 °C results from the elimination of free water, in addition to the beginning of the dehydration of the calcium silicate hydrate [32]. Utilizing 20% waste glass to replace sand showed a 5% improvement in compressive strength at 200 °C.
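Returning to the weight-loss values reported at the beginning of this subsection: they are assumed here to follow the usual definition relative to the unheated specimen weight (the article gives only a verbal definition), i.e.
weight loss (%) = (w_24 − w_T) / w_24 × 100,
where w_24 is the specimen weight at 24 °C and w_T is the weight after one hour at the target temperature T and subsequent cooling to room temperature.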
The compressive strength increased by 12% and 7% for the mixture with 10% waste clay bricks plus 10% waste glass, without and with 1% polypropylene fibers (R6 and R7), respectively. The mixtures containing waste glass showed higher strength than the normal mix, especially at high temperature; this behavior may be because the free water retained in the mix dissipates much more readily at high temperatures in mixes containing glass, in addition to the inability of glass to absorb water [17]. With the added polypropylene fiber at 200 °C, the surface temperature of the sample is higher than the interior, which limits the melting of the fiber (160 °C); the compressive strength was reduced by 4.7% compared with the same mix without fibers (R6), possibly because of voids due to the incorporated fiber and weaker bonding between the cement paste and the polypropylene fiber. With the temperature increased to 400 °C, the compressive strength of the control mix was reduced by 31% compared with the same mixture at the normal temperature of 24 °C. This loss of strength is attributed to the dehydration of C-S-H and ettringite, which generally starts at 300 °C. Utilizing waste clay bricks as a partial sand replacement at 10% and 20% enhanced the compressive strength by 8% and 18%, respectively. Regarding the type of waste aggregate used, the clay brick aggregate shows an improvement in compressive strength that can be attributed to several reasons: first, at temperatures above 400 °C, the nature of the clay bricks makes them roughly twice as strong and stiff; second, at high temperatures, the unhydrated cement paste may undergo additional hydration [17]. The compressive strength increased by 22% over the control mixture for 10% waste glass, but with the replacement increased to 20% the compressive strength was reduced. The mixture with waste glass and waste clay bricks (R6) replacing sand had a compressive strength 30.3% higher than the control mix at 400 °C. At 400 °C, the polypropylene fiber melted, leaving behind voids that adversely affected the compressive strength. With the temperature increased to 600 °C, the compressive strength of the reference mixture (R1) showed a large reduction of more than 53% relative to the same mixture at room temperature. Micro-cracks formed in the mortar as the oven temperature increased from 400 °C to 600 °C due to the dehydroxylation of calcium hydroxide, and the compressive strength was reduced because of weak bonding between the cement paste (hydration products) and the fine aggregate. The higher compressive strength of the waste clay brick mixtures depends on the higher content used; for 20% replacement, the compressive strength was higher than that of the normal mix at the same temperature (600 °C) [32]. Waste clay bricks are considered stable and among the best aggregates for resisting fire and high temperatures [12]. Using waste glass as a partial replacement of fine aggregate increased the compressive strength by 40% and 63.6% for 10% and 20%, respectively, compared with the control mix at 600 °C. The combination mixture of 10% waste clay bricks with 10% waste glass showed a compressive strength 60.4% higher than the control mix. The compressive strength of the mixture with polypropylene fiber was reduced by 33.8% compared with the same mixture without fibers, due to the voids in the mixture resulting from the melted fibers. As shown in Table 5 and Figures 11 to 14, all flexural strength values at ambient temperature were lower than that of the reference mix (R1).
The results demonstrate that utilizing waste clay brick aggregate lowered the flexural strength by 11% for 10% and 8.8% for 20% replacement, which may be due to micro-cracks in the interfacial transition zone and a few internal voids in the waste clay brick aggregate. This could result from the method adopted for preparing the waste aggregate, in addition to the effect of the particle size distribution of the waste material and characteristics of the clay bricks such as strength and roughness [3]. As the waste glass replacement increased from 10% to 20%, the flexural strength was lowered by 9.4% and 16.6%, respectively, which may be due to a change in the properties of the interfacial transition zone relative to the normal mix, in addition to the difference in density between the normal aggregate (sand) and the waste glass. The flexural strength was lowered by 7.8% for the mixture with waste clay bricks and waste glass (R6) compared with the control mix (R1). Adding polypropylene fibers improved the flexural strength by 6% for mix R7 compared with the same mix without fibers, as the initial cracks in the mortar are restrained by the polypropylene fibers [33]. At 200 °C, the flexural strength showed a reduction of 10% compared with the same mix at normal temperature. In general, the flexural strength was more sensitive than the compressive strength because of micro-cracks in the samples relative to the same specimens at ambient temperature. The results for the waste clay brick aggregate were comparable with the normal mix at the same temperature, due to the good bonding behavior of the waste aggregate with the paste at 200 °C. Incorporating waste glass to replace fine aggregate harmed the flexural strength by 25.1% and 24.4% for 10% and 20% replacement, respectively. This reduction is attributed to the evaporation of pore water (25-105 °C) and the dehydration of ettringite and calcium silicate hydrate at 105-300 °C [34]. The combination mix with waste clay bricks and waste glass (R6) had a flexural strength 3.5% lower than the reference mixture at 200 °C, as shown in Figure 9. Polypropylene fibers start to melt at 160 °C; the surface of the specimens reached 200 °C but the interior remained below the melting point, so the flexural strength was reduced due to weaker bonding between the paste and the fiber. When the oven temperature reached 400 °C, the flexural strength was lowered by 66.8% compared with the control mixture at ambient temperature. Calcium hydroxide undergoes dehydroxylation above 300 °C, so micro-cracking occurred and adversely affected the flexural strength. The waste clay brick aggregate showed higher flexural strength than the normal mix at 400 °C, by 11.4% and 56% for 10% and 20% replacement, respectively, which may be due to the good bonding between the waste aggregate and the paste, even though some waste particles have a smooth surface [35]. The calcium hydroxide Ca(OH)2 in the cement paste decomposes in the range of 375 to 480 °C, which leads to a sudden drop in strength; with natural sand replaced by waste glass, a clear enhancement in flexural strength appeared, by 69% for 10% and 50.6% for 20% replacement, in agreement with previous research [36]. The combination mixture (10% waste clay bricks with 10% waste glass) showed a 44% enhancement in flexural strength relative to the reference mix at 400 °C.
The flexural strength at 400 °C was also reduced for the mixture containing polypropylene fibers, because the melted and calcined fibers leave behind voids that cannot stop crack propagation [30]. At 600 °C the flexural strength collapsed, with the reduction in strength reaching 92.2% relative to the reference mix at normal temperature. The dehydration of the calcium silicate gel decreased the flexural strength at this stage. With the temperature increased to 600 °C, the waste clay bricks lost a high percentage of their flexural strength due to micro-cracks resulting from the dehydration of the calcium silicate hydrate gels, the reduction depending on the clay brick material itself. On the contrary, the mixtures with waste glass were dominant at high temperature (600 °C). The flexural strength was 1.12 MPa for 10% and 2.08 MPa for 20% replacement, compared with 0.39 MPa at 600 °C for the control mix. This is probably because the temperature is near the glass transition temperature of soda-lime glass (560 °C); the transformation of the waste aggregate from a glassy to a rubbery state within the mortar could improve the micro-cracks and pore structure of the mixture [37]. The flexural strength was thus improved by replacing natural sand with the waste aggregates (waste clay brick with waste glass); the waste aggregate was more responsive to high temperature than the normal aggregate, as shown in Figure 14. The flexural strength was lowered by 31.4% for the mixture with 1% polypropylene fibers (R7) compared with the same mixture without fiber (R6) at 600 °C, because the voids created by the melting polypropylene fibers adversely affect the flexural strength. Generally, the water absorption increased with increasing temperature. The control mix R1 showed an increase in water absorption of 30% at 400 °C and 49.2% at 600 °C, which could be due to the high porosity that develops as the hydration products decompose with increasing temperature [38]. The waste clay brick aggregate clearly showed a reduction in water absorption compared with the control mix at the different temperatures, by 27, 27.1, 43.2, and 37.7% for 10% replacement at 24, 200, 400, and 600 °C, respectively, and by 23.7, 17.5, 31.4, and 30.6% for 20% replacement at 24, 200, 400, and 600 °C, respectively, as shown in Figure 15. This reduction in water absorption is consistent with the good quality of the materials used for producing refractory clay bricks [39]. Waste glass aggregate has physical properties similar to sand but presents a lower water absorption [24,40]; for that reason, the results showed a low rate of water absorption compared with the control mix. For the 10% waste glass replacement, the water absorption was reduced by 26.3, 28.4, 33.7, and 30.7% at 24, 200, 400, and 600 °C, respectively, while for 20% replacement the results were comparable to the control mix at the different temperatures. The mixture containing waste clay bricks and waste glass showed an increase in water absorption at normal temperature compared with the control mix, and the water absorption increased clearly at 400 °C and 600 °C. With increasing temperature, the mixture containing polypropylene fibers showed increased water absorption, which may be because of crack propagation due to the melting fibers [4,30].
Conclusions
Based on the study undertaken here, the following conclusions can be drawn. The fresh mortar showed a decrease in flow with an increasing percentage of replacement of the normal fine aggregate by recycled materials; the waste glass mixes showed a higher flow than the waste clay brick mixes at 20% replacement, while the addition of fibers reduced the flow. The density of the mortar decreased when waste materials were used; the 10% waste glass mixture showed a higher density than the waste clay bricks. With increasing oven temperature, the weight loss increased; in general, the waste glass showed a higher resistance to weight loss than the waste clay bricks at the different percentages and temperatures. The compressive strength decreased with increasing temperature for all mixtures; the 20% waste clay brick mix showed higher strength than the other mixtures at 24 °C, while the waste glass and the combination mixtures at 200 °C and higher temperatures presented improved compressive strength compared with the control mix at the same temperatures. At ambient temperature, the flexural strength of the reference mixture was higher than that of the other mixtures, and the flexural strength of the mix with 1% polypropylene fibers was comparable to the control; at 200 °C the mix with waste clay brick aggregate was comparable to the control at the same temperature; at 400 °C the flexural strength improved by 56% for 20% waste clay brick and 69% for 10% waste glass, and all combination mixes showed higher strength than the control; at 600 °C the waste glass mixes showed higher flexural strength than the control, and all combinations presented higher strength as well. With increasing temperature, the water absorption clearly increased; using waste materials as a partial replacement of natural sand lowered the water absorption relative to the reference mix at the different temperatures, and adding 1% polypropylene fibers lowered the water absorption compared with the same mixtures without fibers at the different temperatures.
Data Availability Statement
The data presented in this study are available in the article. | 6,790 | 2021-01-01T00:00:00.000 | [
"Environmental Science",
"Materials Science",
"Engineering"
] |
Emulating and characterizing strong turbulence conditions for space-to-ground optical links: the PICOLO bench
Abstract. We present a method to develop a turbulence emulation bench for low-Earth-orbit satellite-to-ground optical communication links under strong turbulence. We provide guidelines to characterize the spatio-temporal dynamics of the phase disturbances and scintillation produced by the emulator on a laser beam. We implemented such an emulator for a link at 10 deg elevation and discuss here its design method and characterization. The characterization results are compared to numerical simulations, leading to the validation of a digital twin of the emulator. The emulator will serve as a testing platform for adaptive optics systems and other free-space optical communication components under strong turbulence conditions.
Introduction
Optical communication between satellites and optical ground stations will deliver high-speed data transfer between space and Earth. 1,2 In the case of low-Earth-orbit (LEO) satellites, higher throughput would enable direct-to-Earth (DTE) links, which download high-resolution sensor data directly from the satellite hosting the payload to the ground. DTE links serve as an alternative to geostationary satellite relay architectures, which may not be available for small constellations.
High data-rate optical communications rely on single-mode fiber coupling for different techniques, such as optical amplification and coherent detection. Unfortunately, the atmospheric turbulence present in the few tens of kilometers close to the ground impacts the quality of the optical beam and hinders fiber coupling, 3 leading to a reduction of the possible data rate due to signal fading. 4 Atmospheric turbulence causes phase distortions on the wavefront of the transmitted laser beam. The use of adaptive optics (AO) provides phase correction of the wavefront and thus improves coupling. However, unlike in traditional astronomical applications, LEO-to-ground links may face strong turbulence conditions that lead to amplitude distortions. The amplitude distortions result in spatio-temporal variations in optical intensity known as scintillation. 5 Scintillation causes variations in the intensity of the received optical signal 6 and impairments in the wavefront measurements. 7 Two factors lead to this strong turbulence regime. First, LEO links need to work at low elevation angles (desirably down to 10 deg) in order to extend the link duration. At these angles, the propagation distance in the atmosphere is very long (>50 km), leading to a longer propagation path and a larger volume of turbulence crossed, increasing the total turbulence distortion strength. Second, day-time operation faces stronger turbulence due to temperature gradients caused by solar radiation.
In order to test AO systems and optical communication components under strong turbulence, ONERA has developed the PICOLO bench: 8 a turbulence emulator for an LEO-to-ground optical link. This laboratory emulator will complement the current efforts in numerical simulations 9 and experimental tests. 10 The development of satellite-to-ground links requires extensive testing of the different subsystems. Testing with satellites [11][12][13] is limited by link duration and the lack of LEO satellites equipped with on-board optical terminals. Different ground-to-ground experiments [14][15][16] have been designed to replicate the conditions of those links, but it is difficult to achieve realistic and reproducible turbulence conditions. Turbulence emulators [17][18][19] provide well-known, reproducible, and available optical turbulence conditions that enable the testing of AO systems and other optical communication components.
The originality of this work is threefold: first, this emulator is one of the few systems representative of low-elevation LEO-to-ground links, including the emulation of phase but also scintillation effects; second, we provide a thorough laboratory characterization of these effects and compare them to numerical simulations; and third, we have produced a digital twin of the laboratory emulator. The digital twin complements the experimental validation of new instrumental concepts in a two-stage process: first, it is used to assess the expected performances and, later, to interpret the experimental results.
This paper reports the method used for the development and characterization of the PICOLO turbulence emulator. In the first part, the main phases of the bench definition are detailed, in particular the different steps followed to optimize and validate the sampling of the turbulence volume by only three layers and the down-scaling of the experimental setup to be representative of phase and scintillation effects. The second part is devoted to the characterization of the phase and scintillation of the turbulence produced by the bench and a comparison to numerical simulations. Finally, we discuss prospective upgrades of the bench to meet future needs.
Optical Turbulence Emulation
Most approaches and experiments for turbulence emulation have been developed for either astronomical cases (i.e., weak turbulence) or horizontal links 20 (i.e., constant turbulence profile), but those do not cover the specific needs of the propagation channel of an LEO-to-ground link at low elevation: a multi-layer profile, strong turbulence, and high layer translation speeds. Having multiple layers of turbulence is necessary since satellite-to-ground links are slanted links that go across different atmospheric altitudes. The strong turbulence is a result either of day-light conditions, where turbulence is stronger due to solar radiation, or of low elevations, where the path across the turbulence is longer and therefore turbulence is stronger. Per-layer translation speeds are higher for LEO links since there are two different components: natural wind and apparent wind. Natural wind corresponds to the local atmospheric wind that causes a shifting of the different turbulence layers; its vertical profile depends on the dynamics of the atmosphere. Apparent wind corresponds to the apparent translation of the different turbulence layers due to the relative movement of the line-of-sight with respect to the layers during satellite tracking. The apparent wind speed depends on the angular tracking velocity of the telescope and the distance to a given layer. These speeds are typically an order of magnitude higher at the upper atmospheric layers than the typical wind speeds in astronomical applications; therefore, they represent an additional emulation challenge.
Different methods are available for generating laser beam distortions similar to the ones caused by atmospheric turbulence in a laboratory setup. We discuss briefly the methods available (see Ref. 21 for a more detailed overview) and motivate our choice for the emulator design. We distinguish three methods for turbulence production: passive screens, active screens, and turbulence chambers.
Passive phase screens 22,23 use an optical surface with a fixed phase mask structure providing the optical path difference (OPD) corresponding to atmospheric turbulence distortions. This mask can be in transmission or reflection. The OPD is generated by controlling the thickness of the surface in a homogeneous optical index medium or by using a controlled inhomogeneous optical index. The phase screens are often mounted on a rotating stage, which produces a shift that approximates the linear displacement of atmospheric turbulence layers due to wind. Since the phase mask pattern used is specified by the user, the phase screens produce deterministic turbulence and can implement profiles by employing one layer per screen. [25][26][27] The OPD is engraved on the optical surface by different methods, such as index matching by Lexitek, 23,28 acrylic paint spraying, 24,29 or the cumulative etching by SILIOS Technologies 30 used by several AO systems coordinated by the European Southern Observatory (ESO). This approach ensures an accurate control of the phase distortion but loses versatility, as the distortions are not reconfigurable except by changing the phase screens.
Active phase screens use optical devices, such as spatial light modulators (SLM) 31 and liquid crystal (LC) 32,33 devices, as phase modulators that act as reconfigurable phase screens. These are able to produce a linear phase displacement (unlike the rotating static phase screens, which only approximate it) and can also combine it with boiling turbulence, as their phase mask is fully programmable. Like passive phase screens, active phase screens also create deterministic turbulence and can represent multi-layer profiles. Unfortunately, these solutions pose problems of polarization conservation and chromatism and, more importantly, are limited in reconfiguration rate. Deformable mirrors 34 can provide a higher reconfiguration rate and achromaticity, but present problems related to cost, spatial frequencies, and opto-mechanical design for multi-layer arrangements.
Turbulent fluid chambers create turbulence by mixing two fluids at different temperatures. For example, hot air turbulence chambers 35 use two streams of hot and cold air. The turbulence strength can be modified by changing the temperature difference between the two streams, whereas the wind speed can be regulated by the speed of the fans that inject them into the chamber. Nevertheless, this method is not able to produce the turbulence strength profile characteristic of slanted links, nor is it able to produce the wind speed profile derived from the satellite-tracking apparent wind. In addition, the turbulence produced cannot be reproduced in a deterministic manner.
A completely different alternative 36 to turbulence emulation uses a variable optical attenuator to create fade profiles in an optical fiber signal derived from numerical simulations (incorporating both turbulence disturbances and the AO system). This is a cost-effective solution to emulate a communication channel under the effect of turbulence, with or without AO correction. However, this method requires knowledge of the fiber coupling statistics with AO correction, 37 which are not available for complex AO operating conditions, such as strong turbulence or feeder-link precompensation. 38
Although most turbulence emulators described in the literature target astronomical applications, some were developed for optical communication. For instance, the bench in Ref. 18 is dedicated to the validation of an AO for ground-to-satellite uplink pre-compensation. It is composed of a single phase screen and presents an underestimated beam wandering due to a scaling problem. The emulator in Ref. 19 is representative of an uplink at 30 deg elevation, so it does not focus on a strong turbulence case and the effect of scintillation. Finally, the work in Ref. 17 uses two SLMs with an intermediate reflection to produce a second footprint on each SLM, obtaining four different phase screens. The emulated link corresponds to strong atmospheric turbulence, but the link is horizontal and there is no detailed characterization of the scintillation produced on the beam. In summary, the existing emulators target different cases, and therefore answer trade-offs different from ours, but a detailed methodology for the characterization of phase and amplitude fluctuations is usually lacking.
For our emulator, we decided to use passive phase screens mounted on rotation stages, with the possibility of using an SLM to introduce additional boiling turbulence or bursts of turbulence. Several criteria were considered in this decision: (1) to specify a precise turbulence strength with the correct statistics, (2) to reproduce the delivered turbulence conditions, (3) to produce strong enough turbulence (high phase modulation dynamic), and (4) to be able to adjust the speed of every layer to the strong apparent displacements due to satellite tracking.
Bench Definition
The definition of the bench started with the selection of a reference turbulence profile. We compressed the profile to a three-layer profile that could be implemented on the bench as phase screens (see Sec. 3.1). The profile was then geometrically scaled to reduce its size so it can fit on an optical table (see Sec. 3.2). We specified and procured the manufacturing of the three phase screens according to the selected three-layer profile (see Sec. 3.3). Finally, we designed the optomechanics that allow propagating a laser beam through the turbulence emulator and providing the generated turbulence to a client system and an analysis camera (see Sec. 3.4).
Turbulence Profile Compression
We used as reference profile a modified Hufnagel-Valley 5/7 model 39 (following the ITU-R P.1621-2 recommendation) with a given turbulence strength at the ground surface level and an upper-level wind speed of V_rms = 21 m/s that influences the turbulence strength of the upper layers of the atmosphere. Another possibility for the profile selection is to use a database of in situ C_n² profile measurements to define typical or worst-case profiles. 40,41 Note that our overall emulator design methodology does not depend on the selection of the reference profile.
The reference profile was first compressed to a 50-layer profile to allow Monte-Carlo numerical simulations. The compression to 50 layers is carried out by optimization of layer height and strength under the condition of keeping constant the following integrated turbulence parameters: r_0 as a quantification of the phase distortion strength, θ_0 for the anisoplanatism, and σ_χ² for the scintillation strength. The method is similar to the methods presented in Ref. 42 but includes the scintillation effects too.
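To make the compression constraint concrete, the following sketch evaluates the three integrated parameters for a discretized profile using the standard plane-wave Kolmogorov expressions. This is our own minimal illustration: the constants are the classical ones, the variable names are ours, and the paper's actual optimization code is not given in the source.

```python
import numpy as np

# Minimal sketch (assumed standard plane-wave Kolmogorov expressions, not the
# authors' code). z: distance from the telescope pupil to each layer along the
# slant path [m]; cn2dz: integrated Cn^2*dz per layer [m^(1/3)].
def integrated_params(z, cn2dz, wavelength=1.55e-6):
    k = 2.0 * np.pi / wavelength
    r0 = (0.423 * k**2 * np.sum(cn2dz)) ** (-3.0 / 5.0)            # Fried parameter
    theta0 = (2.914 * k**2 * np.sum(cn2dz * z**(5.0 / 3.0))) ** (-3.0 / 5.0)
    sigma_chi2 = 0.563 * k**(7.0 / 6.0) * np.sum(cn2dz * z**(5.0 / 6.0))
    return r0, theta0, sigma_chi2
```

A layer-compression step then amounts to adjusting the heights and strengths (z, C_n²·dz) of the reduced profile so that these three quantities match those of the reference profile.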
A second profile compression was necessary to reduce the number of phase screens required for the implementation of the emulator. The number of layers, and therefore phase screens, on the emulator should be limited in order to reduce the system's complexity and cost. At the same time, a multi-layer profile is also needed to generate a representative turbulence profile, with a proper representation of phase and scintillation effects and the corresponding temporal dynamics derived from the natural and apparent wind profiles. In addition, the use of several screens limits the periodicity in the generated turbulence (see Sec. 4.2.5 for a discussion on the periodicity).
We decided to use three layers since we consider that three layers allow representing qualitatively the scintillation characteristics of the link. In fact, the scintillation irradiation pattern depends on both turbulence strength and propagation distance; therefore, we can design each of the three layers to represent one of the possible combinations and its resulting scintillation. The first layer is located at the telescope pupil and emulates the atmospheric ground layer: very strong in turbulence but with a short propagation distance, so a negligible scintillation contribution. The time evolution of this layer is mainly driven by natural wind, i.e., a slow layer speed (typical order of magnitude 10 m/s). The second layer is located at a more significant propagation distance. This produces a typical size of the irradiation pattern smaller than the size of the telescope pupil. When using a Shack-Hartmann wavefront sensor, the typical size of the irradiation pattern is similar to the size of one of the subpupils, so scintillation contributes to the flux variation at the subpupil level, impairing wavefront sensing. The third layer is located far away and represents the free atmosphere: weaker turbulence but with a long propagation path. The resulting typical size of the irradiation pattern is close to the telescope pupil, which contributes to the variation of available flux with time, and therefore to the stability of the signal regardless of the AO performance. The temporal evolution of the second and third layers is mostly driven by the apparent wind component due to LEO satellite tracking, i.e., a very fast layer speed (typical order of magnitude 150 m/s).
While 50 layers are enough to properly represent the original turbulence profile, the restriction to three layers poses a greater challenge. The first layer was fixed to be at the telescope pupil. The positions and strengths of the second and third layers were found using the same optimization based on integrated parameters as for the 50-layer profile. The natural wind velocities of the three layers were chosen as 10, 15, and 30 m/s, respectively, whereas the apparent wind is computed from the satellite tracking slew rate and the distance to the corresponding layer. We consider a 10 deg elevation and an orbit altitude of 500 km, which result in a slew rate of 3.834 mrad·s⁻¹.
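As an illustration of the apparent-wind computation (our own arithmetic using the slew rate quoted above; the intermediate layer distance is a hypothetical placeholder, since only the 57 km top-layer distance is given in Sec. 3.2):

```python
# Apparent wind at a layer = slew rate x distance to the layer.
slew_rate = 3.834e-3                 # rad/s, 10 deg elevation, 500 km orbit
for z_layer in (20e3, 57e3):         # m; 20 km is a hypothetical mid-layer value
    print(z_layer, slew_rate * z_layer)   # ~77 m/s and ~219 m/s, i.e., the
                                          # ~150 m/s order of magnitude above
```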
Table 1 reports the integrated parameters of the 50-layer and the three-layer profiles. It can be observed that the compression of the profile managed to keep the targeted integrated parameters very close. A more detailed comparison of the 50-layer and the three-layer profiles is available in Ref. 8.
Propagation Path Geometrical Scaling
We performed a down-scaling of the link geometry in order to fit it within a laboratory optical table. The complete process is a trade-off between the available technologies for phase screen manufacturing (including resolution and stroke), the opto-mechanical design, and the required space (as we wish to keep the system compact). We provide a simplified discussion of this trade-off.
To achieve a physically equivalent system in terms of turbulence and diffractive effects, the down-scaling must keep the following dimensionless groups constant: D/√(λL) for a layer at distance L and a telescope of diameter D, to be representative of the scintillation effects on the pupil due to each layer; D/r_0 and the contribution to it of each layer through the C_n²·dz profile; and finally V/(D·f_samp), which relates the layer displacement in a sampling interval to the size of the pupil. In the following, for each parameter X we use the notation X_sky for its value on sky and X_bench for its effective value on the bench and express the relationships between them.
Conservation of the scintillation effects leads to

$$D_\mathrm{bench} = \frac{D_\mathrm{sky}}{\sqrt{k_L}}, \qquad k_L = \frac{L_\mathrm{sky}}{L_\mathrm{bench}}, \tag{1}$$

where k_L is a geometric compression factor defined by the ratio between the initial and final propagation lengths. The maximum propagation distance available from the top layer to the telescope will therefore drive the scaling of the telescope diameter. At the same time, reducing the total propagation distance requires reducing the telescope diameter too. The telescope diameter in the bench defines the beam footprint on the phase screens; therefore, it cannot be reduced too much, otherwise the resolution of the manufacturing of the screens limits the turbulence that can be achieved.
Considering the conservation of D/r_0, the turbulence strength is down-scaled as

$$r_{0,\mathrm{bench}} = r_{0,\mathrm{sky}} \, \frac{D_\mathrm{bench}}{D_\mathrm{sky}}. \tag{2}$$

To obtain the down-scaling of the C_n²·dz of every layer, we use r_0,bench. Since the relative C_n²·dz profile is conserved, the absolute values of C_n²·dz can be scaled to achieve r_0,bench. Finally, each layer's velocity depends on two factors: the relationship between diameters and a temporal scaling factor. For a given layer, the velocity is down-scaled as

$$V_\mathrm{bench} = V_\mathrm{sky} \, \frac{D_\mathrm{bench}}{D_\mathrm{sky}} \, \frac{1}{\tau}. \tag{3}$$

The first factor of the right-hand side is related to the previously defined geometric compression factor. The second factor, τ = f_samp,sky/f_samp,bench, is a time scaling factor that allows a temporal down-scaling of the layer speeds by operating the bench at a lower sampling frequency. As a result, any time interval Δt_sky on the original system is equivalent to Δt_bench = τ·Δt_sky on the down-scaled system. For example, for an AO system working at a 5 kHz sampling frequency on-sky and 50 Hz on the bench, the equivalent time scaling factor is τ = 5000/50 = 100; this means that everything on the emulator runs 100 times slower than on-sky. For the same reason, to acquire a time series of Δt_sky = 60 s of duration, it would be necessary to record during Δt_bench = τ·Δt_sky = 100 · 60 s = 100 min on the bench.
Ideally, one would test a system with τ = 1, so the system operates at its nominal frequency. Two different factors make temporal scaling convenient: first, reducing the rotational speed of the phase screens, and second, allowing the use of slower components in the emulator or client system. Indeed, due to the apparent wind speed of the LEO satellite, the upper layer speed is quite high, so high that the rotational speed of the phase screen needed to achieve such a layer velocity would push the limits of the rotation stage and produce possible vibrations and safety issues in case of component malfunction. In addition, the PICOLO bench is dedicated to the development and testing of new concepts and systems, cases in which it may not be possible to run certain components as fast as in operational conditions on sky. Temporal scaling is then a useful option. As an example, operating at a slower time scale also allows the use of an SLM, located in the pupil, whose operation is usually limited to 50 Hz, to add user-defined non-stationary turbulence, bursts, or specific perturbations. Finally, during the characterization of the bench, the infrared camera used is limited to an acquisition frequency of 100 Hz, so the scaling was also necessary during the characterization to obtain an equivalent sampling frequency of 10 kHz.
Note that the geometrical scaling of the emulator is independent of the diameter chosen for the telescope since the relationships are all in terms of the ratio D_bench/D_sky = √(L_bench/L_sky). If one would like to change the telescope diameter, one could compute D_bench using Eq. (1) and the optical propagation characteristics in the emulator would not change.
In the following, we provide the scaling for our implementation of the emulator. We wish to have a telescope diameter in the bench equivalent to a D_sky = 40 cm diameter telescope on-sky. The results of the scaling exercise are summarized in Table 2. From the top layer, the propagation distance toward the telescope is L_sky = 57 km on sky, and we want to limit the maximum propagation distance on the bench to 1.4 m. Equation (1) leads to a telescope pupil of D_bench = 2 mm. The wavelength is the same on the bench: λ_sky = λ_bench = 1.55 μm. For the temporal down-scaling, we consider two scenarios: one where f_samp,sky = 5 kHz is equivalent to f_samp,bench = 2 kHz, close to current AO systems, and one where f_samp,sky = 5 kHz is equivalent to f_samp,bench = 50 Hz, for testing new components that cannot work at higher rates at the moment. These values lead to acceptable specifications for the procurement of the phase screens and their rotating stages while achieving a compact bench design.
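The scaling chain can be checked numerically. The sketch below applies the relations of Eqs. (1)-(3) to the values quoted above; since the equations were reconstructed from the text, treat this as an illustrative consistency check rather than the authors' code.

```python
import numpy as np

D_sky, L_sky, L_bench = 0.40, 57e3, 1.4        # m
D_bench = D_sky * np.sqrt(L_bench / L_sky)     # Eq. (1): conserve D/sqrt(lambda*L)
print(D_bench)                                 # ~2.0e-3 m, i.e., the 2 mm bench pupil

V_sky = 150.0                                  # m/s, upper-layer apparent wind
for f_bench in (2000.0, 50.0):                 # the two bench scenarios
    tau = 5000.0 / f_bench                     # time scaling factor
    V_bench = V_sky * (D_bench / D_sky) / tau  # Eq. (3)
    print(tau, V_bench)                        # bench layer speeds in m/s
```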
Phase Screen Specification
The phase screens have been manufactured by the company SILIOS Technologies. We provided the phase screen specification as a two-dimensional (2D) phase map of the desired phase. The maps were generated from the specified r_0 and L_0 for each layer and with a von Kármán spectrum. The maps were scaled to keep the same relationship with respect to the telescope diameter after the down-scaling of the bench. The phase screens are manufactured with a 40 μm resolution, which for D_bench = 2 mm and D_sky = 0.4 m is equivalent to a maximum spatial frequency representation of 62.5 cycles·m⁻¹ on-sky. The technical constraints in the manufacturing process led to reducing the turbulence strength in PS1, the strongest layer, since the resulting peak-to-valley distance in the screen was not attainable. This resulted in a change of the specification of C_n²·dz from 4.615 × 10⁻¹¹ m^(1/3) to 2.545 × 10⁻¹¹ m^(1/3). The change in the turbulence strength of PS1 results in a change of the global r_0 from 2.6 to 3.3 cm, whereas the scintillation characteristics remain the same since PS1 is located at the pupil of the telescope. This loss of turbulence strength was considered acceptable since it does not affect the scintillation characteristics.
Table 3 summarizes the different profiles used in this work. All C_n²·dz values provided correspond to the on-sky values, i.e., before down-scaling. All the numerical simulations conducted in this work simulate the equivalent on-sky system and therefore use these values. The "compressed" profile corresponds to the compression from the original 50-layer profile to a three-layer profile (see Sec. 3.1). The "specified" profile corresponds to the profile specified to the phase screen manufacturer, where the strength of layer 1 had to be reduced due to manufacturing constraints (see Sec. 3.3). Finally, the "measured" profile corresponds to the profile measured during the phase characterization of the screens (see Sec. 4.1). This is the profile used in Sec. 4.2 for the numerical simulations that are compared to the characterization measurements; it is therefore considered as the most representative with respect to the experimental setup.
Opto-Mechanical Design
Figure 1 shows the opto-mechanical layout of the bench, and Fig. 2 provides an image of the bench implementation. The main optical path is marked with a red line. A laser source is injected in the bench using a fiber collimator. A first phase screen (PS3) is positioned close to the source at the highest altitude, whereas PS2 is close to the ground and PS1 is located as close as possible to the entrance pupil of the telescope. The screens are placed on rotation stages. Two mirrors mounted on tip-tilt stages allow proper alignment of the input beam. A filter wheel equipped with neutral density filters allows different power attenuation levels on the laser input.
The telescope is emulated by a combination of lenses. The entrance pupil of the telescope is located at a mechanical stop in front of the first lens of the telescope. We use afocal lens systems to re-image the pupil plane and re-scale it. The output beam of the telescope is finally collimated. A periscope is placed at the output of this main path to ease coupling with a client system. The second path (in blue) is dedicated to the analysis of the perturbed beam. A beamsplitter picks a fraction of the flux and sends it to a near-infrared camera. A flip lens allows switching between focal plane and pupil plane imaging on the camera. Three planes are conjugated to the entrance pupil (marked with a purple arrow): the first may accept an SLM, though it holds a plane mirror for the moment; the second corresponds to the first mirror of the output periscope; and the third to the infrared camera (if in the pupil imaging configuration). The emulator is integrated on a 600 mm × 900 mm optical breadboard.
Characterization
We present here our methodology for the characterization of the turbulent link channel emulator. The goal of this characterization is to ensure that the phase screens have been properly manufactured, that the emulator produces the correct turbulence conditions, and that those conditions are understood.
We provide measurements of both phase and scintillation, on both a per-screen basis and for the three-screen configuration. The study of scintillation covers both its spatial and temporal behavior. We also compare the results to a numerical simulation of the emulator using TURANDOT, 9 an optical physics propagation tool developed by ONERA for CNES (the French Space Agency). The result of this comparison is a digital twin of the PICOLO bench to support cross-validation between experiments and simulations during future developments.
Phase Characterization
Different methods are possible to characterize the phase introduced by the phase screens and verify that they provide the desired phase distortions in terms of phase variance and spectrum. A first method 43 uses the measurement of the full-width-at-half-maximum of the long-exposure seeing-limited point spread function (PSF) and compares it to the theoretical expectation from the prescribed phase. This method is not applicable on the bench in our case due to both the strong beam wander (resulting in PSF cropping) and the speckles in the PSF derived from the strong turbulence conditions (see Fig. 3). Alternatively, we chose to use a dedicated set-up to measure wavefront slopes with a Shack-Hartmann wavefront sensor, reconstruct the phase associated with these measurements, and compare the reconstructed phase statistics to the statistics of the prescribed phase screens. The phase reconstruction is conducted on a Zernike polynomial basis, obtaining modal variances for each mode. To compare the estimated modal phase variances to the screen prescription, a theoretical model of these variances assuming a von Kármán spectrum is fitted using the two model parameters: the Fried parameter (r_0) and the outer scale (L_0). The fitting provides an estimation of these two parameters, which can be compared to the corresponding values for each of the prescribed phase screens. The details of this method and the required setup are discussed in the next section.
The phase characterization presented here was the method used to accept the phase screens from the manufacturer, while further characterization was conducted after acceptance. The characterization of the scintillation power spectral densities in Sec. 4.2 provides supplementary phase verification since the scintillation characteristics depend on the phase.
Measurement setup
We first obtained Shack-Hartmann measurements of the phase screens. To achieve that, we illuminated a circular section of the phase screen with a collimated laser source at a wavelength λ = 1.55 μm. A 4f imaging relay was used to conjugate the footprint of the collimated beam on the phase screen to the pupil of the Shack-Hartmann wavefront sensor. Conjugation of the planes avoids further propagation of the wave between the phase screen and the wavefront sensor, which would produce scintillation and therefore bias the wavefront measurement. The Shack-Hartmann wavefront sensor used is an Imagine Optic HASO4 SWIR 1550, which is capable of providing absolute slope measurements due to the calibration provided by its manufacturer.
The collimator beam footprint was placed at the same distance from the center of rotation of the phase screen as used in the bench. Different samples of the screen were taken by rotating the screen. We decided to measure only the annular region that is illuminated during the rotation of the phase screens, since the distance between the rotation center and the beam footprint is constant. This strategy ignores the rest of the screen and provides a limited number of measurements; however, it corresponds to a characterization of the only area of the phase screen that is used. The total number of statistically independent measurements available is around 50 per phase screen, although overlapping measurements were used to average measurement noise even if they do not bring statistical convergence. The same discussion applies to the scintillation characterization.
The acquired slope measurements were used to reconstruct the Zernike coefficients (using the least-squares method 44 ) for each spatial sample; the variance of each coefficient across all samples provides an estimation of the variance of the corresponding Zernike mode. We chose to reconstruct up to the 10th Zernike radial order, considering the number of available Shack-Hartmann subpupils. The resulting Zernike mode variances were averaged by radial order and fitted to their theoretical values assuming a von Kármán spectrum, 45 using as fitting parameters the Fried parameter (r_0) and the outer scale (L_0). The fitting was computed by solving a non-linear least-squares problem using a Levenberg-Marquardt optimization routine. Special attention needs to be paid to the fitting: the modal variance of the atmospheric turbulence follows a power law with very different orders of magnitude between radial orders (see Fig. 4); if the least-squares cost function is computed using all modal variances, the low-order modes, which have bigger variances, will dominate the fitting and bias the estimation. In the von Kármán spectrum, the low radial order modes are influenced by both the outer scale and the Fried parameter, whereas the high radial order modes are only influenced by r_0. As a result, r_0 can be estimated by first fitting the high radial orders to a von Kármán spectrum with a fixed outer scale; we use the Kolmogorov spectrum, which is equivalent to an infinite outer scale. A second fitting of all modal variances is used to estimate L_0, this time assuming a von Kármán spectrum with the previously estimated r_0 as a fixed parameter.
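A minimal sketch of this two-stage fit is given below. The per-order Kolmogorov variance model is approximated with Noll's asymptotic residual formula, and the cut between "high" and "low" radial orders, as well as the variable names, are our assumptions; the von Kármán stage would use the analytic variances of Ref. 45 in place of the Kolmogorov model.

```python
import numpy as np
from scipy.optimize import least_squares

def kolmogorov_order_variance(orders, d_over_r0):
    """Approximate mean Zernike variance per radial order (rad^2), using
    Noll's asymptotic residual Delta_J ~ 0.2944 * J^(-sqrt(3)/2) * (D/r0)^(5/3).
    Assumption: asymptotic regime, adequate for the high orders only."""
    n_modes = lambda n: (n + 1) * (n + 2) // 2     # modes up to radial order n
    out = []
    for n in orders:
        resid_prev = 0.2944 * n_modes(n - 1) ** (-np.sqrt(3) / 2)
        resid_curr = 0.2944 * n_modes(n) ** (-np.sqrt(3) / 2)
        out.append((resid_prev - resid_curr) / (n + 1) * d_over_r0 ** (5 / 3))
    return np.array(out)

def fit_r0(orders, measured_var, diameter, high_order_cut=4):
    """Stage 1: fit r0 on the high radial orders with an infinite outer scale
    (Kolmogorov). Log-variances balance the orders in the cost function."""
    hi = orders >= high_order_cut
    cost = lambda p: (np.log(kolmogorov_order_variance(orders[hi], diameter / p[0]))
                      - np.log(measured_var[hi]))
    return least_squares(cost, x0=[0.05]).x[0]

# Stage 2 (not sketched): refit all orders with a von Karman model (Ref. 45),
# keeping the estimated r0 fixed and adjusting only the outer scale L0.
```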
Results
Figure 4 shows, for the case of PS2, the measured Zernike coefficient variances averaged per radial order and a comparison to the best fit to a von Kármán spectrum. The close fit of the measurements to the theoretical spectrum shape confirms that the measured phase follows the power law of the von Kármán spectrum. The deviations at higher radial orders (i.e., high spatial frequencies) are related to noise propagation and to the reconstruction of higher Zernike orders (aliasing effect). The results of the Zernike mode variance fitting for the three screens are summarized in Table 4 by means of the resulting r_0 and L_0 estimates. In terms of relative error between the expected and the measured quantities, the r_0 error is 10% in the worst case, whereas for L_0 it can be as high as 65%. The bigger mismatch in L_0 can be explained by the difficulty of estimating the variance at low spatial frequencies with a limited number of measurements; since low frequencies have fewer periods over one measurement, it is necessary to use more measurements to estimate low spatial frequencies than to estimate high frequencies. This lack of accuracy could be improved in the future by increasing the number of available phase measurements, by measuring the whole phase screen area and not only the annular section that is illuminated during the operation of the bench.
In conclusion, these results provide an estimation of the r_0 and L_0 of the screens and confirm that they follow the desired von Kármán spectrum in their spatial statistics. This first characterization allowed us to test the quality of the phase screens and accept them. The later characterization of the scintillation presented in the next section served as a supplementary characterization of the phase produced by the screens. We also used the measured per-screen r_0 to derive the equivalent C_n²·dz for each screen, reported in Table 3 as the "measured" profile. These values were used for the numerical simulations of the wave propagation in the bench.
Scintillation Characterization
Optical wave scintillation, unlike phase, can be directly measured as intensity on the pupil plane. We characterized the scintillation on the emulator by analyzing images of irradiance patterns on the pupil using a matrix detector. The scintillation characterization was conducted in both the spatial and temporal domains. We compared the measured results to a numerical simulation of the same propagation case. This results in a cross-validation of the specification of the phase screens and the resulting spatial and temporal signatures of scintillation.
Measurement Setup
As pointed out in Sec. 3.4, the bench allows pupil imaging by an afocal telescope that relays the pupil plane from the telescope entrance aperture to an imaging matrix detector. This allows recording the irradiance distribution (i.e., a sampled value proportional to the irradiance on each pixel in W·m⁻²) over the pupil of the system, which is used to characterize the scintillation.
The acquisitions are taken in two different ways for spatial and temporal characterization. For spatial characterization, the phase screens are rotated so that for every acquisition the beam footprints on the screen do not overlap. In this way, we reduce the spatial correlation between measurements. For temporal characterization, the phase screens are rotated at a speed that produces the equivalent layer velocity for the given layer, achieving the desired temporal correlation.
For the spatial characterization, the integration time and the screen rotational speed were adjusted to reduce the displacement during exposure to less than a tenth of the pixel size, which is therefore negligible. In the case of the temporal characterization, the 50 μs exposure time used is equivalent to a 500 ns exposure time on-sky (see the temporal compression discussed in Sec. 3.2) and is as a result negligible too. The bottom row of Fig. 3 presents a typical image of the experimental acquisitions of pupil irradiance patterns used for scintillation characterization.
We carried out all characterizations first on a per-screen basis and later with the three screens together. The individual screen characterization provides the verification of every screen, whereas the three-screen setup characterizes the operating conditions of the bench.
The fiber collimator used as source (i.e., emitter) provides a beam with a 7 mm waist diameter at the pupil of the telescope, whereas the pupil is 2 mm. This results in a truncated Gaussian illumination pattern on the pupil. The effect of the Gaussian shape and its wandering was confirmed to be negligible via numerical simulations. As a result, the rest of the numerical simulations conducted do not model this effect and consider a homogeneous illumination pattern.
Numerical simulations
We compared the experimental results to numerical simulations using the optical propagation code TURANDOT. The numerical simulation does not use the phase maps specified to the manufacturer. Instead, it generates phase screens with von Kármán statistics and the C_n²·dz reported as the "measured" profile in Table 3, which was derived from the fitting of r_0 summarized in Table 4. As a consequence, the results of the numerical simulations are expected to be statistically equivalent to the perturbations generated by the emulator but not strictly the same. For the spatial characterization statistics, each realization uses new statistically independent draws of the phase screens from the prescribed C_n²·dz. For the temporal characteristics, a time series is generated from a unique realization of the phase screens: at each time step, the layers are shifted according to their wind speed. After propagation, the irradiance over the pupil is computed, obtaining the equivalent of the experimental measurements. Once the data equivalent to the experiment are produced, experiment and simulation data are post-processed in the same fashion.
The numerical simulations do not contain any measurement noise in the resulting irradiance. This is not the case for the experimental measurements, where read-out noise and shot noise are not negligible. The laser power was adjusted to be as high as possible during the characterization. This is limited by the saturation of the matrix detector pixels due to their finite dynamic range, whereas the dynamic range (i.e., variance) of the scintillation speckles increases with the turbulence strength.
Characterization metrics
We characterize scintillation by two main metrics: the power spectral density of the normalized irradiance distribution over the pupil and the scintillation index.
First, we define a normalization of I(x, y, k), the irradiance distribution across the spatial coordinates x and y, with k being either a temporal or an ensemble index:

$$\hat{I}(x, y, k) = \frac{I(x, y, k)}{\langle I(x, y, k)\rangle_{k}}, \tag{4}$$

where the bracket operation corresponds to the sample average of the magnitude, defined in general as ⟨I(x, y, k)⟩_{x,y,k}. Note that the normalization is different for each pixel over the pupil of the telescope; it therefore allows the comparison of the scintillation regardless of the average irradiance impinging on the pupil, i.e., it removes any fixed irradiance pattern.
The scintillation index is defined as the variance of the normalized irradiance distribution:

$$\sigma_I^2 = \langle \hat{I}^2 \rangle - \langle \hat{I} \rangle^2. \tag{5}$$

The variance of the normalized irradiance field can be computed either as a sample variance of the normalized pixel values or from the integral of their power spectral density. For the spatial scintillation index, the sample variance is computed over all pixels in the pupil for each acquisition, and then the resulting variances are averaged over all realizations:

$$\sigma_{I,\mathrm{spat}}^2 = \left\langle \mathrm{Var}_{x,y}\!\left[\hat{I}(x, y, k)\right] \right\rangle_{k}. \tag{6}$$

The temporal computation is the same but computed per pixel, using all the samples in the time series of every pixel, and then averaged across all pixels:

$$\sigma_{I,\mathrm{temp}}^2 = \left\langle \mathrm{Var}_{k}\!\left[\hat{I}(x, y, k)\right] \right\rangle_{x,y}. \tag{7}$$

We also compute the spatial and temporal power spectral density (PSD) of the scintillation patterns. For the spatial characterization, we compute a 2D PSD (using a 2D fast Fourier transform) for every acquisition and then average the 2D PSDs of all acquisitions. After computation of the azimuthal average, the spatial PSD is reported as a one-dimensional (1D) PSD. For the temporal PSD computation, we compute the 1D PSD (using the Welch method) of the per-pixel irradiance time series and then take the average of all PSDs over all pixels.
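The following sketch implements these metrics as we read Eqs. (4)-(7) (reconstructed above); array names and shapes are our own choices, not the paper's code.

```python
import numpy as np
from scipy.signal import welch

# `cube`: pupil irradiance frames, shape (K, Ny, Nx), k being the frame index.
def normalized_irradiance(cube):
    return cube / cube.mean(axis=0, keepdims=True)        # Eq. (4), per pixel

def spatial_scintillation_index(cube):
    i_hat = normalized_irradiance(cube)
    return i_hat.var(axis=(1, 2)).mean()                  # Eq. (6)

def temporal_scintillation_index(cube):
    i_hat = normalized_irradiance(cube)
    return i_hat.var(axis=0).mean()                       # Eq. (7)

def temporal_psd(cube, fs):
    i_hat = normalized_irradiance(cube)
    f, psd = welch(i_hat.reshape(cube.shape[0], -1), fs=fs, axis=0)
    return f, psd.mean(axis=1)                            # average over pixels
```

The spatial PSD follows the same pattern with a 2D FFT per frame, an average over frames, and an azimuthal average; we omit it for brevity.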
For the temporal characterization, we also provide the pupil-averaged flux. To compute this, we take a spatial average of the normalized irradiance across all pixels for every acquisition, obtaining a unique time series that is analyzed like the per-pixel time series. This is a proxy measurement of the effect of scintillation on the flux measurement of a mono-detector on the focal image plane, but computed on the pupil plane from the measurement of the matrix detector. This measurement may be of interest to assess the performance of mono-detectors in these conditions. Nevertheless, this approximation neglects the variation of the angle of arrival, which may lead to a loss of flux if the mono-detector is not large enough.
Spatial scintillation results
The left column of Fig. 5 presents the spatial 1D PSD for PS2, PS3, and the three-screen profile, respectively. The solid line, labeled PICOLO, corresponds to the experimental measurements, and the dashed line, labeled TURANDOT, depicts the numerical simulation result. In all cases, both lines overlap for the most part. At least three different factors can explain the remaining deviations: (1) the manufacturing defects and limitations of the phase screens, (2) the presence of noise in the irradiance measurements, and (3) the presence of aliasing due to the finite sampling of the irradiance. In any case, those deviations do not result in strong deviations of the total variance; Fig. 6 shows this by reporting the cumulative integral of each of the PSDs. Note also that the low and high frequencies have low power and their contribution to the variance is small, and so is the contribution of any small difference between the spectra.
With respect to the manufacturing process effects, the highest spatial frequency present in the phase screens is about 60 cycles/m, corresponding to a 40 μm pixel size in the phase screens with a 2 mm beam diameter and an equivalent 0.4 m telescope diameter on sky. Any frequency beyond that one is not supposed to be correctly represented by the phase screen; in addition, the phase screens were subject to a subpixel smoothing process by the manufacturer to remove high-frequency defects. This process could also affect the spatial frequencies close to the end of the spectrum.
Regarding measurement noise, we can distinguish two different contributions: detector read-out noise and photon noise. Detector read-out noise can be estimated by taking dark frames with the characterization camera, and its PSD can be subtracted from the irradiance one. The presented data were corrected in this way, although the effect is negligible both in the variance and in the power spectral densities. The contribution of photon noise cannot be easily estimated since, by definition, the irradiance measurements under strong scintillation have a high dynamic range; it is therefore not possible to measure the spectrum of this contribution, and the measurements reported contain this signature.
The presence of aliasing has not been quantified either. In conclusion, although it is not possible to allocate the measured deviations to one of the possible causes stated, those deviations are small and have a negligible contribution to the scintillation index. Instead, we highlight the good fit of all the cut-off frequencies that define the main spectrum features. Since PS2 is stronger in turbulence, it is closer to scintillation saturation and, therefore, the PSD of the spatial scintillation presents two regimes with cut-offs 46,47 at spatial frequencies proportional to r_0/λz and 1/r_0, whereas the weaker PS3 has its only cut-off at a spatial frequency proportional to 1/√(λz), where z is the propagation distance from the layer to the telescope pupil plane.
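As a quick order-of-magnitude check (our own arithmetic from the numbers quoted in this paper), the weak-turbulence cut-off for the top layer corresponds to the Fresnel scale:

```python
import numpy as np

wavelength, z = 1.55e-6, 57e3               # m; top-layer values from Sec. 3.2
fresnel = np.sqrt(wavelength * z)           # ~0.30 m speckle size, close to
print(fresnel, 1.0 / fresnel)               # the 0.4 m pupil; ~3.4 cycles/m
```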
Table 5 provides a comparison of the spatial scintillation indices for both the numerical simulation and the experiment. The scintillation index is computed as the sample variance of the normalized irradiance measurements (see Sec. 4.2.3). For PS1, the expected scintillation index is zero since the screen is located at the pupil of the telescope and there is no propagation distance. This is the case in the numerical simulation, whereas in the experimental setup we measure some scintillation. The reason for this is twofold. First, it is not physically possible to place the screen exactly at the pupil, so there is some propagation. For example, the same screen placed at a distance of 1 mm from the telescope pupil results in an equivalent distance of 40 m on sky and would lead to a scintillation index of 0.064. Second, some diffractive effects are observed (see the filament-like structures in the PS1 pupil image in Fig. 3) that also contribute to the inhomogeneity of the pupil illumination. Nevertheless, as can be observed in the three-screen configuration, the scintillation from phase screen 1 does not result in a significant contribution.
Regarding the uncertainty quantification of the results, we provide the error bars for the numerical simulation. The uncertainty of the numerical simulation is due to the statistical convergence of the results. For the computation of the error bars, we used a bootstrapping method: we divided the available samples into groups, computed the scintillation index for each group, and took the standard deviation among the group results as the error bar.
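A minimal sketch of this group-splitting estimate (our implementation; the number of groups is an assumption, as it is not given in the text):

```python
import numpy as np

def scintillation_index_error(per_frame_indices, n_groups=10):
    """Std of group-averaged scintillation indices, used as the error bar."""
    groups = np.array_split(np.asarray(per_frame_indices), n_groups)
    return np.std([g.mean() for g in groups])
```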
Temporal scintillation results
The right column of Fig. 5 presents the temporal 1D PSD for PS2, PS3, and the three-screen profile. The solid line, labeled PICOLO, corresponds to the experimental measurement, whereas the dashed line, labeled TURANDOT, depicts the numerical simulation result. In all cases, the cut-off frequency 45 is proportional to V/√(z/k), with V the transversal velocity of the layer and k = 2π/λ the angular wavenumber. The experimental curves show in this case a noise floor at high frequencies, especially for PS2, where the measured curve levels off at a floor that is absent from the simulation. The causes for these noise floors are the same as for the spatial spectra, since the temporal spectra are given by a filtering of the spatial spectra due to the shifting of the layers. Both PS3 and the three-layer case show, in the TURANDOT curves, a bump at high frequencies due to a simulation artifact currently being analyzed. Apart from this, the fit of the curves and their cut-offs shows a satisfactory agreement between simulation and experiment. Table 6 provides a comparison of the temporal scintillation indices for both the numerical simulation and the experiment. The scintillation index is computed as the sample variance of the normalized irradiance measurements (see Sec. 4.2.3). Note that, following the ergodicity hypothesis, the scintillation indices in this table should be the same as the spatial scintillation indices reported in Table 5, which proves to be consistent with our results. We provide the error bars for the temporal simulation with all phase screens, computed as the standard deviation of the scintillation indices obtained for five simulations of the same case with different random seeds for the phase screen generation.
Finally, we study the variation of the integrated flux after PICOLO. This computation is based on the time series resulting from the averaging of the intensity measurements over every pupil pixel at every frame. Note that this is not an equivalent of the coupled flux since it does not take into account the phase and amplitude effects in the coupling into a single-mode fiber. Even with perfect AO (i.e., phase) correction, the mismatch between the wavefront amplitude and the Gaussian mode of the fiber will cause further losses that are not accounted for in this measurement. This measurement is closer to the flux measured by a mono-detector big enough that the variation of the angle of arrival due to turbulence and the PSF size do not cause a loss of flux during the measurement of the time series, i.e., the power in the bucket at the telescope aperture level.
Figure 7 shows a part of the time series obtained, both for the experiment, labeled PICOLO, and the numerical simulation, labeled TURANDOT. The time series is further analyzed by computing its PSD, shown in Fig. 8, whereas a histogram of the time series is shown in Fig. 9. The comparison of the PSDs shows that the time series have the same temporal characteristics, including the two regimes with cut-offs around 500 and 1000 Hz. At high frequencies, the experimental results show a noise floor similar to the one observed in the temporal spectra. The same high-frequency noise can be observed in the time series plot. The shape of the histograms is similar too, and the variance of the power (reported in the same figure), equivalent to the scintillation index of the irradiation pattern filtered by the pupil, is also close.
Fig. 7 Zoom in on the time series of the pupil-averaged flux.
We take this opportunity to discuss the periodicity of the turbulence generated. Note that the emulator only uses a small area of the phase screen, namely the ring resulting from the illumination by the circular beam footprint when the screen rotates. The screens rotate to generate the displacement equivalent to the one produced by the wind for each layer. After a given period of time, a screen completes a turn and the beam starts sampling the same phase distortion as previously, generating a periodic behavior. This effect is diminished by the fact that there are three phase screens rotating at different speeds, so the combination of the three reduces the periodicity of the overall turbulence. The previous statement is true for the phase disturbance, since the three screens participate in it. We computed the period for this case, and it corresponds to hours, longer than the typical minute-scale time series expected from the emulator. Still, for the scintillation, this period is reduced, since only PS2 and PS3 are involved. Note that both phase and scintillation contribute to fiber coupling, so the periodicity of the coupled flux will be impacted. Finally, the case of the pupil-averaged flux presented above is the one most affected by periodicity. Since the speckles produced by PS2 are much smaller than the pupil size, their effect is averaged out in this metric, and only the PS3 speckles (similar in size to the pupil) contribute to it. As a result, the pupil-averaged flux shows a periodicity corresponding to the time that it takes PS3 to complete a rotation. For the time series presented above, over a total duration of 1 s we detected a total of seven periods by computing its autocorrelation, corresponding to the seven rotations of the screen during that amount of time. These periodic effects result from the limitation in the number of phase screens used. Once understood and accounted for in the interpretation of the emulator results, such a limitation is deemed acceptable, since overcoming it would require a greater number of phase screens, which would increase the complexity and cost of the setup.
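The period detection described above can be sketched with a simple autocorrelation analysis (our implementation; the actual processing used may differ):

```python
import numpy as np

def dominant_period(flux, fs):
    """Return the lag (s) of the first autocorrelation peak of the
    pupil-averaged flux time series; None if no peak is found."""
    x = np.asarray(flux) - np.mean(flux)
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf /= acf[0]
    # indices of local maxima after the zero lag
    peaks = np.where((acf[1:-1] > acf[:-2]) & (acf[1:-1] > acf[2:]))[0] + 1
    return peaks[0] / fs if peaks.size else None

# Seven screen rotations over the 1 s series above imply a ~1/7 s period.
```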
Operating Conditions
Table 7 summarizes the effective parameters of the bench after the characterization reported in this work.
Conclusion
We have presented the methodology for the design and characterization of a turbulence emulator representative of a downlink between an LEO satellite and a ground station at 10 deg elevation. The bench is able to simulate both the strong turbulence conditions at low elevation and the dynamics due to the fast apparent wind caused by the satellite motion. The emulator can host different instruments by coupling them to its exit pupil. Therefore, it can provide long time series of the disturbed field at the pupil of a telescope under realistic turbulence conditions.
The characterization presented allowed us to prove that the bench delivers the expected turbulence conditions. This includes a detailed characterization of the scintillation conditions, which is necessary for future investigations regarding the performance of AO systems under scintillation. The agreement found with respect to the numerical simulation motivates the use of the numerical simulation as a digital twin of the bench for performance estimations before testing components on the bench.
As a result, ONERA has a testing platform for future AO systems (wavefront measurement and control laws) under strong turbulence perturbations (scintillation and unsteady turbulence). ONERA uses this platform for its own research and also offers access to it to the community. 48 The system will be used to test new AO and free-space optical communication concepts and to integrate and validate them before on-sky campaigns: for example, the integration and testing of the AO system for ONERA's FEELINGS ground station, 10 the study of Shack-Hartmann wavefront sensors under scintillation conditions and of how to improve their performance, the test of new AO control algorithms (such as predictive control 49 ), the test of new turbulence correction concepts such as photonic integrated circuits, 48 and the test of new telecommunication components or digital signal processing architectures. An upgrade of the bench to add the effect of anisoplanatism for the feeder-link 50 case is currently under study.
Fig. 2 Image of the implementation of the PICOLO bench.
Fig. 4 Reconstructed Zernike mode variances versus their fit to a von Kármán spectrum for PS2.
Fig. 8
Fig. 8 Power spectral density of the time series of the pupil averaged flux.
Fig. 9
Fig. 9 Histogram of the time series of the pupil averaged flux.
Table 1
Integrated turbulence parameters for the 50-layer profile and the compressed three-layer profile.
Table 2
Resulting geometric down-scaling for the different phase screens.equivalent to f samp;bench ¼ 50 Hz for testing new components that cannot work at higher rates at the moment.The layer velocities in the table are reported for the later case, f samp;bench ¼ 50 Hz, and they must be multiplied by 40 for an AO loop running at f samp;bench ¼ 2 kHz.
Table 4
Reconstructed Zernike mode variances fitting results to von Kármán spectrum.
Table 5
Spatial scintillation index comparison between numerical simulation and experiment.
Table 6
Temporal scintillation index comparison between numerical simulation and experiment.
Table 7
Operating conditions of the turbulence delivered by the emulator.Computed from formula for the "measured" profile in Table3τ 0 3.125 μs Computed from formula for the "measured" profile in Table3 | 12,195.4 | 2023-10-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
METHOD OF TESTING THE READINESS OF MEANS OF TRANSPORT WITH THE USE OF SEMI-MARKOV PROCESSES
In the analysis of the readiness of means of transport, the Markov and semi-Markov processes are particularly applicable, allowing for the description of the usage process over long periods of time, determination of indicators of the exploitability and readiness of the used set of objects, as well as simulation of long-term forecasts of the usage process results. The studies presented in the literature usually concern the theoretical side of the matter, mainly the construction of formal models of the process of changing the operating states of a vehicle. Less attention is paid to the empirical side, especially with regard to the actual conditions of use. Examples of experimental observations presented in the literature most often concern individual cases. This paper lists selected irregularities and presents an example of a study of a real transport system based on semi-Markov processes.
Introduction
Markov and semi-Markov models are used, when the analysed states (failures, deterioration, service) occur at random moments (Buchholz et al. 2018). Thus, their diagnostic and prognostic evaluation is possible. Diagnostics is used to determine the current state of the analysed system/element, while prognoses are used to determine its limit indications, like the remaining service life.
There are many such studies available in the literature, especially in terms of technical facilities readiness and reliability evaluation. In transport engineering or logistics (Darong et al. 2018;Wang, Infield 2018), the theory of one-dimensional Markov and semi-Markov processes is mainly used to describe individual means of transport, treated as a set of functional elements, e.g., a passenger car (Girtler 2012), a bus (Landowski et al. 2017) or a helicopter (Woropay et al. 2004). Considerations concern the determination of their service times (Buchholz et al. 2018;Bunks et al. 2000) or possible scenarios of the aging process (Limnios, Oprişan 2001). Selected components are also examined, such as a compression ignition engine in paper by Rudnicki (2011) or a motor drive system of urban transit in paper by Darong et al. (2018).
Comprehensive analyses allow the formulation and optimization of maintenance policy, as Chen and Trivedi (2005) or Lisnianski and Frenkel (2009) presented in their papers. The systems are analysed as a whole (Borucka et al. 2019;Knopik, Migawa 2017), or the individual phases are considered independently, and each of them is described by a separate Markov model (Alam, Al-Saggaf 1986). Studies describing complex, actual structures are not popular, and those that deserve attention include (Love et al. 2000;Restel 2014;Woropay et al. 2004;Żurek, Tomaszewska 2016). This indicates the need to present methods for modelling such systems using Markov processes, which is also addressed by this paper.
The review of available literature has shown that not all authors using Markov processes bear in mind the assumptions for their application. This is noted, for example, in papers by Kozłowski et al. (2020), Shi et al. (2013), and Zhang et al. (2010). The first requirement is to meet the Markov property: the prediction can be precise only when the random sequences of variables meet the Markov property (Kozłowski et al. 2020). It is therefore necessary to test the randomness of the sequence of statistical data collected, as was done, for example, by Sanusi et al. (2015). The second assumption concerns the distribution of the variable studied. Many authors point out that this allows reliable results to be obtained (Li et al. 2010; Perman et al. 1997). Goodness of fit of empirical data to a parametric distribution is presented, among others, in papers by Van Casteren et al. (2000) – Weibull distribution, Li et al. (2010) – Poisson distribution, and Rydén et al. (1998) – double exponential distribution. It happens, though, that only the assumption concerning the form of distributions is made, while the choice of the model is justified by the accuracy of the results obtained. This does not, however, permit the use of models without satisfying the sample requirements. The obtained forecasts may be correct for a dozen or so test observations, but this does not confirm the validity of using the model for the boundary indications of the process, for n → ∞.
An analysis of distributions of variables studied and an assessment of the possibility of fitting them to a known parametric family determines the possibility of using a specific model. It is the Markov model in the case of exponential distributions, and semi-Markov model in the case of other than exponential ones. This paper presents the way of estimating semi-Markov model parameters on the basis of actual observations obtained from the transport company, which additionally allowed for diagnostics and evaluation of the company in terms of its readiness.
Material and methods: introduction to the Markov processes
The literature presents different definitions of semi-Markov processes (Knopik, Migawa 2017;Tang et al. 2018;Zhang et al. 2012). On the basis of works by Grabski, Jaźwiński (2009) and Grabski (2015), it was assumed to define the semi-Markov process with a finite set of states by means of the so-called Markov renewal process.
Let N and N0 denote the set of natural numbers and the set of non-negative integers, respectively. In addition, let S denote a finite or countable set, and R+ the set of non-negative real numbers.
Let (Ω, F, P) denote a probability space on which a sequence of two-dimensional random variables {(ξn, ϑn): n ∈ N0} is defined, such that ξn: Ω → S and ϑn: Ω → R+.

Definition 1 (Papamichail et al. 2016; Hunter 2016; Asmussen et al. 2016). The sequence {(ξn, ϑn): n ∈ N0} is called a Markov renewal process if:
1) P(ϑ0 = 0) = 1 with a probability of 1;
2) for all i, j ∈ S and t ∈ R+:
P(ξn+1 = j, ϑn+1 ≤ t | ξn = i, ϑn, …, ξ0, ϑ0) = P(ξn+1 = j, ϑn+1 ≤ t | ξn = i) = Q_ij(t).
The functional matrix Q(t) = [Q_ij(t)] is called the renewal kernel. The vector p = [p_i], with p_i = P(ξ0 = i), is the initial distribution of the Markov renewal process. The renewal kernel meets the following conditions: the functions Q_ij(t) are non-decreasing and right-continuous, Q_ij(0) = 0, and Σ_j Q_ij(∞) = 1.

Let τ0 = 0 and τn = ϑ1 + ϑ2 + … + ϑn. The random process {N(t): t ∈ R+} with N(t) = sup{n ∈ N0: τn ≤ t} is called a counting process.
Definition 2. The stochastic process {X(t): t ∈ R+} given by X(t) = ξ_N(t) is called the semi-Markov process generated by the Markov renewal process {(ξn, ϑn): n ∈ N0} with the initial distribution p and kernel Q(t). The semi-Markov process defined in this way is a stochastic process with a discrete space of states S at time t ∈ T = R+. The semi-Markov process is defined when its kernel and initial distribution are specified. The definition shows that X(t) = ξn for t ∈ [τn, τn+1), which means that functions taking fixed values on the intervals [τn, τn+1) constitute realizations of the semi-Markov process.
From the definitions of both the Markov renewal process and the semi-Markov (SM) process, it follows that the state of the semi-Markov process and its duration depend only on the immediately preceding state, not on earlier states and their durations. The process kernel and the initial distribution fully define the semi-Markov process (Duan et al. 2019; Wu et al. 2019).
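To make the definition concrete, the following Python sketch generates one trajectory of a semi-Markov process from its embedded transition matrix and per-transition sojourn-time samplers; the two-state matrix and the Weibull parameters are hypothetical, chosen only to echo the non-exponential durations found later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_semi_markov(P, samplers, i0, horizon):
    """Simulate a semi-Markov trajectory: the next state comes from the
    embedded Markov chain P, the sojourn time from a state-pair sampler."""
    t, i, path = 0.0, i0, []
    while t < horizon:
        j = rng.choice(len(P), p=P[i])   # next state from the embedded chain
        tau = samplers[i][j]()           # sojourn in state i before the jump
        path.append((i, t, tau))
        t, i = t + tau, j
    return path

# Two-state toy example with Weibull sojourn times (minutes).
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
samplers = [[None, lambda: 30 * rng.weibull(2.8)],
            [lambda: 10 * rng.weibull(1.5), None]]
print(simulate_semi_markov(P, samplers, i0=0, horizon=200.0)[:4])
```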
Characteristics of the study subject
The subject of the study is a large enterprise providing road transportation services in Poland. Its fleet consists of 60 Volvo FH 500 trucks (model year: 2012) equipped with compression-ignition engines with a capacity of 13000 cc and with a power of 500 hp. They operate mainly on the basis of long-term agreements with several large production plants, which results in high repeatability of transportation tasks. The company uses modern fleet management software, which archives data on all activities performed by its vehicles. The latter became the basis for the conducted research. The records made available were readings from the vehicles' on-board computers and were sorted and processed from 60 multi-line sheets. After these operations, seven operating states were identified and analysed in detail.
The following operating states were distinguished:
- state S1 – travel, meaning the physical implementation of the transport process;
- state S2 – readiness of vehicle with driver, meaning that the vehicle is operational, in working order and prepared to perform the task; additionally, the driver is present at the vehicle, awaiting instructions, persons or documents necessary to continue the transportation task, and has met regulatory rest requirements before driving the vehicle;
- state S3 – readiness of vehicle at the stand, meaning that the vehicle is operational, in working order and prepared to perform the task, and the driver is absent from the vehicle; this state is usually associated with the vehicle being parked in the garage or in the yard after finishing the shift;
- state S4 – handling works, i.e., activities related to vehicle operations on cargo, such as loading, transhipment and unloading;
- state S5 – current maintenance, i.e., inspecting the vehicle before departure and after return, including checking the levels of vehicle fluids, the efficiency of the vehicle lighting, the condition of tires and, if necessary, their replacement;
- state S6 – periodical maintenance, resulting from scheduled service and the vehicle's mileage, as well as minor repairs; this includes diagnostics of the technical condition of the vehicle, covering the cooling, hydraulic, steering, braking, electric and powertrain systems; in addition, in the studied period, replacements of tie rods, air dryers, brake discs and pads, motor oil, gear fluid in the rear differential, gear fluid in the transmission, oil filter, air filter and cabin pollen filter were recorded;
- state S7 – long-term repairs (in the analysed period these included: head gasket replacement, clutch replacement, suspension system repair, removal and disassembly of the engine with replacement of piston rings).

For each distinguished state, basic measures of descriptive statistics were calculated and a visual inspection of the diagrams was made. The tests were carried out for each vehicle individually as well as collectively for the entire group, concluding that the results obtained for the sample are representative of individual vehicles. With such a large number of observations, drawing conclusions was made possible on the basis of histograms of the probability distribution, as in such a case the histogram takes the shape of the probability density function graph. The scope of the conducted research is presented using the example of the current maintenance operating state.
At the beginning, the graph of the distribution of the S5 state duration for the whole sample was analysed, as shown in Figure 1. This distribution is naturally limited on the left-hand side by the minimum duration of the state, which in this case is about 5 min. Moreover, it is unimodal with right-sided asymmetry, which is confirmed by both a positive skew and an expected value higher than the median (Table 1). The fit to several families of parametric distributions was examined. Normal, log-normal, exponential and Weibull distributions were selected. Then, the best model was determined based on the Akaike information criterion, a relative measure of the quality of the candidate models (Rymarczyk et al. 2019). In this case it turned out to be the Weibull distribution, with the parameters presented in Table 2.
A confirmation of the goodness of fit of the empirical data to the selected theoretical distribution was sought by comparing the observed frequencies of the real data to those expected under the theoretical distribution. In this case, the chi-squared test and the Kolmogorov–Smirnov test at the significance level of α = 0.05 were used. The obtained results are presented in Table 3.
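A minimal sketch of this model-selection and goodness-of-fit workflow, assuming SciPy and synthetic durations in place of the company records (the candidate distributions and α = 0.05 follow the text):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
durations = 10 + 20 * rng.weibull(2.0, size=2000)  # hypothetical S5 durations

candidates = {"norm": stats.norm, "lognorm": stats.lognorm,
              "expon": stats.expon, "weibull_min": stats.weibull_min}

best = None
for name, dist in candidates.items():
    params = dist.fit(durations)
    loglik = np.sum(dist.logpdf(durations, *params))
    aic = 2 * len(params) - 2 * loglik   # Akaike information criterion
    if best is None or aic < best[1]:
        best = (name, aic, params)

name, aic, params = best
print(f"best by AIC: {name} (AIC = {aic:.1f})")

# Kolmogorov-Smirnov goodness-of-fit test at the 0.05 significance level.
ks = stats.kstest(durations, name, args=params)
print(f"KS statistic = {ks.statistic:.4f}, p-value = {ks.pvalue:.4f}")
```

Note that fitting the parameters on the same sample makes the test only approximate and, as the paper observes, with samples this large even small deviations lead to rejection.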
Although the distribution of the duration of the current maintenance state resembles the Weibull distribution to quite an extent and the distribution diagrams are very similar (Figure 2), the null hypothesis on the compatibility of distributions (p-value = 0.00) was rejected in both the chi-squared and Kolmogorov–Smirnov statistical tests. Similar conclusions applied to all operating states: although similarity to the Weibull and log-normal distributions could be observed, it was in no case confirmed by statistical tests. This illustrates one of the problems related to the analysis of large data sets. In the case of a large sample, even a small deviation will show that the distribution is not well matched, whereas in the case of a small sample, it is more difficult to identify a significant deviation even if it is present.
The next stage of the study was the analysis of basic measures of descriptive statistics. The statistics relevant to the current maintenance state are presented in Table 1.
The current maintenance state lasted nearly 30 min on average, but the median is lower and is over 26 min. The minimum and maximum values are 10 and 49 min respectively, so the range is fairly large and equals 39 min. The coefficient of variation at 38% is satisfactory for such processes.
The basic conclusion of the analyses was that the availability of some modelling methods is limited. It was not possible to estimate a Markov model, as a requirement for its application is the exponential form of the distributions of the analysed variables. Therefore, it was decided to formulate the vehicle exploitation model as a semi-Markov process, which does not impose any requirements concerning the form of the distributions, so they can be arbitrary distributions concentrated on the interval [0, ∞).
Semi-Markov model estimation
As already mentioned, the use of the Markov or semi-Markov processes is subject to certain limitations. The basic requirement for their application is the nature of probability distributions of time spent in states, which for Markov processes must be exponential. In semi-Markov processes the probability distributions can be arbitrary. This extension results in a more complex mathematical apparatus (Gupta, Tyagi 2019;Grabski 2017). The second requirement is the so-called memorylessness of the Markov process, which results directly from the definition. Therefore, before estimating the parameters of the model, it was necessary to verify the above assumptions.
First of all, the said assumption about the memorylessness of the studied process was checked. The relation of a given operating state with the duration and type of previous states was investigated. It began with the Kruskal–Wallis test (H = 4.145, p-value = 0.387). The obtained probability value confirmed the expected independence, which was then validated by testing the Markov property using the R environment (The R Foundation 2019). All possible sequences of events (state transitions) were checked by performing chi-square tests on a number of tables created on the basis of the empirical interstate relations. The chi-square test statistic was χ² = 1054.6, with p-value = 0.112, which means that the Markov property is fulfilled. Then, for each state, the basic measures of descriptive statistics were determined and the analysis of the time distributions of individual states was made in order to fit them to a known (parametric) family. Such research enabled the choice of the right mathematical model and the determination of its parameters and of the basic characteristics describing the process and its asymptotic properties.
The first stage of the description of the semi-Markov process is to determine the transition matrix between individual states of the process. On the basis of actual observations, it was determined which transitions are possible and which do not occur. The relations determined in this way are presented in the form of a diagram in Figure 3.
The initial distribution was assumed to be in the form p_i(0) = 1 for i = 1 and p_i(0) = 0 for i ≠ 1; therefore, initially the process is in state S1. Using the determined admissible transitions between states (Figure 3), the kernel of the semi-Markov process was defined as the matrix Q(t) = [Q_ij(t)]. This matrix is an important characteristic of the semi-Markov process. Its nonzero elements Q_ij(t) denote the probability of the process transitioning from state S_i to state S_j in a time not exceeding t. Since the distributions of the conditional durations of the individual operating states do not belong to any of the parametric distributions considered in this study, the distribution function could not be written using a closed formula. This made it impossible to determine the kernel of the process in a uniform and transparent manner. In such a situation, it is possible to use the theory of perturbed semi-Markov processes, allowing for an approximation of the distribution function. For this purpose, the values of the individual elements of the probability transition matrix were first determined as the frequencies of occurrence of the individual transitions in the whole sample. The estimated elements p_ij of the matrix P are shown in Table 4. The conditional probabilities p_ij constitute the transition matrix of the Markov chain embedded in the analysed process. The highest values refer to entering state S1, i.e., the physical implementation of a transportation task, which is a natural consequence of its highest prevalence in the transport system. However, a more important element of the diagnostics is the limit distribution of this chain. The stationary probabilities π_j are calculated through the equation π = π·P, where P is the probability transition matrix [p_ij] from state S_i to state S_j and π is the stationary probability vector of the Markov chain.
In the case of the examined process, for the 7-state model, the stationary probabilities π_j are determined through the matrix equation π·P = π with the normalization condition π_1 + π_2 + π_3 + π_4 + π_5 + π_6 + π_7 = 1, which is equivalent to a system of linear equations. The calculations were made using the Statistica software (https://www.tibco.com/search#q=statistica). The results are presented in Table 5.
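A minimal sketch of this computation, solving π·P = π together with the normalization condition as one linear system; the 7 × 7 matrix below is a hypothetical stand-in for the p_ij values of Table 4.

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi P = pi with sum(pi) = 1 as a least-squares linear system."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])  # (P^T - I) pi = 0, plus norm
    b = np.append(np.zeros(n), 1.0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Hypothetical 7-state matrix with no self-transitions, as in Figure 3.
P = np.full((7, 7), 1.0 / 6.0)
np.fill_diagonal(P, 0.0)
print(np.round(stationary_distribution(P), 4))
```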
The next step was to calculate the expected conditional durations of the distinguished operating states of the semi-Markov process. The results are presented in Table 6. The average conditional durations of the individual states and the transition probabilities constituted the basis for the calculation of the average unconditional durations of the process states, according to the equation E(T_i) = Σ_j p_ij·E(T_ij), where T_ij is the duration of state S_i given that the next state is S_j. The obtained expected values of the unconditional durations E(T_i) of the individual operating states are presented in Table 7.
The random variables T_i have finite positive expected values. This makes it possible to calculate the limit probabilities P_j of the semi-Markov process, based on the formula P_j = π_j·E(T_j) / Σ_k π_k·E(T_k), which comes down to solving a system of equations involving the stationary distribution π of the embedded Markov chain and the mean unconditional durations E(T_k).
The results are the limit probabilities of the semi-Markov process, the values of which are presented in Table 8.
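The step from the stationary distribution π and the mean unconditional durations to the limit probabilities, and on to the readiness coefficient used below, is a one-liner; the inputs here are hypothetical stand-ins for Tables 5 and 7.

```python
import numpy as np

def limit_probabilities(pi, T):
    """Limit probabilities of a semi-Markov process:
    P_j = pi_j * T_j / sum_k pi_k * T_k."""
    w = pi * T
    return w / w.sum()

# Hypothetical stationary distribution and mean durations (minutes), S1..S7.
pi = np.array([0.30, 0.15, 0.15, 0.15, 0.10, 0.10, 0.05])
T = np.array([240.0, 90.0, 300.0, 60.0, 30.0, 120.0, 600.0])

P_lim = limit_probabilities(pi, T)
K = P_lim[:5].sum()   # readiness: exploitability states S1..S5
print(np.round(P_lim, 4), f"K = {K:.4f}")
```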
Results and discussion
The calculated limit probabilities P j constitute an important operational characteristic. They represent the behaviour of the system over a longer period of time t → ∞. For the examined enterprise, the greatest value was found for the state indicating the performance of a transportation task, i.e., S1. Such a result indicates a high (over 54%) usage of means of transport in accordance with their intended use. This result can be considered satisfactory.
The remaining values are definitely lower and result from the implementation of processes accompanying physical distribution, such as handling works (state S4 – 10%) or daily current maintenance (state S5 – 10%). A fairly large value (more than 10%) was found for state S2 – readiness of vehicle with driver, which may indicate good organization of the process and reduced waiting time for documents, persons, service, etc. States S6 and S7, related to repair and overhaul activities, may also need to be investigated. They account for more than 20% in total and, as unfitness states, have a strong influence on the technical readiness factor, which is determined as the sum of the respective probabilities of the reliability states (Rymarz et al. 2016). For the system under analysis, the exploitability states range from S1 to S5. This makes it possible to determine the readiness of the system on the basis of the semi-Markov model according to the equation K = P_1 + P_2 + P_3 + P_4 + P_5.
The determined coefficient of technical readiness is K = 0.7957 and means that 80% of the time the tested system is ready to perform the task. This is a satisfactory result, but inspection of the causes of the high probability values for unfitness states (S6 and S7) and their correction may result in an increase in this coefficient.
Reliability function
In order to find the reliability function of the studied process, the theory of perturbed semi-Markov processes was used. It required the determination of the set A of exploitability states of the studied process, which included states S1 to S5, and the set A′ of unfitness states, which included states S6 and S7. Then, in accordance with (Grabski 2015; Grabski, Jaźwiński 2009), the coefficients ε_i (Table 9) were calculated as the probabilities of passing directly from an exploitability state S_i to the unfitness set, ε_i = Σ_{j∈A′} p_ij. Using the expected values m_i^0 (Table 10) calculated from the sample, the coefficient m_0 was calculated as m_0 = Σ_{i∈A} p_i^0·m_i^0, where p_i^0 denotes the stationary distribution of the Markov chain restricted to the set A of states.
Then the coefficient ε = Σ_{i∈A} p_i^0·ε_i was calculated. The results obtained, ε = 0.0819 and m_0 = 181.13, allowed for the determination of the reliability function R(t) according to the approximate equation R(t) ≈ exp(−(ε/m_0)·t), which for the analysed process takes the form R(t) = exp(−0.000452·t). A graph of the reliability function for the 7-state model is presented in Figure 4.
Based on the reliability function, it is also possible to calculate the mean time to failure of the object using the equation MTTF = ∫₀^∞ R(t) dt. Substituting the approximate reliability function gives MTTF = m_0/ε ≈ 2212 min, i.e., about 36.9 h. This means that on average every 36 hours there is a failure that requires the intervention of a mechanic. Such a result confirms the earlier concern related to the high asymptotic indications for the states S6 and S7. Therefore, it is necessary to diagnose in detail the process of repairs carried out in the examined enterprise.
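A short numerical check of these reliability results, using the reported ε and m_0 (the exponential form of R(t) is the approximation given above):

```python
import numpy as np

eps, m0 = 0.0819, 181.13          # values reported for the analysed process

rate = eps / m0                    # failure rate, per minute
R = lambda t: np.exp(-rate * t)    # approximate reliability function R(t)

mttf_min = m0 / eps                # MTTF = integral of R(t) over [0, inf)
print(f"R(60 min) = {R(60.0):.3f}")
print(f"MTTF = {mttf_min:.0f} min = {mttf_min / 60.0:.1f} h")  # ~36.9 h
```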
Conclusions
The paper presents an analysis of a real system of operations of means of transport, together with a mathematical model that allows the study of their level of readiness to carry out transportation tasks using the semi-Markov model. The calculated characteristics made it possible to diagnose the analysed system and assess its level of technical readiness. Areas which are a source of increasing the organization's potential also need to be inspected in detail.
Moreover, the author's intention was to show that although the form of real observations, especially for such complex technical systems, often differs from parametric distributions, there are methods of examination allowing for the determination of technical readiness and reliability of such systems.
Determination of the reliability of transport systems is an important management component, allowing one to assess and, if necessary, modify the adopted strategies. It enables fleet readiness surveying and the adjustment of maintenance and repair processes to the tasks performed. It must therefore be carried out correctly. The mathematical modelling of transport processes should be preceded by a thorough preliminary analysis covering both their qualitative characteristics and statistical research. In addition, the selected mathematical methods should be used only if the assumptions for their use are confirmed to be met.
The semi-Markov processes presented in the paper are not a popular tool for the evaluation of complex transport systems, therefore such examples are scarcely found in the literature. The reason for this is that the form of real observations, especially for such multidimensional technical systems, often differs from parametric distributions, which makes the analysis much more difficult. As such studies are isolated instances and there is a small number of real examples, the method presented in this paper fills this gap to some extent, which was one of the main assumptions of the author. Moreover, the intention was to emphasize the necessity to carry out qualitative and quantitative evaluation of the collected empirical data before starting to estimate the model parameters.
As a result of the transport system analysis using the semi-Markov model, values of selected indicators were obtained, which were then used to determine the probability of vehicles staying in the distinguished operating states. Such a model allows for a qualitative and quantitative assessment of reliability, identification of weak links and areas requiring detailed control, which can result in an increase of the organization's potential.
If a higher level of detail needs to be achieved, the model may be further developed by distinguishing additional activities in each of the operating states. Such a solution will make it possible, for example, to identify the cause of the increased limit values for the vehicles being in service and repair states. Diagnostics of the transport system with the use of semi-Markov processes based on the forecasting of selected operating indicators may therefore constitute good support for evaluation and control processes in such enterprises.
Disclosure statement
The author declares that she has no relevant or material financial interests that relate to the research described in this paper.
"Mathematics"
] |
Design and Validation of an Adjustable Large-Scale Solar Simulator
This work presents an adjustable large-scale solar simulator based on metal halide lamps. The design procedure is described with regard to the construction and spatial arrangement of the lamps and the designed optical system. Rotation and translation of the lamp array allow setting the direction and the intensity of the luminous flux on the horizontal plane. To validate the built model, irradiance nonuniformity and temporal instability tests were carried out, assigning Class A, B, or C for each test according to the International Electrotechnical Commission (IEC) standards requirements. The simulator meets the Class C standards on a 200 × 90 cm test plane, Class B on 170 × 80 cm, and Class A on 80 × 40 cm. The temporal instability returns Class A results for all the measured points. Lastly, a PV panel is characterized by tracing the I–V curve under simulated radiation, under outdoor natural sunlight, and with a numerical method. The results show a good approximation.
Introduction
The transient nature of environmental parameters, especially solar radiation, makes outdoor technology testing uncontrollable. Solar simulators are devices that provide approximately natural sunshine with an artificial light source and allow controllable indoor testing under desirable conditions. In fact, solar radiation, in a natural environment, changes continuously over days and seasons, while indoor tests are programmable, repeatable, and stable under laboratory conditions. This way it is possible to change a single parameter (for example, the intensity or the incidence angle of the luminous flux) and to analyze how the system is affected by each of them. Crucial aspects of the design of a solar simulator are the correct spectral matching, irradiance uniformity, and temporal stability. When testing PV cells, the spectrum is paramount since it influences the conversion efficiency of the incident radiation. Flux uniformity and stability have to be acceptable so that the average value is as stable as possible on the target area. Early solar simulators were designed in the 1960s by the National Aeronautics and Space Administration (NASA) for spacecraft ground-testing. Recently, solar simulators have been used in testing, calibrating, and characterizing photovoltaic panels and testing dashboards, steering wheels, and airbags in the automotive industry. Indeed, they have been used in many different applications, such as conducting aging tests on PV materials, investigating the effects of light on the growth of plants and algae, and testing of thermal or thermo-chemical devices for use in chemical reforming.
In thermal applications, solar simulators are classified as low- and high-flux, depending on the output flux, from a few suns (1 sun = 1 kW/m²) to more than 30 suns, respectively.
The major components of a solar simulator are the light source with its power supply and the optics and filters that modify the output beam. Light-source selection is the primary step to obtain suitable simulated radiation. Generally, four types of lamps are used in a solar simulator.
Materials and Methods

The proposed solar simulator is based on metal halide lamps, model Philips MASTER Colour CDM-T MW eco/360 W842 E40 [22] (Figure 1). It is a ceramic lamp with a clear tubular external bulb. In metal halide lamps, the electric arc is generated through a gaseous mixture of vaporized mercury and metal halide compounds held at a pressure between 10 and 35 bar [23]. The lamps have been chosen for their long lifetime (99% after 6000 h and 90% after 16,000 h) and their relatively low cost. Each has a power of 360 W, powered with 3.6 A and 124 V, and a color temperature of 4200 K. The total power is around 7.2 kW.

The lamps are mounted at the focal point of parabolic mirrors to create a luminous flux of uniform intensity on the target surface (Figure 2a). In fact, rays coming from the focus, after being reflected, travel parallel to the optical axis, regardless of the distance of the surface. The theoretical path of the incident rays on a parabola is represented in Figure 2b, provided that the surface is perfectly smooth and follows the correct equation. The reflective material used is a thin sheet of aluminum, with an average reflection coefficient of 0.99 in the spectral range. Given the equation of a parabola, y = a·x² + b·x + c, the focal distance F is located at F = 1/(4a) from the vertex. In this work, parameters a, b, and c are set to the values of 6, 0, and 0, respectively. Accordingly, the focal distance is located at 4.1 cm from the mirror axis.

The proposed solar simulator is a multilamp pattern, with 20 lamps arranged in four rows of five lamps each (Figure 3). This creates a large target surface with the same luminous characteristics.
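A small Python check of this geometry, assuming x and y are expressed in metres (as the quoted 4.1 cm implies): it evaluates F = 1/(4a) and verifies that rays leaving the focus are reflected parallel to the optical axis.

```python
import numpy as np

a = 6.0                         # parabola y = a x^2 (b = c = 0), x, y in metres
F = 1.0 / (4.0 * a)             # focal distance of y = a x^2
print(f"focal distance = {F * 100:.2f} cm")   # ~4.17 cm, quoted as ~4.1 cm

focus = np.array([0.0, F])
for x in (0.02, 0.05, 0.08):
    p = np.array([x, a * x**2])            # reflection point on the mirror
    d = p - focus
    d /= np.linalg.norm(d)                 # unit ray leaving the focus
    n = np.array([-2.0 * a * x, 1.0])
    n /= np.linalg.norm(n)                 # unit surface normal at p
    r = d - 2.0 * np.dot(d, n) * n         # specular reflection of d
    print(f"x = {x:.2f} m -> reflected direction = ({r[0]:+.1e}, {r[1]:+.3f})")
```

The x-component of each reflected direction is numerically zero, confirming that the reflected beam is parallel to the optical axis whatever the lamp-target distance.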
The lamp system is mounted on a structure made of aluminum profiles. The parabolas were glued on specially shaped wooden boards and mounted on the aluminum structure (Figure 4a). The system is equipped with a vertical guide to adjust the lamp-target distance and vary the intensity of the simulated radiation. It can also rotate to simulate an incident radiation not perpendicular to the plane (Figure 4b). Figure 4 depicts the 3D CAD reproduction.
The electrical panels are located on the sides of the structure and are cooled by fans (Figure 5). The lamps are powered by a three-phase system. Each 220 V phase has an associated neutral phase; the first two phases feed a group of seven lamps each, while the third phase feeds the remaining six lamps. The 20 lamps are connected to the side boxes in two groups of 10, that is, a phase of seven plus three of the phase of six lamps. The parabola arrangement is not symmetrical with respect to the center of the structure, as the pulley is mounted for the handling of lamps.
Results
A series of tests were carried out according to the IEC 60904-9 standard, "Solar simulator performance requirements", namely nonuniformity of irradiance and temporal instability. These are useful parameters to define if a solar simulation meets the requirements of Class A, B, or C, the ranges of which are defined in Table 1. The experiments were carried out in the Department of "Industrial Engineering and Mathematical Sciences" of the Polytechnic University of Marche in Ancona (Italy).
Radiation Nonuniformity
In the experimental stage, the spatial uniformity was investigated by evaluating the irradiance on the test surface of the solar simulator. A pyranometer, model DPA/ESR 154, was used to measure the global solar radiation on the test surface. The device can measure from 0 to 2000 W/m², with a sensitivity of 10.88 µV/Wm⁻², a linearity of 0.75%, and a sampling time of 1 s. The target area was divided into 200 squares, each with a side of 10 cm. The irradiation was thoroughly measured by moving the pyranometer inside the rectangular grid. The pyranometer was positioned in the center of each square. The total measured area is 200 × 100 cm (Figure 6). Data were acquired with the National Instruments NI-9205, a module for the acquisition of input voltage. The measured signal was converted from mV to W/m² in the software LabVIEW to obtain a more understandable graph. Figure 7 shows the arrangement of the measuring points, from position A1 to J20. The irradiance was assumed to be constant during the entire testing process because of the generally constant electrical power supply.
According to the IEC 60904-7 standard, the nonuniformity is defined by the following equation:

Nonuniformity (%) = (E_max − E_min)/(E_max + E_min) × 100%,

where E_max is the maximum irradiance measured by the detector (in W/m²) and E_min is the minimum irradiance (in W/m²). The highest measured value on the test area was about 1038 W/m², at positions C5, E19, and H19, and the lowest was 729 W/m², at position A1. With the results of the experimental tests, the classes in the test area can be determined in accordance with the quality standards. Table 2 lists the percentage error deviation from the maximum irradiation for the three classes. In particular, Class C (depicted by the blue rectangle) consists of the area that spans from B20 to J1, with a dimension of 200 × 90 cm. The Class A area is composed of the cells ranging from E17 to H10, considering its relative maximum in cell E17, equal to 1031 W/m² (Table 3); it has an area of 80 × 40 cm. The asymmetric position of the rectangles is due, as described above, to the slight displacement of the parabolas for the mounting of the pulley. Table 4 summarizes the dimensions of the three classes obtained, showing the maximum, the minimum and the average value in each of them.
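A minimal sketch of the classification step, assuming the IEC class limits of Table 1 (2%, 5%, and 10% for Classes A, B, and C) and a hypothetical irradiance grid in place of the pyranometer readings:

```python
import numpy as np

def nonuniformity(E):
    """IEC nonuniformity (%) over an irradiance map E (W/m^2)."""
    return (E.max() - E.min()) / (E.max() + E.min()) * 100.0

def iec_class(nu):
    """Map a nonuniformity percentage to an IEC class."""
    for limit, label in ((2.0, "A"), (5.0, "B"), (10.0, "C")):
        if nu <= limit:
            return label
    return "out of class"

# Hypothetical 10 x 20 grid of readings (rows A..J, columns 1..20).
rng = np.random.default_rng(2)
E = 950.0 + 60.0 * rng.random((10, 20))

nu = nonuniformity(E)
print(f"whole plane: {nu:.2f}% -> Class {iec_class(nu)}")
# Sub-areas are classified the same way, e.g. a central block:
print(f"central block -> Class {iec_class(nonuniformity(E[3:7, 6:14]))}")
```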
Radiation Instability over Time
In accordance with IEC 60904-9, the test comprises a long-term irradiation measurement in a specific point on the test surface for a time equal to 10 min. The lamps require a warm-up time to reach the nominal power (approximately 3 min). The irradiance value was controlled and maintained constant at 1000 W/m 2 for the entire duration of the test.
The temporal instability is given by the following equation:

Temporal instability (%) = (E_max − E_min)/(E_max + E_min) × 100%,

where E_max is the maximum irradiance at the selected point during the test and E_min is the minimum irradiance at the same point, both measured in W/m².
On the test area, four specific points were selected to test the temporal instability: H10, H17, B6, and B13. During the 10-min test, the maximum and the minimum irradiance values were recorded, and the percentage error was calculated by Equation (3). Table 5 summarizes the results, and all the measured points meet the Class A standards since the error is less than 2%. Figure 9 shows the complete trends.
I-V Curve
Further investigation was conducted by characterizing the I-V curve of a PV panel. The experimental analysis was carried out with tests performed under the artificial radiation and with tests outside with natural solar radiation. The panel was connected to a variable resistor, and the curve was obtained by varying the value of the resistance from zero to infinity [24] (Figure 10). In each step, the voltage and the current were simultaneously measured. A 40 W 12 V panel with monocrystalline cells was chosen, with dimensions of 650 × 505 × 30 mm. The module voltage (Vmp) was 17.8 V, the nominal current (Imp) was 2.3 A, the short-circuit current (Isc) was 2.7 A, and the open-circuit voltage (Voc) was 21.3 V. The panel was composed of 36 cells.
To validate the experimental results, a numerical method was implemented in the software MATLAB. The relationship between voltage and current is expressed as a function of I_sc,TG and V_oc,TG, the short-circuit current and the open-circuit voltage at a given temperature (T) and solar irradiance (G), respectively, and of the shape parameter s [25]. The temperature and irradiance dependences of the I_sc and V_oc parameters are the following:

I_sc,TG = (G/G_STC)·[I_sc,STC + µ_Isc·(T − T_STC)],
V_oc,TG = V_oc,STC + µ_Voc·(T − T_STC) + N_s·V_t,T·log(G/G_STC),

where STC indicates standard test conditions (cell temperature equal to 25 °C, solar irradiance equal to 1000 W/m², and an average solar spectrum at air mass AM1.5); µ_Isc and µ_Voc are the current-temperature and voltage-temperature coefficients, respectively; N_s is the number of solar cells of the PV panel; and V_t,T is the thermal voltage, given by

V_t,T = k·T/q

as a function of the cell temperature T, the Boltzmann constant k, and the value of the electron charge q [26].

Iterations are required to determine the shape parameter s. The value is initially set to 1, and it is increased by a fixed amount after each iteration. The procedure ends when the difference (called error) between the fill factor and the normalized electric power is less than the tolerance established at the beginning. The flowchart is summarized in Figure 11 [25].
A valid expression for the fill factor FF_T,G is given by [27]:

FF_T,G = FF_0·(1 − r_s),

with FF_0 given by [28]:

FF_0 = (v_oc,T − ln(v_oc,T + 0.72))/(v_oc,T + 1), where v_oc,T = V_oc,T/(N_s·n·V_t,T),

where FF_0 is the fill factor of the ideal PV panel without resistive effects, r_s is the normalized series resistance, v_oc,T is the normalized thermal voltage, n is the diode quality factor, R_s is the internal resistance, and FF is the fill factor at STC conditions. Figure 12 compares the three curves. The simulated solar radiation curve shows a good approximation when compared to the curves of both the outdoor experiment and the numerical method.
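The following sketch illustrates the shape-parameter iteration with the panel's datasheet values. Since the explicit I-V relationship of Ref. [25] is not reproduced above, a simple illustrative model, I(V) = I_sc·(1 − (V/V_oc)^s), stands in for it, and the diode factor n and the normalized series resistance r_s are assumed values; only the iteration logic mirrors the flowchart of Figure 11.

```python
import numpy as np

k, q = 1.380649e-23, 1.602176634e-19   # Boltzmann constant, electron charge

# Datasheet values of the tested 40 W panel; n and rs are assumptions.
Isc, Voc, Ns, T = 2.7, 21.3, 36, 298.15
n, rs = 1.3, 0.05

Vt = k * T / q                          # thermal voltage V_t,T
voc = Voc / (Ns * n * Vt)               # normalized open-circuit voltage
FF0 = (voc - np.log(voc + 0.72)) / (voc + 1.0)
FF = FF0 * (1.0 - rs)                   # fill factor target

V = np.linspace(0.0, Voc, 2000)
s, tol = 1.0, 1e-3
while True:
    I = Isc * (1.0 - (V / Voc) ** s)    # illustrative shape-parameter model
    p = np.max(I * V) / (Isc * Voc)     # normalized electric power
    if abs(FF - p) < tol or p > FF:     # stop at the tolerance, as in Fig. 11
        break
    s += 0.01                           # fixed increment per iteration
print(f"FF target = {FF:.3f}, shape parameter s = {s:.2f}")
```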
Conclusions
A metal halide solar simulator for indoor applications is presented in this work. The optical system based on parabolic reflection proves to be a good choice because lamps can be arranged in a matrix pattern and obtain the same flux conditions. This way, a large target surface can be obtained. The disadvantage is that parabolas require a high degree of precision during their construction to obtain the exact shape designed in the theoretical phase. Furthermore, in the assembly phase, a perfect contact between the aluminum parabola and the wooden forms must be guaranteed. An additional advantage is the mobile structure of the solar simulator because it grants light intensity regulation to reproduce, for instance, daily irradiance profiles and the testing of devices under a desirable incidence angle. Spatial uniformity and temporal stability tests provided excellent results. Classes C, B, and A were obtained on the test planes of 200 × 90 cm, 170 × 80 cm, and 80 × 40 cm, respectively. The temporal instability test met Class A standards for all the measured points. In the I-V curve characterization of a PV panel, the results are very similar to those obtained with the external test under natural radiation and with a software-implemented numerical method. Further improvements in realizing a larger target area and extending the spectrum in the infrared and UV area can be investigated. This can be achieved by adding infrared or UV emitters in the right amount, position, and power in the already present lamp array.
Measurement of Small-Slope Free-Form Optical Surfaces with the Modified Phase Retrieval
In this paper, we demonstrate the use of modified phase retrieval as a method for the measurement of small-slope free-form optical surfaces. The essence of the modified phase retrieval algorithm is that only two defocused images are needed to estimate the wave front with an accuracy similar to that of traditional phase retrieval, but with less image capturing and computation time. An experimental arrangement used to measure small-slope free-form optical surfaces with modified phase retrieval is described. The results of these experiments demonstrate that the modified phase retrieval method can achieve measurements comparable to those of a standard interferometer.
Introduction
With the rapid development of national defense, aerospace, and other fields, the demand for high-precision and high-quality photoelectric products is increasing, and these photoelectric products are gradually developing toward miniaturization. Using free-form surfaces, the imaging quality of the optical imaging system can be greatly improved; the illumination uniformity of the optical illumination system can be evidently improved; and the transmission efficiency of the information transmission system can also be remarkably improved. With the recent advances in optical design and fabrication, the free-form optical surface is commonly used because of its better performance and compactness [1][2][3].
Because the free-form optical surface has more degrees of freedom for correcting optical aberrations, high-precision free-form optical surface metrology remains difficult and is still a challenge [4][5][6]. Therefore, in recent years, many scholars have studied optical testing methods for free-form surfaces, and a number of metrology methods have been developed [7][8][9]; these methods are roughly divided into contact and non-contact metrology. Because contact metrology can easily scratch the surface, non-contact metrology is preferred for high-precision optical surfaces. Standard interferometers cannot measure free-form surfaces, not only because standard interferometry has typical disadvantages, such as high sensitivity to vibrations or temperature fluctuations, restricting its usage to strictly controlled laboratory conditions, but also because the fringes become too dense and resolvable interference fringes cannot be generated.
In order to solve the problem of free-form optical surface metrology, we introduce a feasible non-contact measurement method called phase retrieval (PR), a high-precision alternative to interferometry for optical testing, with the advantages of compactness, low cost, and a stable system. PR has emerged as a potential solution for free-form surface metrology [10][11][12][13]. As PR is a wave-front sensing method with a simple experimental arrangement, it has been used in system measurement and alignment, for example in the Hubble Space Telescope and the James Webb Space Telescope [14,15]. Besides, PR has also been applied to test spherical mirror surfaces and rotationally symmetric aspherical surfaces [16,17]. The algorithm is the core of PR. Some PR algorithms suffer from limited robustness, multiple solutions, and stagnation; for example, the steepest gradient search in PR is not the fastest to converge and usually falls into a local minimum, while the conjugate gradient search method is more robust than the steepest gradient search. Thus, we introduce a new PR algorithm, which potentially improves the efficiency of phase recovery, in order to overcome the limitations of traditional PR in its iterative uncertainty and slow convergence speed [18,19].
In this paper, we will first introduce the theory and the application of PR and then the improved PR algorithm in Section 2. In Section 3, the results and analysis of the experiment are presented. The conclusion is finally drawn in Section 4.
The Principle of PR
PR technology is based on the diffraction and propagation of coherent light. PR generally involves estimating a complex-valued phase distribution from known intensity distributions at some properly selected planes. It is an inverse problem in optics, which uses the Fourier transform relationship between the pupil and the in-focus plane to iteratively estimate the phase, a problem that suffers from non-uniqueness. Figure 1 shows the schematic layout of the PR principle. When a beam propagates along the optical axis, a diffraction field distribution forms at a given propagation distance. The reference wave emitted from the light source is incident on the measured mirror, and after reflection, the complex amplitude distribution of the output optical wave front contains the error information of the measured mirror. Using the recorded intensity images and the PR algorithm, one can accurately recover the surface error of the measured mirror [20]. The detector is placed at the focal plane of the wave front reflected or transmitted from the surface under test and takes a number of images, both in focus and defocused from the focal plane in both directions. The wave front can then be estimated from the known pupil size and the defocus amounts of the detector [21].
Assume that the aperture of the measured optical surface is $D$, the focal length is $Z$, and the laser wavelength is $\lambda$. The generalized pupil function is $f(x)$, whose amplitude is $|f(x)|$ and whose phase $\theta$ can be obtained by Zernike polynomial fitting, $\theta(x) = \sum_n \alpha_n Z_n(x)$, where the real numbers $\alpha_n$ are the polynomial coefficients of the first $n$ terms and $Z_n$ denotes the $n$th Zernike polynomial.
We can then write

$$f(x) = |f(x)|\, e^{i\theta(x)}, \qquad (1)$$

where $x$ is an M-dimensional spatial coordinate and $\theta$ is the wave-front distortion.
For a linear optical system with a defocus $\delta$ of the focal plane, the impulse response function $F(u)$ can be described as

$$F(u) = |F(u)|\, e^{i\psi(u)} = \mathcal{F}\left\{ |f(x)|\, e^{i[\theta(x) + \varepsilon(x,\delta)]} \right\}, \qquad (2)$$

where $x$ is the spatial coordinate of the pupil, $u$ is the coordinate of the image, and both are two-dimensional vector coordinates; $\psi$ is the phase of the impulse response, $\mathcal{F}$ is the two-dimensional Fourier transform, and $\varepsilon(x,\delta)$ is the wave-front aberration caused by the defocus $\delta$. Equation (1) encodes the a priori knowledge of the optical system, namely the size and shape of the pupil, and $|F(u)|^2$ is the image collected by the detector. We estimate the coefficients $\alpha_n$, and hence $\theta$ in Equation (1), from a number of measurements at different defocuses.
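To make this forward model concrete, the following Python sketch builds the generalized pupil from Zernike coefficients and propagates it to the detector plane with a discrete Fourier transform. The grid size, the three low-order modes, and the quadratic defocus phase are our illustrative assumptions, not values from this work.

```python
import numpy as np

def defocused_image(pupil_amp, zernike_modes, alpha, defocus_phase):
    """Forward model of Eqs. (1)-(2): generalized pupil |f(x)| e^{i theta(x)}
    with theta = sum_n alpha_n Z_n, plus a known defocus aberration eps(x, delta);
    the detector records the intensity |F(u)|^2."""
    theta = np.tensordot(alpha, zernike_modes, axes=1)     # theta(x) = sum_n alpha_n Z_n(x)
    g = pupil_amp * np.exp(1j * (theta + defocus_phase))   # generalized pupil function
    F = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(g)))  # propagate to the image plane
    return np.abs(F) ** 2                                  # intensity seen by the detector

# Illustrative circular pupil with three low-order modes (tilt, tip, defocus):
N = 256
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
r2 = x**2 + y**2
pupil = (r2 <= 1.0).astype(float)
Z = np.stack([x * pupil, y * pupil, (2 * r2 - 1) * pupil])
image = defocused_image(pupil, Z, alpha=np.array([0.1, -0.2, 0.5]),
                        defocus_phase=3.0 * r2 * pupil)
```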
The Modified Gradient Search Algorithm of PR
In this paper, we present a modified gradient search algorithm to solve the phase recovery problem [22,23]. Let $g_{m,k}$, $\theta_{m,k}$, $G_{m,k}$, $\phi_{m,k}$ denote the estimates of $f$, $\theta$, $F$, $\psi$ for the $m$th image at the $k$th iteration, and let $g_k$ denote the combined estimate of $f$ formed from all of the $g_{m,k}$ at iteration $k$. The update steps of the algorithm are repeated until the stopping condition is met, i.e., the limit on the number of iterations is reached or the objective function has descended to the appointed value. The objective function is defined as [24]

$$B_k = \frac{1}{N^2} \sum_m \sum_u \left[\, |G_{m,k}(u)| - |F_m(u)| \,\right]^2, \qquad (3)$$

where $N$ is the width of the collected images and $|F_m(u)|$ is the measured amplitude of the $m$th image. Because the estimated phase of $G_{m,k}(u)$ is constrained to equal $\phi_{m,k}(u)$, the objective can be expressed entirely in terms of the pupil estimate. We treat Equation (3) as the objective function, take its partial derivative with respect to each unknown quantity, and apply the gradient search algorithm, finally obtaining the estimate of the wave-front distortion $\theta$ for which $B_k$ is smallest. The key to applying the gradient search algorithm is the correct formulation of the objective function and of its partial derivatives with respect to each variable. Taking $g(x)$ as the unknown, we differentiate $B_k$ with respect to the real part and the imaginary part of $g(x)$; collecting the two yields the gradient of $B_k$ with respect to $g(x)$, in which each cross term appears together with its complex conjugate (denoted c.c.). Taking $\theta(x)$ as the unknown, the chain rule applied to Equation (3) yields $\partial B_k / \partial \theta(x)$. Finally, taking the Zernike coefficients as the unknowns and using $\partial\theta(x)/\partial\alpha_n = \partial\left[\sum_n \alpha_{n,k} Z_n(x)\right]/\partial\alpha_n = Z_n(x)$, we obtain

$$\frac{\partial B_k}{\partial \alpha_n} = \sum_x \frac{\partial B_k}{\partial \theta(x)}\, Z_n(x). \qquad (4)$$

With the objective function, Equation (3), and its derivative with respect to the Zernike coefficients, Equation (4), a mathematical optimization algorithm, such as the limited-memory BFGS algorithm, can be used to solve for the Zernike wave-front coefficient values [25][26][27].
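As a concrete illustration, the following minimal Python sketch performs this coefficient search, reusing the forward-model names from the sketch above. It lets scipy's L-BFGS-B approximate the gradient numerically rather than coding the analytic derivative of Equation (4), so it is a sketch under our own assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def objective(alpha, pupil, Z, defocus_phases, measured_amps, N):
    """B_k of Equation (3): normalized squared difference between estimated
    and measured Fourier amplitudes, summed over the m defocused images."""
    theta = np.tensordot(alpha, Z, axes=1)
    B = 0.0
    for eps, A in zip(defocus_phases, measured_amps):
        G = np.fft.fft2(pupil * np.exp(1j * (theta + eps)))
        B += np.sum((np.abs(G) - A) ** 2) / N**2
    return B

# measured_amps holds sqrt(intensity) for each captured image; L-BFGS-B then
# refines the Zernike coefficient vector (gradient approximated numerically
# here, instead of the analytic derivative of Equation (4)):
# result = minimize(objective, np.zeros(n_modes),
#                   args=(pupil, Z, defocus_phases, measured_amps, N),
#                   method="L-BFGS-B")
# theta_hat = np.tensordot(result.x, Z, axes=1)
```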
Experimental Demonstrations
Here, we demonstrate the ability of the modified phase retrieval discussed in Section 2 to measure small-slope free-form surfaces. Figure 2a shows the diagram of the measuring setup with the beam path. In addition, we built the experimental setup, shown in Figure 2b, to measure a thin, deformable mirror surface (Figure 2c). The measured mirror is 35 mm (length) × 35 mm (width) × 15 mm (thickness), and three screws on the back surface of the mirror were used to apply different forces in order to change the surface shape. The collimated laser beam (wavelength 632.5 nm) from the WYKO interferometer passed through the beam splitter and reached the measured surface. The light reflected from the measured surface was directed by the beam splitter to the focusing lens, with a focal length of 150 mm, and then reached the detector. The detector was placed on a computerized moving stage, which enabled it to take images as it moved away from the focal plane.
In the experiment, the beam size was limited to 10 mm by a stop. Firstly, we built the experimental system, fixed the thin measured mirror in the stage, observed the fringes in the WYKO interferometer to maximize the contrast of the interference fringes, added the splitting prism to the optical path, and adjusted the components to be coaxial with the measured mirror and pinhole. Secondly, we adjusted the positions of the lens and the camera so that the light beam reflected from the measured surface passed through the prism and entered the camera. We captured seven images with the camera, with defocus amounts of 0, ±1.2, ±1.7, and ±2.2 mm. Thirdly, we processed the collected images with the modified PR algorithm to obtain the mirror surface information. In order to make an effective comparison with the WYKO interferometer, we did not move the experimental setup and measured the deformed surface again. We first measured a reference mirror with 1/20-wave flatness; its measurement was treated as the system error and subtracted from the free-form surface measurements. The seven images at different defocuses used by the modified PR are shown in Figure 3a. Figure 3b shows the estimated surface shape recovered by the modified PR from two of the images in Figure 3a; obtaining Figure 3b took 5.50 s. Figure 3c shows the estimated surface shape recovered from all seven images; obtaining Figure 3c took 75.35 s, so the two-image modified PR is roughly 14 times faster. Figure 3d shows the difference between the estimated surface shapes recovered with all seven images and with the two images; the difference is very small. This experiment demonstrates that the proposed modified PR algorithm is feasible for surface metrology.
Figure 2. (a) The diagram of the measuring setup with the beam path; (b) the experimental setup of the PR system; and (c) the thin measured mirror. The left image is the front surface of the thin measured mirror and the right image is the back surface. Three screws were used to apply different forces to the measured mirror to change the surface shape.
To demonstrate the feasibility of the proposed method in free-form surface metrology, we applied different forces to the thin mirror shown in Figure 2c and took two defocused images for each force to estimate the free-form mirror surface with the improved PR algorithm. Figure 4a,b show, respectively, the mirror surface estimated by PR and the mirror surface measured with the WYKO interferometer. Comparing the results of the modified PR with those of the WYKO interferometer, the RMS difference is less than 2.777 nm, which shows that the proposed improved PR method is feasible for measuring free-form surfaces. The difference in the PV is relatively large, partly for the following reasons. Firstly, the WYKO interferometer applies a smoothing process that the solution process of the modified PR method does not. Secondly, during the solution process we calculated the whole circular mask area with the modified PR, whereas Figure 4b was obtained after matting (removing boundary burrs) with the WYKO interferometer. Consequently, although the RMS over the whole mask is not greatly affected, the PV can differ considerably. From the above two experimental results, the process of obtaining Figure 3b from the two images took 5.50 s while obtaining Figure 3c from all seven images took 75.35 s, so the improved PR algorithm is roughly 14 times faster. Besides, Figure 4a,b show, respectively, the surface estimated by PR and the surface measured with the WYKO interferometer; the differences between our technique and the WYKO interferometer in RMS and PV are very small, which demonstrates that our improved PR method can achieve an accuracy comparable to that of the WYKO interferometer.
These two points demonstrate the feasibility and effectiveness of our technique in the measurement of small-slope free-form surfaces.
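For readers reproducing this comparison, the RMS and PV of a residual map between two surface measurements can be computed as below. This is a generic, hypothetical helper (the boolean mask and the piston-removal choice are our assumptions), not the evaluation code used in this work.

```python
import numpy as np

def rms_pv(surface_a, surface_b, mask):
    """RMS and peak-to-valley (PV) of the difference between two surface maps,
    evaluated only inside the boolean pupil mask."""
    diff = (surface_a - surface_b)[mask]
    diff = diff - diff.mean()  # remove piston before comparing
    return np.sqrt(np.mean(diff**2)), diff.max() - diff.min()
```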
Conclusions
In this paper, we have presented and experimentally demonstrated an improved PR algorithm, based on the traditional gradient search algorithm, that improves the efficiency of phase recovery. The feasibility of the proposed method has been demonstrated by comparing the measurements of a deformed thin mirror with the measurements from a WYKO interferometer. This work has additionally shown that PR technology is a viable and realistic method for small-slope free-form surface measurement. We are now researching large-slope free-form surface measurement with transverse-translation-diversity phase retrieval, and we hope to report this new work in the near future.
"Physics",
"Engineering"
] |
Orientia tsutsugamushi Stimulates an Original Gene Expression Program in Monocytes: Relationship with Gene Expression in Patients with Scrub Typhus
Orientia tsutsugamushi is the causal agent of scrub typhus, a public health problem in the Asia-Pacific region and a life-threatening disease. O. tsutsugamushi is an obligate intracellular bacterium that mainly infects endothelial cells. We demonstrated here that O. tsutsugamushi also replicated in monocytes isolated from healthy donors. In addition, O. tsutsugamushi altered the expression of more than 4,500 genes, as demonstrated by microarray analysis. The expression of type I interferon, interferon-stimulated genes and genes associated with the M1 polarization of macrophages was significantly upregulated. O. tsutsugamushi also induced the expression of apoptosis-related genes and promoted cell death in a small percentage of monocytes. Live organisms were indispensable for the type I interferon response and apoptosis and enhanced the expression of M1-associated cytokines. These data were related to the transcriptional changes detected in mononuclear cells isolated from patients with scrub typhus: the microarray analyses revealed the upregulation of 613 genes, including interferon-related genes, and some features of M1 polarization were observed in these patients, similar to what was observed in O. tsutsugamushi-stimulated monocytes in vitro. This is the first report demonstrating that monocytes are clearly polarized in vitro and ex vivo following exposure to O. tsutsugamushi. These results improve our understanding of the pathogenesis of scrub typhus, in which interferon-mediated activation of monocytes and their subsequent polarization into an M1 phenotype appear critical, and may point to new tools for the diagnosis of patients with scrub typhus.

Author Summary

Scrub typhus, a life-threatening disease that occurs in the Asia-Pacific region, is a public health problem, since a billion people are at risk and one million new cases arise each year. Orientia tsutsugamushi, the causal agent of scrub typhus, is an obligate intracellular bacterium that mainly infects endothelial cells. We demonstrated here that O. tsutsugamushi grew in monocytes isolated from healthy donors and altered the expression of a large number of genes, including interferon-related genes, genes associated with the M1 polarization of macrophages and apoptosis-related genes. Importantly, these data were related to the transcriptional changes detected in mononuclear cells isolated from patients with scrub typhus. Indeed, the microarray analyses revealed the upregulation of numerous genes, including interferon-related genes, and some features of M1 polarization. This is the first report demonstrating that monocytes are clearly polarized in vitro and ex vivo following exposure to O. tsutsugamushi. These results improve our understanding of the pathogenesis of scrub typhus and may point to new tools for the diagnosis of patients with scrub typhus.
Introduction
Orientia tsutsugamushi is the causative agent of scrub typhus, a life-threatening disease characterized by fever, lymphadenopathy, rash and eschar that can be complicated by interstitial pneumonitis, meningitis and myocarditis [1]. The proper diagnosis of scrub typhus can be difficult due to the non-specific initial symptoms that are frequently found in other acute febrile illnesses. While scrub typhus is confined geographically to the Asia-Pacific region, a billion people are at risk and one million new cases arise each year. As O. tsutsugamushi is transmitted to humans by the bite of larval trombiculid mites, people who inhabit regions infested with these vectors are at high risk of acquiring scrub typhus [2]. To date, no effective strategy has succeeded in generating long-lasting, protective immunity to this particular infection, despite aggressive attempts to develop a prophylactic vaccine [3].
Due to the significant genetic and phenotypic differences in its cell wall, including the absence of peptidoglycan and lipopolysaccharide (LPS), O. tsutsugamushi has been classified as a new genus that is distinct from the Rickettsia genus [4]. The complete genome sequence of two O. tsutsugamushi strains (Boryong and Ikeda) has recently been described. The O. tsutsugamushi genome contains several repetitive sequences, including genes for conjugative type IV secretion systems (tra genes) [5,6]. O. tsutsugamushi is an obligate intracellular bacterium that can invade a variety of cell types both in vitro and in vivo. It has been recently shown that O. tsutsugamushi can exploit α5β1 integrin-mediated signaling and the actin cytoskeleton to invade HeLa cells [7]. Another study reported that following phagocytosis by L929 cells, O. tsutsugamushi rapidly escapes the phagosome and enters the cytosol [1]. O. tsutsugamushi also infects endothelial and fibroblast cell lines through clathrin-mediated endocytosis [8]. Once inside the cell, O. tsutsugamushi moves along microtubules to the microtubule-organizing center in a dynein-dependent manner [9]. In experimental animals, O. tsutsugamushi infects peritoneal mesothelial cells [10], macrophages [11] and polymorphonuclear leukocytes [12]. In humans, O. tsutsugamushi has been detected in peripheral blood mononuclear cells (PBMCs) from patients with acute scrub typhus [13].
The mechanisms governing the interaction between O. tsutsugamushi and host cells are only partially understood. It has been recently demonstrated that the expression of approximately 30% of bacterial genes is modulated when O. tsutsugamushi is cultured in eukaryotic cells. When compared to the bacterial gene expression seen in the L929 fibroblast cell line, the expression of a number of bacterial genes involved in translation, protein folding and secretion is downregulated in J774 macrophages, and this decreased expression correlated with the reduced growth of O. tsutsugamushi in macrophages [14]. Infection with O. tsutsugamushi most likely has many effects on the human immune response. In vitro studies have shown that O. tsutsugamushi induces the expression of genes encoding chemokines, including MCP-1 (CCL2), RANTES (CCL5) and IL-8 (CXCL8), in human endothelial cells [15,16]. In patients with scrub typhus, the serum level of pro-inflammatory cytokines (e.g. TNF, IL-12p40, IL-15, IL-18 and IFN-γ) is also increased [17], demonstrating that O. tsutsugamushi infection is accompanied by an inflammatory response [18]. The circulating levels of chemokines such as CXCL9 (MIG) and CXCL10 (IP-10), which are known to attract Th1 cells, cytotoxic T cells and NK cells, and of molecules such as granzymes A and B, which are released following the degranulation of cytotoxic lymphocytes, are also increased [18].
In this paper, we report that O. tsutsugamushi induces large changes in gene transcription in naïve human monocytes. In addition to genes encoding inflammatory cytokines and chemokines, O. tsutsugamushi upregulates the expression of genes involved in the type I IFN pathway and genes involved in apoptosis. Interestingly, these in vitro results were related to the expression of genes involved in the immune response, including the IFN response, in patients with scrub typhus. Our study highlights the role of IFN-mediated monocyte activation in the pathogenesis of scrub typhus.
Ethics Statement
Blood samples from patients and controls were collected after informed, written consent was obtained from each participant, and the study was conducted with the approval of the Ethics Committee of Siriraj Hospital, Bangkok, Thailand.
Patients
Ten milliliters of blood was collected from patients with acute undifferentiated fever who were seen at Siriraj Hospital or Ban Mai Chaiyapot Hospital. The clinical status of each patient was recorded. Within two hours of blood drawing, PBMCs were separated by Ficoll density gradient centrifugation. The PBMCs were immediately lysed in Trizol reagent (Invitrogen, Carlsbad, CA), as recommended by the manufacturer, and the lysates were stored at −80°C until further analysis. The study participants were retrospectively divided into the following three groups: healthy controls (individuals without any of the four infections), patients with scrub typhus (n = 4) and an infected control group consisting of patients with murine typhus (n = 7), malaria (n = 4) or dengue (n = 7). Patients with evidence suggesting co-infections or those with malignancies were excluded. The diagnostic criteria for scrub typhus were the presence of circulating O. tsutsugamushi-specific IgM with a titer greater than 1:400 in serum from patients with acute disease and/or O. tsutsugamushi-specific IgG with at least a four-fold increase in titer. The criteria for murine typhus were a serum Rickettsia typhi-specific IgM titer greater than 1:400 in patients with acute disease and/or at least a four-fold increase in IgG titer. The criteria for dengue virus infection were a dengue virus-specific IgM titer greater than 1:1,280, as determined using enzyme-linked immunosorbent assay (ELISA), and/or a positive result for dengue RNA using RT-PCR. Malaria infection was determined by the detection of Plasmodium species in blood films observed using a light microscope. Patients with murine typhus, malaria or dengue presented with a lower absolute number of leukocytes than patients with scrub typhus, but the lymphocyte/monocyte ratio was similar (Table S1). Ten healthy individuals were included in the study as controls, and the extracted RNA was pooled to reduce the effect of inter-individual variability. The reproducibility of this procedure was tested using two different pools consisting of 5 individuals each.
O. tsutsugamushi culture and isolation
O. tsutsugamushi, strain Kato (CSUR R163), was propagated in L929 cells cultured in minimum essential medium (MEM) supplemented with 5% fetal bovine serum (FBS) and 2 mM L-glutamine (Invitrogen, Cergy Pontoise, France), as recently described [19]. When almost 100% of the cells were infected, as determined using May-Grünwald Giemsa (Merck, Darmstadt, Germany) staining, the cells were harvested, lysed using glass beads and centrifuged at 500 × g for 5 min to remove cell debris. The supernatants were centrifuged at 2,000 × g for 10 minutes to collect bacterial pellets. The isolated bacteria were frozen in MEM containing 20% FBS and 5% DMSO until use. The titer of the supernatants was determined as described previously [8,20] with slight modifications. Briefly, the bacteria were serially diluted fivefold and incubated with L929 cells grown in 24-well plates. After 2 hours, free bacteria were removed and the infected L929 cells were cultured for 2 days in MEM containing 5% FBS and 0.4 mg/ml daunorubicin (Biomol, Hamburg, Germany), which partially inhibits cell growth [21]. The infection of the L929 cells was quantified using indirect immunofluorescence with pooled serum from Thai patients with scrub typhus at a dilution of 1:400 and fluorescein isothiocyanate-conjugated goat anti-human IgG (BioMérieux, Marcy l'Étoile, France) diluted 1:200 as a secondary antibody. The infected-cell counting units (ICUs) of O. tsutsugamushi were defined as (the total number of cells used in the infection) × (the percentage of infected cells) × (the dilution rate of the bacterial suspension)/100. In some experiments, O. tsutsugamushi organisms were killed by heating at 100°C for 5 minutes.
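For illustration, the ICU definition above maps directly to a small helper function. The function name and the example numbers below are ours, for illustration only; they are not taken from the original protocol.

```python
def infected_cell_counting_units(total_cells, percent_infected, dilution_rate):
    """ICU = (total cells in the infection) x (% infected cells) x (dilution rate) / 100."""
    return total_cells * percent_infected * dilution_rate / 100.0

# e.g. 2e5 cells with 1.5% infected at a 1:625 dilution (dilution rate 625):
# icu = infected_cell_counting_units(2e5, 1.5, 625)  # -> 1.875e6 ICU
```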
Infection of human monocytes
PBMCs were isolated from leukopacks (Établissement Français du Sang, Marseille, France) over a Ficoll gradient (MSL, Eurobio, Les Ulis, France) and incubated in 24-well plates for 1 hour. Adherent cells were designated as monocytes, since more than 90% of them expressed CD14, as previously described [22]. Monocytes (1.5 × 10⁵ per assay) were incubated with 3 × 10⁵ O. tsutsugamushi organisms in RPMI 1640 containing 10% FBS, 20 mM HEPES and 2 mM L-glutamine (Invitrogen) for 2 hours. The monocytes were then extensively washed to remove free organisms and cultured for the indicated times. The uptake and the intracellular fate of the O. tsutsugamushi organisms were determined using immunofluorescence and quantitative real-time PCR (qPCR). Immunofluorescence was performed using pooled serum from Thai patients with scrub typhus and a standard protocol. Cells were then examined by fluorescence and laser scanning microscopy using a confocal microscope (Leica TCS SP5, Heidelberg, Germany), as recently described [23].
To assess bacterial DNA, the monocytes were incubated in 0.1% Triton X-100, and DNA was extracted in a 150 µl volume using a QIAamp Tissue Kit (Qiagen, Courtaboeuf, France), as recommended by the manufacturer. The number of bacterial DNA copies was calculated using the TaqMan system (Applied Biosystems, Warrington, UK) with a 5 µl DNA sample. The selected primers and probes were designed based on the available DNA sequence of O. tsutsugamushi strain Boryong (complete genome, GenBank ref. NC_009488.1) and were the following: forward (3235-3257), 5′-AAGCATAGGTTACAGCCTGGWGA-3′; reverse (3346-3373), 5′-ACCCCAACGGATTTAATACTATATCWAC-3′; probe (3307-3338), 5′-FAM-CCATCTTCAAGAAATGGCATATCTTCCTCAGG-TAMRA-3′. The resulting PCR product was 139 bp in size. Negative controls consisted of DNA extracted from uninfected monocytes. Each PCR run included a standard curve generated from ten-fold serial dilutions of a known concentration of O. tsutsugamushi DNA. The results are expressed as the total number of bacterial DNA copies.
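The standard-curve quantification described above can be sketched as follows; the Ct values and copy numbers in the comment are hypothetical, and this is a generic illustration of the approach rather than the software used in the study.

```python
import numpy as np

def copies_from_ct(ct_sample, ct_standards, copies_standards):
    """Estimate a copy number from a Ct value via a standard curve fitted to
    ten-fold serial dilutions of known O. tsutsugamushi DNA amounts."""
    slope, intercept = np.polyfit(ct_standards, np.log10(copies_standards), 1)
    return 10 ** (slope * ct_sample + intercept)  # Ct is ~linear in log10(copies)

# Hypothetical standards of 1e2..1e6 copies yielding these Ct values:
# n = copies_from_ct(25.0, [33.1, 29.8, 26.4, 23.1, 19.7], [1e2, 1e3, 1e4, 1e5, 1e6])
```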
Transcriptional profile of monocytes
Eight hours post-infection, total RNA was extracted from infected and uninfected monocytes using an RNeasy Mini kit (Qiagen) with DNase digestion, as recommended by the manufacturer. The quality of the isolated RNA was assessed with an Agilent 2100 Bioanalyzer and an RNA 6000 Nano Kit (Agilent Technologies, Massy, France). The concentration of RNA was determined using a NanoDrop 1000 spectrophotometer (Thermo Scientific, Wilmington, DE, USA). A total of eight RNA samples (four samples per condition) were then processed for microarray analysis. The RNA was amplified and Cy3-labeled cDNA was generated using the Agilent Low RNA Input Fluorescent Linear Amplification Kit (Agilent Technologies), as recommended by the manufacturer. The amplified cDNA was processed and hybridized to 4 × 44K microarray slides (Agilent Technologies). The scanned images were analyzed with Feature Extraction Software 10.5.1 (Agilent) using default parameters. The data processing and analyses were performed using the Resolver software 7.1 (Rosetta Inpharmatics, Cambridge, MA), as previously described [24], and the R and BioConductor software. The Rosetta intensity error model for single-color microarrays was used to perform inter-array normalization. Statistical analyses were performed using the significance analysis of microarrays (SAM) [25]. The median false discovery rate was approximately 5%. Genes with an absolute fold change (FC) greater than 2 and an error-model p value less than 0.01 were considered differentially modulated. The differentially expressed genes were classified based on their Gene Ontology (GO) category. The transcriptional profile of the infected monocytes was compared with the profile of monocytes stimulated with IFN-γ or IL-4 to assess their polarization status (manuscript in preparation) using R software and hierarchical clustering.
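The selection rule applied here (absolute fold change greater than 2 and p less than 0.01) can be expressed in a few lines; the function below is our illustrative restatement of that rule, not the Resolver or SAM code itself.

```python
import numpy as np

def differentially_modulated(log2_infected, log2_uninfected, p_values,
                             fc_cutoff=2.0, p_cutoff=0.01):
    """Flag probes with an absolute fold change > 2 and an error-model
    p value < 0.01 (inputs are per-probe log2 intensities and p values)."""
    log2_fc = np.asarray(log2_infected) - np.asarray(log2_uninfected)
    return (np.abs(log2_fc) > np.log2(fc_cutoff)) & (np.asarray(p_values) < p_cutoff)
```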
Transcriptional profile of patient PBMCs
The RNA from PBMCs lysed in TRIZOL was extracted using the RNeasy Mini Kit (Qiagen), and the RNA of ten healthy individuals was divided into two distinct pools. The RNA amplification for microarray analysis was performed using the Illumina TotalPrep RNA Amplification Kit (Ambion, Austin, TX), as recommended by the manufacturer. Five hundred nanograms of amplified cRNA was hybridized onto Human-6 v2 BeadChips (Illumina, San Diego, CA), which contain more than 46,000 probes targeting all known human transcripts. The hybridized chips were scanned on an Illumina BeadStation 500 and assessed for fluorescent signal intensity using Illumina BeadStudio software. Normalization and all analyses of the microarray data were performed with the GeneSpring GX 9 demo version (Agilent Technologies). Briefly, quantile normalization was applied to the raw signal intensities. Next, the probes whose normalized expression level was below the twentieth percentile for every sample in any group of patients were excluded, leaving 38,630 probes for further analysis. We focused on two main sets of genes. The first comprised scrub typhus-associated genes, which were identified by performing a Welch ANOVA with Benjamini-Hochberg correction [27] across the four disease groups. A post hoc Tukey's HSD test was further applied to the Welch ANOVA results. Genes that were differentially expressed in scrub typhus patients compared to the other groups were then selected using the intersection rule. Unsupervised hierarchical clustering was performed for all patient groups on the basis of the Euclidean distance and average linkage. The second gene set was composed of scrub typhus-responsive genes, i.e., genes whose mean expression level in the scrub typhus group was at least twofold greater than the expression in healthy controls. The significance of the GO enrichment was evaluated using the hypergeometric formula with Benjamini-Yekutieli correction [27,28]. As the transcriptomic profiles of patient PBMCs and those of naïve monocytes stimulated with O. tsutsugamushi were obtained using Illumina BeadChips and Agilent microarrays, respectively, the two profiles were compared by building a virtual Agilent microarray. These genes were then analyzed using R software. The data were generated in compliance with the MIAME guidelines, have been deposited in the NCBI's Gene Expression Omnibus and are accessible using GEO Series accession number GSE16463 (http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?token=bxmzrwwgygmwahu&acc=GSE16463).
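The multiple-testing correction used above is the Benjamini-Hochberg step-up procedure, which can be implemented directly as a sketch; the per-probe ANOVA mentioned in the comment is a placeholder for the Welch-type test the study actually applied in GeneSpring.

```python
import numpy as np

def benjamini_hochberg(p):
    """Benjamini-Hochberg adjusted p values (step-up FDR procedure)."""
    p = np.asarray(p, dtype=float)
    order = np.argsort(p)
    scaled = p[order] * len(p) / (np.arange(len(p)) + 1)
    adjusted = np.minimum.accumulate(scaled[::-1])[::-1]  # enforce monotonicity
    out = np.empty_like(p)
    out[order] = np.clip(adjusted, 0.0, 1.0)
    return out

# One raw p value per probe (e.g. from a per-probe Welch-type ANOVA across the
# four disease groups) is corrected, then thresholded:
# keep = benjamini_hochberg(raw_p) < 0.05
```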
Quantitative real time RT-PCR and ELISA
Real-time quantitative RT-PCR (qRT-PCR) of the genes of interest was carried out as previously described [24]. Briefly, total RNA was isolated from monocytes using a Qiagen kit, and cDNA synthesis was performed using an oligo(dT) primer and M-MLV reverse transcriptase (Invitrogen), as recommended by the manufacturer. Real-time PCR from cDNA templates was performed using LightCycler-FastStart DNA Master PLUS SYBR Green I (Roche Applied Science, Meylan, France). The sequences of the primers are provided in Table S2. The fold change of the target genes relative to β-actin was calculated using the 2^−ΔΔCt method, as described previously [29]. The levels of IFN-β and TNF in the supernatants were analyzed by IFN-β and TNF ELISA kits (R&D Systems, Lille, France). The IL-1β level was determined using an IL-1β ELISA kit purchased from Diaclone (Besançon, France).
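The 2^−ΔΔCt calculation normalizes the target Ct to β-actin in each condition and then expresses stimulated cells relative to controls; the following helper and its example numbers are ours, for illustration.

```python
def fold_change_ddct(ct_target_stim, ct_actin_stim, ct_target_ctrl, ct_actin_ctrl):
    """Relative expression by the 2^-ddCt method, normalized to beta-actin and
    expressed relative to unstimulated control monocytes."""
    delta_stim = ct_target_stim - ct_actin_stim   # dCt in stimulated cells
    delta_ctrl = ct_target_ctrl - ct_actin_ctrl   # dCt in control cells
    return 2.0 ** -(delta_stim - delta_ctrl)      # 2^-(ddCt)

# e.g. fold_change_ddct(22.1, 18.0, 26.3, 18.2) == 2 ** -(4.1 - 8.1) -> ~16-fold up
```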
In situ cell death
Infected monocytes were fixed with 3% paraformaldehyde before being analyzed using the TdT-mediated dUTP nick-end labeling (TUNEL) assay. DNA strand breaks were labeled using an In Situ Cell Death Detection Kit, TMR red (Roche Applied Science), as recommended by the manufacturer. Nuclei were counter-stained with DAPI. The numbers of TUNEL-positive cells and DAPI-stained nuclei were determined using fluorescence microscopy. Cell death is expressed as the ratio of TUNEL-positive cells to DAPI-stained nuclei × 100.
Statistical analyses
Statistical analyses were performed using GraphPad Prism Software v. 5.01. The results are expressed as the mean ± SEM and were compared using the non-parametric Mann-Whitney U test; p values less than 0.05 were considered significant.
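Although the study used GraphPad Prism, the same two-sided Mann-Whitney U comparison can be reproduced with scipy; the measurement values below are hypothetical and only illustrate the call.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical cytokine measurements (pg/ml) from two stimulation conditions:
live = np.array([850.0, 920.0, 780.0, 1010.0, 940.0])
heat_killed = np.array([410.0, 530.0, 460.0, 390.0, 505.0])
u_stat, p_value = mannwhitneyu(live, heat_killed, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")  # p < 0.05 would be called significant
```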
O. tsutsugamushi replicates in human monocytes
As O. tsutsugamushi organisms have been detected in monocytes from patients with acute scrub typhus [13], we investigated whether they were able to invade and replicate within naïve monocytes. Monocytes were incubated with O. tsutsugamushi (two bacteria per cell) for 2 hours, extensively washed to remove free organisms and incubated for the indicated times. After 1 day, the number of bacterial DNA copies was approximately 1 × 10⁵ (Figure 1A). Using immunofluorescence, the uptake of O. tsutsugamushi by monocytes could be readily observed (Figure 1B). The number of bacterial DNA copies steadily increased over the course of the 5-day experiment (Figure 1A). The replication of O. tsutsugamushi in monocytes was slower than in L929 cells (Figure 1A), a permissive control cell line. Indeed, the doubling time of O. tsutsugamushi within monocytes was approximately 18 hours, compared with 10 hours in L929 cells. Time-dependent replication of O. tsutsugamushi within monocytes was also detected using indirect immunofluorescence (Figure 1B) and confocal microscopy (Figure 1C). These data demonstrate that O. tsutsugamushi was capable of replication in naïve monocytes.
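A doubling time such as the ~18 hours reported here follows from a log-linear fit of the qPCR copy numbers over time; the sketch below, with hypothetical example data in the comment, illustrates the calculation.

```python
import numpy as np

def doubling_time_hours(times_h, copy_numbers):
    """Doubling time from a log-linear fit of qPCR copy numbers, assuming
    exponential growth N(t) = N0 * 2^(t / Td)."""
    slope, _ = np.polyfit(times_h, np.log2(copy_numbers), 1)  # slope = 1 / Td
    return 1.0 / slope

# Hypothetical series roughly doubling every ~18 h over five days:
# td = doubling_time_hours([24, 48, 72, 96, 120], [1e5, 2.5e5, 6.3e5, 1.6e6, 4.0e6])
```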
Global transcriptome analysis of O. tsutsugamushi-infected monocytes
To understand how O. tsutsugamushi inhibits the microbicidal machinery of monocytes, we compared the transcriptional profile of unstimulated monocytes to that of monocytes stimulated with O. tsutsugamushi for 8 hours using whole genome microarrays.
This time point was determined as follows. The time course of the expression of genes encoding cytokines (TNF, IL-1β, IL-6, IL-12p40), chemokines (CXCL10, CXCL11), IFN-β and TNF-related apoptosis-inducing ligand (TRAIL/Apo2L/TNFSF10) was studied using qRT-PCR. Their expression was variable after 2 hours, maximal after 8 hours and progressively decreased thereafter (Figure S1). Consequently, the 8-hour time point was used to stimulate monocytes with O. tsutsugamushi. We found that 4,762 genes were altered in response to O. tsutsugamushi: 2,380 genes were upregulated and 2,382 genes were downregulated with at least a twofold change and a p value less than 0.01. The differentially expressed genes were classified into different categories based on their GO according to their p value. Among the upregulated genes (increased 2- to 100-fold) with an enrichment higher than 20% were genes involved in the immune response, the inflammatory response, chemotaxis, the anti-viral response and cell-cell signaling (Figure 2A). Other GO categories related to cellular processes, including apoptosis, cell proliferation, cell-cell signaling, receptor activity, signal transduction and transcription factor activity, exhibited lower enrichments, ranging from 10 to 20% (Figure 2A). Among the downregulated genes were genes involved in cell motility, chemotaxis, the cytoskeleton, the immune response, intracellular signaling, receptor activity and signal transduction (Figure 2B). Taken together, these data demonstrate that O. tsutsugamushi activated an important transcriptional program in human monocytes.
Analysis of the inflammatory response induced by O. tsutsugamushi
We next focused on the genes involved in the immune response (Table S3). Approximately 15% of the genes that were upregulated in response to O. tsutsugamushi were cytokines (14 genes) or chemokines (14 genes). The expression of the genes encoding pro-inflammatory cytokines, such as TNF, IL-1β, IL-6, IL-12p35, IL-12p40, IL-23p19 and IL-15, and chemokines, such as CXCL10, CXCL11 and CCL20, was determined using qRT-PCR. After 8 hours of stimulation with O. tsutsugamushi, the expression of these genes was markedly increased (Figure 3A). This upregulation was sustained, as the expression was similar at 8 and 24 hours (Figure 3B). Next, we examined whether the inflammatory response was dependent on bacterial viability. Monocytes were stimulated with live or heat-killed O. tsutsugamushi for 8 hours. In contrast to the genes involved in the type I IFN response, the expression of genes encoding pro-inflammatory cytokines, such as TNF, IL-1β, IL-6, IL-12p40 and IL-23p19, was upregulated in response to both live and heat-killed O. tsutsugamushi, although the expression level of these cytokines was partially reduced in monocytes incubated with heat-killed bacteria (Figure 4A). The transcriptional response induced in monocytes by O. tsutsugamushi was accompanied by cytokine production: O. tsutsugamushi stimulated high levels of TNF (Figure 4B) and IL-1β (Figure 4C) production by monocytes in a manner that correlated with gene expression. Heat-killed organisms also induced the production of TNF, although significantly (p < 0.05) less than live organisms (Figure 4B). In contrast, heat-killed organisms were unable to induce IL-1β production (Figure 4C), despite the increased IL-1β mRNA expression (see Figure 4A).
Other genes, including CD40, CD70 and CD80, which play a major role in macrophage-T cell interactions, and indoleamine-pyrrole 2,3-dioxygenase (INDO), a multi-functional protein that plays a role in the intracellular killing of bacteria, were upregulated in response to O. tsutsugamushi (Table S3). The upregulation of the gene encoding INDO was confirmed in monocytes from three donors using RT-PCR (Figure 3). Notably, the expression of several molecules, including CD14, CD22, CCR2, IL16, CLEC4A, CLEC10A and hepcidin antimicrobial peptides, was downregulated.
As the expression of a large number of chemokine and inflammatory cytokine genes associated with the M1 or M2 phenotype of macrophages was affected, we compared the transcriptomic profile of O. tsutsugamushi-infected monocytes with the list of M1 and M2 genes previously published [30]. We selected 32 and 28 genes representative of the M1 and M2 profiles, respectively (Table S4), among the genes associated with membrane receptors, cytokines, chemokines and apoptosis. Almost all of the genes characteristic of the M1 phenotype (29 of 32) were upregulated in response to O. tsutsugamushi. In contrast, M2 genes (except for CCL1, CCL23 and IL1RN) were either downregulated or remained unchanged. Taken together, these results demonstrate that O. tsutsugamushi induced a pro-inflammatory, M1 program in monocytes that did not require the presence of live organisms, in contrast to the type I IFN transcriptional signature.
Specificity of the M1 program induced by O. tsutsugamushi
As O. tsutsugamushi seemed to induce an M1 program in monocytes, we compared this profile with the profile induced by IFN-γ, a canonical inducer of the M1 phenotype. The principal component analysis (Figure S2) and hierarchical clustering (Figure 5) revealed that the transcriptional pattern stimulated by O. tsutsugamushi was not identical to the program induced by IFN-γ. Approximately 76% of the genes altered in response to IFN-γ were also altered in response to O. tsutsugamushi, and 83% of the genes altered in response to O. tsutsugamushi were also altered in response to IFN-γ. Nevertheless, the expression of 572 and 413 genes was, respectively, up- and downregulated in O. tsutsugamushi-stimulated cells compared with IFN-γ-stimulated cells. Among the genes upregulated in O. tsutsugamushi-stimulated monocytes, the GO analysis showed that genes involved in chromatin assembly, locomotion and lipid metabolism were enriched. Among the downregulated genes, we found a specific enrichment for genes in GO categories associated with cell activation, immune system processes and cell death. As determined using KEGG analysis, there was an over-representation of pathways associated with the immune response (14 of 18). Taken together, these findings suggest that while the response of monocytes to O. tsutsugamushi was polarized toward an M1-like profile, it exhibited some specific features.
O. tsutsugamushi-induced monocyte apoptosis
The analysis of genes upregulated in response to O. tsutsugamushi revealed the enrichment of genes in four GO categories related to apoptosis, including apoptosis, anti-apoptosis, the induction of apoptosis and the regulation of apoptosis (Table S5). In each GO category, several genes encoding TNF family members and regulators of apoptosis, such as caspases and Bcl-2, were upregulated. The ability of O. tsutsugamushi to induce the apoptosis of monocytes was studied using TUNEL staining. Although no apoptosis was detected in monocytes incubated for 6 hours with O. tsutsugamushi, after 24 h, 4 ± 1% of monocytes were apoptotic, and this percentage increased to 8 ± 1% after 48 h. In contrast, less than 1% of monocytes were apoptotic in the absence of O. tsutsugamushi. Interestingly, heat-killed O. tsutsugamushi did not induce monocyte apoptosis during the same incubation periods (Table 2). Taken together, our data show that O. tsutsugamushi induced an apoptosis-related gene program and the apoptosis of a minority of monocytes.
Host response genes in scrub typhus infection
The transcriptional pattern of PBMCs from patients with scrub typhus (n = 4) was compared to the pattern in pooled PBMCs isolated from healthy controls using microarrays. Using an absolute FC greater than 2.0, we identified 613 and 517 transcripts that were up- and downregulated in scrub typhus patients, respectively. Most of the highly expressed genes corresponded to biological process categories including DNA metabolism, the cell cycle and cellular component organization and biogenesis (Table 3). Of particular interest, the upregulated genes involved in immune system processes included IFN-γ (IFNG) and its related genes, which encode absent in melanoma 2 (AIM2), guanylate binding protein 1 (GBP1), IFN-γ-inducible protein 16 (IFI16) and indoleamine-pyrrole 2,3-dioxygenase (INDO). Significant enrichment among the downregulated genes was mainly observed in genes associated with immune-related processes, the inflammatory response and chemotaxis. To confirm the microarray-derived results, the level of 12 transcripts that were highly altered in patients with scrub typhus was re-assessed using qRT-PCR. The expression profiles detected using either technique were comparable, except for one gene, SSBP1. The transcriptomic profile of patients with scrub typhus was then compared to the profile of patients with murine typhus, malaria or dengue. Sixty-five probes corresponding to 63 genes were specifically expressed in patients with scrub typhus (p < 0.05) (Table S6). The analysis of the microarray data by hierarchical clustering clearly showed that the four patients with scrub typhus were grouped together, separate from the other patients, while the transcriptional response of patients with murine typhus, malaria or dengue was more dispersed (Figure 6). These results clearly demonstrate that scrub typhus was characterized by a specific transcriptional signature.
To narrow down this transcriptional signature, the CBLB, LOC642161, CD8A and CD8B1 genes were selected because their expression was at least twofold greater in scrub typhus than in the other infectious diseases; the FOSB gene was also selected because it was the only downregulated gene with the same fold difference. When hierarchical clustering was performed based on the expression of these five genes, the patients with scrub typhus were still grouped together, even though a patient with murine typhus also fell within this cluster (Figure 7A). The expression of these five genes was then quantified using qRT-PCR and, as expected, the expression of CBLB, LOC642161, CD8A and CD8B1 was highly upregulated in patients with scrub typhus, while the expression of FOSB was downregulated (Figure 7B). This transcriptional profile was specific for scrub typhus, because the expression of these genes was completely different in murine typhus, malaria and dengue (Figure 7B).
Relationship between the transcriptomic profiles detected in patients with scrub typhus and the profiles in in vitro-infected monocytes
The transcriptional programs of PBMCs from patients with scrub typhus and those induced by O. tsutsugamushi in monocytes were compared using Gene Symbols to allow the comparison between Illumina and Agilent data. The 2,015 probes differentially expressed in O. tsutsugamushi-stimulated monocytes detected using Agilent microarrays corresponded to 1,606 probes when Illumina assays were used. The difference is due to the fact that several Illumina probes are not annotated, making it impossible to find them in the Agilent probe set; in addition, Agilent probes are longer than those of Illumina. Interestingly, among the 1,606 probes that were modulated in O. tsutsugamushi-stimulated monocytes, 492 (p < 0.05) and 184 probes (p < 0.01) were also up- and downregulated, respectively, in patients with scrub typhus. This signature was clearly distinct from that of healthy controls (Figure 8). Among the 184 genes that were altered in scrub typhus, we found genes associated with important features of the monocyte response to O. tsutsugamushi, such as type I and II IFN and M1-associated genes (Table 3). These results suggest that the differential expression of genes in scrub typhus was related to O. tsutsugamushi infection.
Discussion
Our study is the first report demonstrating the replication of O. tsutsugamushi in primary human monocytes; however, the efficiency of bacterial replication was lower than that observed in permissive cell lines, such as L929 cells. The sustained presence of bacteria within monocytes may be beneficial for dissemination to target tissues, as O. tsutsugamushi is known for its high level of genetic and antigenic variability [2].
The interaction of O. tsutsugamushi with monocytes resulted in a transcriptomic pattern in which the expression of a large number of genes was altered. The first feature of the response was the polarization of monocytes towards an M1 phenotype. The M1 phenotype has been largely described in macrophages stimulated by IFN-γ, TNF and/or microbial products and has been associated with microbicidal competence and the skewing of the adaptive immune response towards Th1/Th17 responses [31,32]. M1 phenotypes have also been described for macrophages stimulated with different bacterial pathogens, including Mycobacterium bovis [33], Legionella pneumophila [34] and Helicobacter pylori [35], whereas Mycobacterium tuberculosis [36], Coxiella burnetii [37] and Tropheryma whipplei [29,38] induce M2 profiles. We demonstrate that the transcriptomic profile of monocytes infected with O. tsutsugamushi was not identical to the profile observed following induction with IFN-γ, because many genes were differentially expressed. The M1 polarization of monocytes infected with O. tsutsugamushi was persistent, as demonstrated by the sustained upregulation of inflammatory genes. It may be related to the increased level of pro-inflammatory cytokines that has been detected in patients with scrub typhus [17,18,39,40]. As activated or inflammatory monocytes play a major role in the dissemination of some pathogens, such as in the murine model of listeriosis [41] or human cytomegalovirus infection [42,43], we hypothesize that a similar phenomenon may occur in scrub typhus. In addition to the induction of an M1 phenotype, O. tsutsugamushi upregulated the expression of genes involved in polarized immune responses, including the genes encoding IL-6, IL-12p40, IL-23p19, GM-CSF and CCL20. As IL-6, IL-12p40, IL-23p19 and GM-CSF are important for Th17 proliferation and/or differentiation, and CCL20 binds to CCR6, which is selectively expressed on Th17 cells [44], we can hypothesize that O. tsutsugamushi orients the immune response toward a Th17 phenotype. Note that IL-17 levels are higher in patients with scrub typhus than in healthy controls [45]. It is well known that Th1 responses are also critical for protection against intracellular pathogens [46]. It is likely that uncontrolled Th1 and Th17 responses to O. tsutsugamushi contribute to the pathophysiology of scrub typhus [47,48].
The second feature of the transcriptional program induced by O. tsutsugamushi in human monocytes was the upregulation of genes belonging to the "response to virus" GO category; these genes essentially corresponded to the type I IFN genes and ISGs. The release of IFN-β and the expression of ISGs have been reported in response to LPS and intracellular bacteria, such as Chlamydia sp., Salmonella enterica serovar Typhimurium [49], T. whipplei [50] and Francisella tularensis [51]. We recently reported that Rickettsia prowazekii, an intracellular bacterium related to O. tsutsugamushi, stimulates a type I IFN response in endothelial cell lines [52]. Listeria monocytogenes stimulates an IRF-3-dependent cytosolic response consisting of IFN-β and several ISGs [53]. It is tempting to speculate that the production of IFN-β and the expression of ISGs are related, at least in part, to the cytosolic location of the bacteria. Indeed, L. monocytogenes [54] and O. tsutsugamushi [55] are known to reside in the cytosol. Among the genes upregulated in response to O. tsutsugamushi, we detected TBK-1 and IRF-7. TBK-1 is involved in IFN-β production after cytoplasmic recognition of L. monocytogenes [56], and IRF-7 is increased after infection and promotes an amplification loop of IFN-β production [49]. It is likely that a substantial number of genes included in the "response to virus" GO category are controlled by IFN-β and are not directly dependent on O. tsutsugamushi infection. In LPS-stimulated cells, a considerable number of the differentially expressed genes are due to type I IFN synthesis and signaling through IFN receptors [49]. The effects of type I IFNs can be beneficial or detrimental to host defense against bacterial infections [49]. Type I IFNs activate NK cells and cytotoxic T cells (CTLs), which are critical to clear cytosolic pathogens, including Rickettsia spp. [57,58]. In addition, type I IFNs sensitize host cells to apoptosis [59] through the induction of ISGs, such as TRAIL, FAS, XIAP-associated factor-1 (XAF-1), caspase-8, protein kinase R (PKR), 2′,5′-oligoadenylate synthase (OAS), phospholipid scramblase and the promyelocytic leukemia gene product [60]. All of these genes were upregulated in monocytes infected with O. tsutsugamushi. In contrast, type I IFNs are detrimental to the host during L. monocytogenes infection. They contribute to macrophage cell death [61] and sensitize T lymphocytes to apoptosis induced by listeriolysin O [62]. As a result, IRF3−/− and IFNAR−/− mice show increased resistance to L. monocytogenes compared to wild-type mice [56]. IFN-β has been shown to inhibit the in vitro replication of Francisella tularensis in murine macrophages [63]. Recently, our group has shown that the type I IFN response is detrimental to murine macrophages infected with Tropheryma whipplei. Indeed, macrophage apoptosis and bacterial replication are inhibited in IFNAR−/− macrophages compared with wild-type macrophages [50]. A previous study showed that type I IFN inhibited O. tsutsugamushi replication depending on the bacterial strain and the genetic background of the host cells [64]. However, further studies are required to determine the exact role of type I IFNs in O. tsutsugamushi infection.
The third prominent feature of the infection of human monocytes with O. tsutsugamushi was the enrichment of apoptosis-related genes, although only a minority of O. tsutsugamushi-infected monocytes were apoptotic. Cell death has already been described in lymphocytes, lymph nodes and endothelial cell lines in response to O. tsutsugamushi [65]. In vivo, cell death is prominent in mice susceptible to O. tsutsugamushi, but it is not detected in resistant mice [65]. Our results are apparently contradictory to those of Kim et al., obtained with the THP-1 macrophage cell line, in which O. tsutsugamushi transiently inhibits the cell death induced by apoptosis promoters. In addition, the ability to prevent apoptosis is not related to bacterial virulence [66]. We hypothesize that the apparent discrepancy between gene expression and the low level of apoptosis is associated with the modulation of genes belonging to the apoptosis and anti-apoptosis GO terms under inflammatory conditions [67]. The precise mechanisms used by O. tsutsugamushi to affect the cell death of monocytes remain to be determined. Apoptosis can result from inflammasome activation, which involves caspase-1 activation and IL-1β secretion [68]. In this study, we demonstrate that live O. tsutsugamushi induced IL-1β secretion by monocytes, whereas heat-killed bacteria stimulated the expression of the gene encoding IL-1β but did not induce secretion of active IL-1β. Similar to what has been reported for F. tularensis [51] and T. whipplei [50], type I IFNs may promote apoptosis in monocytes infected with O. tsutsugamushi. We hypothesize that O. tsutsugamushi stimulates inflammasome activation via IFN-β release when the bacteria reach the cytosol.
Lastly, we extended our analysis of the host response to O. tsutsugamushi to patients with scrub typhus. For that purpose, we analyzed the transcriptional response of whole PBMCs from patients with scrub typhus and then compared this response to that observed in monocytes stimulated with O. tsutsugamushi. In patients with scrub typhus, we observed a significant upregulation of IFN-γ, a type II IFN, and its related genes, suggesting an important role for the type II IFN pathway in the response to O. tsutsugamushi infection. IFN-γ is a key cytokine in macrophage activation and Th1 responses, and these cells are necessary to clear O. tsutsugamushi infection [20]. IFN-γ can also directly exert an inhibitory effect on the intracellular replication of O. tsutsugamushi in non-immune cells [64]. Indeed, increased IFN-γ production correlates with the acquisition of resistance to O. tsutsugamushi infection in immune mice [69]. We hypothesize that the increased production of IFN-γ is, at least in part, a consequence of the elevated number of CD8+ T cells and NK cells observed in patients with scrub typhus [70]. This hypothesis is also in agreement with the increased expression of genes encoding the CD8 subunits and the abundance of gene transcripts involved in the cell cycle and cell division. Altogether, these data emphasize the role of type II IFN and cell-mediated immunity in the protection against O. tsutsugamushi infection. The transcriptional signature of patients with scrub typhus included the differential expression of the CBLB, LOC642161, CD8A, CD8B1 and FOSB genes when compared with healthy controls and patients with other infectious diseases. Among these genes, CBLB is interesting, as there is increasing evidence that it has important functions in the immune system. The CBLB gene encodes the E3 ubiquitin ligase Cbl-b, which controls peripheral T cell activation and tolerance by regulating CD28 co-stimulatory signaling [71,72]. In infectious diseases, the increased expression of Cbl-b in many chronic infections is believed to reflect a defective immune response [15][16][17]. Cbl-b also modulates the stability of bacterial effector proteins essential for virulence, as recently reported in Pseudomonas aeruginosa infection [24]. The enhanced expression of Cbl-b in patients with scrub typhus may suggest a role for it in the degradation of bacterial products or in the immune evasion of O. tsutsugamushi, but it may also represent a regulatory mechanism of the immune response that prevents overactivation of T cells. Independent of the putative role of these five genes in the pathophysiology of scrub typhus, their specific alteration in scrub typhus may be useful to improve the diagnosis of this infection. In conclusion, we report the global transcriptional response of monocytes to O. tsutsugamushi. O. tsutsugamushi induced a specific M1 phenotype and stimulated a type I IFN response. Type I and II IFNs and the M1 signature were also found in the PBMCs from patients with scrub typhus, suggesting that these molecules may be associated with the inflammatory complications of scrub typhus.
Figure 1 .
Figure 1. O. tsutsugamushi replication within monocytes. Monocytes and L929 cells were infected with O. tsutsugamushi (two viable bacteria per cell) for different periods of time. A. The number of bacterial DNA copies was determined using qRT-PCR. The data are expressed as the mean ± SD of two independent experiments performed in triplicate. B. Monocytes were infected with O. tsutsugamushi, and the bacteria were detected using indirect immunofluorescence. C. Monocytes infected with O. tsutsugamushi for 5 days were labeled with BODIPY phallacidin to detect filamentous actin, and bacteria were detected using indirect immunofluorescence (in red). One representative confocal micrograph is shown. doi:10.1371/journal.pntd.0001028.g001 Monocytes were also incubated with heat-killed O. tsutsugamushi for 8 hours, and the mRNA expression level of IFN-β and ISGs (MX1, CXCL10 and CXCL11) was determined using qRT-PCR. In contrast to live O. tsutsugamushi, heat-killed bacteria failed to induce the expression of IFN-β and ISGs (MX1, CXCL10 and CXCL11) in monocytes. We then investigated whether the O. tsutsugamushi-induced type I IFN signature was functional. Monocytes were incubated with O. tsutsugamushi for 24 hours, and the secretion of IFN-β was determined using ELISA. Monocytes infected with live O. tsutsugamushi produced 65 ± 16 pg/ml of IFN-β, whereas those incubated with heat-killed bacteria were unable to produce IFN-β. These data clearly demonstrate that only live O. tsutsugamushi induced a sustained type I IFN response in human monocytes.
Figure 2 .
Figure 2. GO analysis of differentially expressed genes. Monocytes were stimulated with O. tsutsugamushi or mock stimulated for 8 hours, and the modulation of genes was analyzed using microarrays and GO term tools. The upregulated (A) and downregulated (B) genes were classified based on the major biological processes in which they are involved. The total number of genes present in each biological process and the number of differentially expressed genes are indicated. The results are expressed as the percentage of the upregulated or downregulated genes. doi:10.1371/journal.pntd.0001028.g002
Figure 3 .
Figure 3. Quantitative RT-PCR of selected genes in stimulated monocytes. Monocytes were stimulated with or without O. tsutsugamushi for 8 (A) and 24 (B) hours. RNA was extracted, and qRT-PCR was performed on 16 genes involved in the immune response that were differentially expressed in the microarray experiments. The results, expressed as the log2 ratio of fold changes, are presented as the mean ± SEM of three experiments performed in triplicate. doi:10.1371/journal.pntd.0001028.g003
Figure 4 .
Figure 4. Bacterial viability and monocyte responses. Monocytes were stimulated with live or heat-killed O. tsutsugamushi for 8 (A) or 24 (B, C) hours. A. RNA was extracted, and qRT-PCR was performed to detect several genes that were differentially expressed in the microarray experiments. The results, expressed as the log2 ratio of fold changes, are presented as the mean ± SEM of two experiments performed in triplicate. B and C. Culture supernatants were analyzed for the presence of TNF (B) and IL-1β (C) using ELISAs. The results are expressed in ng/ml and are presented as the mean ± SD of two experiments performed in duplicate. *p < 0.05. doi:10.1371/journal.pntd.0001028.g004
Figure 5 .
Figure 5. Hierarchical clustering of differentially expressed genes in stimulated monocytes. Monocytes were stimulated with O. tsutsugamushi or IFN-γ (500 IU/ml) for 8 hours, and genome-wide expression studies were performed using microarrays from Agilent Technologies. A hierarchical clustering consisting of 300 highly altered genes is shown. doi:10.1371/journal.pntd.0001028.g005
Figure 6 . Figure 7 .
Figure 6. Hierarchical clustering in patients with scrub typhus. RNA was isolated from PBMCs from healthy controls and patients with different infectious diseases, and microarray studies were performed using Illumina Human-6 v2 BeadChips. The unsupervised hierarchical clustering of 22 patients and 2 RNA pools from healthy controls was performed based on the expression of 65 genes specific to scrub typhus (typ). The normalized expression level in each sample was baseline-adjusted to the mean expression level of the healthy control group and color-scaled, with red indicating increased expression and blue indicating decreased expression. doi:10.1371/journal.pntd.0001028.g006
Figure 8 .
Figure 8. Comparison between patient blood samples and in vitro-infected monocytes. Monocytes from healthy donors were stimulated with O. tsutsugamushi for 8 hours, and Agilent microarrays were used to detect the differential expression of 2,015 genes that corresponded to 1,606 genes in the Illumina microarrays. Among these genes, 184 (250 probes) were altered in patients with scrub typhus with a p value less than 0.01. The hierarchical clustering of these genes demonstrates that the resulting transcriptional signature was specific to scrub typhus. doi:10.1371/journal.pntd.0001028.g008
Table 1 .
Modulated genes in the ''response to virus'' GO term.
Table 2. Monocytes were incubated with O. tsutsugamushi for different periods. Apoptosis was revealed by TUNEL assay and fluorescence microscopy. The results, expressed as the ratio of TUNEL-positive cells to DAPI-stained nuclei × 100, are the mean ± SD of three different experiments. doi:10.1371/journal.pntd.0001028.t002
Table 3 .
Enriched biological processes in scrub typhus. | 9,892.4 | 2011-05-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Assisted documentation as a new focus for artificial intelligence in endoscopy: the precedent of reliable withdrawal time and image reporting
Background Reliable documentation is essential for maintaining quality standards in endoscopy; however, in clinical practice, report quality varies. We developed an artificial intelligence (AI)-based prototype for the measurement of withdrawal and intervention times, and automatic photodocumentation. Method A multiclass deep learning algorithm distinguishing different endoscopic image content was trained with 10 557 images (1300 examinations, nine centers, four processors). Consecutively, the algorithm was used to calculate withdrawal time (AI prediction) and extract relevant images. Validation was performed on 100 colonoscopy videos (five centers). The reported and AI-predicted withdrawal times were compared with video-based measurement; photodocumentation was compared for documented polypectomies. Results Video-based measurement in 100 colonoscopies revealed a median absolute difference of 2.0 minutes between the measured and reported withdrawal times, compared with 0.4 minutes for AI predictions. The original photodocumentation represented the cecum in 88 examinations compared with 98/100 examinations for the AI-generated documentation. For 39/104 polypectomies, the examiners’ photographs included the instrument, compared with 68 for the AI images. Lastly, we demonstrated real-time capability (10 colonoscopies). Conclusion Our AI system calculates withdrawal time, provides an image report, and is real-time ready. After further validation, the system may improve standardized reporting, while decreasing the workload created by routine documentation.
Introduction
As far back as 1992, Kuhn et al. [1] reported on the advantages of structured reporting in gastroenterology. While structured reporting improves the quality of patient care and research, it is not yet widely practiced owing to decreased flexibility and increased workload [2][3][4]. Along with other societies, the European Society of Gastrointestinal Endoscopy (ESGE) released guidelines on screening colonoscopy performance measures in 2017 and reviewed their clinical application in 2021 [5][6][7][8].
The use of withdrawal time as a performance measure is based on an inverse correlation with the incidence of interval carcinomas [9]. The ESGE defines withdrawal time as "time spent on withdrawal of the endoscope from cecum to anal canal and inspection of the entire bowel mucosa at negative (no biopsy or therapy) screening or diagnostic colonoscopy," calculated as the mean over 100 consecutive colonoscopies [5]. Currently, it is common in clinical practice to calculate the withdrawal time from the timestamps of a cecal and a rectal image. However, there are no clear directions as to whether the cecal image should be taken when reaching or when leaving the cecum. Additionally, there is no standardized practice to account for time not spent on mucosal inspection during withdrawal. The latter is especially important because studies frequently measure withdrawal time in examinations involving an endoscopic intervention. In such cases, measurement is commonly performed with a stopwatch, which raises the question of whether withdrawal times measured in clinical practice and in studies are comparable. Furthermore, guidelines advise detailed photodocumentation, as it allows re-evaluation at a later point, but the taking of photographs depends purely on the examiner and requires extra effort.
Therefore, automatic detection of cecal intubation and withdrawal time, along with "backup" photodocumentation, would improve standardized colonoscopy documentation and relieve endoscopists of the additional workload associated with it. In this study, we introduce a deep learning-based system for the automatic measurement of withdrawal time and photodocumentation of the cecum and any polypectomies.
Study design and aim
A frame-by-frame prediction artificial intelligence (AI) algorithm was developed to calculate the withdrawal and intervention times, and to extract an image series. The photodocumentation aimed to represent at least one landmark in the cecum, as well as any detected polyps and their resection. The system was evaluated on 100 prospectively recorded videos and applied in real time during 10 additional examinations. Reported and AI-predicted withdrawal times were then compared with video-based measurement. The information content of the examiners' and the algorithm's photodocumentation was then compared with the examination report.
Data selection
For training of the AI algorithm, 10 557 individual frames were collected from 1300 distinct colonoscopies in nine centers using four different processors (Olympus CV-170 and Evis Exera III CV-190, Olympus Europa SE & Co. KG, Hamburg, Germany; Pentax EPK-i7000, Pentax Europe GmbH, Hamburg, Germany;
and Karl Storz Image1 S, Karl Storz SE & Co. KG, Tuttlingen, Germany). For each examination, a maximum of five images were selected per label to avoid data clustering. Table 1 s and Fig. 2 s summarize the number of annotated images per label, as well as the distribution between the training and in-training validation data. Examinations used during training were excluded from the subsequent test video selection.
For testing of the video segmentation, full-length colonoscopy videos, with and without endoscopic intervention, were prospectively collected (five centers, four processors). The recorded examinations were screened chronologically (n = 100; 10 per group per center). Incomplete or corrupted videos (n = 76) and examinations of an already fully recruited test group (n = 97) were excluded. Examinations without a report (n = 69), with insufficient bowel preparation (Boston Bowel Preparation Scale [BBPS] < 6; n = 43), without cecal intubation (n = 5), or with inflammatory bowel disease, previously performed bowel surgery or radiation therapy (n = 44) were also excluded. Furthermore, we excluded examinations in which a resection instrument was permanently visible during withdrawal (n = 42). The data collection process is summarized in Fig. 3 s.
Artificial intelligence model development
The annotated images were split examination-wise into training (80 %), in-training validation (10 %), and after-training validation (10 %) datasets. With these images, a pretrained RegNetX-800MF model from the torchvision library was fine-tuned for multilabel prediction [10,11]. The model training is described in detail in Appendix 1 s. Performance measures on the validation dataset are summarized in Table 2 s.
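As a rough illustration of this transfer-learning setup, the following is a minimal sketch (not the authors' code; the number of labels, the head replacement, and the hyperparameters are assumptions made for illustration) of fine-tuning a pretrained RegNetX-800MF from torchvision for multilabel prediction:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_LABELS = 8  # hypothetical count of image-content labels (cecum landmarks, polyp, instrument, outside, low quality, ...)

# Load an ImageNet-pretrained RegNetX-800MF and replace its classification head.
model = models.regnet_x_800mf(weights=models.RegNet_X_800MF_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_LABELS)

# Multilabel objective: one independent sigmoid per label.
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, targets: torch.Tensor) -> float:
    """One optimization step; targets is a float tensor of shape (batch, NUM_LABELS) with 0/1 entries."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)
    loss = criterion(logits, targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```

BCEWithLogitsLoss treats each label independently, which matches the multilabel setting in which, for example, a polyp and a resection instrument can appear in the same frame.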
Withdrawal time
The ESGE defines the withdrawal time as "Time spent on withdrawal of the endoscope from cecum to anal canal and inspection of the entire bowel mucosa […]" [5]. No statement regarding inspection of the cecal mucosa or cleaning of the intestines exists.
We determined the "reported withdrawal time" via the report, if it was stated. Otherwise, the timestamps of the last documented cecal and rectal images were used to calculate the reported time. The video-based "measurement of the withdrawal time" was determined by manually annotating the video segments shown in Fig. 4 s; times were calculated separately for insertion, cecum inspection, and withdrawal:
1. t_insertion = t_first cecum − t_enter body
2. t_cecum inspection = t_last cecum − t_first cecum
3. t_withdrawal = t_exit body − t_last cecum
4. t_intervention = Σ (t_intervention end − t_intervention start)
5. t_cecum inspection, corrected = t_cecum inspection − t_intervention in cecum
6. t_withdrawal, corrected = t_withdrawal − t_intervention during withdrawal
The "AI-predicted withdrawal time" was determined by post-processing the frame-by-frame predictions for each video, resulting in a video segmentation corresponding to the annotations of the manual measurement (Appendix 1 s); a sketch of this computation is given below.
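The following is a minimal sketch of this post-processing, assuming per-frame label sets produced by the classifier; the label names ("cecum", "outside", "instrument") and the 10 Hz prediction rate are assumptions, and the published system applies additional smoothing described in Appendix 1 s:

```python
from dataclasses import dataclass

FPS = 10  # prediction rate assumed from the real-time setup

@dataclass
class SegmentTimes:
    insertion: float
    cecum_inspection: float
    withdrawal: float
    intervention: float

def segment_times(frame_labels: list[set[str]]) -> SegmentTimes:
    """frame_labels[i] holds the predicted labels for frame i, e.g. {"cecum"} or {"polyp", "instrument"}."""
    cecum_frames = [i for i, labels in enumerate(frame_labels) if "cecum" in labels]
    body_frames = [i for i, labels in enumerate(frame_labels) if "outside" not in labels]
    if not cecum_frames or not body_frames:
        raise ValueError("no cecal landmark or in-body frames detected")
    t_first_cecum, t_last_cecum = cecum_frames[0], cecum_frames[-1]
    t_enter_body, t_exit_body = body_frames[0], body_frames[-1]
    # Intervention time: frames after reaching the cecum in which an instrument is visible.
    intervention = sum(1 for labels in frame_labels[t_first_cecum:] if "instrument" in labels)
    return SegmentTimes(
        insertion=(t_first_cecum - t_enter_body) / FPS,
        cecum_inspection=(t_last_cecum - t_first_cecum) / FPS,
        withdrawal=(t_exit_body - t_last_cecum) / FPS,
        intervention=intervention / FPS,
    )
```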
Image report generation
Images of each detected cecal region (ileum, ileocecal valve, and appendiceal orifice) and representative images of each detected polyp sequence were selected by the algorithm, if available. Representative polyp images were defined as: (i) a white-light image, (ii) a digital chromoendoscopy image, and (iii) an image including the polyp and the resection instrument. Each selected image was the frame with the highest confidence prediction value among frames without a prediction of an uninformative label ("low quality" or "outside").
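A hedged sketch of this selection rule, in which the per-frame confidence arrays and the 0.5 threshold for the uninformative labels are assumptions made for illustration:

```python
import numpy as np

UNINFORMATIVE = {"low quality", "outside"}

def best_frame(confidences: dict, target: str):
    """Pick the frame index with the highest confidence for `target`,
    skipping frames where any uninformative label is predicted (> 0.5).
    `confidences` maps each label name to a per-frame confidence array."""
    scores = confidences[target].copy()
    for label in UNINFORMATIVE:
        scores[confidences[label] > 0.5] = -np.inf
    idx = int(np.argmax(scores))
    return idx if scores[idx] > -np.inf else None
```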
Evaluation of generated image reports
Three board-certified gastroenterologists were randomly presented with either the examiner-created or the AI-generated image report for 100 examinations. Examiners were blinded to the test group. The number of distinct polyps and each polyp's resection method were annotated. Following a washout period of 6 weeks, the remaining images were presented to the examiners. Polyps and polypectomies described by fewer than two of the three examiners were disregarded.
Implementation of real-time application
For real-time application, the previously described EndoMind framework [12] was extended with the newly developed algorithm for multilabel classification and consecutive post-processing of the predictions. Real-time prediction is performed at a rate of 10 frames per second.
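As a rough illustration of how such a fixed-rate pipeline can be organized (a minimal sketch; `grab_frame`, `predict`, and `post_process` are hypothetical placeholders, not EndoMind's actual API):

```python
import time

def realtime_loop(grab_frame, predict, post_process, rate_hz: float = 10.0):
    """Run multilabel prediction on the live video signal at a fixed rate."""
    period = 1.0 / rate_hz
    predictions = []
    while True:
        start = time.monotonic()
        frame = grab_frame()          # most recent frame from the video signal
        if frame is None:             # examination finished
            break
        predictions.append(predict(frame))
        # Sleep away the remainder of the period so frames are sampled at ~rate_hz.
        time.sleep(max(0.0, period - (time.monotonic() - start)))
    return post_process(predictions)  # video segmentation + image report
```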
Examination characteristics
We analyzed 10 examinations with endoscopic intervention and 10 without from each of the five centers, with one endoscopist per center. All participating endoscopists had at least 10 years of experience. Overall, 75 % of the examinations were screening or surveillance colonoscopies (Table 3 s). In the 50 examinations with endoscopic intervention, a total of 104 polyps were detected and resected (Table 4 s). The majority (58 %) were 5-10 mm in size and were located in the sigmoid (19 %) or ascending colon (18 %). Histopathology confirmed 70 polyps (67 %) to be adenomas or sessile serrated lesions.
Withdrawal time measurement
The algorithm could not determine a withdrawal time for two of the 100 examinations, as no cecal landmark was detected. The reported withdrawal time diverged by more than 20 % from the measurement in 33 of 50 examinations without endoscopic intervention and 44 of 50 examinations with intervention. For the AI predictions, this was the case in six of 50 and 18 of 48 examinations, respectively. The absolute time difference between the AI-predicted and measured withdrawal times was smaller than the difference between the reported and measured times in 44 of 50 cases in both groups. The median absolute differences between AI prediction and measurement were 0.25 minutes (no intervention) and 0.9 minutes (intervention), respectively, compared with 1.3 minutes and 3.9 minutes for the reported times. ▶ Fig. 1 shows the withdrawal time difference as a violin plot with individual measurements depicted as stars. The center-wise subanalysis is shown in Fig. 5 s.
Evaluation of the AI-generated photodocumentation
The AI-selected report images contained an identifiable image of the cecum in 98 examinations (98 %). Specifically, an image of the ileocecal valve was supplied in 85 examinations (85 %), of the appendiceal orifice in 79 %, and of the ileum in 49 %. Additionally, images of polyps, resection instruments (biopsy forceps or snare), and chromoendoscopy were included in the image reports. ▶ Table 1 details the specificity per label for images included in the generated photodocumentation.
▶ Table 1 Specificity of artificial intelligence-selected report images.
▶ Fig. 2 Example images showing an examiner's documented images (top) and the AI assistant's documentation (bottom). Both reports contain an image of the appendiceal orifice, the ileocecal valve, the ileum, and the detected polyp, but the AI-generated report additionally displays the polyp during polypectomy. Furthermore, in the AI-generated report, a timeline displays the different phases of the intervention (insertion, cecum, withdrawal, and intervention).
Real-time application
Lastly, the algorithm was successfully integrated into our previously described real-time polyp detection framework [12]. ▶ Fig. 3 shows the resulting video segmentation and photodocumentation generated after each examination. In all 10 colonoscopies, the system correctly identified a cecal landmark, and the mean absolute difference between the measured and AI-calculated withdrawal times was 37 seconds (range 13-75 seconds).
Discussion
Withdrawal time is an established performance parameter in clinical practice and research, yet its measurement is not standardized, with methods ranging from calculation by timestamps to manual stopwatch measurement. Furthermore, a prospective study revealed a drastic increase in withdrawal time and adenoma detection rate (ADR; 21.4 % to 36.0 %) when examiners knew that withdrawal time was being monitored [13].
Based on these considerations, we developed a prototype to reliably determine withdrawal time and provide a backup image report to prevent documentation gaps. A novel feature is that our system processes the video signal itself to identify cecal intubation, polypectomies, and withdrawal time. In contrast, a previously published study relied on examiner-documented images for analysis [14]. Despite promising results in a research setting, a mean of 44.7 documented images per report was evaluated in that study, which raises the question of whether clinical application would actually be feasible, given that the examiners in our study documented a mean of 8.6 images per examination (our AI system 5.5) during clinical routine. Other related works have monitored withdrawal speed [15] or quantified mucosal inspection [16,17] to enhance endoscopists' intraprocedural performance.
While AI has recently progressed rapidly, the most researched applications in endoscopy aim to influence diagnostics or therapy; however, even in radiology, where experience with such systems is much greater than in gastroenterology, only a few reach clinical practice [18]. In this study, we demonstrate how AI may benefit clinical practice by measuring withdrawal time and providing "backup" photodocumentation. Instead of suggesting diagnoses or giving therapeutic advice, the system relieves endoscopists of the task of "measuring" withdrawal time and simultaneously lowers the risk of incomplete photodocumentation. We hypothesize that this could not only improve acceptance of structured reporting and application of AI, but also increase report quality.
▶ Fig. 3 AI-generated colonoscopy summary after real-time application of the system in clinical practice. The video segmentation result is presented as a color-coded timeline in the grey box above the images. The lowest bar in this box comprises orange (insertion period), red (cecum period), and green (withdrawal time) segments, labelled with their respective durations. Above this, red bars signify polyp sequences, while yellow bars represent the presence of a resection instrument, if applicable. The AI-selected images are displayed below the grey box: left column, landmarks of the cecal region; middle column, white-light inspection, narrow-band imaging, and snare polypectomy of the first polyp sequence; right column, white-light inspection and snare polypectomy of the second polyp sequence.
While the prototype demonstrates functionality for four different processor signals, its generalizability should not be readily assumed, which is a limitation of our study. In particular, the recognition of instruments may vary if new instruments are used. Continuous performance monitoring and center-specific fine-tuning are, however, a necessity for all applied AI models, as modalities can always change. In addition, we are not able to re-identify polyps.
In conclusion, this work proposes a paradigm shift in medically applied AI: instead of competing with physicians, AI systems should first address the recommended comprehensive documentation of basic findings. In the future, the skeleton of a colonoscopy report could be pre-generated, with the examiner then validating the content. Future research should continue to evaluate this approach and extend it to more report modalities, such as polyp classification or quantification of other pathologies.
▶ Fig. 1
Comparison of the reported and AI-predicted withdrawal time difference from the measured withdrawal time. The withdrawal time difference (Δ) was calculated by subtracting the measured time from either the reported time (blue) or the AI-predicted time (red). Each curve represents a density plot of the data and is accompanied by a box plot of the data distribution. The dashed line within the density plot represents the mean; the solid line represents the median. Stars represent individual measurements (Δ Report No intervention: one measurement not shown as the value was ≤ −8 minutes; Δ Report Intervention: five measurements not shown as the values were > 8 minutes).
1 Annotation by a gastroenterologist revealed an overall specificity of > 91 % for the automatically selected images.
| 3,595.2 | 2022-10-28T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
The Cost of Energy-Efficiency in Digital Hardware: The Trade-off between Energy Dissipation, Energy-Delay Product and Reliability in Electronic and Magnetic Binary Switches
Featured Application: This work has applications in the benchmarking of binary switches for energy-efficient nanoelectronics. Abstract: Binary switches, which are the primitive units of all digital computing and information processing hardware, are usually benchmarked on the basis of their 'energy-delay product', which is the product of the energy dissipated in completing the switching action and the time it takes to complete that action. The lower the energy-delay product, the better the switch (supposedly). This approach ignores the fact that lower energy dissipation and faster switching usually come at the cost of poorer reliability (i.e. higher switching error rate), and hence the energy-delay product alone cannot be a good metric for benchmarking switches. Here, we show the trade-off between energy dissipation, energy-delay product and error probability for both an electronic switch (a metal oxide semiconductor field effect transistor) and a magnetic switch (a magnetic tunnel junction switched with spin transfer torque). As expected, reducing the energy dissipation and/or the energy-delay product generally results in increased switching error probability and reduced reliability.
Introduction
The primitive element of all digital circuits (for computing, signal processing, etc.) is a "binary switch" which has two stable states encoding the binary bits 0 and 1. Computing and digital signal processing tasks are carried out by flipping such switches back and forth between the two states. As a result, for a given algorithm and a given computing architecture, the energy cost and speed of a digital computational task are determined by the energy dissipation and the switching delay of the switches. Therefore, it has become common practice to benchmark digital switches on the basis of their 'energy-delay product', which is the product of the energy dissipated during switching and the switching time [1].
Any saving in energy or computational time gained by employing switches with lower energy-delay product may be offset by the additional resources that would be needed for error correction. In this paper, we show the direct relation between energy dissipation and error-resilience with two examples: a field effect transistor and a nanomagnetic switch flipped with current-induced spin-transfer torque [2].
Field-effect-transistor switch
A metal-oxide-semiconductor field-effect transistor (MOSFET) is the archetypal binary switch that encodes the bits 0 and 1 in its two conductance states: high (ON) and low (OFF). In the ON-state, charges flood into the channel, providing a conduction path between the source and the drain to turn the transistor on, while in the OFF-state, these charges are driven out of the channel to disrupt the conduction path and turn the transistor off. Therefore, the two states are ultimately encoded in two different amounts of charge, Q1 and Q2, in the channel. The switching action changes the amount of charge from Q1 to Q2, or vice versa, resulting in the (time-averaged) flow of a current
(1) I = ΔQ/Δt,
where ΔQ = |Q2 − Q1| and Δt is the amount of time it takes for the channel charge to change from Q1 to Q2, or vice versa. This current will cause energy dissipation of the amount
(2) E_d = I²RΔt = (ΔQ)²R/Δt,
where R is the resistance in the path of the current and ΔV = IR. We can think of ΔV as the amount of voltage needed to be imposed at the transistor's gate to change the charge in the channel by the amount ΔQ. Note that the energy dissipation given in Equation (2) is not independent of the switching time, because ΔV depends on the switching time for a fixed ΔQ and R. Equation (2) clearly shows that for a fixed ΔQ and R, we will dissipate more energy if we switch faster (smaller Δt). Therefore, a more meaningful quantity to benchmark energy-efficiency is the energy-delay product, which is
(3) E_d Δt = (ΔQ)²R.
We can reduce this quantity by reducing ΔQ, but that increasingly blurs the distinction between Q1 and Q2, thereby impairing our ability to distinguish between bits 0 and 1. If ΔQ is too small, then thermal generation and recombination can randomly change the amount of charge in the channel by an amount comparable to ΔQ and cause random switching.
Therefore, a larger ΔQ translates to stronger error-resilience and better reliability. This makes it obvious that there is a direct relation between reliability and energy-delay product: if we reduce the energy dissipation or the energy-delay product by reducing ΔQ, then we will invariably make the switch less reliable. We can make this argument a little more precise by noting that ΔQ = C_g ΔV, where C_g is the gate capacitance. The thermal voltage fluctuation at the gate terminal is given by √(kT/C_g), where kT is the thermal energy [3], and hence the thermal charge fluctuation in the channel is δQ = C_g √(kT/C_g) = √(kT C_g). This quantity must be much smaller than the ΔQ one needs to switch the conductance state of the transistor, and hence
(4) ζ = ΔQ/√(kT C_g) ≫ 1.
Clearly, ζ is a measure of the 'switching reliability'; the larger its value, the more reliable the switch. From Equation (2), we now obtain E_d = ζ² kT C_g R/Δt and E_d Δt = ζ² kT C_g R, which immediately shows that we have to tolerate more energy dissipation E_d and a larger energy-delay product E_d Δt if we desire more reliability (i.e. a larger ζ) [4].
In some specific cases, such as a field effect transistor, we may be able to derive a relation between the energy dissipation/energy-delay product and the error probability. Consider the conduction-band diagram in the channel of an n-channel field effect transistor along the direction of drain current flow, as shown in Fig. 1. In the OFF-state, there is a potential barrier at the source-channel junction which prevents electrons in the source contact from entering the channel and turning the transistor ON. This barrier has to be lowered by the applied gate potential ΔV in order to allow electrons to enter the channel when the transistor has to be turned ON. Therefore, this barrier should be approximately equal to the quantity qΔV. It is clear then that the transistor can spontaneously turn ON while in the non-conducting state (causing a switching error that results in a bit error) if electrons can enter the channel from the source by thermionic emission over the barrier. The probability of entering the channel in this fashion, which is roughly exp(−qΔV/kT), is then the switching error probability p. From Equation (2), using ΔQ = C_g ΔV = (C_g kT/q) ln(1/p), we then get that the energy dissipation can be written as
(5) E_d = (C_g kT/q)² R ln²(1/p)/Δt,
and the energy-delay product can be written as
(6) E_d Δt = τ C_g (kT/q)² ln²(1/p),
where τ = R C_g is the gate charging time. Equations (5) and (6) show the direct dependences of the energy dissipation and the energy-delay product on the error probability p. These two equations clearly show that lower energy dissipation or a lower energy-delay product is associated with a higher switching error probability in a transistor switch.
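To make the trade-off in Equations (5) and (6) concrete, here is a short numerical sketch; the device values for C_g and R are assumed purely for illustration and do not come from the text:

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
q = 1.602176634e-19     # elementary charge, C
T = 300.0               # temperature, K
C_g = 1e-16             # assumed illustrative gate capacitance, F
R = 1e4                 # assumed illustrative channel resistance, ohm
tau = R * C_g           # gate charging time, s

def energy_delay_product(p: float) -> float:
    """Eq. (6): energy-delay product (J*s) needed for switching error probability p."""
    return tau * C_g * (k_B * T / q) ** 2 * math.log(1.0 / p) ** 2

for p in (1e-3, 1e-9, 1e-15):
    edp = energy_delay_product(p)
    # Eq. (5) evaluated at dt = tau gives the corresponding dissipation.
    print(f"p = {p:.0e}:  E_d*dt = {edp:.3e} J*s,  E_d at dt = tau: {edp / tau:.3e} J")
```

Running the loop shows the quadratic-logarithmic growth of the energy cost as the demanded error probability shrinks, which is the central point of this section.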
Nanomagnetic switches
Next, we consider a magnetic switch. Unlike the electronic switch, this case is not amenable to an analytical treatment, and hence we will resort to simulations. A bistable nanomagnetic switch can be fashioned out of a ferromagnetic elliptical disk where, because of the elliptical shape, the magnetization can point only along the major axis, either to the left or to the right, as shown in Fig. 2(a). This type of nanomagnet is said to possess in-plane magnetic anisotropy (IPA). In thinner nanomagnets, the surface anisotropy may be dominant and the magnetization can point perpendicular to the surface, either up or down. This type of nanomagnet is said to possess perpendicular magnetic anisotropy (PMA) [Fig. 2(b)]. Either type makes a binary switch if we encode the bit information in the magnetization orientation, which can point in just two directions. In this paper, we will consider only the IPA nanomagnetic switch, although the results apply equally to PMA nanomagnets.
The IPA nanomagnet can be vertically integrated as the "soft" layer into a three-layer stack consisting of a "hard" ferromagnetic layer and an insulating (non-magnetic) spacer, to form a magnetic tunnel junction (MTJ), as shown in Fig. 2(c). The hard layer is permanently magnetized in one of its two stable directions. When the soft layer's magnetization is parallel to that of the hard layer, the MTJ resistance (measured between the two ferromagnetic layers) is low, while if the two magnetizations are antiparallel, the resistance is high. Thus, the MTJ acts as a binary switch, much like the transistor, whose two resistance states, high and low, encode the binary bits 0 and 1. The difference between the transistor and the MTJ is that the former is volatile (since charges leak out when the device is powered off), while the MTJ is non-volatile, since the bit information is encoded in magnetization (spins) and not charge.
In order to make the magnetizations of the hard and soft layers mutually parallel (ON state), we can employ spin-transfer torque [2]. We apply a voltage across the MTJ with the negative polarity of the battery connected to the hard layer. This will inject spin-polarized electrons from the hard layer into the soft layer, whose spins are mostly aligned along the magnetization orientation of the hard layer. These injected electrons will transfer their spin angular momenta to the resident electrons in the soft layer, whose spins will then gradually turn in the direction of the injected spins, and that will magnetize the soft layer in a direction parallel to the magnetization of the hard layer. This is how the MTJ is turned "on". In order to turn it "off", we reverse the polarity of the battery. That will inject electrons from the soft layer into the hard layer, but because of spin-dependent tunneling through the spacer, those electrons whose spin polarizations are parallel to the magnetization of the hard layer will be preferentially injected. As these spins exit the soft layer, their population is quickly depleted, leaving the opposite spins as the majority in the soft layer. That aligns the magnetization of the soft layer antiparallel to that of the hard layer. When that happens, the MTJ turns off.
For any given magnitude of injected current (with a given degree of spin polarization), we can calculate the switching error probability (at room temperature) associated with spin-transfer-torque switching of an MTJ by carrying out a Landau-Lifshitz-Gilbert-Langevin simulation (also known as a stochastic Landau-Lifshitz-Gilbert, or s-LLG, simulation). To do this, we solve an equation of the form
(7) dm/dt = −γ m × H − αγ m × (m × H) − a Is m × (m × p̂) − b Is (m × p̂),
where p̂ is the unit vector along the hard layer's magnetization and H = H_demag + H_thermal. The last term on the right-hand side of Equation (7) is the field-like spin-transfer torque exerted by the injected spin current Is, and the second-to-last term is the Slonczewski torque exerted by the same current. The coefficients a and b depend on the device configuration; following [5], we use the values given there. Here m(t) is the time-varying magnetization vector in the soft layer normalized to unity; mx(t), my(t) and mz(t) are its time-varying components along the x-, y- and z-axis, respectively (see Fig. 2 for the Cartesian axes); H_demag is the demagnetizing field in the soft layer due to its elliptical shape; and H_thermal is the random magnetic field due to thermal noise [6]. The remaining parameters in Equation (7) are: μ0, the magnetic permeability of free space; Ms, the saturation magnetization of the cobalt soft layer; kT, the thermal energy; and Ω = (π/4) a1 a2 a3, the volume of the soft layer (a1 = major axis, a2 = minor axis, a3 = thickness).
Further, Δt is the time step used in the simulation, and the thermal field is built from Gaussian random variables with zero mean and unit standard deviation [6]. The demagnetization factors are calculated from the dimensions of the elliptical soft layer following the prescription of ref. [7]. The nanomagnet soft layer is assumed to be made of cobalt with saturation magnetization Ms = 8 × 10^5 A/m and Gilbert damping α = 0.01. Its major axis is 800 nm, minor axis 700 nm and thickness 2.2 nm. We assume a fixed degree of spin polarization η in the injected current I; the spin current is then given by Is = ηI. Using the vector identity m × (m × A) = m(m·A) − A(m·m), the vector equation (7) can be recast [8] as three coupled scalar equations in the three Cartesian components of the magnetization vector, which we solve numerically. In our s-LLG simulations, we consider six different switching currents of 0.5, 1.0, 5.0, 10.0, 15.0 and 20.0 mA, corresponding to current densities of 1.14 × 10^9 A/m^2, 2.27 × 10^9 A/m^2, 1.14 × 10^10 A/m^2, 2.27 × 10^10 A/m^2, 3.41 × 10^10 A/m^2 and 4.55 × 10^10 A/m^2, respectively. We generate 1,000 switching trajectories for each current by solving these scalar equations. We start each trajectory from the initial condition my(0) = −0.99, mx(0) = mz(0) = 0.1,
and run each trajectory for 20 ns with a time step of 0.1 ps. After 20 ns, each trajectory ends with a value of my either close to +1 (switching success) or −1 (switching failure). The error probability is the fraction of trajectories that result in failure. In Fig. 3, we plot the error probability (at room temperature) as a function of the injected current. Keeping in mind that the bulk of the energy dissipated is proportional to the square of the current, we see that the error probability decreases monotonically with increasing current or increasing energy dissipation. This shows that energy efficiency can only be purchased at the cost of reliability. In this respect, the magnetic switch shows the same trait as the electronic switch. In both cases, we have to expend more energy during switching if we wish to increase switching reliability.
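The following is a drastically reduced macrospin sketch of such a Monte Carlo estimate, not the simulation used here: the full demagnetizing tensor is replaced by a single assumed easy-axis anisotropy field, the field-like torque is omitted, the time step is coarsened for speed, and the torque strength a_st and field magnitudes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

GAMMA = 1.76e11   # gyromagnetic ratio, rad s^-1 T^-1
ALPHA = 0.01      # Gilbert damping (from the text)
DT = 1e-12        # 1 ps step, coarser than the 0.1 ps used in the text, for speed
STEPS = 20_000    # 20 ns total simulated time, as in the text
H_K = 0.05        # assumed easy-axis (y) anisotropy field in tesla, standing in for H_demag
H_TH = 0.002      # assumed thermal field magnitude per step, tesla
P_HAT = np.array([0.0, 1.0, 0.0])   # hard-layer magnetization direction (easy axis)

def run_trajectory(a_st: float) -> bool:
    """Integrate a reduced s-LLG macrospin with a Slonczewski-like torque of
    strength a_st (tesla); returns True if m_y ends near +1 (switching success)."""
    m = np.array([0.1, -0.99, 0.1])
    m /= np.linalg.norm(m)
    for _ in range(STEPS):
        h = np.array([0.0, H_K * m[1], 0.0]) + H_TH * rng.standard_normal(3)
        dm = (-GAMMA * np.cross(m, h)                            # precession
              - ALPHA * GAMMA * np.cross(m, np.cross(m, h))      # damping
              - GAMMA * a_st * np.cross(m, np.cross(m, P_HAT)))  # spin-transfer torque
        m = m + dm * DT
        m /= np.linalg.norm(m)   # renormalize so that |m| = 1
    return m[1] > 0.0

def error_probability(a_st: float, n_traj: int = 100) -> float:
    """Fraction of failed trajectories, the quantity plotted in Figs. 3 and 4."""
    return sum(not run_trajectory(a_st) for _ in range(n_traj)) / n_traj
```

Sweeping a_st (which grows with the injected current) in error_probability reproduces the qualitative trend of Fig. 3: stronger torque, and hence larger dissipation, lowers the error probability.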
In Fig. 4, we show the error probability as a function of switching time (pulse width of the injected current) for a fixed magnitude of the current. The current strength chosen for this plot was 10 mA. In this simulation, we turned off the current after different intervals of time and continued the simulation for 20 ns to see whether the value of my ends up close to +1 (success) or −1 (failure). Again, the simulation duration of 20 ns was sufficient to ensure that each simulated trajectory ends with my close to either +1 (success) or −1 (failure). One thousand switching trajectories were generated for each pulse width, and the error probability is the fraction of trajectories that result in failure. We observe that the error probability decreases with increasing current pulse width (longer passage of current, or slower switching), as expected.
Conclusions
In this article, we have shown the relationship between energy dissipation, switching delay and reliability for binary switches used in nanoelectronics. Typically, energy efficiency and faster switching come at the cost of reduced reliability. Consequently, it is not appropriate to benchmark switching devices only in terms of their energy-delay product, since a lower energy-delay product can always be purchased at the cost of error-resilience. This raises the question of whether there are computing and information processing paradigms that can tolerate high error probabilities because they can afford to be more energy-efficient. Boolean logic, which is at the heart of most arithmetic logic units in modern-day computers, demands a high degree of reliability [9] and is therefore not likely to be frugal in its use of energy. On the other hand, there are computing paradigms (e.g. neuromorphic, probabilistic, Bayesian) where the computational activity is often elicited from the collective activity of many devices (switches) working in unison. In those cases, a single device (or a few devices) being erratic does not impair overall circuit functionality [10]. Consequently, they can tolerate much higher error probabilities. Hardware platforms for these non-Boolean computing paradigms are therefore likely to be more energy-efficient than Boolean logic, and that has already motivated a great deal of interest in them [9].
Figure 1 .
Figure 1. Conduction band profile along the channel of a field effect transistor in the OFF-state (solid line) and ON-state (broken line).
Figure 2 .
Figure 2. A nanomagnet shaped like an elliptical disk has two stable magnetization orientations which can encode the binary bits 0 and 1. (a) In-plane magnetic anisotropy, and (b) perpendicular magnetic anisotropy. (c) A magnetic tunnel junction (MTJ) showing the high- (OFF) and low- (ON) resistance states.
Figure 3 .
Figure 3. Switching error probability as a function of injected current magnitude. The energy dissipated is proportional to the square of the current. The current was kept on for the entire duration of the simulation, which is 20 ns.
Figure 4 .
Figure 4. Switching error probability as a function of current pulse width (i.e. the duration of spin-transfer torque). The current strength was kept fixed at 10 mA. | 3,767.4 | 2021-05-26T00:00:00.000 | [
"Engineering",
"Physics",
"Computer Science"
] |
Truthlikeness: old and new debates
The notion of truthlikeness or verisimilitude has been a topic of intensive discussion ever since the definition proposed by Karl Popper was refuted in 1974. This paper gives an analysis of old and new debates about this notion. There is a fairly large agreement about the truthlikeness ordering of conjunctive theories, but the main rival approaches differ especially about false disjunctive theories. Continuing the debate between Niiniluoto’s min-sum measure and Schurz’s relevant consequence measure, the paper also gives a critical assessment of Oddie’s new defense of the average measure and Kuipers’ refined definition of truth approximation.
relevant consequences verisimilitude decreases with increasing logical strength. In a letter on September 30, 1986, I stated a counterexample: Let p say that the number of moons in our solar system is 2, and q say that it is 2 or 2000. Then the stronger falsity p is closer to the truth than the weaker q. 1 I added that examples of this sort seem to prove that a satisfactory general definition of verisimilitude cannot be given merely by the concepts of truth value and deduction (whatever restrictions are given to the latter notion), but we need also a concept of distance or similarity.
Schurz sent his reply in a letter on October 3, 1986. He agreed with the judgment of my example, but pointed out that it presupposes that one can compare p and q for their verisimilitude, whereas his theorem 5.3 concerns qualitative cases without any "internal metric" for primitive predicates or propositions. For such situations the only plausible way of comparison is based on logical strength: the stronger falsity has less verisimilitude. Schurz assured further that he was looking for a natural embedding of a distance measure for "internally ordered" primitives into the relevant consequence approach. But this was not yet developed in the soon published joint paper of Schurz and Weingartner (1987).
One had to wait for the publication of Schurz and Weingartner (2010) to see how they propose to integrate a quantitative weight function into the relevant consequence approach. But, alas, the same theorem is still valid: among "completely false theories" (with empty restricted truth content) truthlikeness decreases with logical strength. Thus, the old debate, which started some 30 years ago, is still among us.
We shall return to the issue about false disjunctions in Sect. 5 after reviewing several other approaches and disputes about them. Section 2 summarizes some attempts to rescue Popper's original definition. Section 3 outlines the similarity approach and the controversy between Oddie's (1986) average measure and Niiniluoto's (1987) min-sum-measure. Section 4 shows that there is in fact quite a lot of convergence between rival approaches, if one restricts attention to conjunctive theories. Finally, Sect. 5 illustrates different intuitions about verisimilitude by showing how the main rival accounts give different answers to the ordering of false disjunctions.
Reactions to Popper's retreat
In 1960 Karl Popper gave a comparative definition for one scientific theory to be closer to the truth than another rival theory. His notion of verisimilitude or truthlikeness was intended to express the idea of "a degree of better (or worse) correspondence to truth" or of "approaching comprehensive truth" (see Popper 1963). For Popper, in his debate with Thomas Kuhn, it was also crucially important that this notion gives a definition of scientific progress as increasing verisimilitude.
Let T be the class of true statements and F the class of false statements in a given interpreted scientific language L. Let A and B be consistent theories, i.e. deductively closed classes of statements in L. Popper suggested that theory B is at least as truthlike as theory A if and only if (1) A ∩ T ⊆ B ∩ T and B ∩ F ⊆ A ∩ F.
Here A ∩ T is the truth content of A, and A ∩ F is the falsity content of A. B is more truthlike than A if at least one of the inclusions in (1) is strict. This is the case when the symmetric difference B Δ T = (B − T) ∪ (T − B) is a proper subset of A Δ T.
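As a toy sketch of the comparison in (1), with statements modeled as elements of finite sets rather than deductively closed theories (an assumption made purely for illustration), and using the fact that the falsity content B ∩ F equals B − T:

```python
def at_least_as_truthlike(B: frozenset, A: frozenset, T: frozenset) -> bool:
    """Popper's definition (1): B is at least as truthlike as A iff
    A's truth content is contained in B's and B's falsity content in A's."""
    return (A & T) <= (B & T) and (B - T) <= (A - T)

def more_truthlike(B: frozenset, A: frozenset, T: frozenset) -> bool:
    return at_least_as_truthlike(B, A, T) and not at_least_as_truthlike(A, B, T)

# Toy example: statements s1..s4, with T the set of true ones.
T = frozenset({"s1", "s2", "s3"})
A = frozenset({"s1", "s4"})        # one truth, one falsity
B = frozenset({"s1", "s2", "s4"})  # more truth, same falsity
print(more_truthlike(B, A, T))     # True: B improves on A's truth content
```

Note that in this toy setting, as in the Miller-Tichý result, a false B can strictly improve on a false A only by gaining truths without gaining falsities, which for deductively closed theories turns out to be impossible.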
Popper's definition has some nice properties: (TR1) T is more truthlike than any other theory A.
(TR2) If A and B are true, and B entails A, but not vice versa, then B is more truthlike than A. (TR2a) If A and B are true, and B entails A, then B is at least as truthlike as A. (TR3) If A is false, then A ∩ T is more truthlike than A.
Here TR1 says that the whole truth T is maximally truthlike. According to TR2, among true theories, truthlikeness covaries with logical strength. TR2a is a weaker version of TR2. TR3 says that the truth content of a false theory is more truthlike than the theory itself.
Many theories are incomparable by (1). In particular, Popper's definition does not allow that a false statement is so close to the truth that it is cognitively better than ignorance, where ignorance is represented by a tautology, i.e. the weakest true theory in L. Thus, the following principle is not satisfied: (TR4) Some false theories may be more truthlike than some true theories.
To illustrate TR4, for the cognitive problem about the number of planets (i.e. 8), the approximately true answer "9" is more truthlike than the tautological answer "any number greater than or equal to 0" or "I don't know". It is even better than its true negation "different from 9". Miller (1974) and Tichý (1974) proved that Popper's definition does not work in the intended way, since it cannot be used for comparing false theories: if A is more truthlike than B in Popper's sense, then A must be true. Thus, the following nontriviality principle is violated: (TR5) A false theory may be more truthlike than another false theory.
Both conditions TR4 and TR5 are important, if the notion of truthlikeness is to yield the hoped-for criterion of scientific progress, since for a fallibilist science typically develops by the replacement of a false theory by a better false theory, while the accumulation of truths in the sense of TR2 is only a rare special case. 2 Popper's first public reaction to Miller's result was to restrict comparisons of theories to their truth content, but this leads to the fatal "child's play objection" by Tichý (1974): any false theory can be improved by adding arbitrary falsities to it. In his Introduction to the volume of Postscript to the Logic of Scientific Discovery, Popper claimed that the admitted failure of his definition of verisimilitude has only negligible impact on his theory of science (see Popper 1982, pp. xxxv-xxxvii). A formal definition of verisimilitude is "not needed for talking sensibly about it". A new definition is needed only if it strengthens a theory, and it is "completely baseless" to claim that this unfortunate mistaken definition weakens his theory. Popper added that no one has ever shown "why the idea of verisimilitude (which is not an essential part of my theory) should not be used further within my theory as an undefined concept".
After this dramatic retreat by Popper, some of his followers suggested that critical rationalists need no precise notion of truthlikeness (John Watkins, Noretta Koertge, Hans Albert). Miller (1994) has continued to endorse Popperian falsificationism, even though he is skeptical of the possibility of finding a language-invariant definition of verisimilitude. 3 Still, many philosophers advocating scientific realism made efforts to rescue Popper's definition by logical means (cf. Niiniluoto 2017).
A model-theoretic version of Popper's approach has been proposed by Miller (1978) and Kuipers (1982). Let Mod(A) be the class of models of A, i.e. the L-structures in which all the sentences of A are true. Then define A to be at least as truthlike as B if and only if
(3) Mod(A) Δ Mod(T) ⊆ Mod(B) Δ Mod(T).
But if T is a complete theory, then this model-theoretic definition has an implausible consequence: among false theories, if theory B is logically stronger than A, then B is also more truthlike than A. Definition (3) is thus vulnerable to Tichý's "child's play objection", and the following adequacy condition is not satisfied: (TR6) Among false theories, truthlikeness does not always increase with logical strength.
Miller's later attempts to characterize truthlikeness by objective metrics have also led to violations of the principle TR6 (see Miller 1994, pp. 205, 215). To illustrate the problem with TR6, the stronger falsity "more than 100" is not better than the weaker falsity "9 or more than 100" for the question about the true number of planets. But for the same reason, we should disagree with Laymon's (1987) conclusion that the weaker of two false theories is always more truthlike, i.e. we require (TR7) Among false theories, truthlikeness does not always decrease with logical strength.
For example, the weaker claim "9 or more than 100" is not better than "9" as an estimate of the number of planets (see Niiniluoto 1998). 4 Several other logical treatments of verisimilitude violate the conditions TR4 or TR6. For example, the treatment by means of deductive power relations by Brink and Heidema (1987) disagrees with TR4. Mormann (2006) applies sophisticated mathematical tools of topology and metric spaces to the logical space of theories, which is "naturally ordered with respect to logical strength", and derives from these considerations the definition (3) and, as its improvement, a metric which preserves the order (3) "as far as possible". Kuipers (1982) applies the definition (3) to what he calls "nomic truthlikeness": a theory A asserts the physical possibility of the structures in Mod(A), 5 claiming that Mod(A) is identical with the class Mod(T) of all physically possible structures or "nomic possibilities". In this context T is usually not a complete theory. 6 Kuipers (1987b) argues that Popper had "bad luck" when he formulated his intuition with statements [see (1)] instead of possibilities, since (3) can be read so that all rightly admitted possibilities of B are admitted by A and all wrongly admitted possibilities of A are admitted by B. While this treatment avoids the Miller-Tichý trivialization and satisfies TR5, it violates TR4. Kuipers admits that the "naive definition" (3) is simplified in the sense that it treats all mistaken applications of a theory as equally bad. For this reason, and to avoid the threat of TR6, in his later work Kuipers has developed a "refined definition" which, by using a qualitative treatment of similarity, allows that some mistakes are better than other mistakes (see Kuipers 1987b, 2000). Cevolani (2016) has shown that Popper had "bad luck" in another sense: if truth and falsity contents are defined as classes of content elements, i.e. negations of state-descriptions, as Rudolf Carnap proposed in his treatment of semantic information, then Popper's comparative definition is again saved from Miller's refutation, even though not from the problems with TR4 and TR6. Schurz and Weingartner (1987) diagnose the failure of Popper's definition as the problem that a theory is assumed to be closed under all logical consequences. They show that the Miller-Tichý refutation is blocked if the truth content and the falsity content of a theory A are restricted to its relevant consequences, so that irrelevant disjunctive weakenings and redundant conjunctions are eliminated. It is important that these restricted sets together are logically equivalent to the original theory A. 7 This modified definition satisfies TR1, TR2, and TR5, but among "completely false theories" (with empty restricted truth content) truthlikeness decreases with logical strength (cf. TR7). Further, as the original version of Schurz and Weingartner's relevant consequence approach does not satisfy TR4, in their recent work they have supplemented it with a quantitative measure for formulas (see Sect. 5). 8 5 Kuipers (1982, 1987b, 2000) used to assume that the claim of a theory A is that A defines precisely the class T of nomic possibilities (i.e. Mod(A) = T), but in Kuipers (2014) he formulates theories with the weaker exclusion claim that A defines a superset of T. 6 If T is complete, then all of its models are elementarily equivalent, so that Mod(T) is a singleton set, consisting only of the actual world.
But even when Mod(T) has more than one element, the child's play objection or TR6 applies to "strongly false" theories A such that Mod(A) ∩ Mod(T) = Ø (see Kuipers 1987b). For a different interpretation of Kuipers, see Sect. 3 below. 7 This result, which is formulated as a conjecture in Schurz and Weingartner (1987), was not appreciated in the somewhat negative comment on "truncated theories" in the survey Niiniluoto (1998). Schurz and Weingartner (2010, p. 427) inform that a proof of this equivalence has been accomplished in propositional logic but so far not in full first-order logic. 8 A related way of rescuing Popper's definition with a restricted notion of content has been proposed by Gemes (2007).
The similarity approach
A recurring difficulty of attempts to rescue Popper's definition (1) is that the concepts of truth, falsity, and logical consequence are insufficient to avoid the undesirable consequences TR6 or TR7, and to make sense of the important fallibilist principle TR4. Against approaches relying merely on truth values and logical content or logical consequences, the advocates of the similarity approach employ the notion of similarity or resemblance between (statements describing) states of affairs. The similarity approach was discovered in 1974 independently by Pavel Tichý within propositional logic and Risto Hilpinen within possible worlds semantics (see Hilpinen 1976). Tichý (1974) hinted that a more general solution can be obtained by using Hintikka's distributive normal forms in predicate logic, which he then was the first to implement in Tichý (1976) for the full polyadic case. In the meantime, the similarity approach was developed in 1975 for monadic predicate logic by Ilkka Niiniluoto (see Niiniluoto 1977) and Tuomela (1978).9 Later extensions and debates are summarized in the independently written monographs of Oddie (1986) and Niiniluoto (1987), and in the collection of essays edited by Kuipers (1987a). Hilpinen (1976) assumed as a primitive notion the concept of similarity between possible worlds. His definition, which compares the minimum distances and the maximum distances of rival theories from the actual world, solves the Miller-Tichý problem with TR5 and satisfies TR6, but it leaves most theories incomparable, and does not satisfy TR4.10 It satisfies TR2a but not TR2. Niiniluoto (1977), instead, replaces possible worlds by constituents in monadic first-order logic. For a monadic language L with one-place predicates M_1, …, M_k, the Q-predicates Q_1, …, Q_K are defined by conjunctions of the form (±)M_1(x) & ··· & (±)M_k(x), where (±) is replaced by the negation sign ∼ or by nothing. Here K = 2^k. A constituent C_i in L expresses which Q-predicates are instantiated and which are not, so that its logical form is

(4) (±)(Ex)Q_1(x) & ··· & (±)(Ex)Q_K(x).

If an empty universe is excluded, the number of constituents is 2^K − 1. Let CT_i be the class of Q-predicates which are claimed to be non-empty by constituent C_i. All generalizations (i.e. quantificational statements without individual constants) can be expressed in a normal form as a disjunction of constituents.
The simplest distance between monadic constituents, due to W. K. Clifford already in 1877, is the number of their diverging claims about the Q-predicates. Thus, the Clifford measure between constituents C_i and C_j is the (normalized) cardinality of the symmetric difference |CT_i Δ CT_j|/K between the classes CT_i and CT_j. Tichý's (1976) proposal for the polyadic language implied as a special case, against the Clifford measure, that the distance between monadic constituents should take into account the similarities between Q-predicates. This led Niiniluoto in the same year to propose his alternatives to the Clifford measure (see Niiniluoto 1987, pp. 314-321). In propositional logic with atomic sentences p, q, and r, a constituent is a sentence of the form (±)p & (±)q & (±)r, where p and ¬p are called literals, and again each sentence has a disjunctive normal form as a disjunction of constituents. According to the Hamming distance, which corresponds to the Clifford measure, the distance between the propositional constituents p&q&r and p&¬q&¬r is 2/3 (see Fig. 1, where the negation ¬p is denoted by a bar over p).
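To make the Hamming distance concrete, here is a minimal Python sketch (an illustration added here, not part of the original text; the encoding of constituents as tuples of truth values is our own choice):

```python
# A propositional constituent over the atoms (p, q, r) is encoded as a
# tuple of booleans giving the truth value it assigns to each atom.
def hamming(c1, c2):
    """Normalized Hamming distance between two constituents."""
    assert len(c1) == len(c2)
    return sum(a != b for a, b in zip(c1, c2)) / len(c1)

pqr = (True, True, True)        # p & q & r
p_nq_nr = (True, False, False)  # p & ~q & ~r

print(hamming(pqr, p_nq_nr))    # 0.666... = 2/3, as in Fig. 1
```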
As constituents of L are mutually exclusive and jointly exhaustive, there is one and only one constituent C* in L which is true, so that it can be taken to be the target of the definition of truthlikeness. Then the distance of a theory h, which is a disjunction of constituents, should be a function of the distances of its disjuncts from the target C*. Oddie (1986) follows Tichý (1974) in choosing this function to be the average distance Δ_av(C*, h). Inspired by Hilpinen (1976), Niiniluoto (1977) first proposed the weighted average Δ_mm(C*, h) of the minimum and maximum distances. Niiniluoto's (1987) later min-sum measure Δ_ms(C*, h) differs from the min-max measure by replacing the maximum distance by the normalized sum Δ_sum(C*, h) of all distances of the disjuncts of h from C*:

Δ_ms(C*, h) = γ Δ_min(C*, h) + γ' Δ_sum(C*, h),

where γ > 0 and γ' > 0 are real-valued parameters indicating our interest in being close to the truth and avoiding falsities. Then the degree of truthlikeness Tr(h, C*) of a theory h relative to the target C* can be defined as 1 − Δ_ms(C*, h). A comparative notion is achieved by defining h to be more truthlike than h' (relative to target C*) if and only if Tr(h, C*) > Tr(h', C*).
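A small Python sketch of the min-sum measure follows, under the normalization assumption that Δ_sum divides the summed distances of h's disjuncts by the total distance of all constituents from C* (a common convention, so that Δ_sum of a tautology is 1; the weights below illustratively satisfy γ = 2γ'):

```python
from itertools import product

ATOMS = 3
CONSTITUENTS = list(product([True, False], repeat=ATOMS))

def hamming(c1, c2):
    return sum(a != b for a, b in zip(c1, c2)) / len(c1)

def tr_min_sum(h, target, gamma=2.0, gamma_prime=1.0):
    """Tr(h, C*) = 1 - (gamma * min-distance + gamma' * normalized sum).

    h is a theory given as a set of constituents (its disjuncts)."""
    d_min = min(hamming(c, target) for c in h)
    total = sum(hamming(c, target) for c in CONSTITUENTS)
    d_sum = sum(hamming(c, target) for c in h) / total
    return 1 - (gamma * d_min + gamma_prime * d_sum)

C_star = (True, True, True)  # the true constituent p & q & r
# Adding the true constituent to a false theory raises Tr (cf. TR3):
false_theory = {(False, True, True)}
print(tr_min_sum(false_theory, C_star)
      < tr_min_sum(false_theory | {C_star}, C_star))  # True
```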
The examples about the number of planets can be formalized in a monadic language with Q-predicates Q_i(x) for "x has i planets", for i = 0, …, 100. Then the distance between predicates Q_i and Q_j is |i − j|/100. As Q_8(a) is true for our solar system a, the distance of the claim Q_13(a) from the truth is 5/100, and disjunctive claims can be handled with Δ_av or Δ_ms.11 The power of this quantitative similarity approach can be seen in the fact that it makes all generalizations in L comparable for their truthlikeness. Using Jaakko Hintikka's distributive normal forms, it can be extended to generalizations in full first-order languages with relations, which serves to define a metric in the logical space of complete theories.12 Another extension is to modal monadic languages with the operator ♦ of physical or nomic possibility, so that nomic constituents have the form

(6) (±)♦(Ex)Q_1(x) & ··· & (±)♦(Ex)Q_K(x).

Distance from the true nomic constituent, using the Clifford measure or its generalizations, gives a solution to the problem of legisimilitude or closeness to the true law of nature (see Niiniluoto 1987, Ch. 11).13 The idea of nomic constituents gives a natural reinterpretation of Kuipers' (1982) original treatment of theoretical verisimilitude (see Niiniluoto 1987, pp. 381-382, 1998; Zwart 2001; cf. Kuipers 2000, p. 305). Instead of discussing models or possible worlds in the condition (3), a theory expresses the possibility of kinds of individuals or structures, so that (3) can be written as a condition on Q-predicates, and the quantitative distance |A Δ T| of a theory A from the truth T (see Kuipers 1987b, p. 88) is equal to the Clifford measure. Later Kuipers (2000) has spoken about "conceptual possibilities", and in Kuipers (2014) he explicitly adopts the suggestion that these possibilities could be Q-predicates (e.g. black raven, black non-raven, non-black raven, non-black non-raven). This means that a theory for Kuipers is a nomic monadic constituent in the sense of (6).14 In Kuipers (2014) he reformulates his approach so that a theory includes only an exclusion statement, i.e. the conjuncts of (6) with negations (e.g. the law "all ravens are black" excludes the possibility of non-black ravens). This analysis also shows that his later "revised" definition with a notion of "structurelikeness" can be compared to those variants of the Clifford measure which reflect distances between Q-predicates.
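The Clifford measure on sets of admitted Q-predicates is easy to state in code. A sketch (the raven encoding is our own illustration of Kuipers' Q-predicate reading, not taken from the text):

```python
# Q-predicates for the raven example: each theory is the set of kinds
# of individuals it admits as (nomically) possible.
Q = {"black raven", "black non-raven", "non-black raven", "non-black non-raven"}

def clifford(A, T, K=len(Q)):
    """Normalized symmetric difference |A Δ T| / K between the
    Q-predicates admitted by theory A and by the true theory T."""
    return len(A ^ T) / K

# "All ravens are black" excludes non-black ravens:
A = Q - {"non-black raven"}
T = Q - {"non-black raven"}           # suppose the law is in fact true
B = Q                                  # the tautological theory admits everything
print(clifford(A, T), clifford(B, T))  # 0.0 and 0.25
```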
Let us still compare the average and min-sum measures as methods of balancing the goals of truth and information in the case of monadic languages. They define truthlikeness in different ways through a "game of excluding falsity and preserving truth" (see Niiniluoto 1987, p. 242). If h is false, then Tr(h ∨ C*, C*) > Tr(h, C*), so that they both satisfy TR3. Both avoid Miller's refutation of Popper, so that TR5 is satisfied. Further, both are able to handle false theories so that TR6 and TR7 are satisfied: comparison of false theories depends on the distances of the false disjuncts from the truth.
Some of the properties of the min-sum measure depend on the choice of the parameters γ and γ'. Note that the average distance of a tautology t from the truth C* is 1/2, so that Δ_av satisfies TR4. For the min-sum measure, Tr(t, C*) = 1 − γ' for a tautology t. It follows that a false constituent C_i is less truthlike than the weakest true theory (i.e. worse than ignorance) if its distance from the truth is larger than γ'/γ. Such answers can be called misleading, while those better than ignorance are truth-leading. If the weight γ for the truth factor is large, no constituent is better than t. If the weight γ' for the information factor is large, all constituents are better than t.15 This gives a practical guide for choosing the weights γ and γ' so that TR4 is satisfied. For example, the choice γ ≈ 2γ' implies that in Fig. 1 the truth-leading answers are within a circle with radius 1/2 from the complete truth, and the misleading answers are outside this circle (see Niiniluoto 1987, p. 232). Zwart (2001) notes that the followers of Popper's explication give "content definitions" in the sense that the least truthlike of all theories is the negation ~C* of the complete truth C* (i.e. the disjunction of all false constituents), while the "likeness definitions" imply that the worst theory is the "complete falsity" (i.e. the constituent at the largest distance from the truth). For suitable choices of the weights, e.g. γ'/γ close to 1/2, the min-sum measure gives a likeness definition (see Niiniluoto 1987, p. 232; 2003). Zwart and Franssen (2007) use Arrow's theorem to argue that content and similarity definitions cannot be merged to give a truthlikeness ordering. Oddie (2013, p. 1670) thinks that the principle TR2 expresses the core of "content measures", suggesting that the min-sum measure is a "likeness-content compromise". According to his constraints, "strong" likeness definitions should accept the average measure. But, in fact, the min-sum definition shows that a likeness or similarity account can satisfy both TR2 (called "the value of content for truths" by Oddie) and TR5 (thus avoiding "the value of content for falsehoods"). Niiniluoto (1987, pp. 418-419) points out that for the trivial or "flat" distance function, which takes all false constituents to be equally distant from the truth, the min-sum measure reduces to Levi's (1967) measure of epistemic utility. Levi's measure turns out to be a typical case of a content-based measure of closeness to truth (a weighted average of truth value and content measured relative to an even probability distribution), which violates TR4 and TR6. This shows the compatibility of the min-sum measure with the content approach, but only in the extreme case of flat distance or "likeness nihilism" (cf. Oddie 2013, p. 1675). The debate about Popper's principle TR2 is the main difference between the average approach of Oddie (1986) and the min-sum approach of Niiniluoto (1987). Namely, Δ_av violates TR2 and TR2a by allowing that a true theory can be improved by adding to it new false disjuncts, if the average distance is thereby decreased: for example, "8 or 13 or 20" is better than "8 or 20" as an answer to the question about the number of planets.16 In contrast, Δ_ms sees here the addition of "13" as superfluous, once we have already hit upon the truth "8". In a game of finding truth and excluding falsity, a theory can be seen as a set of alternative guesses about the target.
Even though the theory does not endorse the false guesses conjunctively but merely allows them (see Oddie 2013, p. 1671), still every falsity included in a theory is a mistake whose seriousness depends on its distance from the target. In this game, we try to hit the true constituent as soon as possible (cf. TR1). After several shots, our score depends on our best result (Δ_min) and the sum of all the mistakes (Δ_sum). Thus, this measure is based on a general principle of penalty: in calculating the distance from the truth, you have to pay for all of your mistakes. But this penalty is to some extent compensated, if we succeed in improving our best result so far.
The min-sum measure Δ_ms is an improvement of the min-max measure Δ_mm, which satisfies only the weaker TR2a, since it takes into account what happens in the theory between the best and worst guesses. In particular, a theory becomes worse when equally false new disjuncts are added to it:

(TR8) If C_i and C_j are equally distant from the truth, i ≠ j, then Tr(C_i ∨ C_j, C*) < Tr(C_i, C*).

This is called "thin better than fat" by Niiniluoto (1987, p. 233). Another important principle is that adding a new disjunct improves a theory only if the minimum distance is at the same time decreased (see M8 in Niiniluoto 1987, p. 233). Adding a new disjunct leads to a loss of information-about-the-truth which can be compensated only by at the same time improving the minimum distance from the truth:

(TR9) Tr(h ∨ C_i, C*) > Tr(h, C*) if and only if C_i is closer to the truth than the minimum of h.

This principle is satisfied by the min-sum measure, when γ is sufficiently large in relation to γ' (ibid., pp. 230-231).
Oddie returned to this debate in his 2014 revision of his 2001 survey of truthlikeness in the Stanford Encyclopedia of Philosophy (see Oddie 2013, 2014). He has no hesitation in his continuing rejection of the principle TR2, which he attacks indirectly by an ingenious derivation of the average measure from three axioms. The first of them is the "uniform distance principle":

(AV1) If C_i and C_j are equally distant from the truth, i ≠ j, then Tr(C_i ∨ C_j, C*) = Tr(C_i, C*).

This seems to be tailor-made for average, as it directly contradicts TR8. For example, AV1 implies that "4 or 12" and "12" are equally truthlike answers to the question about the number of planets.17 In the two-dimensional problem of locating a city on a map, with the Euclidean distance, the full circle around ⟨0, 0⟩ with radius 1 (or any of its parts) is equally truthlike as the point ⟨0, 1⟩. Similarly, in Fig. 1 the disjunction (¬p&q&r) ∨ (p&¬q&r) and the constituent ¬p&q&r are equally truthlike by AV1. This could be valid for the minimum distance, which excludes all content considerations and defines the notion of approximate truth in Hilpinen's (1976) sense, but not for a notion of verisimilitude based on the penalty principle. One peculiarity of AV1 is that its conclusion fails if a third closer constituent is added to the disjunction: if C_i and C_j are equally distant from the truth and farther from the truth than C_k, then the average measure and the min-sum measure agree that Tr(C_i ∨ C_j ∨ C_k, C*) < Tr(C_i ∨ C_k, C*).
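A quick numerical check of AV1 against TR8 in the Fig. 1 space, using the average measure alongside the min-sum measure (a sketch reusing the encodings above; parameter values are illustrative):

```python
from itertools import product

CONSTITUENTS = list(product([True, False], repeat=3))
C_star = (True, True, True)

def hamming(c1, c2):
    return sum(a != b for a, b in zip(c1, c2)) / len(c1)

def tr_av(h, target):
    """Average measure: Tr = 1 - mean distance of h's disjuncts."""
    return 1 - sum(hamming(c, target) for c in h) / len(h)

def tr_ms(h, target, g=2.0, gp=1.0):
    """Min-sum measure with weights g (truth) and gp (information)."""
    total = sum(hamming(c, target) for c in CONSTITUENTS)
    return 1 - (g * min(hamming(c, target) for c in h)
                + gp * sum(hamming(c, target) for c in h) / total)

ci = (False, True, True)   # ~p & q & r, distance 1/3 from C*
cj = (True, False, True)   # p & ~q & r, distance 1/3 from C*
print(tr_av({ci, cj}, C_star) == tr_av({ci}, C_star))  # True: AV1 holds for av
print(tr_ms({ci, cj}, C_star) < tr_ms({ci}, C_star))   # True: TR8 holds for ms
```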
Oddie's second assumption is the "weak Pareto principle":

(AV2) If C_j is at least as close to the truth as C_i, then Tr(h(j/i), C*) ≥ Tr(h, C*), where h(j/i) is the result of substituting C_i in theory h by C_j.

This is valid for the min-sum measure by the penalty principle: for its special case, "ovate better than obovate", see M12 in Niiniluoto (1987, p. 233). The third axiom is the "difference principle":

(AV3) The difference in the degrees of truthlikeness of h and h(j/i) depends only on C_i and C_j and the number of constituents in the normal form of h.

But it can be claimed that the location (not only the number) of constituents in the normal form of h is relevant: in the planet case, the shift from "14 or 15 or 20" to "13 or 15 or 20" is more dramatic than the shift from "10 or 14 or 15" to "10 or 13 or 15", since the former improves the best guess of the theory. This objection is not based on the endorsement of Δ_min as an adequate measure of truthlikeness, as Oddie's reply suggests (Oddie 2013, p. 1674), but rather on the principle TR9 satisfied by Δ_ms.
If your intuitions are in favor of TR2, TR8, and TR9, and you are not willing to accept the special principles AV1 and AV3, then Oddie's argument for the average function is not compelling.
The definition of truthlikeness is not only an exercise of logical intuitions, but it can be assessed also from the viewpoint of the intended applications of this notion in the philosophy of science. For Popper (1963) the two main applications were the falsificationist methodology and scientific progress. First, it is natural to expect that the elimination of false hypotheses should increase truthlikeness. But this idea needs a qualification: if our current hypothesis is the disjunction of two false theories A and B, where A is closer to the truth than B, then the elimination of A does not increase truthlikeness. This result is delivered both by Δ_ms and Δ_av. If we falsify our only hypothesis A, and are left with a tautology or ¬A, then by TR4 we may have a loss in verisimilitude. In order to gain in truthlikeness, the falsified hypothesis should be replaced by a more truthlike alternative. But suppose that our hypothesis is A ∨ B ∨ C, where A is true. Then the falsification of B should increase truthlikeness. This requirement follows for Δ_ms by TR2, while it may fail in some cases for Δ_av.
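The last claim is easy to witness numerically. A sketch (reusing the toy encodings above; the particular constituents are our own choice): with a true disjunct at distance 0 and false disjuncts at distances 1/3 and 1, eliminating the nearer false disjunct raises Tr under the min-sum measure but lowers it under the average measure.

```python
from itertools import product

CONSTITUENTS = list(product([True, False], repeat=3))
C_star = (True, True, True)

def hamming(c1, c2):
    return sum(x != y for x, y in zip(c1, c2)) / len(c1)

def tr_av(h, t):
    return 1 - sum(hamming(c, t) for c in h) / len(h)

def tr_ms(h, t, g=2.0, gp=1.0):
    total = sum(hamming(c, t) for c in CONSTITUENTS)
    return 1 - (g * min(hamming(c, t) for c in h)
                + gp * sum(hamming(c, t) for c in h) / total)

A = C_star                    # true disjunct
B = (False, True, True)       # false, distance 1/3
C = (False, False, False)     # false, distance 1
before, after = {A, B, C}, {A, C}                    # falsify B
print(tr_ms(after, C_star) > tr_ms(before, C_star))  # True for min-sum
print(tr_av(after, C_star) > tr_av(before, C_star))  # False for average
```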
Secondly, scientific progress in the objective realist sense can be explicated by increasing truthlikeness, and scientific regress by decreasing truthlikeness (Niiniluoto 1984). The adequacy conditions TR1-TR7 have a natural interpretation as principles of progress. For example, TR4 states that science has been progressive with false but truthlike theories: it is better to have Newton's theory than to be ignorant about mechanics, and by TR5 it is still better to replace Newton's theory with Einstein's special theory of relativity. One relevant comparison is between principles TR8 and AV1: are we making regress, if a theory is weakened by adding a new disjunct without a gain in closeness to the truth? But the most dramatic examples concern the violation of TR2 by the average measure. According to the cumulative principle TR2, adding conjunctively a new truth to an old truth is an instance of scientific progress. This result is delivered by the min-sum measure. Advocates of the average measure, who disagree with this conclusion, should explain what they mean by progress, or alternatively deny that the notion of truthlikeness is applicable to the axiological problem of characterizing progress in science.

17 (continued) … objective truthlikeness (see Niiniluoto 1987, pp. 238-241). For the problem of estimating unknown degrees of verisimilitude by expected values, see Niiniluoto (1977).
Convergence: conjunctive theories
In the similarity approach, theories are represented as disjunctions of constituents. Such normal forms are generalizations of the disjunctive normal form in propositional logic. Schurz and Weingartner (2010) use instead conjunctive normal forms, where theories are conjunctions of content elements. Schurz (2011) observes that the "basic feature approach" developed by Roberto Festa and Gustavo Cevolani also belongs to the conjunctive camp.18 It is restricted to c-theories or convex conjunctive theories, which are generalizations of constituents in propositional logic, i.e. statements about atomic sentences with some definite positive claims, some negative claims, and some left open. Then a c-theory h is more verisimilar than another c-theory h' if h makes more true claims and fewer false claims than h'. This is again a variant of Popper's criterion (1) and defines a qualitative partial ordering. A quantitative verisimilitude measure V is c-monotonic if it agrees with this qualitative ordering. Inspired by Amos Tversky's similarity metrics,19 Festa and Cevolani have developed a c-monotonic "basic feature approach", where the verisimilitude of a c-theory depends on its matches and mismatches in relation to the true constituent (see Cevolani et al. 2011). They also note that this quantitative measure, when restricted to c-theories, agrees with many existing measures, like average, min-max, and refined relevant consequences.20 But the claim that the min-sum measure is not c-monotonic (see Cevolani et al. 2011, p. 188) is mistaken, since the supposed counterexamples involve inadmissible choices of parameters.21 Propositional c-theories can be immediately generalized to monadic predicate logic (see Niiniluoto 2011). Such conjunctive theories include those statements which make definite existence claims about some Q-predicates and non-existence claims about some Q-predicates, but may leave some Q-predicates open. This class includes purely universal generalizations and purely existential statements. Monadic constituents are a special case, where the claims leave no question marks about the cells. Tuomela (1978) defined the distance of such generalizations from the true constituent directly by a function which compares their claims about the Q-predicates, so that his approach in fact initiated the basic feature approach. For constituents, this distance includes the Clifford measure as a special case (see Niiniluoto 1987, pp. 319-321). Following Oddie (1986, p. 86), Festa (2007) calls such monadic c-theories "quasi-constituents", and extends their treatment to statistical hypotheses. As monadic c-theories can be expressed as disjunctions of constituents, their degree of truthlikeness can be calculated by the min-sum measure. In Fig. 2 we have a generalization g_i with positive claims in PC_i, negative claims in NC_i, and question marks in QM_i, whereas CT* includes the existence claims of the true constituent. Then the min-sum measure gives a general formula which shows that the distance of such a generalization from the truth increases with c − c' (the number of wrong existence claims), b (the number of wrong non-existence claims), and m (its informational weakness) (see Niiniluoto 1987, p. 337). Apart from some coefficients, this is essentially the same as the result given by the balanced contrast measure (with φ = 1). So there is no deep difference between the disjunctive approach by normal forms and the conjunctive approach by basic features.
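A minimal sketch of the qualitative c-theory comparison just described (our own encoding, offered as an illustration: a c-theory is a pair of disjoint sets of atoms claimed true and claimed false, and the true constituent assigns every atom a value):

```python
def c_compare(h1, h2, truth):
    """True if c-theory h1 is more verisimilar than h2 in the
    qualitative, Popper-style sense: at least as many true claims and
    at most as many false claims, with at least one strict inequality.

    A c-theory is (pos, neg): atoms claimed true / claimed false.
    truth maps every atom to its actual truth value."""
    def true_false_claims(h):
        pos, neg = h
        t = {a for a in pos if truth[a]} | {a for a in neg if not truth[a]}
        f = (pos | neg) - t
        return t, f
    t1, f1 = true_false_claims(h1)
    t2, f2 = true_false_claims(h2)
    return t1 >= t2 and f1 <= f2 and (t1 > t2 or f1 < f2)

truth = {"p": True, "q": True, "r": True}
h1 = ({"p", "q"}, set())          # claims p, q; silent on r
h2 = ({"p"}, {"r"})               # claims p; wrongly denies r
print(c_compare(h1, h2, truth))   # True: more true claims, fewer false ones
```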
Recent work on conjunctive theories shows that there is a perhaps surprising convergence in the rival approaches to truthlikeness. One of the remarkable results is that Popper's TR2, which Tichý and Oddie have heavily criticized, holds for their average measure Δ_av when applied to c-theories:

(8) When restricted to c-theories, the average measure satisfies TR2.

This means that the average and min-sum measures give the same truthlikeness ordering of pairs of true c-theories when one of them is logically stronger than the other.22 Another important result concerns the relations of truthlikeness and belief revision. The basic observation is that a false belief system B may become less truthlike, when it is expanded or revised by a true input A. For example, let B state that the number of planets is 9 or 20 and A that this number is 8 or 20; then the expansion of B by A states that this number is 20. If the original theory B claims that this number is 19, then the revision of B by A is 20 (see Niiniluoto 2011). But it can be shown that convex c-theories avoid this problematic conclusion (see Cevolani et al. 2011; Niiniluoto 2011). On the other hand, even though c-theories behave in a nice way, one should emphasize that not all statements are conjunctive in this sense. For example, in propositional logic disjunctions p ∨ q and implications p → q are not convex, and in predicate logic the generalizations (x)(Fx → (Gx ∨ Hx)), (Ex)(Rx & (Gx ∨ Hx)) and (x)(Cx → (Fx ↔ Hx)) are not convex. The methodological superiority of the disjunctive approach can be seen in its ability to treat all statements with different logical forms in qualitative and quantitative first-order (and even higher-order) languages.
But as soon as we go beyond c-theories, the consensus breaks down. Cevolani and Festa (2018) propose an extension of their basic feature approach in terms of partial consequences, and it turns out that this leads to a measure which is ordinally equivalent to the average measure. So they are able to apply their extended measure to any statements in the propositional language, but the violation of TR2 forces them to reconsider their earlier work on scientific progress and belief revision. In particular, even what Niiniluoto (2011) regarded as the only "safe case" of increasing truthlikeness, viz. the expansion of a true belief system by a true input, is no longer generally valid. For example, if we correctly believe that the number of planets is 8 or 13 or 20, and we expand this belief by the true input that this number is not 13, then the conclusion "8 or 20" is less truthlike by the average measure (but not by the min-sum measure).
In the final section, we still have to assess the refined conjunctive treatment by Schurz and Weingartner (2010), which is not restricted to c-theories.
The problem of false disjunctions
After considering some disagreements about true disjunctions (cf. TR2), we are now ready to return to the old debate about false disjunctions, introduced in Sect. 1. Recall that Schurz and Weingartner (1987) use the relevant consequence relation to analyze theories into their conjunctive components. Let A_t be the true conjunctive parts of A and A_f its false components. Then, modifying Popper's criterion (1), A is at least as verisimilar as B if and only if

(10) A_t entails B_t and B_f entails A_f.

In their quantitative treatment for propositional languages with n atomic sentences, Schurz and Weingartner (2010) define a measure V which ordinally agrees with the partial ordering (10). For a theory A, the measure V(A) is the sum of the V-values of all conjunctive parts of A. For literals, where p is true, V(p) = 1 and V(¬p) = −1. The V-value of a tautology t is 0, and that of a contradiction is −(n + 1). The V-values of disjunctive consequence elements a are fixed by a formula in which k_a is the number of a's literals and v_a is the number of a's true literals. It follows that for a true disjunction V(p_1 ∨ p_2) = 1/n. For a completely false theory (with an empty A_t), it immediately follows that the refined account of Schurz and Weingartner agrees with their old approach about false disjunctions: among completely false theories verisimilitude decreases with logical strength (see Schurz 2011, p. 208). This means that the V-function violates the adequacy condition TR7. If the value function V is applied to the sentences in Fig. 1, we obtain a ranking on which V agrees with the ordering of the constituents by the min-sum measure Tr relative to the target p&q&r: stronger falsities are ranked as less truthlike, and (with admissible values of the parameters) the division between truth-leading (positive V) and misleading (negative V) is the same. Since ¬p&¬q&¬r is the least truthlike of all consistent sentences, the measure V defines a likeness ordering in Zwart's (2001) sense. But differences between these measures appear in the case of false disjunctions: V(¬p) = −1 < −1/3 = −V(p ∨ q) = V(¬p ∨ ¬q), but Tr(¬p) > Tr(¬p ∨ ¬q).
The latter inequality for Tr follows from the condition that ¬p ∨ ¬q has the same minimum distance 1/3 from p&q&r as ¬p, but it has to pay penalties for the constituents (i.e. p&¬q&r and p&¬q&¬r) which it adds to the normal form of ¬p. Differences can be found also among partially true theories: V(p&(¬q ∨ ¬r)) = 1 − 1/3 = 2/3 > 0 = V(p&¬q), but Tr(p&(¬q ∨ ¬r)) < Tr(p&¬q).
The argument for Tr(¬p) > Tr(¬p ∨ ¬q) follows directly from the min-sum definition, so that it is not based on any assumption about the relative distances of the false disjuncts from the truth. Indeed, here Tr(¬p) = Tr(¬q) holds by symmetry. So this example is different from the case of false estimates of the number of planets, discussed in Sect. 1. As the new approach by Schurz assumes that all true literals have the same V-value, and similarly for false literals, it does not tell how problems of numerical approximation could be handled in the relevant consequence approach.
It is interesting to observe that Oddie's average measure, in spite of its acceptance of the uniform distance principle AV1, agrees in this case with Schurz's judgment. Even though ¬p and ¬q have the same average distance 2/3 from the truth p&q&r, the average distance for ¬p ∨ ¬q is 11/18, which is less than 2/3. This follows from the fact that the two additional constituents in the normal form of ¬p ∨ ¬q in fact reduce the average distance. Cevolani and Festa (2018), whose "partial consequence approach" agrees with Oddie's average measure, also support Schurz's conclusion that the logically weaker of two false disjunctions is more truthlike.
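The 11/18 figure can be checked by enumerating the constituents that make a formula true, a sketch:

```python
from itertools import product

ATOMS = ("p", "q", "r")
CONSTITUENTS = list(product([True, False], repeat=len(ATOMS)))
C_star = (True, True, True)

def hamming(c1, c2):
    return sum(a != b for a, b in zip(c1, c2)) / len(c1)

def models(formula):
    """Constituents (valuations) satisfying a formula given as a
    Python predicate on (p, q, r)."""
    return [c for c in CONSTITUENTS if formula(*c)]

def av_distance(formula, target=C_star):
    ms = models(formula)
    return sum(hamming(c, target) for c in ms) / len(ms)

print(av_distance(lambda p, q, r: not p))            # 2/3
print(av_distance(lambda p, q, r: not p or not q))   # 11/18 = 0.6111...
```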
In conversation, Schurz raised the following question: assuming that (x)Fx is true, is the weaker falsity (Ex)¬Fx closer to the truth than the stronger falsity ¬Fa? To formalize this problem, let the language L include one monadic predicate F and two individual constants a and b. Then the truth in L is expressed by Fa & Fb. The statement ¬Fa is equivalent to the disjunction (¬Fa&Fb) ∨ (¬Fa&¬Fb), while (Ex)¬Fx adds to this disjunction one more false disjunct Fa&¬Fb. Therefore, by the min-sum measure with the target (x)Fx, we have Tr(¬Fa) > Tr((Ex)¬Fx). One might think otherwise, and follow Schurz's principle that here verisimilitude should decrease with logical strength, but this intuition would probably reflect the idea that a definite mistake about a concrete given individual a is somehow more serious than an indefinite claim about some individual just existing out there. But that consideration would lead us back to the situation where the false disjuncts are at different distances from the truth. While most of us would agree that the stronger answer "2" is closer to the truth about the number of moons than the weaker answer "2 or 2000", the comparison between "99 or 100 or 2000" and "99 or 2000" is more controversial; at least different answers would be given by Oddie and Niiniluoto (cf. TR9).
Let us finally see how Kuipers would solve the problem of false disjunctions. Note first that such false statements cannot be represented as constituents or c-theories, so that his new approach is not applicable. But if we go back to his early approach (3) and represent theories in propositional logic as sets of constituents,23 then, interestingly enough, his early and later accounts give diverging answers. The "naïve" definition of Kuipers (1982) suffers from the child's play objection, so that by the modified symmetric difference criterion (3) the stronger falsity ¬p is closer to the truth than the weaker ¬p ∨ ¬q. Here Kuipers agrees with the min-sum judgement (but for a different reason) and disagrees with Schurz. According to the refined account, theory A is at least as truthlike as theory B if

(11) for all x in B and all z in T there is y in A such that s(x, y, z), and
(12) for all y in A − (B ∪ T) there are x in B − T and z in T − B such that s(x, y, z),

where s(x, y, z) means that y is between x and z, i.e. y is at least as similar to z as x is (see Kuipers 2000, p. 250). As here the true theory T consists only of the true constituent C*, these conditions for false theories A and B can be simplified to

(11′) for all x in B there is y in A such that s(x, y, C*), and
(12′) for all y in A there is x in B such that s(x, y, C*).

Taking now A as ¬p ∨ ¬q and B as ¬p, conditions (11′) and (12′) are satisfied, so that A is at least as close to the truth as B. But for the constituent p&¬q&r in A there is no element x in B such that s(p&¬q&r, x, C*). Hence, A is in fact more truthlike than B by the revised definition of Kuipers. This agrees with the judgment of Schurz, so that both of them reject TR9, and thus disagree with the min-sum approach.
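A sketch of this refined comparison on propositional constituents, using the standard betweenness reading on which y lies between x and z iff y agrees with z on every atom on which x agrees with z (our encoding, not Kuipers' own notation):

```python
from itertools import product

CONSTITUENTS = list(product([True, False], repeat=3))
C_star = (True, True, True)

def between(x, y, z):
    """s(x, y, z): y agrees with z wherever x agrees with z."""
    return all(yi == zi for xi, yi, zi in zip(x, y, z) if xi == zi)

def at_least_as_truthlike(A, B, t=C_star):
    """Simplified refined conditions (11') and (12') with T = {t}."""
    c11 = all(any(between(x, y, t) for y in A) for x in B)
    c12 = all(any(between(x, y, t) for x in B) for y in A)
    return c11 and c12

B = {c for c in CONSTITUENTS if not c[0]}                # models of ~p
A = {c for c in CONSTITUENTS if not c[0] or not c[1]}    # models of ~p v ~q
print(at_least_as_truthlike(A, B))   # True
print(at_least_as_truthlike(B, A))   # False: A is strictly more truthlike
```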
For cases where x and y are real numbers or natural numbers, the relation of betweenness s(x, y, z) could be explicated by the geometrical distance, i.e. s(x, y, z) if and only if |y − z| ≤ |x − z|.24 Then (11′) is satisfied if the minimum distance of A to C* is smaller than or equal to the minimum distance of B to C*. Likewise, (12′) is satisfied if the maximum distance of A to C* is smaller than or equal to the maximum distance of B to C*. In this interpretation, the refined account of Kuipers would agree with Hilpinen's (1976) comparative notion of truthlikeness. Therefore it could violate the stronger form of TR2, and it would satisfy Oddie's uniform distance principle AV1, which is not valid for the min-sum measure. But if s(x, y, z) means that y is between x and z, so that z ≤ y ≤ x or x ≤ y ≤ z (see Kuipers 2000, p. 249), then AV1 is not satisfied. For both interpretations, Kuipers would agree that "9" is better than "9 or 20" as the number of planets, but he would allow that "9 or 20" and "9 or 12 or 20" are equally good as "9 or 19 or 20", in disagreement with both Δ_av and Δ_ms.25 These examples thus give us good reasons to continue old and new debates about the definition of truthlikeness.
"Philosophy"
] |
Electron Doping a Kagome Spin Liquid
Herbertsmithite, ZnCu$_3$(OH)$_6$Cl$_2$, is a two-dimensional kagomé lattice realization of a spin liquid, with evidence for fractionalized excitations and a gapped ground state. Such a quantum spin liquid has been proposed to underlie high temperature superconductivity and is predicted to produce a wealth of new states, including a Dirac metal at $1/3$rd electron doping. Here we report the topochemical synthesis of electron-doped ZnLi$_x$Cu$_3$(OH)$_6$Cl$_2$ from $x$ = 0 to $x$ = 1.8 ($3/5$th per Cu$^{2+}$). Contrary to expectations, no metallicity or superconductivity is induced. Instead, we find a systematic suppression of magnetic behavior across the phase diagram. Our results demonstrate that significant theoretical work is needed to understand and predict the role of doping in magnetically frustrated narrow band insulators, particularly the interplay between local structural disorder and the tendency toward electron localization, and pave the way for future studies of doped spin liquids.
For decades, the resonance valence bond (RVB), or quantum spin-liquid, state has been theorized to be an intricate part of the mechanism for high temperature superconductivity [1,2]. One geometrically frustrated system, Herbertsmithite (Fig.1(a)), is considered an ideal two-dimensional spin liquid candidate due to its perfectly ordered kagomé lattice of S = 1/2 copper ions, antiferromagnetic interactions with J ≈ −200 K, strong evidence for fractional spin excitations by neutron scattering, and, most recently, convincing indications of a gapped spin-liquid ground state by oxygen-17 NMR [3][4][5][6][7][8]. All of these factors suggest Herbertsmithite is the realization of a quantum spin liquid. Recent predictions expanded upon Anderson's theory in DFT calculations of electron doped Herbertsmithite, MxZn1−xCu3(OH)6Cl2, where Ga3+ or other aliovalent metals replace zinc [9,10]. A trivalent substitution introduces electrons into the material, raising the Fermi level to the Dirac points at x = 1, and giving rise to a rich phase diagram spanning from a frustrated RVB spin liquid (x = 0) to a strongly correlated Dirac metal (x = 1) with possible Mott-Hubbard metal-insulator transitions, charge ordering, ferromagnetism, or superconducting states.
It is challenging to synthesize electron doped Herbertsmithite directly, as Cu1+ will not assume the same distorted octahedral site on the kagomé lattice as Cu2+ under thermodynamic conditions, and copper(I) hydroxide is thermodynamically unstable towards disproportionation and evolution of hydrogen gas. By using low temperature topochemical techniques, this problem is circumvented by producing a kinetically metastable phase [11][12][13][14]. Here we use intercalation of lithium to produce electron doped Herbertsmithite, ZnLixCu3(OH)6Cl2 with 0 ≤ x ≤ 1.8. Laboratory X-ray powder diffraction (XRPD), Fig.1(b), shows that the underlying structure is maintained throughout the doped series. Lithium is not directly detected due to its small X-ray scattering intensity relative to copper and zinc. Any changes in the lattice parameters as a function of doping are small and within the resolution of the laboratory X-ray diffractometer (see SI). During Rietveld analysis, CuO and Cu2O were tested and found to be absent from the air-free samples by both XRPD and neutron diffraction. Unlike the air stable parent, the doped samples decomposed readily in air, Fig.1(c), with the most heavily doped samples completely decomposing within hours. This rapid and total decomposition is in agreement with the formation of a reduced copper (Cu1+) hydroxide in the bulk that is prone to decomposition in moisture. The color change from blue to black is also in agreement. As soon as any Cu1+ ions are present, there is another possible optical absorption mode: intervalence charge transfer (i.e. Cu2+ + Cu1+ → Cu1+ + Cu2+), or, put another way, a transition from an impurity band in the gap to the conduction band. Such absorption modes are common in mixed valent systems, such as the Cu1+-Cu2+ mixed valence compound (N2H5)2Cu3Cl6 [15].
To determine the position of Li within the structure, we carried out neutron powder diffraction of the undoped and maximally Li-doped specimens using the high flux NOMAD diffractometer at the Spallation Neutron Source, Oak Ridge National Laboratory (see SI).
Rietveld analysis reveals that the previously reported structure accurately models the data of the doped specimens, with the exception of the presence of a pocket of negative scattering in a tetrahedral hole formed by three (OH−) groups and one Cl− ion, located above and below the copper triangles in the kagomé layer. This is consistent with the presence of Li, which has a negative scattering factor. Although the site is physically small for a Li ion, the connectivity is consistent with a favorable tetrahedral bonding environment for Li. The XRPD studies are also consistent with this model. There are systematic changes in the O-Cu-Cl bond angle and the O-Cu, Cl-Cu, and O-O bond lengths (see SI). As the doping increased, the oxygen atoms move away from the Cu kagomé lattice and spread apart from one another. In concert, the Cl atom moves away from the kagomé lattice along the c-axis. These combined movements create more space in the Cl-(OH)3 tetrahedral hole. Further, a similar geometry is found in CuMg2Li0.31 [16], and a stable Rietveld refinement is obtained for the maximally doped sample N when including Li in that site, with the occupancy refining to ∼0.9 (x = 1.8(3) per formula unit, see SI). This structure puts the Li ion in close proximity to the Cl atom and appears to form a neutral LiCl dimer along the c-axis with a bond distance of ∼1.4 Å.
Such a dimer is consistent with our attempts to intercalate the larger K+ ion, which resulted instead in the formation of KCl. Future work is needed to determine if this model is an accurate description of the local atomic structure. X-ray photoelectron spectroscopy of the parent shows the shake-up satellite characteristic of Cu2+; in the doped samples, this satellite is greatly reduced due to the filled 3d shell in Cu1+ preventing this loss transition from occurring [18,19]. If it were purely Robin-Day Class 1 mixed valence (pure Cu1+ and Cu2+ sites with no interactions of ground or excited states), we would expect a mixed XPS signal of Cu1+ and Cu2+ with an approximate 2:1 ratio. In this case, however, there must be interactions between neighboring Cu1+ and Cu2+, given the shared hydroxyl bridge, through which we know (from the parent) that adjacent Cu ions interact [20][21][22]. The result is a suppression of the Cu2+ XPS satellites, even though resistance measurements show the charges must be localized. This model (which has discrete Cu1+ and Cu2+ ions, Robin-Day Class 2) would not only suppress the Cu2+ satellites but also give rise to an optical intervalence charge transfer, which would explain the black color of the material upon even light doping.
Secondly, the photoelectron-induced Auger Cu L3M4,5M4,5 spectra (Fig.2) are consistent with the partial reduction of Cu2+ to Cu1+ [18,19]. Although information on copper oxidation states is lost in a depth profile analysis with ion sputtering, such analysis can be used to determine the chemical composition [24]. As expected from the topochemical synthesis method, a thin surface layer of Li and benzophenone starting material is detected; upon ion sputtering (up to 100 min), the ratio of Cu:Zn:Cl is in agreement with the expected parent Herbertsmithite phase, with Li located throughout (see SI). Despite the introduction of a substantial number of electrons, the material remains insulating: two-probe room temperature resistance measurements on cold pressed pellets in a glovebox give a resistance > 2 MΩ for the doped series. Fig.3(a) shows the magnetic susceptibility, χ ≈ M/H, for the ZnLixCu3(OH)6Cl2 series. For x = 0, the inverse magnetic susceptibility is well-known to be linear at high temperatures and dominated by the kagomé network, with the signal at T < 20 K containing significant contributions from defect Cu2+ ions on the Zn2+ site between kagomé layers [5]. We thus performed fits to the Curie-Weiss law in the low temperature (T = 1.8-15 K) and high temperature (T = 100-300 K) regions to extract estimates of the number of spins arising from the intrinsic and excess Cu ions, respectively, as a function of x. The extracted Curie constants of both the low and high temperature regions decrease linearly with increasing doping level, Fig.4(a).
This systematic decrease is consistent with the reduction of magnetic Cu2+ (S = 1/2) to non-magnetic Cu1+ (S = 0). With an x-intercept value of x = 3.3(5), the high temperature extrapolation to zero is also consistent with the known stoichiometry of Herbertsmithite, Zn0.85Cu3.15(OH)6Cl2, where x = 3.15 would be necessary to convert all Cu2+ to Cu1+. All of the Weiss temperatures are negative, becoming less negative upon doping (see SI), in agreement with the expectation that the number of spins in the lattice is reduced. The low temperature extrapolation x-intercept value is x = 3.9(9); this is within error equal to that found from the high temperature extrapolation. Any subtle divergence between the high and low temperature x-intercepts likely reflects a difference in reducibility of the kagomé compared to the interlayer Cu2+ ions, since the high temperature paramagnetism includes both the kagomé and interlayer spins, whereas the low temperature signal is attributable only to the interlayer defect spins. Given the placement of the Li ions near the kagomé layer, it is no surprise that the kagomé layers are more greatly reduced than the interlayer sites. Further, the difference in local coordination (interlayer Cu in an O6 octahedron vs kagomé Cu in an O4Cl2 octahedron) would result in a difference in redox potential for Cu2+ + e− → Cu1+ between the two sites, so reducing one should be slightly more favorable than reducing the other. Fig.3(b) shows the low temperature heat capacity. There are two regions of significant entropy change as a function of doping: at T ≈ 5 K, the heat capacity of the sample decreases with increasing Li content, while at higher temperatures there is an entropy gain at nonzero x. Qualitatively, the low temperature data can be explained by the same mechanism as the magnetization, namely a reduction of the number of spins as Cu2+ is converted to Cu1+.
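The Curie-Weiss analysis behind these intercepts can be sketched as follows (illustrative synthetic data; the fit is linear in the inverse susceptibility, since 1/χ = (T − θ)/C):

```python
import numpy as np

def curie_weiss_fit(T, chi):
    """Fit 1/chi = (T - theta)/C; return Curie constant C and Weiss theta."""
    slope, intercept = np.polyfit(T, 1.0 / chi, 1)
    C = 1.0 / slope
    theta = -intercept * C
    return C, theta

# Synthetic example: C = 0.45 emu K/mol, theta = -300 K, plus noise
rng = np.random.default_rng(0)
T = np.linspace(100, 300, 50)      # high-temperature window from the text
chi = 0.45 / (T + 300) * (1 + 0.01 * rng.standard_normal(T.size))
print(curie_weiss_fit(T, chi))     # approximately (0.45, -300)
```

Repeating such fits for each doping level x, and extrapolating the fitted Curie constants linearly in x, yields the x-intercepts discussed above.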
To more quantitatively describe the changes, we parameterized the temperature-dependent data as a function of composition and applied magnetic field with a model of the form

C(T) = γT + β3T^3 + β5T^5 + C_Sch(T; A_HT, Δ_HT),

where C_Sch is a Schottky-like contribution with amplitude A_HT and gap Δ_HT. The γT term captures the linear contribution to the specific heat from the spin liquid (either intrinsic or due to defect spins). The phonon contribution is described by the β3T^3 and β5T^5 terms [25]. These phonon terms were calculated based on the fit to the parent and were then held constant for the remaining series; the resulting fits are shown in Fig.5(a). Upon doping, A_HT (Fig.4(b)) sharply increases then begins to gradually decrease.
This model also fits the field dependent heat capacity, shown in Fig.5(b). Similar to the zero field data, the phonon terms β3T^3 and β5T^5 were calculated based on the fit to the parent and held constant at those values for the remaining series. The parameters γ, A_HT, and Δ_HT were shared across fields for each sample, and each sample was refined independently until convergence. These constraints yielded results consistent with the zero field fits. All the fits clearly demonstrate the field dependence of the low temperature feature, which is consistent with a contribution from the magnetic interlayer Cu2+. The low temperature magnetization measurements, sensitive to the interlayer Cu on the Zn site, indicate that these interlayer Cu atoms are also systematically reduced as a function of doping. If these Cu impurities give rise to the finite γ, it is expected that γ would also be reduced with doping, as observed. Alternately, if the γT term describes the spin liquid contribution to the heat capacity, a systematic decrease in this value could be explained by the reduction of the spin liquid nature of the material as electrons are introduced into the system. More interestingly, the high temperature Schottky anomaly shows no field dependence and reproduces the trend seen in the zero field data. Direct assignment of the heat capacity terms to specific origins is future work, but it is promising that a single model recapitulates the data across temperatures, fields, and compositions.
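A sketch of such a fit (the two-level Schottky form below is a standard choice, assumed here for illustration since the text leaves the functional form of C_Sch unspecified; all numerical values are synthetic):

```python
import numpy as np
from scipy.optimize import curve_fit

def schottky(T, A, delta):
    """Two-level Schottky anomaly with amplitude A and gap delta (K)."""
    x = delta / T
    return A * x**2 * np.exp(x) / (1 + np.exp(x))**2

def heat_capacity(T, gamma, beta3, beta5, A_HT, delta_HT):
    # gamma*T: spin-liquid / defect linear term; beta terms: phonons
    return gamma * T + beta3 * T**3 + beta5 * T**5 + schottky(T, A_HT, delta_HT)

# Illustrative fit to synthetic data; for the doped series the phonon
# terms beta3 and beta5 would be frozen at the parent-derived values.
T = np.linspace(2, 30, 100)
true_params = (0.01, 2e-4, 1e-8, 0.5, 30.0)
noise = 1 + 0.02 * np.random.default_rng(1).standard_normal(T.size)
C = heat_capacity(T, *true_params) * noise
popt, _ = curve_fit(heat_capacity, T, C, p0=(0.01, 1e-4, 1e-8, 0.3, 20.0))
print(popt)   # recovers approximately the true parameters
```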
These experimental data are in good agreement with two models for singlet trapping as a function of doping: a Monte Carlo simulation of the trapping of neighboring singlets by Cu1+ defects (blue dashed line in Fig.4(b)) and a calculation of singlet trapping by localized electrons on Cu triangles in the kagomé lattice (black dotted line) (see SI). Since the magnitude of the gap is on the same order as the expected singlet-triplet gap energy of isolated valence bonds in Herbertsmithite [26], it is tempting to interpret the growth in the high temperature specific heat as arising from the trapping of valence bonds into a glass or solid-like state. However, further work is needed to exclude other possibilities, such as a localized oscillator mode arising from the inserted Li ions. The singlet trapping models are also in agreement with the magnetization data. Every intercalated Li atom reduces one Cu atom, removing its spin contribution and yielding a one-to-one relationship. So upon doping, the Curie constant will go linearly to zero, in agreement with the experimental data.
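As an illustration of the first kind of model (a toy version only, not the authors' simulation from the SI): randomly convert a fraction x/3 of Cu sites to non-magnetic Cu1+ and count both the surviving spins and the bonds that touch a defect, the latter serving as a crude proxy for trapped singlets.

```python
import numpy as np

def toy_doping_mc(x, n_sites=30_000, seed=0):
    """Toy model: each Cu site is reduced to non-magnetic Cu1+ with
    probability x/3 (one Cu reduced per Li, x Li per 3 Cu). Returns
    the surviving spin fraction and the fraction of randomly paired
    "bonds" touching at least one defect (a crude proxy for singlets
    pinned by Cu1+; the real kagome geometry is ignored here)."""
    rng = np.random.default_rng(seed)
    reduced = rng.random(n_sites) < x / 3
    partners = rng.integers(0, n_sites, size=n_sites)
    return 1 - reduced.mean(), (reduced | reduced[partners]).mean()

for x in (0.0, 0.9, 1.8):
    print(x, toy_doping_mc(x))
# The spin fraction falls linearly with x, matching the linear
# decrease of the Curie constant seen in the magnetization data.
```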
In conclusion, we have successfully introduced electrons into the prototypical kagomé quantum spin liquid Herbertsmithite. Despite the predictions, the doping of this system did not lead to metallicity or superconductivity down to T = 1.8 K. The magnetic field, temperature, and composition dependent specific heat all fit remarkably well to a single model.
What are the precise physical origins responsible for this behavior? It is plausible that the location of the inserted Li ions provides a sufficiently strong disorder potential that Anderson localization is never overcome, irrespective of electron count, but other explanations cannot be ruled out [27,28]. The interesting physics is the following: why does charge doping this spin liquid not change it into a metal? The lower connectivity of the 2-D kagomé lattice, which connects each site to four magnetic neighbors (n = 4) as compared to the six magnetic neighbors of a 2-D triangular lattice (n = 6), may also play a role in the behavior of the doped series. Previous pressure and doping studies on higher connectivity frustrated geometries, such as the organic triangular lattice κ-(ET)2Cu2(CN)3 [29], NaxCoO2 [30], and Na4
"Physics"
] |
Proposed mechanism of antibacterial mode of action of Caesalpinia bonducella seed oil against food-borne pathogens
The antibacterial mechanism of action of Caesalpinia bonducella seed oil on the membrane permeability of Listeria monocytogenes NCIM 24563 (MIC: 2 mg/mL) and Escherichia coli ATCC 25922 (MIC: 4 mg/mL) was determined by measuring the extracellular ATP concentration, the release of 260-nm absorbing materials, the leakage of potassium ions, and the relative electrical conductivity of bacterial cells treated at the MIC. Its mode of action on membrane integrity was confirmed by the release of extracellular ATP (1.42 and 1.33 pg/mL), loss of 260-nm absorbing materials (optical densities of 4.36 and 4.19), leakage of potassium ions (950 and 1,000 mmol/L) and increase in relative electrical conductivity (12.6 and 10.5%) against the food-borne pathogenic bacteria L. monocytogenes and E. coli, respectively. These findings indicate that C. bonducella seed oil exerts its antibacterial action by compromising membrane integrity, suggesting its considerable food and pharmacological potential.

Shruti Shukla (1), Rajib Majumder (2), Laxmi Ahirwal (3) and Archana Mehta (3). (1) Department of Food Science and Technology, Yeungnam University, 280 Daehak-Ro, Gyeongsan-Si, Gyeongsangbuk-Do 712 749, Republic of Korea; (2) Department of Applied Microbiology and Biotechnology, Yeungnam University, Gyeongsan, Gyeongbuk 712 749, Republic of Korea; (3) Laboratory of Plant Pathology and Microbiology, Department of Botany, Dr. Hari Singh Gour University, Sagar 470 003 MP, India. Bangladesh J Pharmacol. 2016; 11: 257-63. DOI: 10.3329/bjp.v11i1.24163.
Introduction
Food-borne disease, or food poisoning, is any illness resulting from the consumption of food contaminated with food-borne pathogenic bacteria, and it has been of great concern to public health (Tauxe, 2002). Synthetic chemicals have long been used to control microbial growth and to reduce the incidence of food-borne illness (Zurenko et al., 1996). Although these synthetic preservatives are effective, they might be detrimental to human health (Aiyelaagbe et al., 2007). Hence, there is great interest in novel plant-based antibacterial agents, such as essential oils, for controlling food-borne pathogens as safe and natural remedies (Giang et al., 2006). The application of essential oils as safe and effective alternatives to synthetic preservatives in controlling pathogens may reduce the risk of food-borne outbreaks and may assure consumers of safe food products (Yoon et al., 2011).
Caesalpinia bonducella (L.) Roxb. is often grown as a hedge plant and is found throughout the hotter parts of India. C. bonducella seeds are used by tribal people in India for controlling blood sugar and for the treatment of diabetes mellitus, asthma and chronic fever (Chakrabarti et al., 2003). All parts of C. bonducella (leaf, root and seed) are used in traditional systems of medicine. Previously, we reported the antibacterial, antidiabetic, antioxidant, anti-inflammatory, antipyretic and analgesic properties of C. bonducella seed oil (Shukla et al., 2010).
Previous results on the antibacterial screening of C. bonducella seed oil showed that it had a strong and consistent inhibitory effect against various food-borne pathogens (Shukla et al., 2013). Various food-borne pathogens, including L. monocytogenes and E. coli, displayed varying degrees of susceptibility to the seed oil, with MICs in the range of 2 to 8 mg/mL (Shukla et al., 2013).
The present study evaluated the antibacterial effect of C. bonducella seed oil in various in vitro assays against selected food-borne pathogens and proposes its possible antibacterial mode of action.
Collection of sample and oil preparation
The seeds of C. bonducella were collected in March 2006 from Sagar District, Madhya Pradesh, India, and taxonomic identification was conducted by the Herbarium In-charge, Department of Botany, Dr. H. S. Gour University, Sagar, MP, India. A voucher specimen has been deposited in the Herbarium of the Botany Department under the number Bot/H/2692. The seeds were dried, powdered, passed through a 40-mesh sieve and stored in an airtight container for further use (Shukla et al., 2010). The powdered seed material (100 g) was subjected to hydrodistillation for 3 hours using a Clevenger apparatus. The oil was dried over anhydrous Na2SO4 and preserved in an airtight vial at low temperature (4°C) for further analysis (Shukla et al., 2010).
Microorganisms
The test food-borne pathogens were Listeria monocytogenes NCIM 24563 and Escherichia coli ATCC 25922. All bacterial strains were grown in nutrient broth at 37°C and maintained on nutrient agar slants at 4°C.
Measurement of extracellular adenosine 5'-triphosphate (ATP) concentration
To determine the efficacy of C. bonducella seed oil on membrane integrity, extracellular ATP concentrations were measured according to a previously described method (Bajpai et al., 2013). Working cultures of L. monocytogenes and E. coli containing approximately 10^7 CFU/mL inoculum were centrifuged for 10 min at 1,000 ×g, and the supernatants were removed. The cell pellets were washed three times with 0.1 M sodium phosphate buffer (pH 7.0), and the cells were then collected by centrifugation for 10 min at 1,000 ×g. A cell suspension (10^7 CFU/mL) was prepared with 9 mL of sodium phosphate buffer (0.1 M; pH 7.0), and 0.5 mL of the cell solution was transferred into an Eppendorf tube for treatment with C. bonducella seed oil. Then, the different concentrations (control and MIC) of C. bonducella seed oil were added to the cell solution. Samples were maintained at room temperature for 30 min, centrifuged for 5 min at 2,000 ×g, and immediately placed on ice to prevent ATP loss until measurement. The extracellular (upper layer) ATP concentrations were measured using an ATP bioluminescent assay kit (USA), which comprised an ATP assay mix containing luciferase, luciferin, MgSO4, dithiothreitol (DTT), ethylenediaminetetraacetic acid (EDTA), bovine serum albumin and tricine buffer salts. The ATP concentration of the supernatants, which represented the extracellular concentration, was determined using a luminescence reader after the addition of 100 µL of ATP assay mix to 100 µL of supernatant. The emission and excitation wavelengths were 520 and 420 nm, respectively.
Assay of potassium ions efflux
A previously described method was used to determine the amount of potassium ions released (Bajpai et al., 2013). The concentration of free potassium ions in bacterial suspensions of L. monocytogenes and E. coli was measured after exposure of the bacterial cells to C. bonducella seed oil at the MIC in sterile peptone water for 0, 30, 60 and 120 min. At each pre-established interval, the extracellular potassium concentration was measured by a photometric procedure using a calcium/potassium kit. A control without C. bonducella seed oil was tested in parallel. Results were expressed as the amount of extracellular free potassium (mmol/L) in the growth media at each interval of incubation.
Measurement of release of 260-nm absorbing cellular materials
The release of 260-nm-absorbing materials from L. monocytogenes and E. coli cells was measured in 2-mL aliquots of the bacterial inocula in sterile peptone water (0.1 g/100 mL) to which C. bonducella seed oil was added at the MIC, with incubation at 37°C. At 0, 30 and 60 min of treatment, cells were centrifuged at 3,000 ×g, and the absorbance of the obtained supernatant was measured at 260 nm using a 96-well plate ELISA reader (Carson et al., 2002). A control without C. bonducella seed oil was tested in parallel. Results were expressed as the optical density of 260-nm absorbing materials at each time interval.
Measurement of cell membrane permeability
The effect of C. bonducella seed oil on the cell membrane permeability of the test microorganisms was determined as described previously (Patra et al., 2015) and expressed in terms of relative electrical conductivity. Prior to the assay, cultures of the test microorganisms were incubated at 37°C for 10 hours, centrifuged at 4,000 ×g for 10 min, and washed with 5% (w/v) glucose solution until their electrical conductivities approached that of the 5% glucose solution, to establish an isotonic condition. C. bonducella seed oil at the MIC acquired for each test organism was added to the 5% glucose solution, the mixtures were incubated at 37°C for 8 hours, and the electrical conductivities (La) of the reaction mixtures were determined. The electrical conductivities of the bacterial solutions were further measured at 2-hour intervals over the 8-hour period (Lb). The electrical conductivity of each test pathogen in isotonic solution killed by boiling water for 5 min served as a control (Lc). The relative electrical conductivity was measured using an electrical conductivity meter, and the membrane permeability was calculated according to the following formula: Relative conductivity (%) = (La − Lb)/Lc × 100.
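As a worked illustration of this formula, the sketch below (in Python, with purely hypothetical conductivity-meter readings) computes the relative conductivity at each sampling interval; since membrane damage releases electrolytes, the difference between La and Lb grows with time, and the absolute difference is used here because sign conventions for this formula vary across papers.

```python
# Minimal sketch with hypothetical conductivity readings (microS/cm).
# La: reaction mixture with oil at MIC; Lb: readings at 2-h intervals;
# Lc: boiled-cell (dead) control in isotonic 5% glucose.
La = 12.0
Lb = {2: 18.5, 4: 24.1, 6: 28.7, 8: 31.2}   # hour -> reading
Lc = 55.0

for t, lb in sorted(Lb.items()):
    rel = abs(La - lb) / Lc * 100.0          # relative conductivity (%)
    print(f"{t} h: {rel:.1f} %")
```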
Statistical analysis
All data are expressed as the mean ± SD of three independent replicates. Analysis of variance was performed by one-way ANOVA followed by Duncan's multiple range test to assess the significance of differences, using SPSS software.
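A minimal sketch of this workflow is given below, assuming hypothetical triplicate values; note that Duncan's multiple range test is not available in SciPy, so Tukey's HSD is shown as a stand-in post hoc test.

```python
import numpy as np
from scipy import stats

# Hypothetical triplicate measurements (e.g., extracellular ATP, pg/mL)
control   = np.array([0.32, 0.30, 0.34])
treatment = np.array([1.42, 1.38, 1.45])

f_stat, p_value = stats.f_oneway(control, treatment)   # one-way ANOVA
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")          # p < 0.05 -> significant

# Post hoc comparison (Tukey's HSD as a stand-in for Duncan's test)
print(stats.tukey_hsd(control, treatment))
```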
Measurement of ATP
The extracellular ATP concentrations in the control cells of L. monocytogenes and E. coli were found to be 0.32 and 0.48 pg/mL, respectively (Table I). L. monocytogenes and E. coli cells treated with C. bonducella seed oil at the MIC showed a significant (p<0.05) increase in the release of extracellular ATP. In this assay, the extracellular ATP concentrations for treated L. monocytogenes and E. coli cells were 1.42 and 1.33 pg/mL, respectively (Table I).
Measurement of potassium ion leakage
In this assay, the antibacterial mode of action of C. bonducella seed oil was confirmed by the release of potassium ions from treated cells of L. monocytogenes and E. coli (Figure 1). The release of potassium ions from the bacterial cells began immediately after the addition of C. bonducella seed oil at the MIC, followed by a steady loss over the specified time period (Figures 1a and 1b). In contrast, no leakage of potassium ions was observed from control cells of L. monocytogenes and E. coli during the study.
Measurement of 260-nm materials
The antibacterial mode of action of C. bonducella seed oil was further confirmed by the leakage of 260-nm-absorbing materials when the test food-borne pathogens were exposed to the oil at the MIC. In this assay, exposure of L. monocytogenes and E. coli to C. bonducella seed oil caused a rapid loss of 260-nm-absorbing materials from the bacterial cells. The optical density (260 nm) of the culture filtrates of L. monocytogenes and E. coli cells exposed to C. bonducella seed oil revealed an increasing release of 260-nm-absorbing materials with exposure time (Figure 2). No changes in the optical density of control cells of the test pathogens were observed during the study. After 60 min of treatment, a more than 2-fold increase was observed in the optical density of the culture filtrates of bacterial cells treated with C. bonducella seed oil (Figures 2a and 2b).
Measurement of cell membrane permeability
Figure 3 depicts the effect of exposure to C. bonducella seed oil at the MIC on the membrane permeability of L. monocytogenes and E. coli in terms of their relative electrical conductivities. The effect of C. bonducella seed oil in this assay was time-dependent, with the relative electrical conductivity of each test pathogen increasing over time. C. bonducella seed oil exerted a significantly (p<0.05) greater effect on L. monocytogenes (Figure 3a), with a larger increase in relative electrical conductivity, than on E. coli (Figure 3b). No change in relative electrical conductivity was observed in the untreated control samples.
Discussion
In this study, membrane permeability parameters, namely the extracellular ATP concentration, leakage of potassium ions, loss of 260-nm-absorbing materials, and relative electrical conductivity, were used to determine the mode of action of C. bonducella seed oil against selected food-borne pathogenic bacteria. The results point to a significant impairment of membrane permeability of the tested bacteria by C. bonducella seed oil, causing leakage of intracellular ATP through the damaged cell membrane, as also reported previously (Herranz et al., 2001). It has been found that cells of B. subtilis treated with essential oil components released an increased level of the extracellular ATP pool (Helander et al., 1998).
ATP is required for various cell functions, including transport work that moves substances across cell membranes, which makes it an important parameter for understanding the mode of action of antibacterial agents. The antibacterial effect may subsequently be established through inhibition of the proton motive force, inhibition of respiration, inhibition of substrate oxidation and active transport, and loss of pool metabolites; disruption of the synthesis of macromolecules such as proteins, lipids and polysaccharides may also occur. In general, leakage of intracellular material induced by antimicrobial agents results in the death of the cell (Denyer, 1990; Farag et al., 1989).
Moreover, in this study, C. bonducella oil exerted a remarkable effect on the release of potassium ions from L. monocytogenes NCIM 24563 and E. coli ATCC 25922 cells treated at the MIC.
Previously, Bajpai et al. (2013) observed a similar effect of C. cuspidata oil on the cell membrane of target pathogens, confirming that the oil significantly promoted the release of potassium ions from exposed bacterial cells.
The bacterial plasma membrane provides a permeability barrier to the passage of small ions such as potassium, which are necessary electrolytes facilitating cell membrane functions and maintaining proper enzyme activity. This impermeability to small ions is regulated by the structural and chemical composition of the membrane itself. An increase in the leakage of potassium ions is therefore an indication of disruption of this permeability barrier. Maintaining ion homeostasis is integral to maintaining the energy status of the cell, including solute transport, regulation of metabolism, control of turgor pressure and motility (Cox et al., 2001). Therefore, even relatively slight changes to the structural integrity of the cell membrane can adversely affect cell metabolism and lead to bacterial death. Consistent with this, the effect of carvacrol, an essential oil component, on the bacterial proton motive force was strongly correlated with the leakage of various substances, such as ions, ATP, nucleic acids (260-nm materials) and amino acids (Herranz et al., 2001).
The observation that the loss of 260-nm-absorbing materials was as extensive as the leakage of potassium ions might indicate that the structural damage sustained by the bacterial cell membranes resulted in the release of macromolecular cytosolic constituents (Farag et al., 1989). This suggests that monitoring K+ efflux and the release of 260-nm-absorbing materials from L. monocytogenes NCIM 24563 and E. coli ATCC 25922 might be among the more sensitive indicators of membrane damage and loss of membrane integrity (Cox et al., 2001).
Furthermore, this study revealed that C. bonducella oil has the ability to disrupt the plasma membrane, as confirmed by the changes observed in the relative electrical conductivity of both tested bacteria.
Similarly, oils of different origins have also shown remarkable effects on the relative electrical conductivity of bacterial pathogens (Patra et al., 2015).
Maintaining membrane integrity and controlled permeability is essential for the overall metabolism of a bacterial cell; hence, changes in relative electrical conductivity reflecting loss of membrane integrity may severely hamper cell metabolism and eventually lead to cell death (Cox et al., 2001).
Based on the findings of this study, it can be hypothesized that accumulation of the oil in the plasma membrane caused an immediate loss of cell integrity, making the membrane increasingly permeable to ions and other essential metabolites, which may underlie the antibacterial activity of C. bonducella seed oil. Excessive leakage of cytoplasmic material is taken as an indication of gross and irretrievable damage to the plasma membrane (Cox et al., 1998).
Based on these facts and the outcome of this study, a proposed mechanism of the antibacterial effect of C. bonducella seed oil against food-borne pathogenic bacteria is presented in Figure 4.
These activities might be attributed to the presence of several biologically active components in the seed oil of C. bonducella, such as saponins, terpenoids, phenolics, flavonoids and polysaccharides (Ashebir and Ashenafi, 1999), as also supported by other researchers (Al-Reza et al., 2010; Rahman and Kang, 2010). In addition, other major components of C. bonducella seed oil that are well known for their antimicrobial efficacy, such as bonducin, caesalpin, lysine, aspartic acid, stearic acid, tocopherol, campesterol, beta-sitosterol, stigmasterol, and avenasterol, may contribute to the antibacterial activity of the seed oil against the tested food-borne pathogens (Sultana et al., 2012).
The results of this study confirm that C. bonducella seed oil disrupted membrane functions of the test food-borne pathogens. C. bonducella seed oil exerted its inhibitory effect through membrane permeabilization associated with membrane-disrupting effects, leading to a simultaneous reduction in cell viability, loss of 260-nm-absorbing materials, leakage of potassium ions, release of ATP and an increase in relative electrical conductivity, all of which confirm the loss of membrane integrity. Based on these findings, it is concluded that C. bonducella seed oil, showing significant antibacterial activity, can be used as a natural antimicrobial agent in the food and pharmaceutical industries to control the growth of food-borne pathogens.
Ashebir M, Ashenafi M. Assessment of the antibacterial activity
Figure 1: Effect of the C. bonducella seed oil on the leakage of potassium ions from the tested food-borne pathogenic bacteria L. monocytogenes (a) and E. coli (b)
Figure 2: Effect of the Caesalpinia bonducella seed oil (CBSO) on the release rate of 260-nm absorbing material from L. monocytogenes NCIM 24563 (a) and E. coli ATCC 25922 (b). Data are expressed as mean ± SD (n = 3). Values with different superscripts (a, b, c) are significantly different (p<0.05) between control and treatment groups
Figure 4: Proposed mechanism of the antibacterial mode of action of Caesalpinia bonducella seed oil against food-borne pathogenic bacteria | 4,293.8 | 2016-01-27T00:00:00.000 | [
"Biology"
] |
In situ synthesis of sub-nanometer metal particles on hierarchically porous metal–organic frameworks via interfacial control for highly efficient catalysis
Sub-nanometer metal particle/hierarchically mesoporous metal–organic framework composites can be synthesized in situ in bio-based surfactant emulsion.
Synthesis of Metal-Organic Frameworks (MOFs) in the W/O Emulsion:
The method used to synthesize the MOFs was similar to that reported by other authors 3 ; the main difference was that the MOFs were synthesized in emulsion. As an example, the preparation of the MOF formed by Zn2+ and 2-MeIM (Zn-MOF) is described. 68 mg of ZnCl2 was dissolved in 16 mL of water; this was combined with 60 mg of surfactant and 62 mL of dichloromethane and stirred rapidly for 5 min to form an emulsion. After that, 122 mg of 2-MeIM in 10 mL of dichloromethane was added to the emulsion, which was stirred at room temperature for 8 h. The obtained solid sample (white powder) was then washed alternately with ethanol and water five times and dried in a vacuum oven at 70 °C for 12 h.
In Situ Preparation of Au/Zn-MOF, Au/Cu-MOF, Ru/Zn-MOF, and Pd/Zn-MOF. The synthesis process was similar to that used for the preparation of the corresponding MOFs discussed above. The main difference was that a metal precursor (HAuCl4, RuCl3·3H2O, or Pd(NO3)2·nH2O) was added to the aqueous phase during the emulsification process. The procedure for Au/Zn-MOF is described in detail because the syntheses of Au/Cu-MOF, Ru/Zn-MOF and Pd/Zn-MOF are similar. In a typical experiment to prepare Au/Zn-MOF (0.8 wt% Au loading) with 0.8 nm Au particles, 0.5 mmol (68 mg) of ZnCl2 was dissolved in 14 mL of water and mixed with 2 mL of 6 mmol/L HAuCl4 aqueous solution (pH = 7.0, freshly adjusted with NaOH). Subsequently, 62 mL of dichloromethane containing 60 mg of SAAS-C12 was added and the mixture was stirred rapidly for 5 min to form an emulsion, after which another 10 mL of dichloromethane containing 1.5 mmol (122 mg) of the ligand 2-MeIM was added to the emulsion. The Zn-MOF was first synthesized at the water droplet/dichloromethane interfaces at 25 °C for 7 h; the emulsion was then heated to 35 °C for 30 min to reduce Au3+. The obtained Au/Zn-MOF precipitate (pale yellow powder) was separated, washed alternately with water and acetone five times, and dried under vacuum at 70 °C for 12 h.
For comparison, an Au/Zn-MOF composite was prepared by a two-step method in which the HAuCl4 solution (Au precursor) was added after Zn-MOF formation. The Zn-MOF was first synthesized in the emulsion at room temperature using a method similar to that discussed above, the only difference being that no Au precursor was added. After the formation of Zn-MOF in the emulsion, 2 mL of 6 mmol/L HAuCl4 aqueous solution (pH = 7.0, adjusted with NaOH) was added dropwise. The emulsion was stirred at 25 °C for another 5 h and heated to 35 °C for 30 min to reduce Au3+. The obtained Au/Zn-MOF precipitate (pale yellow powder) was separated, washed alternately with distilled water and acetone five times, and dried under vacuum at 70 °C for 12 h.
To prepare Au/Zn-MOF with 1.0 nm Au particles (1.1 wt% Au loading), 1.5 nm Au particles (1.3 wt% Au loading), and 2.0 nm Au particles (2.0 wt% Au loading), 2 mL of 16 mmol/L, 2 mL of 25 mmol/L, and 3 mL of 30 mmol/L HAuCl4 aqueous solutions were used, respectively (pH = 7.0, freshly adjusted with NaOH); the other conditions were the same as those for preparing Au/Zn-MOF with 0.8 nm Au particles. The procedure to synthesize Au/Cu-MOF with 0.8 wt% loading of 0.8 nm Au particles was similar to that for Au/Zn-MOF with 0.8 nm Au particles, the difference being that 0.5 mmol (85 mg) of CuCl2·2H2O was used in place of ZnCl2. For preparing Ru/Zn-MOF and Pd/Zn-MOF, 2 mL of 5 mmol/L RuCl3·3H2O and 3 mL of 30 mmol/L Pd(NO3)2·nH2O aqueous solutions were used, respectively.
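For orientation, the sketch below estimates the nominal Au loading implied by the quantities above, assuming ideal conversion of Zn2+ to a Zn(2-MeIM)2 (ZIF-8-type) framework and complete Au deposition; these stoichiometric assumptions are ours, and the actual loadings were determined by ICP-AES.

```python
# Nominal (upper-bound) Au loading from the quoted precursor amounts.
M_AU = 196.97            # g/mol, Au
M_MOF = 227.6            # g/mol, assumed Zn(2-MeIM)2 formula unit

n_au = 2e-3 * 6e-3       # mol: 2 mL of 6 mmol/L HAuCl4
m_au = n_au * M_AU       # g of Au
m_mof = 0.5e-3 * M_MOF   # g of MOF from 0.5 mmol Zn at ideal yield

print(f"nominal loading ~ {100 * m_au / (m_mof + m_au):.1f} wt%")
# ~2.0 wt%; the measured 0.8 wt% reflects sub-quantitative yield/uptake.
```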
It should be noted that if the oxidizing ability of the metal ions is very strong, the ions are reduced spontaneously before MOF formation, and sub-nanometer particles are then difficult to obtain by this method.
Light Microscopy. The microscopy study of emulsions was conducted using a microscope (Olympus IX83, Japan). For fluorescence microscopic analysis, water was stained with 5 µL Rhodamine B (0.1%, w/v), and the emulsion was then prepared as described above.
Droplet Size Distribution of the Emulsion. The size distribution of the droplets in the emulsion was obtained using dynamic light scattering (DLS). Measurements were carried out on an LLS spectrometer (ALV/SP-125) with a multi-τ digital time correlator (ALV-5000). Light of λ = 632.8 nm from a solid-state He-Ne laser (22 mW) was used as the incident beam. The measurements were conducted at a scattering angle of 90°. The correlation function was analyzed via the CONTIN method to obtain the distribution of diffusion coefficients (D) of the solutes. All measurements were performed at 25.0 ± 0.1 °C.
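For readers converting the CONTIN output into droplet sizes, the sketch below applies the Stokes-Einstein relation R_h = k_B·T/(6·π·η·D); the diffusion coefficient is a hypothetical value, and the dichloromethane viscosity is a literature value we assume for the continuous phase.

```python
import math

KB = 1.380649e-23    # J/K, Boltzmann constant
T = 298.15           # K (25 C, as in the DLS measurements)
eta = 4.1e-4         # Pa*s, dichloromethane at 25 C (literature value)
D = 5.0e-11          # m^2/s, hypothetical CONTIN output

r_h = KB * T / (6 * math.pi * eta * D)   # Stokes-Einstein
print(f"hydrodynamic radius ~ {r_h * 1e9:.1f} nm")
```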
Material Characterizations:
Powder XRD analysis of the samples was performed on an X-ray diffractometer (Model D/MAX2500, Rigaku) with Cu-Kα radiation at a scan speed of 5°/min. The morphologies of Zn-MOF, Cu-MOF, Au/Zn-MOF, Au/Cu-MOF, Ru/Zn-MOF and Pd/Zn-MOF were characterized by scanning electron microscopy (SEM; Tecnai 20, Philips) and transmission electron microscopy (TEM; JEOL-1011 and JEOL-2100F). HAADF images were obtained on the JEOL-2100F. The porosity of the materials was determined from N2 adsorption-desorption isotherms using a Micromeritics ASAP 2020M system. The Au, Ru and Pd loadings in the catalysts were determined by ICP-AES. X-ray photoelectron spectroscopy (XPS) analysis was performed on a Thermo Scientific ESCALab 250Xi using 200 W monochromated Al Kα radiation.
Typical procedure for the catalytic oxidation of cyclohexene over Au/Zn-MOF:
The aerobic oxidation of cyclohexene was conducted in a 20 mL stainless steel batch reactor. In a typical experiment, 30 mg of catalyst, 0.5 mmol of cyclohexene, and 2 mL of dioxane were added to the reactor. O2 was introduced into the system at 1 MPa and the pressure was maintained during the reaction. The reactor was placed in an oil bath at the desired temperature and the reaction mixture was stirred for the desired time. The reaction mixture was then cooled in ice-water and the O2 was released slowly. The reaction mixture was analyzed using a gas chromatograph (GC, HP 4890) equipped with a flame ionization detector (FID), with isopropanol used as the internal standard. In the reuse experiments, the catalyst was separated by centrifugation, washed with acetone four times, and used for the next run after drying at 70 °C under vacuum.
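The sketch below illustrates the internal-standard quantification implied here; the peak areas, response factors, and product slate (typical allylic oxidation products of cyclohexene) are hypothetical placeholders, since calibration is instrument-specific.

```python
# Internal-standard GC-FID quantification (hypothetical areas and RFs).
A_is = 1.00e5            # peak area of internal standard (isopropanol)
n_is = 0.50              # mmol isopropanol added

peaks = {                # substance -> (area, response factor vs. isopropanol)
    "cyclohexene":        (3.1e4, 1.20),
    "2-cyclohexen-1-one": (5.6e4, 1.05),
    "2-cyclohexen-1-ol":  (1.8e4, 1.10),
}

n0 = 0.50                # mmol cyclohexene charged
moles = {k: (a / A_is) * n_is / rf for k, (a, rf) in peaks.items()}
converted = n0 - moles["cyclohexene"]
print(f"conversion = {100 * converted / n0:.1f} %")
for p in ("2-cyclohexen-1-one", "2-cyclohexen-1-ol"):
    print(f"selectivity to {p} = {100 * moles[p] / converted:.1f} %")
```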
Typical procedure for the catalytic hydrogenation of diphenyl sulfoxide over Ru/Zn-MOF: The experimental procedure was similar to that discussed above. In the experiment, 40 mg of Ru/Zn-MOF was placed in the 20 mL stainless steel batch reactor, and 0.5 mmol of diphenyl sulfoxide and 2 mL of dioxane were added. H2 was introduced into the system at 5 atm and the pressure was maintained during the reaction. The reactor was placed in an oil bath at the desired temperature and stirred for the desired time. The reaction mixture was then cooled in ice-water and the gas was released. The reaction mixture was separated by centrifugation and analyzed using a gas chromatograph (GC, HP 4890) equipped with a flame ionization detector (FID), with isopropanol used as the internal standard. In the reuse experiments, the catalyst was separated by centrifugation, washed with acetone four times, and used for the next run after drying at 70 °C under vacuum.
Characterization of the emulsion formed by the bio-based surfactant (SAAS-C12).
The formation of the emulsion was studied by visual observation and light microscopy. The size distribution of the water droplets in the CH2Cl2/water emulsion was characterized by light microscopy with a water-soluble dye (Rhodamine B) and by dynamic light scattering (DLS), as shown in Figure S1c. The size of the droplets was in the range of 5-60 nm. Figure S1. The optical micrographs of the emulsion stabilized by SAAS-C12 in CH2Cl2/water (a) and the fluorescence microscopic analysis using Rhodamine B (b); (c) the particle size distribution of water droplets in the emulsion determined by DLS.
2. The formation of MOFs using emulsion as template. Table 2; XRD pattern (a) and TEM image (b) of the Ru/Zn-MOF catalyst after being used five times. | 2,168.6 | 2017-12-14T00:00:00.000 | [
"Materials Science"
] |
Best practices for authors of healthcare-related artificial intelligence manuscripts
Since its inception in 2017, npj Digital Medicine has attracted a disproportionate number of manuscripts reporting on uses of artificial intelligence. This field has matured rapidly in the past several years. There was initial fascination with the algorithms themselves (machine learning, deep learning, convolutional neural networks) and the use of these algorithms to make predictions that often surpassed prevailing benchmarks. As the discipline has matured, individuals have called attention to aberrancies in the output of these algorithms. In particular, criticisms have been widely circulated that algorithmically developed models may have limited generalizability due to overfitting to the training data and may systematically perpetuate various forms of biases inherent in the training data, including race, gender, age, and health state or fitness level (Challen et al. BMJ Qual. Saf. 28:231–237, 2019; O'Neil. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Broadway Books, 2016). Given our interest in publishing the highest quality papers and the growing volume of submissions using AI algorithms, we offer a list of criteria that authors should consider before submitting papers to npj Digital Medicine.
One key theme we hope to highlight in these guidelines is that npj Digital Medicine is a journal focused on innovation in digital medicine. As such, we encourage authors to justify their choice of machine learning algorithms in the context of a clinical problem and to clarify their methodological innovations.
In this editorial, we lay out a series of priorities and considerations for submitting authors. First and foremost among these recommendations is choosing a topic and problem that has a clear health context. The model you create should have a clear diagnostic or prognostic relationship to an important health problem, and there should be some explanation of how the strengths and limitations of existing models motivated the development of a new one.
IN ORDER TO QUALIFY FOR SUBMISSION TO NPJ DIGITAL MEDICINE, THE INNOVATION SHOULD ALSO BE A DIGITAL MEDICINE INNOVATION
Contributions outside of digital medicine (e.g., genetics, molecular, cardiac, radiology) that merely utilize machine learning algorithms on traditional data, without justifying how such an application might add value given the status quo, should be sent to their respective specialty journals. Digital medicine innovations should provide some potential clinical benefit beyond the status quo in the realm of either diagnosis or treatment.
THE DATASETS USED FOR MODEL DEVELOPMENT, VALIDATION, AND TESTING SHOULD BE ADEQUATELY DESCRIBED
Describe the digital datasets used for training, validation, and testing, including any differences between these datasets 3 . A separate test dataset, external to the ones used for model development and validation, must be used to assess and report the final model performance. Include measures taken to ensure that the data in the test set and the training/validation sets are independent of each other (e.g., zero overlap between training and test sets). Overlap between training and test datasets could artificially inflate test set performance. Samples within a dataset that are interdependent (e.g., multiple pictures of the same skin lesion, from different angles) should be disclosed, contained within a single subset (e.g., training), and not split across train/validation/test sets (one way to enforce this is sketched below). Provide definitions, methods and relevant context for the input data variables and the output variables of the AI task(s), including justifications for any modifications made to the original data (e.g., discretizing continuous data, exclusion of certain data points, handling of missing data, and so on) 4 . Describe what ground-truth label was used, why it was chosen, and its relationship to the clinical gold standard where applicable. If labels are assigned by human experts, describe the methods in detail. Describe any efforts to quantify, and mitigate, intra- and inter-observer labeling differences 5 . Also, describe how closely the temporal alignment of the labels relates to the data segments being assigned. Include any methodology used in preprocessing, post-processing, or otherwise altering the data, and how this would be done if deployed. Each dataset should be diverse in demographic and other relevant dimensions (e.g., vendor type) to allow for broad generalizability 2 . Explain why the test set is a representative sample and allows you to conclude the claims of the paper. Describe biases it may contain, and ethical considerations that could arise as a result of this bias 6,7 . Justify the sample size of the dataset; potential ways to justify sample size include: statistical guidance 8 , comparison with the sample sizes used in previous studies describing analogous models, empirical assessments of model performance by relative sample size, error bar analysis, re-sampling techniques such as bootstrap sampling 9 , characterization of out-of-distribution samples in the test set, or demonstration that model performance saturates with increasing input data size. Also identify and report limitations of the dataset relevant to the context of the problem (representativeness, bias, measurement error) 10 .
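As a concrete illustration of the independence requirement, the sketch below (using scikit-learn, with toy placeholder data) keeps all samples sharing a group identifier, such as images of the same lesion, within a single subset.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))              # 100 images, 16 features (toy)
y = rng.integers(0, 2, size=100)            # binary labels
lesion_ids = rng.integers(0, 30, size=100)  # 30 distinct lesions

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=lesion_ids))

# Verify zero lesion-level overlap between training and test sets
assert set(lesion_ids[train_idx]).isdisjoint(lesion_ids[test_idx])
```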
PROVIDE A DETAILED DESCRIPTION OF THE METHODS USED FOR MODEL DEVELOPMENT AND TESTING
First describe why a pattern to be identified by the model from the data is to be expected given current knowledge in the domain science. Describe the outcome to be predicted by the model (for example, the model classifies the presence or absence of a fracture on wrist X-rays). Describe the different modeling choices and the justification of the models eventually selected for comparison. Specify the type of models and describe all model-building procedures for replication studies. This should include: a detailed description of the model architecture (inputs, outputs, filter sizes, layers, and cost functions), details of the training approach, including data augmentation steps and parameters, network hyperparameters, the number of models trained, regularization methods, the process used to select final models, and a description of how weight parameters were initialized (e.g., random or drawn from a particular distribution). Also, describe the method and metrics used for internal validation of the model, as well as those used to guide parameter selection. Include the steps taken to avoid and assess overfitting, such as testing of the trained model on an independent dataset of comparable size to the training dataset 11 . Discuss the types of initialization methods used, if relevant, for any models.
DESCRIBE THE MODEL'S PERFORMANCE
Report all performance metrics with confidence intervals on validation and test datasets and report model calibration where applicable 6 . Compare performance with existing models, if possible 12 . If baseline methods are used for model comparison, explain why they are fair methods to compare against yours. If possible and reasonable, report results both in the context of model performance metrics (e.g., Dice, F-score, etc.) and of clinical performance metrics (sensitivity, number needed to treat, etc.) 13 . If possible and reasonable, benchmark against human performance. If possible and relevant, report false positive rates per time unit (e.g., per day, per week, etc.), instead of per data point, given wide variability in the length of data that may be used as an input unit. All comparisons of model performance (with humans; against other models, etc.) need to be backed by statistics.
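A minimal sketch of one common way to attach confidence intervals to a test-set metric is shown below, bootstrapping the AUC over resampled test indices; the labels and scores are synthetic placeholders.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=500)                       # toy test labels
y_score = np.clip(y_true * 0.3 + rng.normal(0.5, 0.25, 500), 0, 1)

aucs = []
for _ in range(2000):                                       # bootstrap resamples
    idx = rng.integers(0, len(y_true), len(y_true))
    if len(np.unique(y_true[idx])) < 2:                     # need both classes
        continue
    aucs.append(roc_auc_score(y_true[idx], y_score[idx]))

lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"AUC = {roc_auc_score(y_true, y_score):.3f} (95% CI {lo:.3f}-{hi:.3f})")
```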
DISCUSS THE LIMITATIONS OF THE MODEL AND/OR THE METHODS USED
Describe how the robustness of the model was assessed and report any results from such experiments 14 . Address potential challenges involved in scaling data collection or applying the model to existing datasets. If the dataset and source code of the model are publicly available, guidelines for the citation of publicly available datasets can be found at: https://www.nature.com/documents/nr-data-availability-statements-data-citations.pdf. Clinical trials involving the use of machine learning-based solutions should report in accordance with CONSORT guidelines 15,16 . Discuss the implications of errors made by the model on clinical and economic outcomes. If the manuscript addresses potential cost savings or quantitative clinical benefits, please provide sensitivity analyses. Also discuss and present failure cases and an analysis of these failures.
DESCRIBE THE PROPOSED CLINICAL CONTEXT AND WORKFLOW WITH MODEL IMPLEMENTATION (A SCHEMATIC DIAGRAM IS RECOMMENDED)
Describe the generalizability of the model, including its performance on the validation and testing datasets. Clarify whether transfer learning is applied to model training and, where applicable, present details of the transfer learning process. Discuss the transferability of the model to other clinical cases. Present clinical acceptability and user perceptions, and describe the model's pertinence to humans. Where appropriate, report user perceptions of the models and their outputs, and describe the trustworthiness of the models. Where appropriate, also describe the integration of the models into clinical workflows.
Our hope is that these guidelines and best practices will help authors innovating in the area of digital medicine to focus their research and manuscripts. A keen sense of clinical applications, combined with a standardized discussion of methods and performance metrics may help us raise the quality of contributions in the field. | 2,195.2 | 2020-10-16T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Fracture study of organic–inorganic coatings using nanoindentation technique
The mechanical response of different coating-substrate systems are investigated using the nanoindentation technique. From the load–penetration depth curves, we determined the hardness H c and the elastic modulus E c of the coatings. Moreover, as the force increases, cracks, delamination and chipping can appear. These effects induce discontinuities on the indentation curves. Measuring crack lengths or calculating the dissipated energy during indentation allows the determination of residual stress in the coating and interface toughness. Two kinds of organic–inorganic coatings on different substrates (silicon and glass) are studied. The coatings were prepared by the sol–gel process and deposited using the spin–coating technique. The first coating is a mixture of methyltrimethoxysilane, colloidal silica and tetraethylorthosilicate and the second one is based on 3-(trimethoxysilyl)propyl-methacrylate. The first one reveals better general mechanical properties (lower residual stress, better adhesion, higher interfacial toughness) on silicon than on glass. For the second one, the elastic modulus and hardness are comparable with those of polymers. In contrast, coating toughness is lower. © 2004 Published by Elsevier B.V.
Introduction
Hybrid organic-inorganic coatings find applications in different domains, particularly in optics with non-reflecting, anti-abrasion, or scratch-resistant surfaces [1] and in optoelectronics with integrated optical circuits [2]. Such coatings are fabricated by the sol-gel process. Compared with other processing techniques, the great interest of the sol-gel process lies in its relative tailoring simplicity. It is now well known that organic-inorganic hybrid precursors are very effective materials for such applications. However, for industrial use, the mechanical properties of the film and of the interface between film and substrate have to be known because they play a crucial role in coating efficiency and aging.
The nanoindentation technique is well known to permit the mechanical characterization of coating-substrate systems. The principle of the experiment is to indent the sample and to record the force as a function of the penetration depth. From the force-indentation depth curves, the hardness H_c and the elastic modulus E_c of the coating are the parameters classically obtained when the indentation depth is small compared to the coating thickness (less than about 10%). As the force increases, cracks, delamination and chipping can appear. These effects induce discontinuities in the indentation curves. More recently, Malzbender and de With [3,4] showed that, by measuring crack lengths or by calculating the energy dissipated during indentation, other mechanical parameters, such as the residual stress in the film, the fracture toughness of the coating, and the fracture toughness of the coating-substrate interface, can be determined.
In this paper, two kinds of organic-inorganic coatings are studied. The first one is based on 3-(trimethoxysilyl)propyl methacrylate and is used in the fabrication of integrated optical circuits. The second kind is a mixture of methyltrimethoxysilane, colloidal silica and tetraethylorthosilicate and is used to make anti-abrasion films. The structures of these coatings are different: the first is a copolymer with an inorganic part containing Si and Zr and an important organic network, whereas the second has a highly mineral structure with a network made only of siloxane bonds. Consequently, the mechanical behaviour of these systems is expected to differ. The aim of this work is to evidence such a difference by the nanoindentation technique.
Experimental
Experiments are performed using two kinds of hybrid organic-inorganic coatings. The first one (named A) contains 30% (by weight) solid components and 70% solvents. The precursors are methyltrimethoxysilane (MTMOS, assay > 98%), colloidal silica and tetraethylorthosilicate (TEOS, assay > 99%). The weight amounts of MTMOS and colloidal silica are equal, and the amount of TEOS is 2% (by weight) of the MTMOS quantity. The solvents are methanol (64%), diethylene glycol (34%), H2O (1%) and ethanol (1%). The coatings are deposited by spin-coating with free evaporation at 600 rpm. Two types of substrates are used: silicon wafers with a thermal SiO2 layer (this coating-substrate system is named A1) and soda-lime glasses (named A2). Coatings are dried for a few minutes at 100°C to evaporate the solvents and then heat treated at 250°C for 18 h to densify the coating.
The second kind of coating (named B) contains 3-(trimethoxysilyl)propyl methacrylate (MAPTMS, assay 99%), zirconium (IV) n-propoxide (Zr(OnC3H7)4, assay 70% in propanol), methacrylic acid (MAA, assay > 98%) and H2O in a molar ratio of 10:1.5:1.5:20. Irgacure 1800 (Ciba) is used as a photoinitiator to perform the polymerization of the methacrylate bonds. The complete synthesis of this coating solution is described in Ref. [5]. Coatings are spin-coated with free evaporation at 1800 rpm on silicon substrates (this system is named B). They are dried for 15 min at 60°C and UV cured for 30 s.
Indentation experiments are carried out using a home-made instrumented microindentor [6]. The sample is mounted on a platen that can be moved horizontally and vertically with motors and also manually tilted to adjust the sample position perpendicular to the indentor. The indentor is a Berkovich diamond. It is mounted on a force sensor working within the range 0-1000 mN with an accuracy of 1 µN. The penetration depth is recorded using a displacement sensor measuring the displacement of a skirt surrounding the indentor. The sensor works within the range 0-10 µm with an accuracy of 10 nm. The displacement rate may be chosen between 0.1 and a few µm/min. An optical microscope allows us to observe the sample surface before and after indentation.
Results
Coating thicknesses have been measured on cleaved samples using an optical microscope. The mean values are 5.1 ± 0.2 µm for the A coating and 14.5 ± 0.2 µm for the B coating.
Hardness and elastic modulus
The coating hardness H_c is defined as the ratio of the maximum load F to the contact area A, i.e., H_c = F/A. Knowing precisely the indentor geometry (by calibration), this area can be expressed in terms of the contact depth h_c, which is directly determined from the measurements.
The standard way to determine the coating elastic modulus E_c is to use the initial slope S of the unloading curve [7,8], through E_r = (√π/2)·S/√A. In this equation, E_r is the reduced modulus, given by [7,8] 1/E_r = (1 − ν_c²)/E_c + (1 − ν_i²)/E_i, where ν_c, E_c, ν_i, E_i are the Poisson's ratios and the elastic moduli of the coating and the indentor, respectively.
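A small numerical sketch of this extraction is given below; the unloading slope and contact area are hypothetical values, the diamond indentor constants (E_i = 1141 GPa, ν_i = 0.07) are standard, and the coating Poisson's ratio is an assumed value.

```python
import math

S = 5.0e4                  # N/m, initial unloading slope (hypothetical)
A = 4.0e-12                # m^2, contact area from tip calibration (hypothetical)
E_i, nu_i = 1141e9, 0.07   # diamond indentor
nu_c = 0.25                # assumed coating Poisson's ratio

E_r = math.sqrt(math.pi) * S / (2.0 * math.sqrt(A))          # reduced modulus
E_c = (1 - nu_c**2) / (1 / E_r - (1 - nu_i**2) / E_i)        # coating modulus
print(f"E_r = {E_r/1e9:.1f} GPa, E_c = {E_c/1e9:.1f} GPa")   # ~22.2 and ~21.2
```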
To ensure that the substrate has no influence, the investigated depth corresponds to about 10% of the whole layer thickness. The results are: for A1, E_c = 22 ± 3 GPa and H_c = 2.0 ± 0.2 GPa; for A2, E_c = 17 ± 2 GPa and H_c = 1.1 ± 0.2 GPa; and for B, E_c = 1.6 ± 0.2 GPa and H_c = 0.21 ± 0.05 GPa (see Table 1).
Coating toughness, residual stresses and interface toughness
When the indentation load increases, several kinds of damage appear: cracks (originating from the edges of the indentor), delamination (loss of contact between coating and substrate) and chipping (removal of coating segments). These events have been observed by microscopy and also associated with changes in the loading curves. They can be used to determine the coating and interface toughness and the residual stresses in the coating. Basically, two kinds of approach are used: a geometrical analysis and an energy-based one.
Geometrical approach
The most commonly used relationship between the length c of the radial cracks, the coating toughness K_Ic and the residual stress σ_r is of the form χF/c^(3/2) = K_Ic − Z·σ_r·c^(1/2), where χ = 0.016 (E_c/H_c)^0.5 for a Berkovich indentor [10] and Z is a crack-geometry factor. Therefore, by measuring c for different loads and plotting χF/c^(3/2) versus c^0.5, we can determine K_Ic and σ_r. To obtain the interfacial toughness K_int, Rosenfeld et al. [11] provided a method based on the relationship between the size of the delaminated area and the corresponding load, involving the coating thickness e and the diameter Φ_d of the delaminated area.
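A small sketch of this regression is shown below; the load and crack-length pairs are synthetic values constructed to be consistent with the A1 results, and the half-penny geometry factor Z ≈ 1.26 and the sign convention (tensile positive) are our assumptions.

```python
import numpy as np

E_c, H_c = 22e9, 2.0e9                          # Pa, measured A1 values
chi = 0.016 * np.sqrt(E_c / H_c)                # indentor constant from the text

F = np.array([10.8, 15.2, 29.4, 35.1]) * 1e-3   # N, synthetic loads
c = np.array([1.5, 2.0, 4.0, 6.0]) * 1e-6       # m, synthetic crack lengths

y = chi * F / c**1.5                            # Pa*m^0.5
slope, intercept = np.polyfit(np.sqrt(c), y, 1) # linear fit vs sqrt(c)

Z = 1.26                                        # assumed half-penny crack factor
print(f"K_Ic ~ {intercept/1e6:.2f} MPa*m^0.5")        # ~0.50
print(f"sigma_r ~ {-slope/Z/1e6:.0f} MPa (tensile)")  # ~121
```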
Energy-based approach
Malzbender and de With [4] suggested a method based on the energy dissipated during indentation, which is equivalent to the area between the loading and unloading curves. By plotting the dissipated energy U as a function of the maximum load F during indentation, it is possible to separate the different events. The energy dissipated in chipping, U_c, and in delamination, U_d, related to the corresponding fracture areas, gives an estimate of the fracture energies of the coating, Γ_c, and of the interface, Γ_int, respectively. The coating and interface toughness can then be determined using K = (ΓE)^(1/2), with (Γ_c, E_c) for K_Ic and (Γ_int, E_int) for K_int. The interfacial modulus E_int is defined in Ref. [12]. These methods have been applied to our coating-substrate systems. Fig. 1 shows the indentation curve for the A1 system. The three events occurring during indentation induce changes observed on the loading curve. Cracking, delamination and chipping appear at approximately 15, 20 and 100 mN. Fig. 2 shows the Scanning Electron Microscopy (SEM) images of the indentation prints at different maximum loads. At 20 mN (Fig. 2(a)), radial cracks can be observed, and white areas indicate that delamination has just begun. With one delaminated and two chipped parts, the image for a load of 100 mN (Fig. 2(b)) shows the transition between delamination and chipping. In the last image, for F = 650 mN (Fig. 2(c)), the three chipped areas are clearly evidenced. Fig. 3 shows the graph of χF/c^(3/2) as a function of c^0.5 for the A1 system. The intercept with the ordinate axis and the slope give, respectively, the coating toughness K_Ic and the residual stress σ_r. The values obtained are K_Ic = 0.5 ± 0.1 MPa m^1/2 and σ_r = 121 ± 25 MPa. The σ_r value implies tensile stresses. The energy-based approach previously described has also been used for this system. The values of U as a function of the indentation load F are shown in Fig. 4. The energies dissipated during chipping and delamination are respectively U_c = 49 ± 5 nJ and U_d = 10 ± 1 nJ. This results in fracture energies of Γ_c = 26 ± 2 J/m² for the coating and Γ_int = 67 ± 7 J/m² for the interface.
Fig. 1. Load-penetration depth curve for the A1 system. Inset: details of the cracking and delamination area.
The diameters of the chipped and delaminated areas are Φ_c = 19.7 ± 0.2 µm and Φ_d = 6.9 ± 0.2 µm, respectively. Finally, the toughness values are K_Ic = 0.7 ± 0.1 MPa m^1/2 for the coating and K_int = 1.5 ± 0.2 MPa m^1/2 for the interface. For this system, the interfacial toughness K_int is also obtained with the geometrical method provided by Rosenfeld [12], based on the size of the delaminated area. The value obtained with this method is K_int = 1.3 ± 0.2 MPa m^1/2. The same analysis is done for the A2 system. Fig. 5 shows the graph of χF/c^(3/2) versus c^0.5. For this system, cracking and delamination are observed but not chipping (due to the load limits of our instrument). It gives K_Ic = 0.6 ± 0.2 MPa m^1/2 and σ_r = 157 ± 28 MPa. The value of the interfacial toughness (obtained by the geometrical method) is K_int = 0.31 ± 0.05 MPa m^1/2, with a delaminated-area diameter of Φ_d = 6.4 ± 0.2 µm. To avoid problems due to the load limit, a new instrument is in progress, based on a modified INSTRON traction machine, which permits much higher loads to be reached. For the B system, neither delamination nor chipping is evidenced. Cracks are observed but are measured with difficulty because of the large elastic recovery, which deforms the print. Only the mechanical properties of the coating can be inferred. Fig. 6 shows the graph of χF/c^(3/2) as a function of c^0.5 for the B system. It gives K_Ic = 0.21 ± 0.05 MPa m^1/2 and σ_r = −40 ± 10 MPa. This result implies compressive residual stresses.
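As a quick numerical cross-check of the energy-based relation quoted above (assuming K = (ΓE)^(1/2) with the measured coating modulus):

```python
import math

gamma_c, E_c = 26.0, 22e9          # J/m^2 and Pa, reported A1 values
K_Ic = math.sqrt(gamma_c * E_c)    # K = sqrt(Gamma * E)
print(f"K_Ic ~ {K_Ic/1e6:.2f} MPa*m^0.5")
# ~0.76, consistent with the reported 0.7 +/- 0.1 MPa*m^1/2
```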
For easier reading and comparison, all the results are collected in Table 1.
Hardness and elastic modulus
As the measurements have been performed at indentation depths lower than 10% of the coating thickness, the hardness and elastic modulus values depend neither on the substrate type nor on the coating thickness. However, the H_c and E_c values obtained for the A1 system (silicon substrate) are larger than those of the A2 system (glass substrate). The coatings have been heat treated at 250°C, and one possible explanation for the lower mechanical properties is the diffusion of sodium ions from the soda-lime glass substrate into the coating. Previous work [13] shows that sodium ions diffuse very rapidly into the coating even for short heat-treatment times. For long heat-treatment times, they can accumulate at the coating surface, disturbing the hardness and elastic modulus measurements. To confirm this assumption, the same measurements are in progress on fused silica.
Although coating A has a highly mineral nature, its elastic modulus is much lower than that of silica (69 GPa). There is an effect of the organic part, which induces a decrease of network reticulation and hence an increase of network flexibility. Moreover, this result can also be due to the incomplete condensation of silanol groups because of the low heat-treatment temperature [14]. A decrease of inorganic network connectivity due to organic groups has a great influence on mechanical properties, a result widely observed in hybrid materials [1].
The hardness and elastic modulus of the B system are much lower than those of the A systems. Due to its composition (MAPTMS and MAA), the B coating contains not only a larger quantity of organic matter but also an organic network. The latter completely governs the elastic and plastic properties and leads to a behavior close to that of polymers.
Residual stress and coating toughness
The residual stress values of the A coating indicate that it undergoes tensile stress. Residual stresses can appear at different steps. First, after deposition, the film undergoes shrinkage due to solvent evaporation, and the substrate constrains this shrinkage. They can also be due to the densification treatment. Finally, residual stresses can appear during cooling due to the thermal expansion coefficient mismatch between coating and substrate. We know from the literature that the thermal expansion coefficient is higher for soda-lime glass (9 × 10−6 °C−1) than for silicon (3 × 10−6 °C−1). Even if the σ_r values are close for the A1 and A2 systems (given the uncertainty), our results seem to show that the coating on glass is more stressed than on silicon. More experiments are required to confirm this tendency and to obtain an estimate of the coating thermal expansion coefficient. Finally, it is worth noting that the residual stresses obtained for the A coating are slightly higher than the mechanical resistance of vitreous silica in tension (about 100 MPa).
Unlike in the A systems, residual stresses in the B coating are compressive. In this system, the densification heat treatment is replaced by UV curing of the organic parts. Moreover, we previously evidenced that UV curing leads to a coating expansion [5]. This effect puts the coating in compression. As the final result is compressive residual stress, it means that the drying shrinkage effect is far weaker than the UV curing one.
The K_Ic values for the A1 and A2 systems are in good agreement with each other and slightly lower than that of silica (K_Ic,silica = 0.75 MPa m^1/2). In the same way, the hardness and elastic modulus values are low compared to those of silica. However, even though the bond density of the A system is lower than that of dense silica because of methyl groups and non-condensed Si-OH [15], and taking into account the measurement accuracy, the energy required to break the material seems to be of the same order.
For the A1 system, a slight difference exists between the K_Ic values obtained with the geometrical and energy-based methods. The overestimation induced by the energy-based method is due to the substrate effect. Indeed, this method uses coating chipping and therefore requires high indentation loads, so the substrate influence has to be taken into account. A way to obtain a good approximation (proposed by Malzbender [4]) is to extrapolate the energy data to infinite coating thickness.
The B system exhibits a low K_Ic value (0.21 ± 0.05 MPa m^1/2). In this coating, the hybrid precursors (MAPTMS and Zr-MAA) are network formers. The polycondensation of the MAPTMS mineral entities produces a silica network, and the photopolymerization of the MAPTMS and MAA methacrylate groups produces an organic network close to PMMA. Moreover, the Zr-MAA precursor leads to the formation of Zr-O-Zr clusters [16] and Zr-OH groups. With this texture, we might expect a K_Ic value closer to those of polymers (for PMMA, K_Ic = 1.3 MPa m^1/2), or at least a value between those of silica and PMMA. Considering that the B coating has a highly organic nature, we can assume that we are studying the fracture of an elastoplastic material. In this case, the energy dissipated during fracture is not only the energy required to generate two surfaces but also includes the plastic deformation at the crack tip. Not taking this plastic deformation into account (as in the geometrical method) leads to underestimated K_Ic values. We might confirm this assumption with experiments at higher loads, which permit obtaining the fracture energy directly from the indentation curves; the K_Ic values could then be compared more accurately.
Moreover, as previously explained, the cracks have been difficult to observe and thus to measure. Due to the geometry of the final print, the crack lengths have probably been overestimated, leading to an underestimation of K_Ic and σ_r. For example, an overestimation of c by 30% induces underestimations of K_Ic and σ_r of, respectively, 70% and 100%.
Interfacial toughness
For the A coating, the interfacial fracture toughness is lower for the A2 system (glass substrate) than for the A1 system (silicon substrate). This result is probably related to two effects. The first one is the omission of residual stress in Rosenfeld's method used to calculate K_int; the higher residual stresses in the A2 system could explain its lower K_int value. The second effect is the diffusion of sodium ions previously mentioned. This diffusion could be responsible for the breaking of Si-O-Si bonds at the interface. In any case, the values of K_int express the global adhesion between coating and substrate, taking into account the system history. Under these conditions, the results obtained for the A coating seem to show better adhesion on silicon.
Conclusion
The nanoindentation technique has been used to estimate the mechanical properties of hybrid coatings on substrates.
Hardness and elastic modulus have been determined from indentation curves at small loads. At higher loads, the coating toughness and residual stress, as well as the interface toughness, were estimated from the cracks, delamination and chipping occurring in the coating, on the basis of geometrical and energy-based analyses. The two kinds of coating studied have different structures: one has a highly mineral structure and the other contains an important organic network. The first one reveals better general mechanical properties (lower residual stress, better adhesion, higher interfacial toughness) on silicon than on glass. Sodium ions, which are known to diffuse very rapidly from the substrate into the coating, seem to have an important influence on the mechanical properties. To confirm this assumption, the same measurements are in progress on dense silica substrates. For the second coating, the elastic modulus and hardness are comparable with those of polymers. In contrast, the coating toughness is lower. However, the K_Ic values may be underestimated because of plastic deformation at the crack tip, which is not taken into account by the geometrical method. Moreover, the great difficulty in measuring crack lengths because of elastic recovery in this coating shows that the geometrical method is probably not the best way to obtain toughness values. Finally, the residual stresses are tensile in system A and compressive in system B. | 4,663.4 | 2004-09-15T00:00:00.000 | [
"Materials Science"
] |
Uniform Magnetic Field Characteristics Based UHF RFID Tag for Internet of Things Applications
This paper presents a novel inkjet-printed near-field ultra-high-frequency (UHF) radio frequency identification (RFID) tag/sensor design with uniform magnetic field characteristics. The proposed tag is designed using the theory of characteristic modes (TCM). Moreover, uniformity of the current and magnetic field performance is achieved by further optimizing the design using particle swarm optimization (PSO). Compared to traditional electrically small near-field tags, this tag uses a logarithmic spiral as the radiating structure. The benefit of the logarithmic spiral structure lies in its magnetic field receiving area, which can be extended to reach a higher read distance. The combination of TCM and PSO is used to obtain a uniform magnetic field and the desired resonant frequency. Moreover, the PSO was exploited to obtain a uniform magnetic field in the horizontal plane normal to the UHF RFID near-field reader antenna. Compared with a frequently used commercial near-field tag (Impinj J41), our design can be read at up to a three times greater distance. Furthermore, the proposed near-field tag design shows great potential for commercial item-level tagging of expensive jewelry products and for sensing applications, such as temperature monitoring of the human body.
Introduction
Internet of Things (IoT) has been emerging as the third innovative wave in the information, communication, and technology (ICT) field, after the Internet and the computer. Collaborative technologies such as radio frequency identification (RFID) co-exist with and assist IoT in realizing low-cost item-level or thing-level tagging and sensing [1-3]. UHF RFID systems are the most prevalent due to their low-cost inkjet-printable tags, fast reading, and sensing capabilities [4-10]. Compared with traditional far-field UHF RFID systems, near-field UHF RFID systems are more suitable for item-level tagging [11]. Similarly, near-field UHF RFID systems are very useful in application scenarios where a confined interrogation volume is required. Previously, most research was done on flux-type UHF near-field systems, which are quite similar to HF RFID systems. In these designs, an in-phase current loop is used as the reader antenna to produce a uniform magnetic field in the normal plane, and an electrically small ring is adopted as the tag antenna to receive the magnetic energy [12-14].
For practical applications, near-field RFID systems usually require a read distance of about 0-10 cm. As we know, RFID chip activation determines the read distance of the whole RFID system. In previous research, most studies used a commercial button tag as the test label to prove the uniformity of the magnetic field at a given height along the normal. However, the energy-reception capability of an electrically small antenna is limited by its size, so to achieve the expected height, the transmitting power of the reader antenna has to be increased. Unfortunately, the in-phase current loop also has about 2 dBi of omnidirectional gain in the azimuth plane. When high transmitting power is used, it is very easy to read other, distant tags, which is undesirable for most practical application scenarios. Therefore, the tag's energy-reception capability should be enhanced. It is worth noting that simply increasing the radius of the tag antenna is not a good solution because, when the radius increases, the uniformity of the electrically small ring's current and magnetic field is degraded. Moreover, near-field UHF RFID tag antennas [15,16] have received very little attention compared with reader antennas. Therefore, the design of a near-field tag antenna with uniform magnetic field characteristics, a simple structure, and a 100% reading rate is still challenging for many practical applications.
A quadruple-loop near-field reader antenna with uniform magnetic field performance was designed in [17]. This quadruple design consists of fan-shaped loops, feeding lines, and vias connecting the feed lines to the loops. The design achieved a uniform magnetic field distribution through the cancellation effect of opposite-direction currents. However, it is costly and difficult to fabricate due to the vias, and it is fabricated on a 0.8 mm FR4 substrate. To overcome the previously mentioned drawbacks, and motivated by [18], we note that a non-closed ring can also be excited by the normal magnetic field (Hz).
In addition to electromagnetic (EM) techniques and solvers, the use of optimization techniques and algorithms is becoming prevalent both in the EM field and in antenna design applications. Several algorithms are used for antenna and RFID tag optimization, such as PSO, artificial bee colony optimization, and so forth [19-24]. PSO is one of the mainstream algorithms used for the optimization of antennas and RFID tags; it mimics the social behavior of flocking birds. Recently, PSO has been employed on a wide variety of electromagnetic structures and antennas. In Reference [23], a hybrid PSO algorithm was proposed for the bandwidth improvement of an inverted-F antenna (IFA). Similarly, PSO was employed for the gain enhancement of an RFID reader antenna [25], and for the planning of an RFID network by identifying mutual interference and mutual coverage of read zones among the readers [26].
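For readers unfamiliar with the algorithm, the sketch below is a minimal generic PSO loop (not the authors' implementation); the objective is a placeholder standing in for, e.g., the magnetic-field non-uniformity of a candidate tag geometry.

```python
import numpy as np

def cost(x):                      # placeholder objective (sphere function)
    return np.sum(x**2, axis=-1)

rng = np.random.default_rng(1)
n, dim, w, c1, c2 = 20, 4, 0.7, 1.5, 1.5   # swarm size, dims, inertia, weights
pos = rng.uniform(-1, 1, (n, dim))         # e.g., 4 geometry parameters
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), cost(pos)     # personal bests
gbest = pbest[np.argmin(pbest_f)].copy()   # global best

for _ in range(100):
    r1, r2 = rng.random((2, n, dim))
    vel = w*vel + c1*r1*(pbest - pos) + c2*r2*(gbest - pos)
    pos += vel
    f = cost(pos)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best cost:", pbest_f.min())
```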
Therefore, in this paper, a novel inkjet-printed logarithmic-spiral UHF RFID near-field tag, based on the theory of characteristic modes (TCM) and PSO, is presented. The benefit of the logarithmic-spiral structure lies in its inherently uniform and large magnetic-field receiving area, which can be extended to achieve a longer reading distance. TCM can reveal the potential radiation capacity of a conductor from its structure alone, along with its near- and far-field characteristics. Since the receiving and transmitting characteristics of an antenna are reciprocal, PSO is used to optimize the antenna structure so that it exhibits a uniform magnetic field in the normal plane. After the radiating structure is determined, a T-match circuit is used to achieve a conjugate match with the RFID chip impedance [27]. The performance of the proposed design was tested using the same chip as the commercial tag (J41). Moreover, to obtain a fair comparison, the same reader antenna as in previously published research [12] was used for tag reading.
The proposed tag is readable from very close proximity. It achieves three times the read distance of the frequently used commercial near-field tag (Impinj J41) at 10 dBm transmission power, thereby greatly reducing the pressure on reader-antenna designers. The proposed tag is used to demonstrate tagging applications, such as item-level tagging of jewelry items, as shown in Figure 1. Furthermore, since the tag reading mechanism is based on magnetic-field coupling, the tag can be used in many sensing and tagging applications [27][28][29][30] in collaboration with IoT technology, as shown in Figure 1. For IoT connectivity, a Raspberry Pi-based setup with a LoRa IoT Chip HAT was used to connect the RFID reader. The LoRa device provides connectivity through a LoRa gateway, which stores the data on the IoT cloud, from where they can be retrieved for visualization using web apps. The paper is organized as follows: Section 2 briefly explains the design principle and antenna configuration. Section 3 demonstrates the antenna optimization process using PSO. Section 4 presents the results and discusses several performance metrics, such as the read range of the proposed tag, as well as the experimental setup for jewelry item-level tagging and reading. Finally, conclusions are presented in Section 5.
Design Principle and Antenna Configuration
In HF RFID systems, the read distance is related to the size of the tag's coil: the bigger the coil, the longer the read distance. In UHF RFID systems, however, only an electrically small loop can meet this condition because of the much shorter wavelength; if the size of a UHF loop tag is increased, non-uniform and opposite-direction currents tend to appear.
As mentioned earlier, due to the reciprocity property of antennas, the field generated by an antenna acting as a receiver is consistent with that generated when it acts as a transmitter. If the field distribution of the receiving antenna is consistent with that of the transmitting antenna, the receiving antenna can extract more energy from the field: the higher the consistency between the transmitting and receiving antennas, the more energy is captured by the receiving antenna. This is analogous to the fact that energy transfer is maximum when the polarization mismatch between transmitting and receiving antennas is minimum.
Therefore, a UHF-band near-field tag does not necessarily have to be a closed-loop structure, as long as its near-field magnetic-field distribution is consistent with the magnetic-field distribution generated by the transmitting coil in space. More precisely, if the modal near-field magnetic-field distribution obtained from characteristic mode analysis is consistent with the magnetic-field distribution in space, the structure has the potential to receive energy from that magnetic field.
By TCM, we can gain physical insight into a metal object's radiation pattern and near field without any external feed excitation, so we can search for a suitable structure by analyzing its characteristic fields. Figure 2 shows the modal significance (MS) values of the first three characteristic modes of a metal strip (the strip length is chosen as half a wavelength at 915 MHz), together with the characteristic current distribution and the characteristic near magnetic field associated with these modes (the relative dielectric constant of the PET material is 4). Figure 3a-c show the vector distribution of the characteristic current corresponding to the maximum MS value of modes 1 to 3 and the corresponding Z component (H_z) of the characteristic magnetic field at Z = 1 mm (positive values are taken in the +Z direction and negative values in the -Z direction). It is worth mentioning that the strip lies in the xy-plane. The characteristic current of mode 1 resembles the half-wave mode current of a dipole (associated with 915 MHz). Moreover, mode 1 exhibits an open characteristic magnetic-field distribution, and its normal component is symmetrically distributed about the metal strip in the Z = 1 mm plane.
The characteristic current of mode 2 is the full-wave mode current of a dipole, and its characteristic magnetic field is also an open distribution. In the Z = 1 mm plane, the magnetic field is symmetrically distributed around the metal strip, and its magnitude is also symmetric with respect to the long side of the strip. The characteristic current of mode 3 corresponds to the 1.5-wavelength mode current of a dipole. In the Z = 1 mm plane, the magnetic field lies on both sides along the length of the strip with the same magnitude but opposite direction. In a UHF-band RFID magnetic-field-coupled antenna system, the normal magnetic field generated by the reader antenna is uniform and in phase in its normal plane. In contrast, the Z component of the near magnetic field around all three modes of the metal strip has equal magnitude but opposite direction on the two sides, indicating that the metal-strip antenna cannot obtain energy from the magnetic field generated by the reader antenna.
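As a quick numerical illustration of how modal significance is obtained from a characteristic-mode solver, the sketch below computes MS_n = |1/(1 + jλ_n)| from characteristic eigenvalues λ_n and reports the modal resonance and the significant-mode band. The eigenvalue sweep is a made-up placeholder for illustration only; it is not the data behind Figure 2.

```python
import numpy as np

# Hypothetical characteristic eigenvalues lambda_n(f) for one mode of a strip,
# e.g. exported from a CMA solver. The values below are illustrative only.
freq_ghz = np.linspace(0.5, 2.0, 301)           # sweep 0.5-2.0 GHz
f_res = 0.915                                    # assume the mode resonates near 915 MHz
lam_n = 8.0 * (freq_ghz - f_res) / f_res         # eigenvalue crosses zero at resonance

# Modal significance: MS_n = |1 / (1 + j*lambda_n)|; MS -> 1 at resonance
ms = np.abs(1.0 / (1.0 + 1j * lam_n))

# Report the frequency of maximum modal significance and the half-power modal band
peak = freq_ghz[np.argmax(ms)]
band = freq_ghz[ms >= 1.0 / np.sqrt(2.0)]
print(f"MS peaks at {peak:.3f} GHz (MS = {ms.max():.3f})")
print(f"Significant-mode band (MS >= 0.707): {band[0]:.3f}-{band[-1]:.3f} GHz")
```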
Figure 4 shows the MS values of the first three characteristic modes of the closed loop, together with the distribution of the characteristic current and the magnetic-field characteristics (the relative dielectric constant of the PET material is 4). Figure 5a-c show the vector distribution of the characteristic current at 800 MHz and the Z component of the characteristic magnetic field at Z = 1 mm for modes 1 to 3. The MS values of modes 1 and 2 of the closed metal ring reach their maximum at 800 MHz, and the current distributions show that modes 1 and 2 are a pair of degenerate modes. It can be seen from Figure 5a,b that the magnetic-field distributions of modes 1 and 2 in the closed metal ring are symmetric about a midline, with the same amplitude but opposite direction. The total magnetic flux through the closed metal ring is therefore zero, and these two modes cannot obtain energy from a uniformly distributed normal magnetic field. Looking at Figure 5c, the current in mode 3 does not reverse direction and has equal magnitude along the ring; the corresponding magnetic-field distribution in the closed loop is therefore in phase, and the magnetic field on concentric circles is the same. Considering that, in a UHF-band RFID magnetic-field-coupled antenna system, the magnetic field generated by the reader antenna is uniform and in phase in its normal plane, mode 3 could, judging from the field distribution alone, obtain energy from the magnetic field generated by the transmitting coil. However, the MS value of mode 3 is very small over the whole frequency band, and once a real excitation is introduced, the current direction can only remain un-reversed at a small electrical size. Therefore, for the closed-ring structure, it is not possible to expand the size of the closed loop without changing the current flow direction, even though a good conjugate match is convenient to achieve at such a small size.
(The closed ring studied here has a radius of 38.5 mm and a trace width of 1 mm, on a PET substrate with a thickness of 0.05 mm.)
For choosing an electrically small structure, we ran CMA for an open-loop structure with an overall length of 100 mm. After some rigorous simulation tests, a structure with a length of 105 mm was selected to provide the required performance at 1.5 GHz; the open structure was obtained by folding a 105 mm length of wire towards the center line. Figure 6 shows the MS values of the first characteristic mode of the open-loop structure, which peak at 1.5 GHz. Similarly, Figure 7 shows the distribution of the characteristic current and the associated characteristic magnetic field (the relative dielectric constant of the PET material is 4). For mode 1, the MS reaches its maximum at 1.5 GHz. By observing the characteristic magnetic-field distribution of mode 1, it can be concluded that the near-field magnetic-field distribution of this structure is similar to that of mode 3 of the closed loop. Therefore, the open-loop structure has the ability to receive energy from the near-field magnetic field.
Tag Antenna Optimization Using PSO
The magnetic-field intensity around the tag antenna is related to the current intensity on it. Therefore, we can optimize the uniformity of the magnetic field by optimizing the superposition of the current at different angles, and the PSO algorithm is used for this purpose. Equation (1) gives the logarithmic spiral in polar coordinates,

r = a·e^{bθ}, 0 ≤ θ ≤ θ_max, (1)

from which it can be seen that three parameters (a, b, and θ_max) can be optimized. However, once the total length L of the logarithmic spiral is fixed, the relationship between the three parameters is also determined; for the spiral of Eq. (1), the arc length gives L = (a/b)·√(1+b²)·(e^{bθ_max} − 1) (2). Equation (3) gives the current expression with θ as the parameter. For the fitness function Q, we define a new quantity I_angle = I_θ + I_{θ+2π}; notably, if θ + 2π ≥ θ_max, then I_{θ+2π} = 0. The fitness function of Equation (4) is the squared deviation of I_angle over the sampled angles, where I_{angle,i} represents the current value at the i-th sample point and the parameter n indicates the number of divisions of the interval from 0 to 2π.

Figure 8 shows the flowchart of the proposed design methodology. In the first step, the overall length of the logarithmic spiral is given, and the PSO algorithm is used to find the structure that produces a uniform magnetic field in the normal plane. The PSO algorithm was implemented in MATLAB to obtain the optimal tag parameters; Figure 9a,b show the fitness curve of the PSO-based optimization for the proposed near-field tag and the optimized tag parameters obtained from the MATLAB implementation, respectively. The optimal parameters were then used to construct the tag structure in CST Microwave Studio 2018. In the second step, the structure built from these parameters is analyzed with TCM to determine whether it has the required reception capability in the desired frequency band. If the design resulting from TCM does not meet the requirement, the overall length is changed and the first step is repeated. Once the TCM-verified design is finalized, a T-match structure is placed at the maximum of the characteristic current to achieve a conjugate match with the RFID chip impedance.
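To make the optimization step concrete, the sketch below shows a minimal PSO loop of the kind described above. The spiral parameterization follows Eq. (1) and the length relation (2); the current profile along the spiral, the parameter bounds, and the PSO hyper-parameters are illustrative assumptions (they are not the exact expressions or settings used here), and the fitness simply penalizes a non-uniform superposed current over angle.

```python
import numpy as np

rng = np.random.default_rng(0)
L_TOTAL = 105e-3          # assumed overall spiral length (m), cf. Section 2
N_ANGLE = 90              # angular samples over 0..2*pi used by the fitness

def spiral_theta_max(a, b, length=L_TOTAL):
    # Arc length of r = a*exp(b*theta) from 0 to theta_max, solved for theta_max
    return np.log(1.0 + length * b / (a * np.sqrt(1.0 + b * b))) / b

def current_profile(theta, theta_max):
    # Illustrative standing-wave current along the spiral (assumption, not Eq. (3))
    return np.sin(np.pi * (1.0 - theta / theta_max))

def fitness(params):
    a, b = params
    if a <= 0 or b <= 0:
        return 1e9
    theta_max = spiral_theta_max(a, b)
    th = np.linspace(0.0, 2.0 * np.pi, N_ANGLE, endpoint=False)
    i1 = current_profile(th, theta_max)
    i2 = np.where(th + 2 * np.pi < theta_max,
                  current_profile(th + 2 * np.pi, theta_max), 0.0)
    i_angle = i1 + i2
    return np.sum((i_angle - i_angle.mean()) ** 2)   # penalize non-uniform superposition

# --- plain particle swarm optimization over (a, b) ---
n_particles, n_iter = 20, 100
lo, hi = np.array([1e-3, 0.05]), np.array([10e-3, 0.6])   # assumed bounds on (a, b)
x = rng.uniform(lo, hi, size=(n_particles, 2))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    f = np.array([fitness(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best (a, b):", gbest, "fitness:", pbest_f.min())
```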
Figure 10 shows the structure of the proposed antenna, which consists of a logarithmic spiral and a T-match structure. Table 1 gives the detailed parameters of the proposed design. To allow a fair comparison, we chose the same antenna material, substrate, and chip (Monza 4 QT with impedance 11-j143 Ω) as the commercial tag. The substrate is PET material (relative permittivity about 4) with a thickness of 0.05 mm, and the antenna is printed using aluminum ink. The width of every line in this structure is 0.5 mm. As mentioned above, the excitation should be placed at the maximum of the characteristic current, which can be estimated as θ = θ_d. The T-match structure should be kept flat so as not to disturb the characteristic mode of the logarithmic spiral.
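The quality of the conjugate match between the antenna and the chip can be checked with the standard power transmission coefficient τ = 4·R_chip·R_ant / |Z_chip + Z_ant|². In the sketch below, only the chip impedance 11 − j143 Ω is taken from the text; the two antenna impedance values are hypothetical tuning states used purely for illustration.

```python
import math

def power_transmission(z_chip: complex, z_ant: complex) -> float:
    """tau = 4*R_chip*R_ant / |Z_chip + Z_ant|^2; tau = 1 at a perfect conjugate match."""
    return 4.0 * z_chip.real * z_ant.real / abs(z_chip + z_ant) ** 2

Z_CHIP = complex(11, -143)   # Impinj Monza 4 QT impedance quoted in the text (ohms)

# Hypothetical T-match tuning states (illustrative values, not from the design tables)
for z_ant in (complex(30, 60), complex(11, 143)):
    tau = power_transmission(Z_CHIP, z_ant)
    print(f"Z_ant = {z_ant!s:>12}: tau = {tau:.3f} ({10 * math.log10(tau):+.2f} dB)")
```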
Results and Discussion
Figure 11a shows the MS of the proposed open-loop structure, confirming the dominance of mode 1 in terms of modal significance and radiation capacity. The real and imaginary parts of the impedance of the proposed tag antenna are shown in Figure 11b; the design exhibits a good impedance match to the Impinj Monza 4 QT RFID chip with an approximate impedance of 11-j143 Ω. Figure 12 shows the current distribution and the magnetic field (H_z) in the normal plane at a height h = 5 mm at 915 MHz. The current remains in phase over the whole tag structure, which leads to a uniform magnetic field over an area equal to the size of the tag. A full-wave electromagnetic simulation was carried out in the commercial software CST Microwave Studio Suite 2018 to verify the performance in terms of the current and the magnetic field (H_z). To further verify the performance of the proposed tag, a prototype was fabricated along with several other tags, including a commercial one, for comparison. The tag antenna was fabricated using silver nanoparticle paste from the Harima NPS series and a Fujifilm Dimatix DMP-2831 inkjet printer with a 10 pL cartridge; the tag was printed on a 50 µm PET substrate using a four-layer printing approach.
Figure 13 shows the five tested tags, numbered tag1 to tag5. Tag1 is the proposed tag, and tag2 is the commercial button tag with an electrically small loop. Tag3, tag4, and tag5 are printed on PCB substrates, and all of them use chips in H4 SOT packaging. Tag3 is based on the same theory as the tag proposed in this paper. The antennas of tag4 and tag5 are both full-wavelength loops, with tag4 being a miniaturized version of tag5. A near-field antenna from a published paper [5] was used as the reader antenna to test the performance of the five tags. Figure 14 shows the test environment, a semi-open microwave anechoic chamber. The setup was operated at different transmit power levels to test the read range and to specify the interrogation zone. The maximum read distance, measured from the center of the tested tag to the center of the reader antenna, is recorded in Table 2.
From Table 2, tag4 and tag5 have the biggest size among all the tags, but they are hardly readable, which shows that a larger size does not automatically yield a longer read distance for a closed-loop antenna. Tag1 achieves three times the read distance of tag2, which confirms that a closed structure is not always necessary for receiving near-field energy in the UHF band. Furthermore, the proposed open structure provides both the flexibility of extending the receiving area and the means to adjust the uniformity of the magnetic field, which makes it more suitable in this circumstance. Because of the difference in chip sensitivity (the sensitivity of the Monza 4 QT is −17.4 dBm and that of the H4 SOT is −20.5 dBm), the read distance of tag3 exceeds that of tag1. The test results validate the proposed design very well.
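As a rough plausibility check of the sensitivity argument, the short sketch below converts the 3.1 dB chip-sensitivity difference into an approximate read-distance ratio, under the assumption that the coupled power in the reactive near field decays roughly as 1/d^6 (a magnetic-dipole coupling assumption, not a result from this work).

```python
# Convert a chip-sensitivity advantage into an approximate read-distance ratio,
# assuming coupled power P(d) ~ 1/d^n in the reactive near field (n = 6 assumed).
SENS_MONZA_4QT_DBM = -17.4   # tag1 chip sensitivity (from the text)
SENS_H4_SOT_DBM = -20.5      # tag3 chip sensitivity (from the text)
DECAY_EXPONENT = 6           # assumed near-field power decay exponent

delta_db = SENS_MONZA_4QT_DBM - SENS_H4_SOT_DBM          # 3.1 dB advantage for tag3
range_ratio = 10 ** (delta_db / (10 * DECAY_EXPONENT))    # d3/d1 = 10^(delta / (10*n))
print(f"Sensitivity advantage: {delta_db:.1f} dB -> expected range ratio ~{range_ratio:.2f}x")
```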
Finally, the proposed tag antenna was used experimentally for item-level tagging of jewelry products, as shown in Figure 15. The setup consisted of a foam-based frame, the near-field reader antenna, an RFID reader, and the tagged jewelry products. The proposed tag/sensor exhibited 100% reading accuracy up to 46 cm with a transmit power of 30 dBm. In addition, the proposed tag antenna can be exploited for temperature sensing of the human body, since its working principle is based on the magnetic field, which is much less affected by permittivity changes.
Conclusions
This paper presents a novel UHF RFID tag structure based on a combination of PSO and TCM. In contrast to traditional electrically small near-field tags, the proposed design uses a logarithmic spiral as the radiating structure, whose benefit lies in its large magnetic-field receiving area that can be extended to reach a longer reading distance. The combination of PSO and TCM is used to obtain a uniform magnetic field at the desired resonant frequency. Five tags were tested to validate the approach; compared with the frequently used commercial near-field tag (Impinj J41), the proposed design achieves three times the read distance at 10 dBm transmission power. The tag reaches an ideal read distance under a small transmitting power, which greatly reduces the possibility of far-field misreading under high-transmitting-power conditions. The design greatly relieves the pressure on reader-antenna designers, and its read distance can be further increased by using the latest chips. Furthermore, it offers a new idea for UHF RFID near-field tag design that is not based on the commonly used closed electrically small loop structure. For commercial item-level applications, this novel near-field tag shows great potential towards practical near-field UHF applications.
Figure 1. Demonstration of IoT application scenarios using the proposed near-field tag/sensor-based IoT and UHF system.
Figure 2. Structure and modal significance (MS) of the metal strip resulting from characteristic mode analysis (CMA).
Figure 3. Characteristic mode current and associated magnetic-field profile of the metal strip (observed at the frequency with maximum MS value): (a) mode 1, (b) mode 2, (c) mode 3.
Figure 4. Structure and modal significance (MS) of the closed metal ring resulting from characteristic mode analysis (CMA).
Figure 6. Structure and modal significance (MS) of the open-loop ring structure resulting from characteristic mode analysis (CMA).
Figure 7. Characteristic mode current and associated magnetic-field profile of the open-loop ring structure for mode 1.
Figure 9. (a) Fitness curve of the PSO-based optimization for the proposed near-field tag; (b) optimized tag parameters obtained after implementing the PSO algorithm in MATLAB.
Figure 10. Structure and dimensions of the proposed logarithmic near-field RFID tag.
Figure 11. (a) Modal significance plot of the proposed tag structure; (b) simulated real and imaginary impedance of the proposed logarithmic near-field tag design.
Figure 12. Simulated current distribution and magnetic field (H_z) in the normal plane of the proposed logarithmic near-field tag design.
Figure 13. Fabricated prototype of the proposed tag (Tag1) along with other tags fabricated for comparison purposes.
Figure 14. Read-range testing environment setup in a semi-open microwave anechoic chamber.
Figure 15. Experimental item-level tagging of jewelry products using the proposed tag/sensor.
Table 1. Dimensions of the proposed near-field RFID tag design.
Table 2.
Comparison of read range and interrogation zone of proposed with other tag designs. | 8,663.2 | 2021-07-03T00:00:00.000 | [
"Computer Science"
] |
Influence of Machine-Made Sand Performance on Concrete
This paper studies the influence of various performance indexes of machine-made sand on the workability of concrete and its adaptability to admixtures. The experimental results show that the MB value of machine-made sand has an obvious negative effect on concrete performance and on admixtures, whereas the stone-powder content and the fineness modulus have relatively little influence on the workability of concrete. Their negative effects can also be reduced by adjusting the sand ratio, although this affects the required admixture dosage.
Introduction
With the rapid development of our country's national economy in recent years, infrastructure construction has proceeded on a large scale, and sand and gravel, the main raw materials of concrete, have been consumed in huge quantities. With natural sand and gravel resources tightening and environmental protection requirements increasing, machine-made sand, as an alternative to natural sand, is also being used on a large scale in concrete production [1]. The preparation method of machine-made sand gives it rough and sharp particle shapes, large porosity, a large specific surface area, and a high stone-powder content, resulting in a big difference between machine-made sand concrete and natural river-sand concrete; changes in its performance indicators also affect its adaptability to admixtures and the workability of concrete. Research results show that stone powder can improve the pore structure of concrete and enhance the compactness of the interfacial transition zone, thereby enhancing the performance of concrete, while an increase in mud content causes the most significant deterioration of concrete performance [2][3]. Research by Song showed that as the content of flaky particles in machine-made sand increases, the performance indicators of concrete decrease, and the effect of flaky particles on strength is more significant in high-grade concrete [4][5]. Well-graded machine-made sand has low porosity and a stable structure, which can effectively improve the volume stability of concrete and is beneficial to the strength development of hardened concrete [6][7]. In the actual production process, it is difficult to change the performance characteristics of machine-made sand; instead, the state of the concrete is mainly adjusted through the mix proportion and admixtures, and there are relatively few studies in this area. This paper mainly studies the influence of various performance indexes of machine-made sand on its adaptability to polycarboxylic acid admixtures and on the workability of concrete, providing a theoretical basis for control in actual production.
(3) The stones and test indicators used in the experiment are shown in the following table. (5) The machine-made sand used in the experiment, with its basic characteristics and mineral composition, is shown in Tables 2-4 and 2-5. It can be seen from Table 2-4 that there is no obvious linear relationship between the MB value of the machine-made sand and its stone-powder content. It can be seen from Table 2-5 that the selected machine-made sands have high calcium oxide contents, all exceeding 40% and reaching 51.43% at most, with the silica content second, indicating that the machine-made sand is a calcium-oxide-type rock.
Performance test method
The performance of the concrete was tested in accordance with GB 8076-2008 "Concrete Admixtures", and the related performance indicators were measured.
The effect of MB value on concrete
Machine-made sands with different MB values were selected to determine the effect of the MB value on the water-reducing agent dosage, the loss of concrete workability over time, and the strength development. The test results are shown in Table 3-1, and the strength development trend is shown in Figure 3-1. The analysis of Table 3-1 shows that, as the MB value increases, the admixture dosage increases significantly and the loss over time also increases significantly. This is because the adsorption capacity of the machine-made sand, which is reflected by the MB value, reduces the effective content of the admixture in the concrete, resulting in a higher admixture dosage and a greater loss of workability over time. Figure 3-1 shows that the MB value has only a slight effect on concrete strength when it is below 1.4, whereas the strength is significantly reduced once the MB value exceeds 1.4.
Machine-made sands with different MB values were then used while changing the sand ratio of the concrete mix, to compare the water-reducing agent dosage and the loss of concrete workability over time for each case. The test results are shown in Table 3-2. The analysis shows that, as the MB value increases, the impact of increasing the sand ratio on the concrete becomes more and more obvious, especially when the MB value exceeds 1.4. This is because, when the MB value is small, the fines in the machine-made sand are mainly stone powder, and the main factor affecting the admixture is the larger specific surface area and higher water absorption; in this case, increasing the sand ratio has little effect on the admixture dosage and the loss of concrete over time. When the MB value exceeds 1.4, the fines are mainly clay, the adsorption of the admixture is more serious and its effective content is significantly reduced; in this case, the effects of increasing the sand ratio on the admixture dosage and on the loss of concrete over time are more significant.
Influence of stone powder content on concrete
Machine-made sands with different stone-powder contents were selected, and the concrete sand ratio was adjusted to keep the initial state of the concrete similar, in order to determine the influence of the stone-powder content on the water-reducing agent dosage, the loss of concrete over time, and the strength development. The test results are shown in Table 3-3, and the strength development trend is shown in Figure 3-2. Comparative analysis of the experimental results shows that the admixture dosage increases with the stone-powder content. This is because a higher stone-powder content gives the machine-made sand a larger specific surface area, which requires a higher water-reducing rate and therefore a larger dosage of water-reducing agent. When the stone-powder content increases, the flow of the concrete slows down, which is caused by excessive paste in the system. Figure 3-2 shows that when the stone-powder content is below 10%, the strength of the concrete changes little, but when the content is above 10%, the strength decreases. This is because an appropriate amount of stone powder improves the gradation and fills the pores, but stone powder does not participate in the hydration reaction of the concrete, and too high a content will affect the overall reaction and reduce the strength.
Influence of fineness modulus on concrete
Machine-made sands with different fineness moduli were selected, and the concrete sand ratio was adjusted to keep the initial state of the concrete similar, in order to determine the influence of the fineness modulus on the water-reducing agent dosage and on the loss and strength of the concrete over time. The test results are shown in Table 3-3, and the strength development trend is shown in Figure 3-3. The analysis shows that, at the same sand ratio, the overall workability of the concrete becomes worse as the fineness modulus increases. This is because, when the fineness modulus increases, the fine particles in the concrete decrease, the gradation becomes worse, the amount of paste in the concrete decreases, and the encapsulation of the aggregates is reduced. After increasing the sand ratio to achieve a similar slump flow, the demand for admixture is greater and the loss over time is more significant. This is because the increased sand ratio raises the stone-powder content and the specific surface area in the concrete, requiring a higher water-reducing rate, and the increase in the amount of cement in the concrete system, which has a strong adsorption effect on the admixture, makes the loss of concrete workability more obvious. Combined with the analysis of Figure 3-3, it can be seen that when the influence of the fineness modulus is compensated by adjusting the sand ratio, the final strength development of the concrete changes little. This is because the fineness modulus mainly affects the gradation of the concrete; after the sand ratio is adjusted, this effect is basically eliminated, and the final working performance of the concrete is affected little.
Based on the above experimental results and the analysis in Table 2-5, it can be seen that the machine-made sands are calcium-oxide-type rocks. Although their calcium oxide and other constituents differ noticeably, these differences show no obvious correlation with the concrete state, the admixture dosage, or the strength development overall.
Conclusion
1) The MB value of machine-made sand has a large impact on the state, strength, and admixture dosage of concrete; in particular, when the MB value exceeds the national standard, the negative impact on concrete is more serious. 2) The stone-powder content of machine-made sand has relatively little effect on the state and strength development of concrete; its negative effect can be reduced by adjusting the sand ratio, although this affects the admixture dosage. 3) The fineness modulus of machine-made sand has a great influence on the state of concrete; the sand ratio must be changed significantly to offset its negative effect, which ultimately affects the admixture dosage, while the effect on strength is relatively small.
"Engineering",
"Materials Science"
] |
Coherent Spin Dependent Transport in QD-DTJ Systems
As the dimension of devices reduces to nano-scale regime, the spin-dependent transport (SDT) and spin effects in quantum dot (QD) based systems become significant. These QD based systems have attracted much interest, and can potentially be utilized for spintronic device applications. In this chapter, we consider nano-scale spintronic devices consisting of a QD with a double barrier tunnel junction (QD-DTJ)(schematically shown in Fig. 1). The DTJ couples the QD to two adjacent leads which can be ferromagnetic (FM) or non-magnetic (NM).
Introduction
As the dimension of devices reduces to nano-scale regime, the spin-dependent transport (SDT) and spin effects in quantum dot (QD) based systems become significant. These QD based systems have attracted much interest, and can potentially be utilized for spintronic device applications. In this chapter, we consider nano-scale spintronic devices consisting of a QD with a double barrier tunnel junction (QD-DTJ)(schematically shown in Fig. 1). The DTJ couples the QD to two adjacent leads which can be ferromagnetic (FM) or non-magnetic (NM). In a QD-DTJ system, the electron tunneling is affected by the quantized energy levels of the QD, and can thus be referred to as single electron tunneling.
The single electron tunneling process becomes spin-dependent when the leads or the QD act as a spin polarizer, where the densities of states (DOS) for spin-up and spin-down electrons are different. The interplay of SDT with quantum and/or single-electron charging effects makes QD-DTJ systems interesting. In such QD-DTJ systems, it is possible to observe several quantum spin phenomena, such as spin blockade (Shaji et al. (2008)) and Coulomb blockade.
The theoretical study of the SDT through these DTJ systems is mainly based on two approaches, namely the master equation (ME) approach and the Keldysh nonequilibrium Green's function (NEGF) approach. For coherent transport across QD-DTJ devices, quantum transport methods are applied, such as the linear response (Kubo) method, applicable at small bias voltage, and its generalization, the NEGF method, valid for arbitrary bias voltage. Since the objective of this Chapter is device application over a wide voltage range, we focus on the latter. The NEGF method has been employed to analyze various transport properties of QD-DTJ systems, such as the TMR, the tunneling current (Weymann & Barnaś (2007)) and the conductance. These analyses were conducted based on the Anderson model (Meir et al. (1993); Qi et al. (2008)), for collinear or noncollinear (Mu et al. (2006); Sergueev et al. (2002); Weymann & Barnaś (2007)) configurations of the magnetization of the two FM leads, or in the presence of spin-flip scattering in the QD (Lin & D.-S. Chuu (2005); Souza et al. (2004); Zhang et al. (2002)).
In this Chapter, based on the NEGF approach, we study the SDT through two QD-DTJ systems. In Section 2, the electronic SDT through a single-energy-level QD-DTJ is studied theoretically, where the two FM leads render the electron transport spin-dependent. In the study, we systematically incorporate the effect of the spin-flip (SF) within the QD and the SF during tunneling across the junction between the QD and each lead, and consider possible asymmetry between the coupling strengths of the two tunnel junctions. Based on the theoretical model, we first investigate the effects of both types of SF events on the tunneling current and the TMR; subsequently, we analyze the effect of coupling asymmetry on the QD's electron occupancies and the charge and spin currents through the system (Ma et al. (2010)).
In Section 3, we study the SDT through a QD-DTJ system with finite Zeeman splitting (ZS) in the QD, where the two leads sandwiching the QD are NM. The spin-dependence of the electron transport is induced by the ZS caused by the FM gate attached to the QD. A fully polarized tunneling current is expected through this QD-DTJ system. The charge and spin currents are analyzed for the QD-DTJ systems with and without ZS.
Single energy level QD
The QD-DTJ device under consideration is shown in Fig. 2. It consists of two FM leads and a central QD in which a single energy level is involved in the electron tunneling process. The SDT through the QD-DTJ is modeled theoretically via the Keldysh NEGF approach (Caroli et al. (1971); Meir & Wingreen (1992)). In the transport model, the limit of small correlation energy is assumed, i.e., the energy due to electron-electron interaction in the QD is much smaller than the thermal energy or the separation between the discrete energy levels in the QD (Fransson & Zhu (2008)).
Theory
For the QD-DTJ device shown in Fig. 2, the system is described by the Hamiltonian

$$H=\sum_{\alpha k\sigma}\epsilon_{\alpha k\sigma}\,a^{\dagger}_{\alpha k\sigma}a_{\alpha k\sigma}+\sum_{\sigma}\epsilon_{\sigma\sigma}\,a^{\dagger}_{\sigma}a_{\sigma}+\sum_{\sigma}\epsilon_{\sigma\bar{\sigma}}\,a^{\dagger}_{\sigma}a_{\bar{\sigma}}+\sum_{\alpha k\sigma\sigma'}\left(t_{\alpha k\sigma,\sigma'}\,a^{\dagger}_{\alpha k\sigma}a_{\sigma'}+\mathrm{h.c.}\right).\tag{1}$$

In the above, $\epsilon_{\sigma\sigma}$ is the single energy level in the QD, $\epsilon_{\sigma\bar{\sigma}}$ denotes the coupling energy of the spin-flip within the quantum dot (SF-QD) from the spin-$\sigma$ to the spin-$\bar{\sigma}$ state, and $t_{\alpha k\sigma,\sigma}$ ($t_{\alpha k\sigma,\bar{\sigma}}$) is the coupling between electrons of the same (opposite) spin states in the lead and the QD. Here $\alpha=\{L,R\}$ is the lead index for the left and right leads, $\sigma=\{\uparrow,\downarrow\}$ stands for up- and down-spin, $k$ is the momentum, and $\epsilon_{\alpha k\sigma}$ represents the electron energy in the leads. The operators $a^{\dagger}_{\nu}$ ($a_{\nu}$) and $a^{\dagger}_{\sigma}$ ($a_{\sigma}$) are the creation (annihilation) operators for the electrons in the leads and in the QD, respectively.
Tunneling current and tunnel magnetoresistance
The tunneling current through the QD-DTJ system can be expressed as the rate of change of the occupation number $N=\sum_{\sigma}a^{\dagger}_{\sigma}a_{\sigma}$ in the QD,

$$I=-e\Big\langle\frac{dN}{dt}\Big\rangle=-\frac{ie}{\hbar}\big\langle[H,N]\big\rangle.\tag{5}$$

Without loss of generality, we can calculate the tunneling current in Eq. (5) by considering the tunneling current $I_L$ through the left junction between the left lead and the QD. Evaluating the commutator in Eq. (5) in terms of creation and annihilation operators gives

$$I_L=-\frac{ie}{\hbar}\sum_{k\sigma\sigma'}\Big[t_{Lk\sigma,\sigma'}\big\langle a^{\dagger}_{Lk\sigma}a_{\sigma'}\big\rangle-t^{*}_{Lk\sigma,\sigma'}\big\langle a^{\dagger}_{\sigma'}a_{Lk\sigma}\big\rangle\Big].\tag{6}$$

In Eq. (6), one may replace the creation and annihilation operators by the lesser Green's functions, which are defined as $G^{<}_{\sigma',Lk\sigma}(t,t')\equiv i\langle a^{\dagger}_{Lk\sigma}(t')\,a_{\sigma'}(t)\rangle$ and $G^{<}_{Lk\sigma,\sigma'}(t,t')\equiv i\langle a^{\dagger}_{\sigma'}(t')\,a_{Lk\sigma}(t)\rangle$ (Meir & Wingreen (1992)). Eq. (6) then takes the form of Eq. (7), with the current written entirely in terms of these lesser Green's functions.

Fig. 2. (a) Schematic diagram of the QD-DTJ structure consisting of a QD sandwiched by two FM leads; (b) the schematic energy diagram for the system in (a). In (a), the arrows in the leads indicate magnetization directions, which can either be in parallel (solid) or antiparallel (dashed) configuration; $V_b$ denotes the bias between the two leads; $\lambda$ characterizes the strength of the SF-QD; $t_{Lk\uparrow,\downarrow}$ describes the SF-TJ between the up-spin state in the left lead and the down-spin state in the QD; $t_{Lk\uparrow,\uparrow}$ is the coupling between the same electron spin states in the left lead and the QD; and $\beta=t_{Rk\sigma,\sigma'}/t_{Lk\sigma,\sigma'}$ represents the coupling asymmetry between the left and right tunnel junctions. In (b), $\mu_L$ and $\mu_R$ are the chemical potentials of the left and right leads, respectively, and $\epsilon_d$ ($\epsilon_{d0}$) denotes the single energy level of the QD with (without) bias voltage.
Taking into account the contour-ordered integration over the time loop, the corresponding Dyson's equations for $G^{<}_{Lk\sigma,\sigma'}(\epsilon)$ can then be obtained (Mahan (1990)), where the $g$'s are the corresponding unperturbed Green's functions of the leads, whose lesser and greater forms are $g^{<}_{Lk\sigma}(\epsilon)=2\pi i\,f_L(\epsilon)\,\delta(\epsilon-\epsilon_{Lk\sigma})$ and $g^{>}_{Lk\sigma}(\epsilon)=-2\pi i\,[1-f_L(\epsilon)]\,\delta(\epsilon-\epsilon_{Lk\sigma})$. Here $f_L(\epsilon)=\big[1+e^{(\epsilon-\mu_L)/k_BT}\big]^{-1}$ is the Fermi-Dirac function, $\mu_L$ is the chemical potential, $\epsilon_{L\sigma}$ is the energy for electrons with spin $\sigma$ in the left lead, $k_B$ is the Boltzmann constant and $T$ is the temperature of the device. With this, the current in Eq. (7) can be expressed in terms of the Green's functions wholly of the leads and the QD (Eq. (8)). By applying the identities $G^{t}+G^{\bar{t}}=G^{<}+G^{>}$ and $G^{>}-G^{<}=G^{r}-G^{a}$ to Eq. (8), we obtain after some algebra (Mahan (1990)) the current in terms of the retarded, advanced and lesser Green's functions of the QD (Eq. (9)). We now introduce the density of states for the electrons in the FM leads, denoted by $\rho_{\alpha\sigma}(\epsilon)$. For the electrons in the left FM lead, the density of states is $\rho_{L\sigma}(\epsilon)=[1+(-1)^{\sigma}p_L]\,\rho_{L0}(\epsilon)$, while for the electrons in the right FM lead, it is $\rho_{R\sigma}(\epsilon)=[1+(-1)^{a+\sigma}p_R]\,\rho_{R0}(\epsilon)$, where $\sigma=\{0,1\}$ for spin-up/down electrons, $a=\{0,1\}$ for parallel/antiparallel alignment of the two FM leads' magnetization, $\rho_{\alpha 0}=(\rho_{\alpha\uparrow}+\rho_{\alpha\downarrow})/2$, and $p_{\alpha}$ is the polarization of lead $\alpha$. For the summation over $k$ in Eq. (9), one may apply the continuous-limit approximation $\sum_{Lk\sigma}\rightarrow\sum_{L\sigma}\int d\epsilon\,\rho_{L\sigma}(\epsilon)$. The current can then be expressed as

$$I_L=\frac{ie}{\hbar}\int\frac{d\epsilon}{2\pi}\,\mathrm{Tr}\Big\{\Gamma_L(\epsilon)\big[G^{<}(\epsilon)+f_L(\epsilon)\big(G^{r}(\epsilon)-G^{a}(\epsilon)\big)\big]\Big\},\tag{10}$$

where $\Gamma_{\nu}$ and $G^{(r,a,<)}(\epsilon)$ are $(2\times 2)$ coupling and Green's-function matrices in spin space, with the coupling-matrix elements given by $[\Gamma_{\alpha}(\epsilon)]_{\sigma'\sigma''}=2\pi\sum_{\sigma}\rho_{\alpha\sigma}(\epsilon)\,t^{*}_{\alpha\sigma,\sigma'}(\epsilon)\,t_{\alpha\sigma,\sigma''}(\epsilon)$ (Eq. (11)) and $G^{(r,a,<)}(\epsilon)$ the corresponding matrices of the QD Green's functions (Eq. (12)).
In Eq. (11), $t_{L\sigma,\sigma}$ ($t_{L\sigma,\bar{\sigma}}$) applies to the case of a spin-$\sigma$ electron tunneling into the spin-$\sigma$ ($\bar{\sigma}$) state without (with) spin-flip. In the low-bias approximation, $\Gamma_{L\sigma}(\epsilon)=2\pi\rho_{L\sigma}(\epsilon)\,|t^{*}_{L\sigma,\sigma'}(\epsilon)\,t_{L\sigma,\sigma''}(\epsilon)|$ is taken to be constant (zero) within (beyond) the energy range close to the lead's electrochemical potential where most of the transport occurs, i.e., $\epsilon\in[\mu_{\alpha}-D,\mu_{\alpha}+D]$, where $D$ is a constant (Bruus & Flensberg (2004)). Based on the kinetic equation (Meir & Wingreen (1992)), the lesser Green's function of the QD is $G^{<}(\epsilon)=iG^{r}(\epsilon)\big[f_L(\epsilon)\Gamma_L+f_R(\epsilon)\Gamma_R\big]G^{a}(\epsilon)$, where $f_{\alpha}(\epsilon)$ is the Fermi-Dirac function of lead $\alpha$, with $\mu_{\alpha\sigma}$ being the chemical potential of that lead. When a bias voltage $V_b$ is applied between the two leads, the leads' electrochemical potentials are, respectively, given by $\mu_L=0$ and $\mu_R=-eV_b$. Considering that the current from the left lead to the QD is equal to the current from the QD to the right lead, one may calculate the current in the symmetric form $I=(I_L+I_R)/2$. The final form of the total current is then given by

$$I=\frac{e}{h}\int d\epsilon\,\big[f_L(\epsilon)-f_R(\epsilon)\big]\,\mathrm{Tr}\Big\{\Gamma_L\,G^{r}(\epsilon)\,\Gamma_R\,G^{a}(\epsilon)\Big\}.\tag{13}$$

In this QD-DTJ system, there exists the tunnel magnetoresistance (TMR) effect, which arises from the difference between the resistance in the parallel and antiparallel configurations of the two FM leads' magnetization. The TMR is given by

$$\mathrm{TMR}=\frac{I_P-I_{AP}}{I_{AP}},\tag{14}$$

where $I_P$ ($I_{AP}$) is the tunneling current in the parallel (antiparallel) configuration of the two leads' magnetization.
During the course of the analysis, we also consider the state of the QD, which is characterized by its occupancy. The QD's occupancy with electrons of spin $\sigma$ can be obtained from the lesser Green's function of the QD, i.e.,

$$n_{\sigma}=\big\langle a^{\dagger}_{\sigma}a_{\sigma}\big\rangle=-i\int\frac{d\epsilon}{2\pi}\,G^{<}_{\sigma\sigma}(\epsilon).\tag{15}$$
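A minimal numerical sketch of Eq. (13) is given below for the collinear case without spin flip, where the Green's-function matrices are diagonal and a wide-band (energy-independent) Γ is assumed. The level position, couplings, polarization and temperature are illustrative values and are not parameters from this chapter.

```python
import numpy as np

KT = 0.025        # thermal energy k_B*T in eV (assumed)
EPS_D = 0.2       # QD level in eV (illustrative)
GAMMA0 = 0.01     # bare lead-QD coupling in eV (illustrative)
P = 0.4           # lead spin polarization (illustrative)

def fermi(e, mu):
    return 1.0 / (1.0 + np.exp((e - mu) / KT))

def current(vb, parallel=True, n=4001):
    """Landauer-type evaluation of Eq. (13) for the collinear, no-spin-flip case."""
    mu_l, mu_r = 0.0, -vb                       # bias convention used in the text
    e = np.linspace(-1.5, 1.5, n)
    de = e[1] - e[0]
    i_tot = 0.0
    for s in (+1, -1):                          # spin-up / spin-down channels
        g_l = GAMMA0 * (1 + s * P)
        g_r = GAMMA0 * (1 + s * P) if parallel else GAMMA0 * (1 - s * P)
        gr = 1.0 / (e - EPS_D + 0.5j * (g_l + g_r))      # wide-band retarded GF
        transmission = g_l * g_r * np.abs(gr) ** 2       # Tr{Gamma_L G^r Gamma_R G^a}
        i_tot += np.sum((fermi(e, mu_l) - fermi(e, mu_r)) * transmission) * de
    return i_tot                                 # current in units of e/h

for vb in (0.1, 0.3, 0.5):
    ip, iap = current(vb, True), current(vb, False)
    print(f"V_b = {vb:.1f} V:  I_P = {ip:.4f}  I_AP = {iap:.4f}  TMR = {(ip - iap) / iap:.3f}")
```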
Retarded Green's function
To calculate the tunneling current in Eq. (13), one has to obtain the explicit expressions for the retarded Green's functions $G^{r}_{\sigma\sigma'}(\epsilon)$ of the QD. This can be done by means of the equation-of-motion (EOM) method. By definition, the general form of a retarded Green's function is given by

$$G^{r}_{\sigma,\sigma'}(t,t')=-i\,\theta(t-t')\,\big\langle\{a_{\sigma}(t),a^{\dagger}_{\sigma'}(t')\}\big\rangle.\tag{16}$$
(1), one may obtain a closed set of equations involving G r σ,σ ′ (ǫ) after Fourier transform, By solving the equation array of Eqs. (17) to (20), one reaches the explicit expressions for the retarded Green's functions (those in Eq. 12) of the QD: where the self energy Σ σ
Results and discussion
Based on the electron transport model developed in Sec. 2.1, one may analyze the SDT properties, such as the spectral functions, the tunneling charge current, spin current, the TMR and the electron occupancies of the QD. The SDT model enables one to investigate the effects of the SF-QD and SF-TJ events and the effect of the coupling asymmetry (CA) on the SDT properties as well.
Spin-flip effects
Firstly, one may evaluate the four elements of the retarded Green's function (GF) matrix [given in Eq. (12)]: $G^{r}_{\uparrow\uparrow}$, $G^{r}_{\uparrow\downarrow}$, $G^{r}_{\downarrow\uparrow}$ and $G^{r}_{\downarrow\downarrow}$. Based on Eqs. (21) and (22), one may obtain the respective spectral functions $-2\mathrm{Im}\,G^{r}_{\uparrow\uparrow}$, $-2\mathrm{Im}\,G^{r}_{\uparrow\downarrow}$, $-2\mathrm{Im}\,G^{r}_{\downarrow\uparrow}$, and $-2\mathrm{Im}\,G^{r}_{\downarrow\downarrow}$. Spectral functions provide information about the nature of the QD's electronic states which are involved in the tunneling process, regardless of whether the states are occupied or not; the spectral function can be considered as a generalized density of states.
If one neglects the SF-QD or SF-TJ events in the QD-DTJ system, there is no mixing of the spin-up and spin-down electron transport channels. In such a QD-DTJ system, the two off-diagonal Green's functions $G^{r}_{\sigma\bar{\sigma}}$ ($\sigma=\{\uparrow,\downarrow\}$) become zero [this can be confirmed by considering Eq. (22)], and so do their respective spectral functions. Thus, we focus on the spectral functions corresponding to the diagonal components of the retarded GF matrix. These spectral functions are analyzed as a function of energy under both parallel and antiparallel configurations of the two FM leads' magnetization in Figs. 3(a) to (d). A broad peak is observed corresponding to the QD's energy level ($\epsilon=\epsilon_d$), which can be referred to as the "QD resonance". The broadening of the QD resonance is caused by the finite coupling between the QD and the leads, since the QD resonance would be a δ function for an isolated QD with no coupling to the leads. The width of the QD resonance reflects the strength of the coupling between the QD and the leads; the stronger the coupling, the broader the energy spread and hence the wider the peak.
Under zero bias [shown in Figs. 3(a) and (b)], one may note three distinct features of the spectral functions: 1. A second resonance peak appears, corresponding to the leads' potentials $\mu_L=\mu_R=0$ eV; this peak can be referred to as the "lead resonance". 2. The lead resonance of the spin-up spectral function ($-2\mathrm{Im}\,G^{r}_{\uparrow\uparrow}$) has a broader and lower profile compared with that of the spin-down spectral function when the QD-DTJ system is in the parallel configuration, indicating that the excitation at the lead energy has a larger energy spread for spin-up carriers due to the polarization of the lead. 3. The spin-up and spin-down spectral functions are identical in the antiparallel alignment, due to the spin symmetry of the system in the antiparallel configuration.
The spectral functions under a finite bias voltage (V_b = 0.2 eV) are shown in Figs. 3(c) and (d). It is observed that: 1. the lead resonance splits into two peaks at the respective left-lead and right-lead potentials; 2. in the parallel configuration, the lead resonance of the spin-down electrons is higher (lower) than that of the spin-up electrons at μ_L (μ_R), due to the spin dependence of the electron tunneling between the leads and the QD; 3. the antiparallel alignment of the leads' magnetization gives rise to lead resonances of similar magnitude for both the spin-up and spin-down spectral functions, due to the spin symmetry of the two spin channels.
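The connection between lead coupling and resonance width can be illustrated with a toy single-level spectral function. This is a minimal sketch assuming a simple Lorentzian broadening A(ǫ) = Γ/((ǫ − ǫ_d)² + (Γ/2)²), not the full EOM result of the text; the parameter values are illustrative:

```python
import numpy as np

# Toy illustration (not the full EOM result): a QD level coupled to leads
# acquires a Lorentzian spectral function, so a stronger lead coupling
# Gamma yields a broader "QD resonance" (FWHM ~ Gamma).
e = np.linspace(-0.5, 1.0, 1501)   # energy grid (eV)
e_d = 0.2                          # QD level (eV), an illustrative value

for gamma in (0.01, 0.05):         # weak vs strong QD-lead coupling (eV)
    A = gamma / ((e - e_d) ** 2 + (gamma / 2) ** 2)
    width = e[A > A.max() / 2]     # energies above half maximum
    print(f"Gamma = {gamma:.2f} eV -> FWHM ~ {width[-1] - width[0]:.3f} eV")
```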
Next, one may investigate the SF-TJ effects on the electron transport through the QD-DTJ system, where the SF-TJ strength ν ≠ 0. Figure 4 shows the effect of the SF-TJ on the spectral functions of the diagonal GFs. With the SF-TJ effects, both the QD resonance and the lead resonance at ǫ = 0 are enhanced, while the lead resonance at ǫ = −eV_b is suppressed. This indicates that increasing SF-TJ helps the tunneling to proceed primarily in the vicinity of the QD's energy level, resulting in an effective decrease in the coupling between the same spin states in the leads and the QD.
Based on the SDT model, one may analyze the effects of the SF-QD events (denoted by λ) on the spectral functions of the diagonal retarded GFs (G^r_{↑↑} and G^r_{↓↓}) of the QD-DTJ system, for both parallel and antiparallel alignments, as shown in Figure 5. At the QD energy level ǫ_d = 0.2 eV, the presence of the SF-QD causes a symmetric split of the QD resonance, resulting in the suppression of tunneling via the lead resonances. The splitting of the QD resonance indicates that two effective energy levels within the QD are involved in the tunneling process. This split translates into an additional step in the I−V characteristics, which will be discussed later in Fig. 7.
Considering the off-diagonal GFs (G^r_{σσ̄}), the spectral functions are plotted in Figure 6, for both parallel and antiparallel alignments, under varying SF-TJ strengths (ν) and SF-QD strengths (λ). As shown in Figs. 6(a)-(d), without SF-TJ or SF-QD effects, the off-diagonal spectral functions vanish (the solid lines), i.e., the transport proceeds independently in the spin-up and spin-down channels. The presence of either the SF-TJ (ν > 0) or the SF-QD (λ > 0) enhances the magnitudes of the off-diagonal spectral functions monotonically, indicating stronger mixing of the tunneling transport through the two spin channels. Turning to the tunneling current, the baseline I−V_b characteristics (without spin-flip processes) show four features: 1. Within the sub-threshold bias range (V_b < V_th), the current is still finite, due to thermally assisted tunneling at finite temperature. 2. The sub-threshold current is particularly large in the parallel configuration, due to the stronger lead-QD coupling and hence a greater energy broadening of the QD's level. 3. Overall, the parallel current exceeds the antiparallel current over the entire voltage range considered, due to the nonzero spin polarization of the FM leads. 4. Beyond the threshold voltage (i.e., V_b ≫ V_th), the tunneling current saturates, since only a single QD level is assumed to participate in the tunneling transport.
In the presence of SF-TJ, the tunneling currents in the parallel and antiparallel configurations are found to be significantly enhanced for bias voltages exceeding the threshold (V_b > V_th), as shown in Figs. 7(a) and (b). The enhancement in current stems from the overall stronger coupling between the leads and the QD. The SF-QD events, in turn, modify the I−V characteristics in two ways. First, the current step at the threshold bias V_th splits into two, at V_b = V_th ± λ, respectively. The presence of the additional step is due to the splitting of the QD resonance observed in the spectral functions of Fig. 5. Secondly, the presence of SF-QD suppresses the current saturation value at large bias voltage (i.e., V_b ≫ V_th + λ). The decrease is more pronounced in the antiparallel configuration, resulting in an enhancement of the TMR with increasing SF-QD probability, as shown in Fig. 7(f).
When both SF processes exist in the QD-DTJ system (Fig. 8), the two types of SF have competing effects on the tunneling current at large bias voltages exceeding the threshold. The SF-TJ (SF-QD) tends to enhance (suppress) the tunneling current within the bias voltage region exceeding the threshold voltage. This competitive effect is shown for the overall I−V_b characteristics in Figs. 8(a)-(b). Evidently, the effect caused by one SF mechanism is mitigated by the other, for both parallel and antiparallel alignments. However, both SF mechanisms contribute to the asymmetry of the tunneling current between the parallel and antiparallel cases, leading to an additive effect on the TMR for the bias voltage region beyond the threshold voltage, as shown in Fig. 8(c). The competitive effect on the current and the collaborative effect on the TMR make it possible to attain simultaneously a high TMR and a high tunneling current density.
Coupling asymmetry effects
Recent experimental studies (Hamaya et al. (2009; 2007)) of QD-DTJ structures revealed that the SDT characteristics depend strongly on the coupling asymmetry (CA) between the two junctions. Such asymmetry is inherent in the sandwich structure, given the exponential dependence of the coupling strength on the tunnel barrier width.
One may study the effect of the junction CA on the overall spin and charge current characteristics of the QD-DTJ system. The degree of CA is characterized by the ratio of the right to the left junction coupling parameter, denoted by β = t_{Rkσ,σ}/t_{Lkσ,σ}. The spin-up (spin-down) component of the tunneling current is denoted I_↑ (I_↓), based on which the spin current is defined as the difference between the two components, I_s = I_↑ − I_↓. In the following, one may focus on the parallel alignment of the magnetization of the two leads of the QD-DTJ system, since the magnitude of the spin current is greatest in this case (see Mu et al. (2006)).
Assuming identical intrinsic electron densities of states and identical polarizations of the two leads, i.e., ρ_α0 = ρ_0 and p_α = p, one may obtain simplified expressions for the coupling strengths. We consider the I−V characteristics for the charge current and the spin current, shown in Fig. 9 for two different CA values. These two values were chosen so that β_1 = 1/β_2, meaning that the left (right) junction of the β_1 system is the right (left) junction of the β_2 system. [Figure 9 parameters: Γ_L0 = 0.012 eV with Γ_R0 = Γ_L0 β² for the β_1 case; Γ_L0 = 0.006 eV with Γ_R0 = Γ_L0 β² for the β_2 case; ǫ_d0 = 0.3 eV; p_L = p_R = 0.7; T = 100 K; ν = 0 eV; λ = 0 eV.] It is found that when the coupling strength of the right junction is four times as strong as that of the left junction, i.e., β = 2, the magnitudes of both the charge and spin currents beyond the threshold voltage are the same as those for the reverse case (β = 0.5). This is due to the fact that the total resistance of the QD-DTJ system is maintained under the coupling-asymmetry reversal. However, the CA affects the threshold voltage V_th. This is due to the different shifts of the QD energy level under positive and negative bias voltage, i.e., eV_th = 2ǫ_d, with ǫ_d = ǫ_{d0} − eV_b β²/(1 + β²). The CA effect on the charge-current I−V characteristics is consistent with the experimental results observed by K. Hamaya et al. for an asymmetric Co/InAs/Co QD-DTJ system (Fig. 2(a) of Ref. Hamaya et al. (2007)).
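A small numerical sketch of how the CA shifts the threshold voltage follows. It uses the level-shift formula quoted above; the convention μ_L = eV_b/2 for the left chemical potential is an assumption made for illustration, and all values are in eV with e = 1:

```python
import numpy as np

# Sketch: bias-dependent QD level e_d(Vb) = e_d0 - Vb * beta^2/(1+beta^2)
# (formula quoted in the text, e = 1), and the resulting threshold where
# the level crosses mu_L = Vb/2 (an assumed symmetric-bias convention).
def e_d(vb, e_d0=0.3, beta=1.0):
    return e_d0 - vb * beta**2 / (1.0 + beta**2)

for beta in (0.5, 2.0):
    vb = np.linspace(0.0, 1.5, 100001)
    idx = np.argmin(np.abs(vb / 2 - e_d(vb, beta=beta)))
    print(f"beta = {beta}: threshold V_th ~ {vb[idx]:.3f} V")
```

With ǫ_d0 = 0.3 eV this gives V_th ≈ 0.43 V for β = 0.5 and V_th ≈ 0.23 V for β = 2, illustrating that reversing the asymmetry changes the threshold even though the saturation currents coincide.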
Next, one may investigate the CA effect on the QD occupancies, which are obtained by integrating the spectral function in Eq. (15). The QD occupancies for both spin-up and spin-down electrons are shown in Fig. 10. The occupancies for spin-up and spin-down electrons in the QD actually coincide, since the QD-DTJ system is operated in the parallel configuration of the leads' magnetization. Moreover, as β is increased from 0.5 to 2, the QD occupancies of both spin orientations decrease. This decrease is reasonable: as Γ_L is decreased with respect to Γ_R, the coupling which allows electrons to tunnel into the QD from the source (left lead) is reduced, while the coupling which allows electrons to tunnel out of the QD to the drain (right lead) is enhanced. Conversely, electrons have a higher occupancy in the QD for the β < 1 case, where Γ_L > Γ_R.
Summary
In summary, the SDT through a QD-DTJ system is theoretically studied. In the SDT model described in Sec. 2.1, well-separated QD levels are assumed, such that only a single energy level is involved in the SDT process, and the correlation between different energy levels is then neglected. The spectral functions, QD electron occupancies, tunneling charge current, spin current, and TMR are evaluated based on the Keldysh NEGF formalism and the EOM method, with consideration of the effects of the SF-TJ events, the SF-QD events, and the CA between the two tunnel junctions on the SDT of the system.
QD with Zeeman splitting
In the previous section, the SDT was studied for the QD-DTJ system in which the spin dependence of the electron transport is caused by the spin polarization of the FM leads. In this section, one may analyze the SDT through a QD-DTJ system in which the leads sandwiching the QD are non-magnetic (NM), and an FM gate is applied above the QD. The electron transport through this QD-DTJ system is spin-dependent due to the Zeeman splitting (ZS) generated in the QD. In this QD-DTJ system, one may expect a fully polarized current to tunnel through (Recher et al. (2000)). A fully spin-polarized current is important for detecting or generating single spin states (Prinz (1995; 1998)), and thus is of great importance in the realization of quantum computing (Hanson et al. (2007); Kroutvar et al. (2004); Loss & DiVincenzo (1998); Moodera et al. (2007); Petta et al. (2005); Wabnig & Lovett (2009)).
The QD-DTJ system is schematically shown in Fig. 11. The magnetic field generated by the FM gate is assumed to be spatially localized, such that it gives rise to a ZS of the discrete energy levels of the QD but a negligible ZS of the energy levels of the NM electrodes. When the bias voltage V_b between the two NM electrodes and the size of the ZS in the QD are appropriately tuned, a fully polarized spin current is observed in this QD-DTJ system. The polarization of the current depends on the magnetization direction of the FM gate. Here, the down (up)-spin electrons have spins aligned parallel (antiparallel) to the magnetization direction of the FM gate.
Theory
The Hamiltonian of the QD-DTJ system takes a form analogous to that in Sec. 2.1, where α = {L, R} is the lead index for the left and right leads, k is the momentum, σ = {↑, ↓} is the spin index, a† and a are the electron creation and annihilation operators, ǫ_σ is the energy level in the QD for electrons with spin σ, U is the Coulomb blockade energy when the QD is doubly occupied by two electrons with opposite spins, and t_{αkσ,σ} describes the coupling between the electron states with spin σ in lead α and the QD.
In our model, we consider only the lowest unoccupied energy level ǫ_σ of the QD, since most of the overall transport occurs via that level. In the presence of an applied magnetic field B, the lowest unoccupied energy level is given by ǫ_{↓(↑)} = ǫ_d ∓ gμ_B B/2, where ǫ_↓ (ǫ_↑) is the energy level for spin-down (spin-up) electrons, gμ_B B is the Zeeman splitting between ǫ_↓ and ǫ_↑, g is the electron spin g-factor, μ_B is the Bohr magneton, B is the applied magnetic field generated by the FM gate, and ǫ_d = ǫ_{d0} − eV_b β²/(1 + β²) is the single energy level of the QD without applied magnetic field, with ǫ_{d0} being the single energy level under zero bias voltage and β the coupling asymmetry between the two tunnel junctions. We assume a symmetric QD-DTJ system where β = 1.
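As a quick numerical sanity check of the level structure, the sketch below evaluates the Zeeman-split levels; the g-factor, field, level, and charging energy are illustrative assumptions, not values from the text:

```python
# Sketch: Zeeman-split QD levels, following the definitions in the text.
# The sign convention (spin-down lower) matches the bias-window ordering
# mu_R < e_down < mu_L < e_up + U used later; all values are illustrative.
g, mu_B, B = 2.0, 5.788e-5, 2.0        # g-factor, Bohr magneton (eV/T), field (T)
e_d0, U = 0.1, 0.05                    # bare level and charging energy (eV)

zeeman = g * mu_B * B                  # total splitting between e_down and e_up
e_down = e_d0 - zeeman / 2
e_up = e_d0 + zeeman / 2
print(f"e_down = {e_down:.6f} eV, e_up = {e_up:.6f} eV, splitting = {zeeman:.6f} eV")
# Fully spin-down-polarized window: mu_R < e_down < mu_L < e_up + U
```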
Based on the Hamiltonian, the tunneling current is evaluated via the NEGF formalism introduced in Sec. 2.1. The tunneling current of spin-σ electrons through the QD-DTJ system is given by Eqs. (27) and (28), in which the lead Green's functions carry the resolvent factor 1/(ǫ + iη − ǫ_{αkσ}). The coupling coefficients t_{αkσ,σ} and t*_{αkσ,σ} are spin-independent, since the two leads are NM. One can then calculate the spin-resolved currents I_↑ and I_↓, and hence the charge and spin currents, defined as I_c = I_↓ + I_↑ and I_s = I_↓ − I_↑, respectively.
Spin polarized current
Based on the SDT model in Sec. 3.1, one may obtain the I−V characteristics of the system for both the spin current I_s and the charge current I_c, as shown in Fig. 12. In the absence of an FM gate, i.e., with zero magnetic field (B = 0) applied to the QD, the magnitude of the charge current I_c is the same as that of the system with an FM gate within the bias region μ_R < ǫ_↓ < μ_L < ǫ_↑ + U. In this region, the spin current I_s is zero for the system without an FM gate, since the device is spin-symmetric and the transport across it is spin-independent.
For the system with an FM gate, both the charge current I_c and the spin current I_s show three distinct regions with respect to the bias voltage: 1. μ_R < μ_L < ǫ_↓ < ǫ_↑, where both I_c and I_s are negligible, due to the suppression of electron tunneling by the Coulomb blockade; 2. μ_R < ǫ_↓ < μ_L < ǫ_↑ + U, where, due to spin blockade, only the spin-down channel contributes to the transport across the system, resulting in a fully spin-down-polarized current with I_s = I_c; 3. μ_R < ǫ_↓ < ǫ_↑ + U < μ_L, where it is energetically favorable for both spin species to tunnel across the device, leading to zero spin current. (A minimal classifier implementing these three windows is sketched below.)
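The sketch below classifies the transport regime directly from the three bias windows listed above. The symmetric-bias convention (μ_L = +eV_b/2, μ_R = −eV_b/2) and the numerical level values are assumptions for illustration:

```python
# Sketch: classify the transport regime of the FM-gated QD-DTJ from the
# three bias windows listed above. Energies in eV; symmetric bias assumed
# (mu_L = +Vb/2, mu_R = -Vb/2), which is an illustrative convention.
def regime(vb, e_down, e_up, U):
    mu_L = vb / 2
    if mu_L < e_down:
        return "Coulomb blockade: I_c ~ 0, I_s ~ 0"
    if e_down < mu_L < e_up + U:
        return "spin blockade: fully spin-down polarized, I_s = I_c"
    return "both spins conduct: I_s ~ 0"

for vb in (0.05, 0.30, 0.80):
    print(vb, "->", regime(vb, e_down=0.10, e_up=0.20, U=0.15))
```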
The sign of the spin polarization of the tunneling current can be modulated electrically, i.e., by means of a gate voltage V_g. The gate-voltage modulation of the QD energy level ǫ_d can switch the spin polarization of the current without requiring any corresponding change to the magnetization of the FM gate. If the energy diagram of the system satisfies ǫ_↓ − eV_g < μ_R < ǫ_↑ − eV_g < μ_L, a fully spin-up-polarized current will flow continuously through the system.
Summary
In summary, the SDT through a QD-DTJ system with NM leads and an FM gate is analyzed. Under the applied magnetic field from the FM gate, the energy level in the QD splits into two due to the ZS effect. The two energy levels can be modulated by the gate voltage applied to the FM gate. Based on the SDT model developed with the NEGF formalism and the EOM method, the I−V_b characteristics are analyzed, and a fully spin-down-polarized current is obtained when the system is operated under a proper bias voltage between the two leads. Additionally, by utilizing gate-voltage modulation instead of switching the magnetization of the FM gate, the polarization of the current can be reversed from spin-down to spin-up by electrical means.
Conclusion
In conclusion, the SDT is theoretically studied for QD-DTJ systems in which a QD is sandwiched by two adjacent leads. The tunneling current through these systems is rigorously derived via the Keldysh NEGF approach and the EOM method. The SF events, CA, ZS, and FM gating are systematically incorporated into the SDT models. With these effects included, one may analyze the SDT properties of QD-DTJ systems, including the tunneling current (charge and spin currents), the TMR, the spectral functions, and the occupancies of the QD. The SF-TJ and SF-QD events are found to have competing effects on the tunneling current. The presence of CA effectively modifies the threshold voltage and gives rise to an additional bias-voltage dependence of the QD's electron occupancy and of the charge and spin currents. The FM gate attached to the QD can be utilized to generate a bipolar spin polarization of the current through QD-DTJ systems. These investigations yield a better understanding of the SDT in QD-DTJ systems.
Acknowledgement
We gratefully acknowledge the SERC Grant No. 092 101 0060 (R-398-000-061-305) for financially supporting this work.
Currents and Voltages Induced by Electric Field in Two Converging Single-wire Overhead Transmission Lines
Three methods of specifying the partial mutual capacitance between converging single-wire overhead transmission lines (OTL) $i$ and $k$ as a function of the distance along the line are considered: graphical, analytical, and interpolation. It is shown that the mutual capacitance values along the converging lines calculated by the three methods practically coincide. Schemes and algorithms are developed for calculating the current and voltage distribution along the converging single-wire OTLs $i$ and $k$, using Carson's equation for the line self-inductance, taking into account the finite conductivity of the ground, and replacing the partial mutual conductance and the phase electromotive force (EMF) with an elementary current-source function. Calculations of the induced current and voltage distribution along the disconnected line $i$ were carried out for grounding at the end near to, and at the end far from, line $k$. It is shown that the induced voltage at the ungrounded end of the disconnected line is higher than at the grounded one. It is also noted that when line $i$ is grounded at the end far from line $k$, the induced voltage at the ungrounded end is higher than when line $i$ is grounded at the end near line $k$. Values of the current and voltage induced by the electric field at the grounded and ungrounded ends of line $i$ are computed as functions of the length $p$ of the line-convergence section, the minimum distance $a_0$ between the converging lines, and the convergence angle $\Theta$. Boundary values of $p$, $a_0$ and $\Theta$ are determined; if any one of them is violated, the electric effect of the converging OTL can be neglected.
INTRODUCTION
Protecting the health of personnel working on overhead transmission lines (OTL) under induced voltage requires compliance with the limit values of these voltages. Such compliance can be ensured by using the developed algorithms for calculating the current and voltage distribution along a disconnected, grounded single-wire (single-phase) line i, induced by the electric field (EF) of an operating single-wire (single-phase) line k converging with line i at an angle Θ.
PARTIAL CAPACITANCE BETWEEN CONVERGING SINGLE-WIRE LINES I AND K
Consider first two parallel single-wire OTLs i and k, separated from each other by a given distance and located at heights h_i = h and h_k = H above the ground (Figure 1(a)). Figure 1(b) shows the scheme of the connection of the single-wire OTLs i and k, between the lines and to the ground, through partial capacitances.
The specific partial capacitances C_ii0, C_kk0, and C_ki0 are determined using the potential coefficients of the first group of Maxwell's formulas, expression (1), where H and h are the heights above the ground, and R and r are the wire radii of lines k and i, respectively. The matrix α of potential coefficients has the form of expression (2). Inversion of the matrix α yields the matrix β of capacitance coefficients of the second group of Maxwell's formulas, expression (3). Potential coefficients have the dimension meter per farad [m/F], and capacitance coefficients have the dimension farad per meter [F/m].
Now consider lines k and i converging at an angle Θ (Figure 2). The following are known: the length of line i, l_i = p, and the distances between lines k and i at the beginning, a_0, and at the end, a(p), of line i; from a_0 and a(p), the convergence angle Θ of lines k and i is also known. Let us take a_0 = 5 m and a(p) = 100 m. The parameters of lines k and i are: H = 19 m, h = 17.5 m, R = r = 0.014 m. The partial capacitance between lines k and i is determined by the expression [1,2] C_ki0 = −β_ki = −β_12 = −β_21; then, from expressions (1)-(3), we obtain the values of C_ki0(a) as a varies from 5 m to 100 m in 5 m increments, as shown in Table 1.
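A minimal numerical sketch of this calculation is given below. It uses the standard method-of-images formulas for the potential coefficients of two wires above a ground plane, which are assumed to correspond to expressions (1)-(3) of the text; the geometric parameters are those stated above:

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def c_ki0(a, H=19.0, h=17.5, R=0.014, r=0.014):
    """Partial mutual capacitance per unit length between two single-wire
    lines at heights H, h with horizontal separation a, via Maxwell's
    potential coefficients (standard image-method formulas, assumed to
    match Eqs. (1)-(2) of the text)."""
    k0 = 1.0 / (2 * np.pi * EPS0)
    d = np.hypot(a, H - h)          # wire-to-wire distance
    d_im = np.hypot(a, H + h)       # wire-to-image distance
    alpha = k0 * np.array([[np.log(2 * H / R), np.log(d_im / d)],
                           [np.log(d_im / d), np.log(2 * h / r)]])
    beta = np.linalg.inv(alpha)     # capacitance coefficients (Eq. (3))
    return -beta[0, 1]              # C_ki0 = -beta_ki  [F/m]

for a in (5.0, 50.0, 100.0):
    print(f"a = {a:6.1f} m -> C_ki0 = {c_ki0(a)*1e12:.3f} pF/m")
```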
It should be noted that, to obtain the values of the partial capacitances between the phases and lightning wires of converging operating and disconnected three-phase (or multi-phase) power lines, the C_ki0(a) calculation must be carried out taking into account all phases and lightning wires of these OTLs. C_ki0(a) values for one to six three-phase OTLs can be calculated with the computer program "OTL EMF" [3].
To represent graphically the change of the capacitance C_ki0(a) with distance, we fill in MathCAD a matrix named Z, containing one row and c = 20 columns, with the C_ki0 values from Table 1. Next, we transform it into a column matrix and, choosing the boundaries and the step of the argument ψ_j, obtain the curve Z_j by linear interpolation of the C_ki0 values. To smooth the Z_j curve, we create the function ZZ(y) and set the limits and the step of the argument a, expression (4). The function ZZ(a) allows numerical mathematical operations to be carried out with it, but it is not analytically determined. We therefore define an analytical expression for the capacitance C_ki0(a). (A brief numerical sketch of this tabulation-plus-interpolation step follows.)
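The sketch below reproduces the workflow just described, with a SciPy cubic spline standing in for MathCAD's smoothed ZZ(a) curve (an assumed substitution); C_ki0(a) is recomputed with the image-method formulas of the previous sketch so the example stays self-contained:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Tabulate C_ki0 on the Table 1 grid (5 m ... 100 m in 5 m steps), then
# build a smooth interpolant ZZ(a); CubicSpline plays the role of the
# smoothed MathCAD curve.
EPS0, H, h, R, r = 8.854e-12, 19.0, 17.5, 0.014, 0.014
k0 = 1.0 / (2 * np.pi * EPS0)

def c_ki0(a):
    d, d_im = np.hypot(a, H - h), np.hypot(a, H + h)
    alpha = k0 * np.array([[np.log(2 * H / R), np.log(d_im / d)],
                           [np.log(d_im / d), np.log(2 * h / r)]])
    return -np.linalg.inv(alpha)[0, 1]              # F/m

a_tab = np.arange(5.0, 105.0, 5.0)
ZZ = CubicSpline(a_tab, [c_ki0(a) for a in a_tab])
print(f"ZZ(12.5 m) = {ZZ(12.5)*1e12:.4f} pF/m")     # numeric use between nodes
```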
For this purpose, in MathCAD, using expressions (1) and (2) with ε_0 ≡ ε, we carry out the analytical inversion of the matrix α using the symbolic (analytical) equal sign (→). The analytically obtained capacitance C^An_ki0(a) can then be written as expression (5). The disadvantage of the C^An_ki0(a) function is its complexity, and the degree of complexity increases dramatically with an increasing number of converging lines.
A rough interpolation (approximation) of the ZZ(a) curve is performed using function (6). The coincidence of the ZZ(a), C^An_ki0(a), and C^Int_ki0(a) curves is good. Thus, when calculating the transverse voltage induced on the disconnected and grounded single-wire line i by the electric field of the converging single-wire line k, the partial capacitance between these lines, as a function of the distance between them, can be described by any of the functions C_ki0(a) = ZZ(a), C^An_ki0(a), or C^Int_ki0(a).
CURRENT AND VOLTAGE DISTRIBUTION ALONG LINE I, GROUNDED AT ONE END AND CONVERGING WITH LINE K
OTL k of length l_k operates in idle (no-load) mode under voltage U_k = Ė_k = 127 kV, while line i of length l_i < l_k is disconnected and grounded at the end remote from the line-k substation (in this case, the left end) (Figure 4).
The change of the distance a(l) between OTLs k and i (Figures 2 and 5) is described by expression (7). Substituting the function a(l) for the variable a in the C_ki0(a) expression, as well as in expressions (4)-(6), gives the equations for C_ki0(l) = ZZ(a(l)), C^An_ki0(l), and C^Int_ki0(l). Let us replace the electromotive force Ė_k and the specific capacitance jωC_ki0(l)dl with an elementary current source dİ(l) (Figure 5): dİ(l) = jωC_ki0(l)Ė_k dl. Because dİ(l) = jωC_ki0(l)Ė_k dl, the current in line i is found by expressions in graphical, analytical, and interpolation forms; the current İ(l) in these three forms is presented in (8). The voltage U_A′(l) at point A′, whose potential is assumed to be 0, and the voltage U_ab(l) along line i are found from the corresponding equations; since there is no current in the resistance Z_e0, the voltage is U_ab(l) = U_A′(l). The induced voltage is higher when line i is grounded at the end far from line k (Figure 9(a)) than when it is grounded at the end near line k (Figure 7(a)). This is explained mathematically by the larger area under the curve İ(l) in Figure 9(a) than in Figure 7(a), calculated by the integral ∫İ(l)dl that enters the expressions for the voltages U_A′(l) and U_ab(l). The physical explanation is that, when line i is grounded at the end far from line k, the modulus of the induced current İ(l) from the beginning to the end of line i is higher than when line i is grounded at the end near line k. In expression (7), the variables of the function a become a_0, l, and Θ: a(a_0, l, Θ). The partial capacitance between the lines is taken in analytical form: C_ki0(a) = C^An_ki0(a). Substituting the function a(a_0, l, Θ) into expression (5) instead of the variable a gives C^An_ki0(a_0, l, Θ). Similarly, by introducing the additional variables into formulas (7) and (8), we obtain equations for the current and voltage: İ(a_0, l, Θ) and U_ab(a_0, l, Θ). (A numerical sketch of the elementary-source integration follows.)
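The following sketch evaluates İ(l) = ∫₀^l jωC_ki0(a(x))Ė_k dx by cumulative trapezoidal integration. The linear form of a(x) is an assumption standing in for expression (7), and C_ki0 is recomputed with the image-method formulas of the earlier sketches:

```python
import numpy as np

# Sketch: induced current along line i from the elementary-source picture,
# I(l) = integral_0^l j*omega*C_ki0(a(x))*E_k dx (cumulative trapezoid).
EPS0, H, h, R, r = 8.854e-12, 19.0, 17.5, 0.014, 0.014
k0 = 1.0 / (2 * np.pi * EPS0)

def c_ki0(a):
    d, d_im = np.hypot(a, H - h), np.hypot(a, H + h)
    alpha = k0 * np.array([[np.log(2 * H / R), np.log(d_im / d)],
                           [np.log(d_im / d), np.log(2 * h / r)]])
    return -np.linalg.inv(alpha)[0, 1]                 # F/m

omega, E_k = 2 * np.pi * 50.0, 127e3                   # 50 Hz, 127 kV EMF
p, a0, ap = 100e3, 5.0, 100.0                          # section length (m), separations (m)

x = np.linspace(0.0, p, 2001)
dI = np.array([1j * omega * c_ki0(a0 + (ap - a0) * xi / p) * E_k for xi in x])
I = np.concatenate(([0.0], np.cumsum(0.5 * (dI[1:] + dI[:-1]) * np.diff(x))))
print(f"|I| at far end ~ {abs(I[-1]):.2f} A")
```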
Take the angle Θ = 0.0573°. Figure 11 shows the distributions of the current moduli İ(a_0, l = p, Θ) (a) and the voltage moduli U_A′(a_0, l = p, Θ) = U_ab(a_0, l = p, Θ) (b) for the grounded (l = p) end of line i, while Figure 12 shows the voltage moduli U_A′(a_0, l = 0, Θ) = U_ab(a_0, l = 0, Θ) for the ungrounded (l = 0) end of line i, for lengths l_i = p = 100, 50, and 10 km, with the minimum distance a_0 between lines k and i varying from 5 to 100 m.
CONCLUSION
The calculation results show that the transverse voltage induced in the disconnected line i by the electric field of the converging energized line k can be neglected if any of the tested parameters presented in Table 2 (see also [7]) is violated: the length of the converging part of line i, l_i = p, the minimum distance a_0 between the converging lines, or the convergence angle Θ.
Random Projection using Random Quantum Circuits
The random sampling task performed by Google's Sycamore processor gave us a glimpse of the "Quantum Supremacy era". This has definitely shed some spotlight on the power of random quantum circuits in this abstract task of sampling outputs from the (pseudo-)random circuits. In this manuscript, we explore a practical near-term use of local random quantum circuits in dimensional reduction of large low-rank data sets. We make use of the well-studied dimensionality reduction technique called the random projection method. This method has been extensively used in various applications such as image processing, logistic regression, entropy computation of low-rank matrices, etc. We prove that the matrix representations of local random quantum circuits with sufficiently shorter depths ($\sim O(n)$) serve as good candidates for random projection. We demonstrate numerically that their projection abilities are not far off from the computationally expensive classical principal components analysis on MNIST and CIFAR-100 image data sets. We also benchmark the performance of quantum random projection against the commonly used classical random projection in the tasks of dimensionality reduction of image datasets and computing Von Neumann entropies of large low-rank density matrices. And finally, using variational quantum singular value decomposition, we demonstrate a near-term implementation of extracting the singular vectors with dominant singular values after quantum random projecting a large low-rank matrix to lower dimensions. All such numerical experiments unequivocally demonstrate the ability of local random circuits to randomize a large Hilbert space at sufficiently shorter depths with robust retention of properties of large datasets in reduced dimensions.
I. INTRODUCTION
Many problems in machine learning and data science involve the dimensional reduction of large data sets with low ranks [1] (e.g., image processing). Dimensional reduction as a preprocessing step reduces computational complexity in the later stages of processing. Principal Components Analysis (PCA) [2], reliant on Singular Value Decomposition (SVD), is one such method to reduce the dimension of data sets by retaining only the singular vectors with dominant singular values. There are quantum circuit implementations for PCA (and SVD) [3-6] and for related applications [7], some of which are near-term (Noisy Intermediate-Scale Quantum (NISQ) technologies era [8]) algorithms [6].
Techniques like PCA (and SVD) involve a complexity of O(N³), where N is the size (or dimension) of the data vectors. An alternative to such computationally expensive dimensional-reduction methods is the random projection method [9-11]. In the random projection method, we multiply the data sets with certain random matrices and project them to a lower-dimensional subspace. Recent years have witnessed fruitful usage of an especially thoughtful variant of such random projections, which are known to preserve the distance between any two vectors in the data set (say x⃗_1 and x⃗_2) in the projected subspace up to an error that scales as O(√(log(N)/k)), where N is the original dimension and k is the reduced dimension of each data vector. This choice is motivated by the Johnson-Lindenstrauss lemma (JL lemma) [12], introduced at the end of the last century. Since this manuscript will exclusively use such transformations to validate all the key results, we shall hereafter refer to such candidates as good random projectors. Such projection techniques are beneficial to myriad applications because the preservation of distances between data vectors ensures that their distinctiveness is uncompromised, thereby rendering them usable for discriminative tasks such as classification schemes like logistic regression [13].
Classically, this is advantageous compared to other methods like PCA because the random matrix used for projection is independent of the data set considered. The time complexity involved in the random projection arises from the matrix multiplication complexity O(N^2.37) [14], followed by the usual SVD complexity of O(N² poly log(N)), making the resulting scheme cheaper than PCA (or SVD). It must be emphasized that a further reduction in the time complexity to O(poly log(N)) can be afforded using Fast Johnson-Lindenstrauss transforms [15]. Several candidates have been studied in classical random projection, including Haar random matrices, Gaussian random matrices, etc. But the memory complexity of storing such matrices can be potentially huge (proportional to N² times the precision of each matrix entry). This has engendered the introduction of several competing candidates with better memory complexity (containing sparse matrices with random integer entries) and multiplication complexity. The latter category is mainly considered in practical applications today [10] and will also be used to compare the results of the quantum variants in this manuscript.
Classical random projections performed using projectors sampled from Haar random unitaries suffer from the innate problem of storage, due to exceptionally high memory usage. Even in the quantum setting, implementing Haar random unitaries requires exponential resources, as shown by counting arguments [16]. As a result, it is natural to consider unitary t-designs, which only match the Haar measure up to the t-th moments. Quantum implementation of such t-designs, as studied in this manuscript, is efficient, owing to the fact that local random quantum circuits approach approximate unitary t-designs [17-19] at sufficiently short (O(t^10.5 log(N))) depths [20,21] (here, we have assumed that the number of qubits required to encode a data vector or a wave vector of size N is ∼ log(N)). It was shown recently that even shorter depths suffice [22]. The primary workhorse of this manuscript will be such quantum circuits, which, as we shall eventually show, not only perform better in accuracy than the standard, more commonly used classical variants, but also require a smaller number of single-qubit random rotation gates (poly(log(N))) for implementation.
The flow of the paper is as follows. In Sec. II, we begin with an introduction to the JL lemma and how it makes the random projection method effective. This is followed by a brief introduction to the Haar measure and approximate Haar unitaries generated from local random quantum circuits. Then, we explicitly prove that local random quantum circuits which are exact unitary 2-designs can satisfy the JL lemma with the same high probability as Haar random matrices, thereby making them good random projectors. We then extend the results to approximate unitary 2-designs, discuss the bounds on the depths required to achieve a certain error threshold in the JL lemma, and derive a slightly different probability of satisfying the latter. We note that the quantum memory required to store a 2-design or approximate 2-design is O(poly(log N)), where N is the size of the data vector. It is worth noting that it has previously been shown in Ref. [23] that approximate unitary t-designs (for suitable t) can be used to satisfy the JL lemma, thus corroborating our assertion that they, too, are good candidates for random projection. The exponentially low limit obtained in Ref. [23] is better than the limit derived in this paper only for system sizes N ≥ O(10⁴). For N ∼ O(10³), the limits derived in this manuscript for unitary 2-designs are tighter.
For numerical quantification of the key assertions, we first use the MNIST and CIFAR-100 image data sets [24,25] and show that the quantum random projection preserves distances post-projection not far off from computationally expensive algorithms like PCA (along lines similar to Ref. [11]) and similarly to classical random projection. This task doesn't require one to know the singular values or the singular vectors explicitly. We compare the performance of quantum random projection with the commonly used classical random projection technique. Instead of benchmarking it against Haar random matrices generated classically, we make use of classical random projectors whose storage and multiplication are efficient. To this end, we use the Subsampled Randomised Hadamard Transform (SRHT) [15] for different sizes of data sets (1024 and 2048, corresponding to 10 and 11 qubits, respectively). As a second instance, we look at a task that requires us to calculate the singular values of large low-rank data matrices and the singular vectors associated with them. In this regard, we perform the computation of entropies of low-rank density matrices by randomly projecting them to a reduced subspace (along the lines of Refs. [26,27]) to get the dominant singular values post-projection. We also demonstrate that one can construct the simplest quantum random projector by performing quantum random projection and extracting the dominant singular values using variational quantum singular value decomposition (VQSVD) [6]. Here, random projection to a lower dimension allows us to optimize using a lower-dimensional variational ansatz at one end. The combined effect of the variational nature of the algorithm and the fact that unitary t-designs are short-depth establishes good testing grounds for the implementation of this demonstration in near-term devices [8]. These demonstrations highlight the ability of local random circuits to efficiently randomize a large Hilbert space (and hence require exponentially fewer parameters to create a random projector) and to serve as good random projectors for dimensionality reduction.
A. Random Projection
The random projection method is a computationally efficient technique for dimensionality reduction and is useful in many problems in data science, signal processing, machine learning, etc (See, for example,[ [10,11]]).The reason behind the effectiveness of the method stems from the Johnson-Lindenstrauss lemma [12].Lemma 1.For any 0 < < 1 and ∈ ℤ + .Let us also consider ∈ ℤ + s.t.
Then, for any set of vectors where || 2 refers to the 2 norm.
Proof.See Lemma Ref. [12] 1.Definition 1: random projections vs Good random projections Multiplication with Gaussian or Haar random matrices along with a scaling factor followed by projection to a reduced subspace is one function that obeys Eq.2 [28,29].Essentially, it follows from the fact that the expected value of Euclidean distance post-random projection is equal to the Euclidean distance in the original subspace.And the distances post-random projection are not distorted beyond an factor with high probability because the variance of the distances post-random projection is sufficiently low.
From now on, we will consider random projections that satisfy the JL lemma in Eq. (2) to be good random projections. In this regard, the JL lemma says that any set of n points in a high-dimensional Euclidean space (say, ℝ^N) can be embedded into a lower number of dimensions (say, k = O(ε⁻² log n)) by a random projection, preserving all the pairwise distances to within a multiplicative factor of 1 ± ε. This is also equivalent to preserving all the pairwise inner products (or angles). Formally, Eq. (3) holds, where ‖·‖₂ refers to the ℓ₂ norm, x⃗_i ∈ ℝ^N for all i, and Π denotes the random projection matrix of size k × N (or N × k, in which case the random matrix multiplies the data vectors from the right), which obeys Eq. (3) and will be called a good random projector from now onwards. (A small numerical check of this property is sketched below.)
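The sketch below checks the JL property empirically for a scaled Gaussian projector (one of the candidates named above); the dimensions are illustrative:

```python
import numpy as np

# Sketch: empirical check of the JL property for a Gaussian random
# projector Pi with entries N(0, 1/k). Pairwise distances in the projected
# space should match the originals to within a few percent.
rng = np.random.default_rng(0)
N, k, n = 1024, 128, 50                       # original dim, reduced dim, #points
X = rng.normal(size=(n, N))
Pi = rng.normal(size=(k, N)) / np.sqrt(k)     # scaled Gaussian projector

ratios = []
for i in range(n):
    for j in range(i + 1, n):
        d0 = np.linalg.norm(X[i] - X[j])
        d1 = np.linalg.norm(Pi @ (X[i] - X[j]))
        ratios.append(d1 / d0)
print(f"distance ratio: mean {np.mean(ratios):.3f}, std {np.std(ratios):.3f}")
```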
Several other candidates which satisfy the JL lemma have been considered for random projection in various applications. These random matrices include the Subsampled Randomised Hadamard Transform (SRHT) and the Input Sparsity Transform (IST) [10,15,30,31]. These random projectors are database-friendly because, unlike Gaussian or Haar random matrices, whose storage memory cost is proportional to the number of matrix entries and the precision, they can be retrieved from matrices that are sparse and have whole-number entries.
For benchmarking the quantum random projection in our analysis later, we will be using the SRHT [32] to compare the performances of random projection using random quantum circuits. We picked the SRHT because we want to compare random matrices that can be efficiently stored and multiplied: in a classical setting, that is the SRHT, and in a quantum setting, it is the 2-designs that act as quantum random projectors. We construct an SRHT random projector as in Algorithm 1; a minimal sketch of this construction is given below.
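The following is a compact sketch of the standard SRHT construction Π = √(N/k)·R·H·D, which is assumed to match Algorithm 1 of the text: D is a random ±1 diagonal, H the normalized Walsh-Hadamard matrix (N must be a power of two), and R a uniform row subsampler:

```python
import numpy as np
from scipy.linalg import hadamard

def srht(N, k, rng):
    """SRHT projector Pi = sqrt(N/k) * R H D (standard construction)."""
    D = rng.choice([-1.0, 1.0], size=N)            # random sign flips
    H = hadamard(N) / np.sqrt(N)                   # orthonormal Hadamard
    rows = rng.choice(N, size=k, replace=False)    # uniform row subsampling
    return np.sqrt(N / k) * H[rows] * D            # D broadcasts over columns

rng = np.random.default_rng(1)
Pi = srht(1024, 128, rng)
x = rng.normal(size=1024)
print(np.linalg.norm(Pi @ x) / np.linalg.norm(x))  # ~1 on average
```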
B. Approximate Unitary 𝑡-designs
In the next section, we will show that random matrices sampled uniformly from the Haar measure satisfy the JL lemma.
Though the exact replication of Haar random unitaries is not possible as a quantum circuit, because they require exponential resources [16], we will show that, to satisfy the JL lemma, exact or approximate 2-designs [19], which match the Haar measure only up to the second moment, suffice. We introduce the definitions related to approximate t-designs in this section and provide theorems on approximate t- (or 2-)designs satisfying the JL lemma.
Definition 1: Moment Operator
The t-th moment superoperator, defined with respect to a probability distribution ν(U) on the unitary group U(N), is given in Eq. (4), where dν(U) is the volume element of the probability distribution ν(U).
Definition 2: Exact Unitary 𝑡-design
Let us define Δ_t^{(ν)}(⋅) [31,33,34] as the deviation of the t-th moment superoperator of ν from that of the Haar measure μ_H, where μ_H refers to the uniform distribution over the Haar measure. Unitaries U sampled from a distribution ν(U) are said to form an exact unitary t-design iff Δ_t^{(ν)}(⋅) = 0. This essentially means that ν(U) mimics the Haar measure up to the t-th moment.
Definition 3: 𝛼 Approximate unitary 𝑡-designs
A distribution ν(U) on the unitary group U(N) is said to form an α-approximate unitary t-design iff ‖Δ_t^{(ν)}‖⋄ ≤ α, where ‖·‖⋄ refers to the diamond norm (see, for example, [22]). Though the approximate unitary design definition here involves the diamond norm, formulations using other norms exist [35], and the theorems in the following section generalize to those formulations as well. Local random quantum circuits with depths O(log(N)(log(N) + log(1/α))) become approximate 2-designs [22].
III. RANDOM QUANTUM CIRCUITS AS RANDOM PROJECTORS
In this section, we show that local random quantum circuits which are approximate unitary 2-designs (or exact unitary 2-designs) are suitable candidates for random projection (and will be called quantum projectors from now on). We show that quantum projectors satisfy the Johnson-Lindenstrauss lemma, so that their random projection is an ℓ₂ subspace embedding with a very high probability of having a very low error. And if one were to compute specific quantities like entropy, one should quantify whether such random matrices produce projected singular values that are closer to the true singular values with higher probability. In a later section, we will discuss how the projection can be done on real quantum computers and how the reduced-dimensional vectors and their singular values can be read out from near-term quantum computers. In the following theorems, let us denote the Haar measure distribution by μ_H and the distribution corresponding to an approximate t = 2 design by ν_{2,α}. Proofs of the theorems can be found in Appendix A.
Proof. See Appendix A
It is worth mentioning that upper bounds on the distortion, with exponentially decaying failure probabilities, have been obtained before [23], both for the Haar measure and for approximate t-designs. These limits are better than the limits obtained here only for system sizes N > O(10⁴). For the cases explored in this paper, our limits are tighter than the exponential limits. For the plots in the experiments section of the paper, we use the ansatz of [36], which is assumed to be an exact 2-design ansatz beyond a certain depth (Fig. 1). The main text contains the depths at which the ansatz matches the exact 2-design limit (∼ 150). There are many candidate local random circuit architectures which are approximate 2-designs [22]. Instead of studying the projection abilities of different local random circuit architectures, Appendix E contains some experiments where we look at a less expensive ansatz, and hence one in the approximate unitary 2-design regime, by choosing a lower depth (∼ 50) of the same circuit (Fig. 1), analogous to [34].
FIG. 1. The local random quantum circuit used in preparing a quantum random projector is the ansatz used in [36], which is known to converge to the exact 2-design limit of the variance of the local cost function beyond a certain depth. The circuit contains a layer of R_y(π/4) rotations (often used to make all directions symmetric in a variational training procedure; we do not necessarily need this component). This is followed by alternating random single-qubit rotations and ladders of CPHASE operations, repeated D times. For n = 10, 11 qubits (the dimensions studied in this paper), the circuit reaches the exact 2-design (variance) limit at D ≥ 150.
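A toy matrix-level sketch of this kind of ansatz is given below for a small register (n = 4). CZ gates are used in place of CPHASE with a fixed angle, an illustrative simplification, and random single-qubit rotations around random axes stand in for the randomized layer; the real experiments use n = 10, 11 and D ≥ 150:

```python
import numpy as np

rng = np.random.default_rng(2)
n, D = 4, 20
dim = 2 ** n
I2 = np.eye(2)

def rot(axis, theta):
    """Single-qubit rotation exp(-i*theta/2 * P) for P in {X, Y, Z}."""
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.diag([1.0 + 0j, -1.0])
    P = {"x": X, "y": Y, "z": Z}[axis]
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * P

def layer(gates):
    """Tensor product of n single-qubit gates."""
    U = np.array([[1.0 + 0j]])
    for g in gates:
        U = np.kron(U, g)
    return U

def cz_ladder():
    """Ladder of CZ gates on neighbouring qubit pairs (q, q+1)."""
    U = np.eye(dim, dtype=complex)
    for q in range(n - 1):
        cz = np.eye(dim, dtype=complex)
        for b in range(dim):
            if (b >> (n - 1 - q)) & 1 and (b >> (n - 2 - q)) & 1:
                cz[b, b] = -1.0
        U = cz @ U
    return U

U = layer([rot("y", np.pi / 4)] * n)               # symmetrizing Ry(pi/4) layer
for _ in range(D):
    gates = [rot(rng.choice(list("xyz")), rng.uniform(0, 2 * np.pi))
             for _ in range(n)]
    U = cz_ladder() @ layer(gates) @ U

print(np.allclose(U.conj().T @ U, np.eye(dim)))    # unitarity check -> True
```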
IV. EXPERIMENTS ON QUANTUM RANDOM PROJECTORS
In this section, we consider two different experiments to benchmark the performance of the quantum random projection discussed in the previous section against the SRHT projection, which will be labeled as classical random projection in the plots. This should be looked at as a comparison of random projectors that can be stored and applied efficiently, in terms of memory and time complexity, in classical vs. quantum settings. Since quantum random projectors approximate the Haar measure, their projection abilities are expected to be better than those of the SRHT projectors, because the latter are less random compared to the Haar measure. However, in certain applications it is known that they both converge to similar performance when the size of the data set tends to infinity [29]. We see in Appendix D that their performances start becoming closer when we increase the size of the data matrices and vectors from 1024 to 2048 (corresponding to 10 and 11 qubits, respectively).
We initially consider the task that doesn't require us to know the singular values and is concerned only with dimensionality reduction. In this regard, we reduce the dimensions of the MNIST [24] and CIFAR-100 [25] image datasets and benchmark the performance of quantum random projection against classical random projection. We also compare it with the computationally expensive principal components analysis (PCA), which is supposed to give the exact projection onto the dominant singular vectors of the datasets and cannot be outperformed beyond a certain rank.
In the second task, we calculate the Von Neumann entropy of low-rank density matrices (along the lines of [26]), which requires us to know the singular values after random projection, in addition to the dimensionality reduction. We compare the performance of quantum random projection (QRP) vs. classical random projection (CRP) for this task over different ranks (r) of the density matrices.
In this section, we pick the local random quantum circuit from Fig. 1, and we assume that we can make arbitrary projection operators with any number of basis vectors; i.e., the operator projecting onto the first k basis states (|e_1⟩, |e_2⟩, ..., |e_k⟩) in any basis is P = Σ_{i=1}^{k} |e_i⟩⟨e_i|, where we have no restriction on which basis we pick or what values k can take. In a later section, we discuss the simplest projection operator one can construct, by measuring one or more qubits and restricting to particular outputs (0 or 1) on those qubits, as shown in Fig. 2. It is worth noting that this scheme has a structure similar to quantum autoencoders [37], but the circuit here is data-agnostic.
A. Dimensionality reduction of Image data sets
In this subsection, we benchmark the performance of the QRP against the CRP in the task of dimension reduction of subsets of two different image datasets, MNIST and CIFAR-100. We also plot the performance of the computationally expensive PCA, which is supposed to capture all the singular vectors with nonzero singular values. When the reduced dimension is greater than the rank of the system, PCA can never be outperformed.
MNIST contains 28×28 grayscale images. The matrix representations of the images were boosted to 32×32 so that they can be reshaped into 1024×1 normalized vectors by adding zeros. (Note that this is not a common quantum encoding scheme; we use QRP on the normalized data vectors for a direct comparison with CRP.) We have to do this preprocessing step because the projectors that we consider (both CRP and QRP) act on 2^n-dimensional vectors and hence take only 2^n-dimensional vectors as input.
FIG. 2. The figure shows the schematic of performing the quantum random projection. The data vector has to be encoded into the circuit through one of the existing encoding schemes (see main text). This is followed by the local random quantum circuit and partial measurements (the number of qubits measured depends on how low the final reduced dimensions are) or an arbitrary projection operator. For partial measurements, the algorithm proceeds only if the measurement results in qubits in only 0 (or only 1). This is equivalent to reducing the data set's size by 1/2, 1/4, 1/8, and so on, depending on how many qubits are measured.
CIFAR-100 images are colored and were converted to 32×32 grayscale so that they can be fed as input to our projectors. But unlike MNIST, which contains handwritten digits from 0 to 9, the CIFAR-100 dataset contains images belonging to 100 different classes, including airplanes, automobiles, birds, cats, trucks, etc. As a result, CIFAR-100 is expected to have more features in its datasets and hence a greater rank compared to MNIST, if we consider subsets from each of these datasets.
To perform the comparison between CRP and QRP, we took 1000 images from each of these datasets. In each of these subsets, we reshaped the images into 1024×1 normalized vectors and performed random projection to lower dimensions (x-axis of Fig. 3). Then, we randomly sampled two data vectors and compared the percentage error in the ℓ₂ norm (Euclidean distance) between them in the original space and in the reduced-dimensional space obtained after random projection. This procedure is repeated 10,000 times, and the mean error percentages and their 95% confidence intervals for different reduced dimensions are reported in the plots of Fig. 3.
The random projections are performed by multiplying the vectors with random matrices (see Algorithm 1 and Algorithm 2), where Π is the SRHT projector, and U and P are sampled from the matrix representation of the local random quantum circuit and the projectors used. The PCA projection is obtained by first computing the singular value decomposition of the dataset and projecting onto the subspace of dominant singular vectors. (A sketch of the benchmarking loop follows.)
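The sketch below reproduces the pairwise-distance benchmarking loop just described. A scaled Gaussian matrix stands in for the actual SRHT or circuit-based projector (an assumed substitution; either of the earlier sketches could be plugged in), and random unit vectors stand in for the normalized image vectors:

```python
import numpy as np

# Sketch: mean percentage error in pairwise distances before and after
# projection, averaged over random pairs, as described in the text.
def mean_error_pct(X, Pi, rng, trials=10_000):
    errs = np.empty(trials)
    for t in range(trials):
        i, j = rng.choice(len(X), size=2, replace=False)
        d0 = np.linalg.norm(X[i] - X[j])
        d1 = np.linalg.norm(Pi @ (X[i] - X[j]))
        errs[t] = 100.0 * abs(d1 - d0) / d0
    return errs.mean(), errs.std()

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 1024))
X /= np.linalg.norm(X, axis=1, keepdims=True)      # normalized "image" vectors
Pi = rng.normal(size=(256, 1024)) / np.sqrt(256)   # stand-in projector
print(mean_error_pct(X, Pi, rng))
```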
FIG. 3. Projection errors on the MNIST and CIFAR-100 image datasets using different schemes. The plots show the mean percentage errors in the distance between 10,000 different random pairs of data vectors in the MNIST and CIFAR-100 data sets. The envelopes represent their 95% confidence intervals. We see that PCA outperforms the random projection methods beyond a certain rank. Amongst the random projection methods, though there is not much difference between the classical random projection (CRP) and the quantum random projection (QRP), we observe that the latter performs slightly better.
Fig. 3 shows that PCA outperforms the random projection methods beyond a certain rank. This is because, beyond the rank of the dataset considered, PCA projects exactly onto the subspace with nonzero singular values. Despite that, we see that the random projection methods, which are not computationally expensive (because they don't compute the subspace with nonzero singular values), perform to the same extent as, and even better than, PCA at lower reduced dimensions. This dominance in performance at lower reduced dimensions is visible in larger-dimensional datasets (see Appendix D). We also see that PCA needs more reduced-dimensional vectors to catch up with the random projection algorithms in the case of CIFAR-100, because it has a comparatively higher rank (loosely, because it has more features) than the MNIST dataset. These data vectors, dimensionally reduced via quantum random projection, could be used in quantum machine learning applications such as training an image recognition/classification model (see, for example, [38]).
Within the random projection methods, quantum random projection performs slightly better than classical random projection, mainly because Haar random matrices are more random and have tighter JL lemma bounds than the classical random projector. The performance of quantum random projectors away from the exact 2-design limit is analyzed in Appendix E by looking at shorter depths (∼ 50) of the circuit in Fig. 1, and hence a less expressive ansatz.
The discussion in this section assumed the existence of an exact amplitude-encoding scheme for the data vectors. This would require impractical depths of O(2^n), unless the data vectors are genuinely quantum, e.g., ground states of a family of local Hamiltonians. However, for general data vectors like image data vectors, we do not necessarily need exact encoding. Preserving the distinctness of image data vectors (x⃗, y⃗) to good enough accuracy enables us to use them for many image-processing applications, such as recognition and classification. In this regard, there has been substantial work on approximate amplitude encoding. These schemes encompass approximately encoding data vectors whose amplitudes are all positive [39], real [40], and even complex [41], using shallow parametrized quantum circuits.
With the plots in Fig. 3, we showed that, for exactly encoded data vectors (x⃗, y⃗) and quantum-random-projected vectors (x̃⃗, ỹ⃗), the deviation |‖x⃗ − y⃗‖₂ − ‖x̃⃗ − ỹ⃗‖₂| ≤ δ on average for pairs of images in the data set used. Here, δ is a very small fraction of ‖x⃗ − y⃗‖₂. A good approximate amplitude-encoding scheme is bound to preserve this distance with minimal error as well, since it preserves the distinctness of the samples: |‖x⃗ − y⃗‖₂ − ‖x̄⃗ − ȳ⃗‖₂| ≤ δ + Δ (calling x̄⃗, ȳ⃗ the approximately encoded and projected vectors) on average. Here, Δ is a small fraction of ‖x⃗ − y⃗‖₂.
From Eqs. (13) and (14), it is clear that, even with approximate amplitude encoding, quantum random projection preserves the distinctness of samples (up to a perturbation of δ + Δ) and is useful for image-processing applications. The exact value of Δ depends on the efficiency of the approximate encoding used.
The other alternative for circumventing the impractical depths of exact data encoding is to adopt different encoding schemes. One can start by reducing the resolution of the images (equivalent to reducing the number of pixels), which results in a reduced classical image data vector dimension (say, d < 2^n), and then use any other existing data-encoding scheme that uses qubit dimensions greater than d but with polynomial depths (for example, [42,43]).
If Φ(·) is the encoding function that takes the original data vector and encodes it as a data vector of dimension 2^n, then, to check how well the distinctness is preserved, experiments need to be run on the n qubits with a quantum random projector corresponding to n qubits. Mathematically, we need to check how low the values in Eq. (15) are (on average) for two data vectors x⃗, y⃗ from the original dataset, where Φ̃(x⃗) and Φ̃(y⃗) are the reduced, randomly projected encoded vectors.
In this work, we confined ourselves to experiments involving an exact encoding scheme, despite the impractical depths, because the preservation of distance for the exact encoding scheme implies the same for the approximate encoding schemes, as described earlier. Checking the preservation of distance for other encoding schemes would require knowing the exact form of the encoding in Eq. (15).
Just like its classical counterpart, the quantum random projection also allows reconstructing images back to their original size after the projection. For classical methods, a data vector x⃗ with reduced vector x̃⃗ = Πx⃗ is reconstructed as Π^T x̃⃗. For the quantum case, we need to put back the extra qubits (or the subspace onto which we projected) to restore the original size and then apply the inverse of the unitary circuit used for projection. For example, if we had projected onto the subspace where one of the qubits is in the |0⟩ state, we boost the size back to the original by introducing a new qubit in |0⟩ and applying the inverse unitary circuit (U†) to this enlarged system. For a general projector, |x̃⟩ → |x̃⟩ ⊗ |p⟩, where the tensor product with |p⟩ ensures that we get back the original size of the data (image), and the reconstruction then proceeds analogously (for the dataset |ψ⟩). This is similar to the reconstruction done in [37]. These reconstructions work on the premise that the product Π†Π is approximately the identity on the original data dimension. It is trivial to see that this holds for the PCA projector. It turns out that this also holds for the random projectors, because in a larger-dimensional space finding almost orthogonal vectors becomes more common; this has been studied in [44] and was used in the discussion of [11]. Fig. 4 shows how one would reconstruct an image from the MNIST dataset after dimensionally reducing a subset of the MNIST images. (A minimal classical sketch of this step is given below.)
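The sketch below illustrates the classical reconstruction step. A projector with orthonormal rows is used (mimicking k rows of a unitary circuit, so that Π†Π is an exact rank-k projector); the JL scaling factor is omitted for this illustration, and the dimensions are assumptions:

```python
import numpy as np

# Sketch: lift the reduced vector back with the transpose,
# x_rec = Pi^T (Pi x). With orthonormal rows, Pi^T Pi is a rank-k
# projector, so the residual shrinks as k approaches N.
rng = np.random.default_rng(4)
N = 1024
x = rng.normal(size=N); x /= np.linalg.norm(x)

for k in (512, 896):
    Pi = np.linalg.qr(rng.normal(size=(N, k)))[0].T   # k x N, orthonormal rows
    x_rec = Pi.T @ (Pi @ x)
    print(f"k = {k}: reconstruction error = {np.linalg.norm(x_rec - x):.3f}")
```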
With the reconstructed quantum data vectors, there are processing applications which have a computational advantage over classical processing. For example, the complexity of the quantum edge detection algorithm [45] is polynomial and doesn't require exponential resources if we have an image encoded either exactly or approximately. The measurement outputs of the edge detection algorithm contain information about the edges. To get the outputs of this experiment, one can also adopt the classical shadows approach [46,47] to obtain the probabilities of all the bit strings with fewer measurements than full state tomography.
B. Entropy estimation of low-rank density matrices
In this subsection, we compare the performance of the quantum random projectors against the classical random projector, SRHT, on a task that requires one to obtain the approximate singular values of the dominant singular vectors of a data matrix after the dimensionality reduction. Unlike the previous task, this one is concerned with reducing the dimensions of a large data matrix, rather than of individual data vectors, by random projection. After the dimensionality reduction, we check how well the reduced system captures the properties of the dataset by computing the percentage error in a particular property of the data matrix which requires knowledge of all its singular values. Specifically, we consider randomly generated positive semi-definite density matrices with random singular vectors whose singular values follow a certain profile. We then compute their entropy after quantum random projection and check the accuracy (along the lines of [26]). The exact singular-value profile of these density matrices depends on the nature of the system. In this experiment, we consider singular values which are linearly decaying and exponentially decaying up to the rank of the system and zero afterward. These profiles can be motivated by the existence of physical systems with such profiles. A thermal ensemble of a simple harmonic oscillator mode of frequency ν with N internal degrees of freedom has an exponentially decaying profile; here, the singular values will be proportional to 1, e^{−hν/k_BT}, e^{−2hν/k_BT}, e^{−3hν/k_BT}, and so on. And it is known that the maximal second-order Rényi entropy ensemble of a system with a simple harmonic oscillator mode of frequency ν and N internal degrees of freedom follows a linearly decaying singular-value profile for its density matrix [48]. The main text contains the plots related to the linearly decaying singular profile, and Appendix C contains the plots related to the exponential-decay profile.
Here is the procedure to perform random projection given a positive semidefinite matrix σ of dimension N × N:
• Project the original density matrix of size N × N to a lower dimension k × k using Π and Π†.
• Perform SVD (classical) or QSVD (quantum) on the lower-dimensional matrix to get the singular vectors with singular values p̃₁, p̃₂, p̃₃, ..., p̃_k, which are approximations to p₁, p₂, ..., p_k.
• Then we obtain an approximation to the entropy as S̃ = Σᵢ p̃ᵢ ln(1/p̃ᵢ).
The accuracy of the approximated entropy is bounded by the following theorem.
Theorem IV.1. For a random matrix Π satisfying the Johnson–Lindenstrauss (JL) lemma with a distortion √ε, where ε ≤ 1/6 and δ ≤ 1/2, the difference between the von Neumann entropy of a density matrix σ computed using the random projection with Π (denoted S̃(σ)) and the true entropy (S(σ)) can be bounded with probability at least (1 − δ); the explicit bound is stated and proved in Appendix A.
Proof. See Appendix A.
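A minimal NumPy sketch of this procedure with a classical random projector (the sizes, the linearly decaying profile, and the scaling conventions are illustrative assumptions consistent with the experiments below, not the authors' code):

import numpy as np

rng = np.random.default_rng(1)
N, k, r = 1024, 512, 10

# Random rank-r density matrix with a linearly decaying singular-value profile
vals = np.linspace(1.0, 0.1, r)
vals /= vals.sum()                             # trace-normalize
Q, _ = np.linalg.qr(rng.standard_normal((N, r)))
sigma = (Q * vals) @ Q.T                       # sigma = Q diag(vals) Q^T

def entropy(p, eps=1e-12):
    p = p[p > eps]
    return -(p * np.log(p)).sum()

# Project sigma to k x k and re-estimate its spectrum
G, _ = np.linalg.qr(rng.standard_normal((N, N)))
Pi = np.sqrt(N / k) * G[:k, :]
p_tilde = np.linalg.svd(Pi @ sigma @ Pi.T, compute_uv=False)[:r]

S_true, S_est = entropy(vals), entropy(p_tilde)
print(f"true S = {S_true:.4f}, projected S = {S_est:.4f}, "
      f"error % = {100 * abs(S_est - S_true) / S_true:.2f}")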
Fig. 5 shows the error percentage in the computed entropy after random projection for density matrices of size 1024 × 1024 with linearly decaying singular values until a certain rank (r = 10, 50, 100, 400 in the plot) and zero afterward. The x-axis represents the different reduced dimensions (k). The accuracies are better for low ranks, as expected, and get worse for larger ranks. We observe that the quantum random projector and the classical random projector perform to similar extents (if not a better quantum performance than classical performance for density matrices with very low rank) in this task. This matches the trends reported in [26], where similar performance was reported for various other classical random projection matrices like Gaussian, SRHT, and IST. We also show, in Fig. 6, the accuracies with which the quantum and classical random projectors capture the singular values of the system for the rank r = 10 when the system's size has been reduced by half. Here, we see that the quantum random projectors perform better than their classical counterpart, mainly because the Haar random matrices that the random circuits approximate are more random than any classical random projector that could be stored with similar or comparable complexity. The appendix contains a discussion of how the accuracy improves when we increase the size of the original datasets from 1024 × 1024 to 2048 × 2048.

FIG. 6. The plot shows the accuracy with which the quantum random projector and the classical random projector pick the singular values of the density matrix for r = 10 (N = 1024, k = 512), i.e., when reducing the system size by half. The envelope represents 95% confidence intervals over 10,000 randomly generated density matrices.
We discuss the same plots for density matrices with an exponentially decaying singular-value profile until a certain rank in Appendix C. We observe there that increasing the rank doesn't change the singular-value profile as much, and hence the accuracy with which the random projection algorithms work remains largely constant. Appendix E contains the error plots for the accuracy in individual singular values for the case r = 10 obtained using less expressive random circuits (depth ∼ 50).
V. HOW TO PROJECT IN A REAL QUANTUM COMPUTER?
For the quantum random projection to work, in addition to sampling a unitary from the exact (or approximate) 2-designs, we also need a circuit component for the projection operators. In one of the previous sections, we considered arbitrary projection operators, which might not be efficiently implementable in a quantum computer with polynomial resources. However, we can look at the simplest projection operations that one can use for the quantum random projection. In Fig. 2, we looked at the simplest projection operation: measuring some of the qubits and proceeding only if those qubits are in a certain state (|0⟩ or |1⟩). This is equivalent to projecting onto the subspace where those qubits take that specific value. For example, in a circuit of 10 qubits, measuring one of the qubits and proceeding only when that qubit is in |0⟩ means a reduction in the data vector (or ket) dimension from 1024 to 512. However, the projection through measurement discussed above differs from the classical projection in that a quantum measurement (wavefunction collapse) automatically takes care of the normalization factor, so the extra √(N/k) prefactor is not needed. Since the Hilbert space we consider here is large and the ket entries are randomized, the normalization that happens because of the wavefunction collapse plays the same role as the prefactor we would get in a classical random projection.
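The following statevector sketch illustrates this: post-selecting the top qubit on |0⟩ simply keeps the first half of the amplitudes and renormalizes (the Haar-random unitary is a stand-in for the random circuit, and big-endian qubit ordering is assumed):

import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(2)
n = 10                                         # number of qubits
N = 2 ** n

psi = rng.standard_normal(N) + 1j * rng.standard_normal(N)
psi /= np.linalg.norm(psi)                     # encoded data vector |x>

U = unitary_group.rvs(N, random_state=42)      # stand-in for the random circuit
phi = U @ psi

# Project onto the subspace where the top (most significant) qubit is |0>:
# these are exactly the first N/2 amplitudes of the statevector.
phi_red = phi[: N // 2]
phi_red = phi_red / np.linalg.norm(phi_red)    # wavefunction collapse renormalizes

print("reduced dimension:", phi_red.shape[0])  # 512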
To demonstrate the quantum random projection with a simple projection operation, we consider projection operators of the form ½(𝟙 + Z_i), which project onto the space where the i-th qubit is in the |0⟩ state. To demonstrate this, we perform such a quantum random projection for a large data matrix with a linearly decaying singular-values profile of size 1024 × 1024 and rank r = 5, reducing the data vectors to sizes 512, 256 and 128 by projecting out 1, 2 and 3 qubits respectively. Then, we retrieve the dominant singular vectors by performing a variational quantum singular value decomposition (VQSVD) [6]. But since the data matrix has been dimensionally reduced, the ansatz we use for finding the right singular vectors is also of reduced size (Fig. 7). The details regarding the implementation of VQSVD and the ansatz type used can be found in Appendix F.

FIG. 7. The figure shows the schematic of the variational quantum SVD after quantum random projection to lower dimensions. The data matrix needs to be loaded using a set of unitary gates with techniques like importance sampling (see the related discussion in the appendix of [6]). Similar to the setup in Fig. 2, we perform projection by measuring a few qubits at the top. This is followed by a training procedure to obtain the dominant singular vectors and their singular values. The singular vectors on the right end belong to the lower-dimensional space and hence require a lower-dimensional ansatz.
Fig. 8 shows the accuracy with which we were able to reconstruct the singular vectors after quantum random projection, for individual singular vectors. This demonstrates how one can perform quantum random projection on near-term devices, as the VQSVD algorithm used to retrieve the dominant vectors is a near-term algorithm. The accuracy with which the singular values are retrieved depends on the expressivity of the ansatz and on whether or not it falls into a barren plateau during the training procedure. We haven't discussed the most accurate retrieval of the singular vectors, as that is beyond the scope of this paper. There are many strategies to avoid falling into a barren plateau and to improve the convergence rate [49][50][51]. We used the identity-block strategy [51] to avoid barren plateaus (more details in Appendix F).
VI. CONCLUSION
In this work, we explored a practically useful application of local random quantum circuits in the task of dimensional reduction of large low-rank data sets. The essence of the applicability of local random circuits in this task is their ability to anticoncentrate rapidly at linear or sub-linear depths [52,53]. This makes them good random projectors to lower dimensions, meaning they preserve the distinctness of different dominant data vectors in a large dataset after dimensional reduction.
The theorems discussed in the paper show that, just like the Haar random matrices, which are good random projectors, their approximate quantum implementations, the exact and approximate t-designs, are also good random projectors. The rapid anticoncentration at linear depths means that the number of random parameters (the random rotation parameters) required to create and reproduce a random projector is logarithmic in the size of the data sets. Such efficiency in storage complexity is not possible for classically generated Haar random matrices or for any classical random projector. We then benchmarked its performance against the commonly used classical random projector, the Subsampled Randomised Hadamard Transform (SRHT). The quantum random projectors performed slightly better than this classical candidate because they approximate Haar random matrices, which are more random than the classical candidate. We demonstrated these comparisons on various tasks such as image compression, reconstruction, and retrieving the singular values of the dominant singular vectors post-dimension reduction.
Though the initial discussion assumed arbitrary projection operators onto arbitrary subspaces, we showed that simple projection operators and projection subspaces exist. We demonstrated this simplest quantum random projection and retrieved the dominant singular vectors post-quantum random projection via VQSVD [6]. This shows the applicability of such quantum random projections and their retrievals in near-term devices.
Dimensionality reduction facilitated by random projections, as discussed in this work, can also precede kernel-based variants of PCA, wherein an eigenvalue decomposition of the Gram matrix associated with the higher-dimensional embedding (often called the kernel) is sought [54], especially if the said Gram matrix is low-rank. Beyond the precincts of classical data, such a technique can act as an effective precursor to improve the efficiency of simulation even on quantum data, as has been studied in recent work [55]. The crux of the idea is rooted in PCA but applied to quantum data, wherein repeated Schmidt decompositions of the states and vectorized forms of arbitrary operators are performed, followed by the subsequent removal of singular vectors associated with non-dominant singular values, akin to PCA. The techniques explored in this work involving good random projections can be used in conjunction, prior to the application of such a protocol, to contract the effective space of the states/operators involved. Owing to the demonstrated near-term applicability, a similar reduction can also be afforded as a preprocessing step in a host of quantum algorithms manipulating quantum data [56] on noisy hardware. Such protocols are of active interest to the scientific community due to their profound physicochemical applications, ranging from exotic condensed-matter systems like Rydberg excitonic arrays [57], modeling higher-dimensional spin-graphical architectures in quantum gravity [58], the learning theory of neural networks [59], constructing unknown Hamiltonians through time-series analysis [60,61], tomographic estimation of quantum states [62,63], the electronic structure of molecules and periodic materials [64], and quantum preparation of low-energy states of desired symmetry [64,65], to even order-disorder transitions in the conventional Ising spin glass using quantum annealers [66] and quantum variants of the Sherrington-Kirkpatrick model [67], to name a few.
We did an extensive comparison using a deep (depth ∼ 150) exact 2-design ansatz and deferred the discussion of circuits away from the exact 2-design limit to Appendix E. This is because there exist various random circuit architectures which anticoncentrate just like the exact 2-design ansatz and hence could be good candidates for random projection. This could be a good starting point for future study. It is also worth studying and constructing quantum random projectors suited for specific applications and datasets (for example, the datasets in health care [68,69]). It has to be noted that the results derived in the main text assumed noiseless quantum gates and measurements. Similar theorems need to be understood for real quantum computers, where different noise sources are unavoidable. This leads to a possible future study to understand the extent to which the theorems in the main text are valid on real quantum computers, by performing statistical analysis on the bitstrings from the output of real quantum computers (see Refs. [70,71]).
VII. CODE AVAILABILITY
The classical and the quantum random projection matrices (and the rotation parameters used to generate the circuit) used for the comparisons, the data matrix used to generate Fig. 8, along with the code for generating the plots in this paper, will be made available upon reasonable request. The simulation for the retrieval of dominant singular vectors through VQSVD was done in the Paddle Quantum framework [72].
projection is made such that one of the qubits is projected to, say, the |0⟩ state. Then, reconstruction is done by appending the U† circuit to the reduced quantum state plus a measured qubit in |0⟩ (this is equivalent to adding zeros on the basis elements where the measured qubit is in the |1⟩ state, just like how we boosted dimensions from k = 400 or 700 to 1024).
Theorem III.1. Let U ∈ U(N) be sampled uniformly from the Haar measure μ(U), and let x⃗₁, x⃗₂ ∈ ℝᴺ. Then the matrix Π_{k×N}, obtained by considering any k rows of U followed by multiplication with √(N/k), satisfies a Johnson–Lindenstrauss-type distance-preservation guarantee of the form
(1 − ε) ‖x⃗₁ − x⃗₂‖² ≤ ‖Π(x⃗₁ − x⃗₂)‖² ≤ (1 + ε) ‖x⃗₁ − x⃗₂‖²
with high probability.
FIG. 4. The schematic of the steps involved in the dimensionality reduction and image reconstruction of image datasets using (a) PCA, (b) CRP, and (c) QRP. The figure shows the reconstructed images for various reduced dimensions. Though projectors with dimensions 700 and 400 are not straightforward to construct, a reduced dimension of 512 represents projection by measuring one of the qubits (so the size drops from 1024 to 512) and processing further only if it is 0 or 1. The figure illustrates the reconstruction of one of the data vectors from the MNIST data subset we experiment with; a quantitative description of how the reconstruction performs compared to classical methods is given in Appendix B, along with a description of the construction of the projection operators.
FIG. 5. The plots in this figure show the accuracies of quantum random projection and classical random projection in the entropy computation of randomly generated density matrices of size N = 1024 and ranks r = 10, 50, 100, 400 with a linearly decaying singular-value profile. The envelopes represent 90% confidence intervals over 100 randomly generated density matrices. The accuracies improve with decreasing rank, as expected.
FIG. 8. The figure shows the errors in the singular values obtained by reconstructing the dominant singular vectors, after quantum random projection of a randomly generated data matrix of rank r = 5, using variational quantum SVD for various reduced dimensions k = 512, 256, 128.

| 10,120.6 | 2023-08-26T00:00:00.000 | ["Computer Science", "Mathematics", "Physics"] |
Every commutative JB*-triple satisfies the complex Mazur–Ulam property

We prove that every commutative JB*-triple, represented as a space of continuous functions C₀^𝕋(L), satisfies the complex Mazur–Ulam property, that is, every surjective isometry from the unit sphere of C₀^𝕋(L) onto the unit sphere of any complex Banach space admits an extension to a surjective real linear isometry between the spaces.
Introduction
New recent advances continue improving our understanding of Tingley's problem by enlarging the list of positive solutions and the range of spaces satisfying the Mazur–Ulam property. As introduced in [8], a Banach space X satisfies the Mazur–Ulam property if every surjective isometry from its unit sphere onto the unit sphere of any other Banach space admits an extension to a surjective real linear isometry between the spaces. It is worth noting that this property was previously considered by Ding in [10] without an explicit name (see also [23, page 730]). A remarkable discovery has been obtained by Banakh in [1], who proved that every 2-dimensional Banach space X satisfies the Mazur–Ulam property. This is, in fact, the culminating point of deep technical advances (see [2,6]).
The abundance of unitary elements in unital C*-algebras, real von Neumann algebras and JBW*-algebras is a key property to prove that these spaces, together with all JBW*-triples, satisfy the Mazur–Ulam property (cf. [3,13,16]). A prototypical example of a non-unital C*-algebra is given by the C*-algebra K(H) of all compact operators on an infinite-dimensional complex Hilbert space H, or more generally, by a compact C*-algebra (i.e., a c₀-sum of K(H)-spaces). Compact C*-algebras and weakly compact JBW*-triples are in the list of Banach spaces satisfying the Mazur–Ulam property (see [18]).
Tingley's problem has also been studied in the case of certain function algebras and spaces. The first positive solution to Tingley's problem for a Banach space consisting of analytic functions, apart from Hilbert spaces, was obtained by Hatori et al. in [12], where a proof is given for any surjective isometry between the unit spheres of two uniform algebras (i.e., closed subalgebras of C(K) containing the constants and separating the points of K). Hatori has gone further by showing that every uniform algebra satisfies the complex Mazur–Ulam property, i.e., every surjective isometry from its unit sphere onto the unit sphere of any complex Banach space admits an extension to a real linear mapping between the spaces [11, Theorem 4.5].
The non-unital analogue of uniform algebras is materialized in the notion of a uniformly closed function algebra on a locally compact Hausdorff space L. We recently showed that each surjective isometry between the unit spheres of two uniformly closed function algebras on locally compact Hausdorff spaces admits an extension to a surjective real linear isometry between these algebras (see [9]). In the just quoted reference we also proved that Tingley's problem admits a positive solution for any surjective isometry between the unit spheres of two commutative JB*-triples, which are not, in general, subalgebras of the algebra C₀(L) of all complex-valued continuous functions on L vanishing at infinity (see Sect. 3 for details). In this note we shall employ a recent tool developed by Hatori in [11] to infer that a stronger conclusion holds, namely, every commutative JB*-triple satisfies the complex Mazur–Ulam property. Among the consequences we derive that every commutative C*-algebra enjoys the complex Mazur–Ulam property.
Preliminaries
We shall briefly recall some basic terminology needed to understand the sufficient condition in [11, Proposition 4.4] guaranteeing that a Banach space satisfies the complex Mazur–Ulam property. Let X be a real or complex Banach space, and let X*, S(X) and B_X denote the dual space, the unit sphere and the closed unit ball of X, respectively. It is known, thanks to the Hahn–Banach theorem or Eidelheit's separation theorem, that maximal convex subsets of S(X) and maximal proper norm closed faces of B_X define the same subsets (cf. [21, Lemma 3.3] or [22, Lemma 3.2]). The set of all maximal convex subsets of S(X), equivalently, of all maximal proper norm closed faces of B_X, will be denoted by 𝔉_X. For each F ∈ 𝔉_X there exists an extreme point φ of the closed unit ball B_{X*} such that F = φ⁻¹{1} ∩ S(X). The set of all extreme points φ of B_{X*} for which φ⁻¹{1} ∩ S(X) is a maximal convex subset of S(X) will be denoted by Q_X. On the latter set we consider the equivalence relation defined by φ ∼ ψ if and only if ψ = λφ for some λ ∈ 𝕋_𝕂, the set of unimodular scalars in 𝕂, where 𝕂 = ℝ if X is a real Banach space and 𝕂 = ℂ if X is a complex Banach space. A set of representatives for the quotient set Q_X/∼ (or for 𝔉_X) will consist of a subset Λ_X of Q_X formed by precisely one, and only one, element in each equivalence class of Q_X/∼. According to this notation, for each F ∈ 𝔉_X there exist a unique φ ∈ Λ_X and λ ∈ 𝕋_𝕂 such that F = (λφ)⁻¹{1} ∩ S(X); that is, the elements in 𝔉_X are bijectively labelled by the set Λ_X × 𝕋_𝕂, and we can define a bijection I_X : 𝔉_X → Λ_X × 𝕋_𝕂 labelling the set 𝔉_X.
For example, by the classical description of the extreme points of the closed unit ball of the dual of a C(K) space as the functionals of the form λδ_t (a unimodular multiple of the evaluation map at a point t ∈ K), the set {δ_t : t ∈ K} is a natural set of representatives. It is shown in [11, Example 2.4] that for a uniform algebra A over a compact Hausdorff space K, the set {δ_t : t ∈ Ch(A)} is a set of representatives for A, where Ch(A) denotes the Choquet boundary of A.
Let A, B be non-empty closed subsets of a metric space (E, d). The usual Hausdorff distance between A and B is defined by
d_H(A, B) = max{ sup_{a ∈ A} d(a, B), sup_{b ∈ B} d(b, A) }.
We shall employ this Hausdorff distance to measure distances between elements in 𝔉_X. According to [11], a Banach space X satisfies the condition of the Hausdorff distance if the elements of 𝔉_X satisfy certain metric rules, for φ, φ′ ∈ Q_X and λ, λ′ ∈ 𝕋_𝕂, relating the Hausdorff distance between the corresponding faces to |λ − λ′| (see [11] for the precise inequalities). Let Λ_X ⊂ Q_X be a set of representatives for 𝔉_X.
In light of [11, Lemma 3.1], to conclude that a complex Banach space X together with a set of representatives Λ_X satisfies the condition of the Hausdorff distance, it suffices to verify the corresponding inequalities on the elements of Λ_X. Let us go back to the set Q_X determining the set 𝔉_X of all maximal proper norm closed faces of B_X. For φ ∈ Q_X and λ ∈ 𝕋̄_𝕂 = B_𝕂 (𝕂 = ℝ or ℂ), a set M_{φ,λ} ⊆ S(X) is defined in [11] in terms of distances to the faces associated with φ, with the convention λ/|λ| = 1 if λ = 0. It is known that, for each φ in a set of representatives Λ_X, the inclusion F_{φ,λ} ⊆ M_{φ,λ} holds for all λ ∈ 𝕋̄_𝕂 (cf. [11, Lemma 4.3]). O. Hatori has recently established that a complex Banach space X, together with a set of representatives Λ_X for 𝔉_X, satisfying the condition of the Hausdorff distance and the equality F_{φ,λ} = M_{φ,λ} for each φ in Λ_X and λ ∈ 𝕋̄_𝕂, satisfies the complex Mazur–Ulam property (cf. [11, Proposition 4.4]).
The complex Mazur–Ulam property for commutative JB*-triples
We shall avoid the axiomatic definition of commutative JB*-triples and simply recall their representation as function spaces. By the Gelfand theory for JB*-triples (see [14, Corollary 1.11]), each abelian JB*-triple can be identified with the norm closed subspace of C₀(L) given by
C₀^𝕋(L) = { a ∈ C₀(L) : a(λt) = λ a(t) for all λ ∈ 𝕋, t ∈ L },
where L is a principal 𝕋-bundle, that is, a subset of a Hausdorff locally convex complex space such that 0 ∉ L, L ∪ {0} is compact, and 𝕋 L = L (see also [7, §4.2.1] or [5,9]).
We can now state the main result of the paper.

Theorem 3.1. Every commutative JB*-triple C₀^𝕋(L) satisfies the complex Mazur–Ulam property. In the course of the proof we verify, in particular, that F_{φ,λ} ∩ F_{φ′,λ′} ≠ ∅ for any φ ≠ φ′ in Λ_X and λ, λ′ in 𝕋.
The proof will be obtained after a series of technical results via [11,Proposition 4.4].
Although, for each locally compact Hausdorff space Ω, the Banach space C₀(Ω) is isometrically isomorphic to a C₀^𝕋(L) space (cf. [17, Proposition 10]), there exist principal 𝕋-bundles L for which the space C₀^𝕋(L) is not isometrically isomorphic to any C₀(Ω) space (cf. [14, Corollary 1.13 and subsequent comments]). Therefore, there exist abelian JB*-triples which are not isometrically isomorphic to commutative C*-algebras. The next corollary is a weaker consequence of our previous theorem.
Corollary 3.2
Every abelian C*-algebra (that is, every C₀(L) space) satisfies the complex Mazur–Ulam property.
Compared with previous results, we observe that, as a consequence of the result proved by Hatori for uniform algebras in [11, Theorem 4.5], every unital abelian C*-algebra satisfies the complex Mazur–Ulam property. Actually, all unital C*-algebras enjoy the Mazur–Ulam property [16]. In the case of real-valued continuous functions, Liu proved that for each compact Hausdorff space K, C(K, ℝ) satisfies the Mazur–Ulam property (see [15, Corollary 6]).
Let L be a locally compact Hausdorff space. A closed subspace E of C₀(L, 𝕂) separates the points of L if for any t₁ ≠ t₂ in L there exists a function a ∈ E such that a(t₁) ≠ a(t₂). Following [11], we shall say that E satisfies condition (r) if for any t in the Choquet boundary of E, each neighborhood V of t, and ε > 0, there exists u ∈ E such that 0 ≤ u ≤ 1 = u(t) on L and 0 ≤ u ≤ ε on L∖V. The proof of Corollary 5.4 in the preprint version of [11] (see arXiv:2017.01515) affirms that each closed subspace E of C₀(L, ℝ) separating the points of L and satisfying a stronger assumption than condition (r) has the Mazur–Ulam property. After some private communications with O. Hatori we learned that property (r) is actually enough to conclude that any such closed subspace E satisfies the Mazur–Ulam property. Indeed, the desired conclusion can be derived from [4, Theorem 2.4] by just observing that condition (r) implies that the isometric identification of E in C₀(Ch(E)) is C-rich, and hence a lush space. Corollary 3.9 in [20] then implies that E has the Mazur–Ulam property.
We focus now on the main goal of this section. Henceforth, let L be a principal 𝕋-bundle and L₀ ⊂ L a maximal non-overlapping set, that is, L₀ is maximal among the sets satisfying that for each t ∈ L₀ we have L₀ ∩ 𝕋t = {t} (its existence is guaranteed by Zorn's lemma).
Assume that a Banach space Y satisfies the following property: for every extreme point φ ∈ ∂_e(B_{Y*}), the set {φ} is a weak*-semi-exposed face of B_{Y*}. It is clear that each extreme point φ ∈ ∂_e(B_{Y*}) is then determined by the set {φ}′ = φ⁻¹(1) ∩ S(Y). Hence, the equivalence relation ∼ defined in Sect. 2 (cf. [11, Definition 2.1]) can be characterized in the following terms: for φ, ψ ∈ ∂_e(B_{Y*}), we have φ ∼ ψ ⇔ ψ = λφ for some λ ∈ 𝕋. Since Y = C₀^𝕋(L) satisfies the mentioned property, the set {δ_t : t ∈ L₀} is a set of representatives for the relation ∼. We know that each maximal proper face F of the closed unit ball of C₀^𝕋(L) is of the form F = (λδ_{t₀})⁻¹(1) ∩ S(C₀^𝕋(L)) for some (t₀, λ) ∈ L₀ × 𝕋 (cf. [9, Lemma 3.5]).
We can now begin with the technical details for our arguments.
Proof. Since L₀ is non-overlapping, we know that 𝕋t₁ and 𝕋t₂ are disjoint compact subsets of L.
Proof. Lemma 3.3 assures the existence of disjoint open 𝕋-symmetric neighbourhoods W₁ and W₂, with compact closure, of 𝕋t₁ and 𝕋t₂, respectively. By [9, Remark 3.4], there exist functions a₁, a₂ ∈ S(C₀^𝕋(L)) such that a_j(t_j) = 1 and a_j|_{L∖W_j} ≡ 0 for j = 1, 2.
We shall next show that C₀^𝕋(L) satisfies the second hypothesis in [11, Proposition 4.4].
Take now any s ∈ L such that a(s) ≠ 0. Then the desired estimate holds, and clearly the equality |a(s) − a_j(s)| = | |a(s)| − h_j(|a(s)|) | also holds when a(s) = 0. We therefore have d(a, a_j) ≤ 1 + (−1)^j |λ| (j = 1, 2), which implies that a ∈ M_{t₀,λ}. ◻

Proof of Theorem 3.1. Remark 3.5 and Proposition 3.6 guarantee that C₀^𝕋(L) satisfies the hypotheses in [11, Proposition 4.4] for the set of representatives given by L₀, and the just quoted proposition gives the desired conclusion. ◻

| 3,105.2 | 2022-08-07T00:00:00.000 | ["Mathematics"] |
Telelocomotion—Remotely Operated Legged Robots
Featured Application: Teleoperated control of legged locomotion for robotic proxies. Abstract: Teleoperated systems enable human control of robotic proxies and are particularly amenable to inaccessible environments unsuitable for autonomy. Examples include emergency response, underwater manipulation, and robot-assisted minimally invasive surgery. However, teleoperation architectures have been predominantly employed in manipulation tasks, and are thus only useful when the robot is within reach of the task. This work introduces the idea of extending teleoperation to enable online human remote control of legged robots, or telelocomotion, to traverse challenging terrain. Traversing unpredictable terrain remains a challenge for autonomous legged locomotion, as demonstrated by robots commonly falling in high-profile robotics contests. Telelocomotion can reduce the risk of mission failure by leveraging the high-level understanding of human operators to command in real-time the gaits of legged robots. In this work, a haptic telelocomotion interface was developed. Two within-user studies validate the proof-of-concept interface: (i) the first compared basic interfaces with the haptic interface for control of a simulated hexapedal robot at various levels of traversal complexity; (ii) the second presents a physical implementation and investigated the efficacy of the proposed haptic virtual fixtures. Results are promising for the use of haptic feedback in telelocomotion for complex traversal tasks.
Introduction
Telerobots, or remotely controlled robotic proxies, combine the robustness, scalability, and precision of machines with human-in-the-loop control. This teleoperation architecture extends human intervention to spaces that are too hazardous or otherwise unreachable by humans alone. In the particular case of emergency response, autonomy is still insufficient, and human first-responders risk their lives to navigate and operate in extreme conditions, oftentimes under stressful task constraints. This work is motivated to alleviate this human risk by working towards human-controlled, semi-autonomous, legged robot proxies that can both navigate and dexterously interact in difficult and potentially sensitive environments. For this, two user studies were conducted: (I) a simulated servo-driven hexapedal telerobotic platform, as shown in Figure 1a, used to compare the proposed haptic interface with commonly used alternatives for controlling locomotion at varying levels of traversal task difficulty; (II) a physical implementation of the hexapedal telerobotic platform, as shown in Figure 1b, used to evaluate the haptic virtual fixtures developed.
Figure 1. Trossen Robotics PhantomX hexapod: (a) leg enumeration, rendered and simulated in ROS Gazebo; (b) physical implementation of the hexapod robot. For alternating tripod gaits, odd-enumerated legs move separately from even-enumerated legs.
Teleoperation
Autonomous robots have proven to be successful in surveillance, assembly, and basic navigation tasks, yet most real-world tasks are beyond the capabilities of state-of-the-art autonomy. Where autonomy is insufficient, human-in-the-loop systems have shown success [1][2][3]. Teleoperated systems can employ semi-autonomous robots, reducing the operator's cognitive load. In a reliable system, the robot is tasked with autonomous subtasks, while operator intervention is left for high-level decision making, such as obstacle avoidance or managing a team of robots [4]. In addition, properly incorporated haptic feedback can improve teleoperator performance [5] in the fields of robot-assisted minimally invasive surgery [6], control in cluttered environments [7,8], obstacle avoidance in aerial navigation [9], and programming welding robots [10], to name a few.
The benefits of teleoperation are widely associated with manipulation tasks. Examples include robotic maintenance of underwater structures [11,12], robot-assisted minimally invasive surgery [13,14], and assembly/welding tasks [15,16]. Most of these operations rely on robot proxies localized to the task environment and require little if any traversal of terrain.
Haptic Feedback in Teleoperation
A key component of a successful teleoperated system is a seamless user interface, with the ultimate goal of achieving a high degree of telepresence [3]. Teleoperator feedback modes can influence the overall effectiveness. Typical traditional feedback modes include visual (2D and 3D) and auditory [17]. The haptic sensorimotor pathway has been leveraged to provide the operator with additional sensory information in the forms of both tactile and kinesthetic feedback [18]. Costes et al. presented a method to provide four types of haptic feedback (compliance, friction, fine roughness, and shape) with a single force-feedback device [19]. Haptic feedback can present cues to the operator as well as intelligent, task-based force feedback called virtual fixtures. Virtual fixtures can enhance task execution and shift task burdens from the operator to semiautonomous feedback. Such systems have great promise in the domains of vision-based assistance in surgical robotic procedures [14,20], joint-limit cue delivery [21], underwater robotic operations [12], integrating auxiliary sensors [22,23], and assembly and disassembly training [24].
Dynamic Locomotion
Successful implementations of robot navigation typically involve wheeled systems [25][26][27], yet these methods are constrained to predictable and relatively smooth terrains. Disaster scenarios, however, present navigation obstacles that cannot be solved by wheeled or tracked navigation [28]. Legged robots have demonstrated potential applicability in tasks such as search and rescue operations, and the handling of hazardous waste and recovery in disaster scenarios [29,30]. Legged robots also exhibit sufficiently high torque and speed for demanding terrains [31]. Xi et al. attempted to increase energetic economy by using multiple gaits in a single walking robot [32], and Carpentier et al. proposed a centroidal dynamics model for multicontact locomotion of legged robots [33]. Walking robots are a potential solution, but autonomous approaches are not yet reliable enough [34][35][36][37].
DARPA Robotics Challenge
The DARPA Robotics Challenge (DRC) aimed to develop teleoperated ground robots to complete complex tasks in dangerous, unpredictable task environments [38]. With the progression from wheeled and tracked robots to a new age of legged walking robots, mobility and balance were main areas of focus for the DRC [39]. Control of legged walking ranged from manual step planning [40] and plan-and-execute trajectories with a balance controller, to full-body humanoid control approaches [41]. Throughout the DRC trials, many robots fell en route towards the navigation goal, resulting in unrecoverable failures. Relying on predictable contact mechanics was ineffective, and more focus should have been placed on making, rather than merely planning, footholds [42]. The reader is directed to [43,44] for detailed software and hardware implementations used at the DRC trials.
Bio-inspired Locomotion
Legged natural organisms are adept at leveraging contact forces to negotiate complex obstacles [45]. Challenges remain for walking robotic platforms. However, examining the dynamic forms of locomotion from nature can lead to sophisticated solutions for robotic locomotion. For example, central pattern generators (CPGs) are biological networks that produce coordinated rhythmic output signals under the control of rudimentary input signals [46]. Lachat et al. presented a control architecture constructed around a CPG implemented as a system of coupled non-linear oscillators for the BoxyBot, a novel fish robot capable of both swimming and crawling [47].
Bipedal, quadrupedal, hexapedal and rotating tri-legged platforms are popular modes of robot locomotion [45]. Examples include BigDog, RHex, Whegs, and iSprawl [48][49][50][51]. Zucker et al. used a fast, anytime footstep planner guided by a cost map for Boston Dynamics' LittleDog [52]. Arthropod locomotion, specifically cockroach movement dynamics, has been a source of much inspiration for hexapedal robots. Cockroaches make use of passive dynamics in order to achieve a high degree of stability, speed and maneuverability. Hoover et al. presented the DynaRoACH, an under-actuated cockroach-inspired hexapedal robot capable of running at 14 body lengths per second and executing dynamic turning actions [53].
Contributions
To the best of the authors' knowledge, the work presented here is the first to prototype a seminal haptic operator interface for commanding cyclical hexapedal legged locomotion for online human operated control of robot gait execution. The authors then evaluate the interface through two experiments: (I) to compare the performance of said haptic interface for simulated teleoperated robot legged locomotion against more standard ones used to control robotic proxies with varying levels of traversal task complexity (II) to evaluate the preliminary performance of the haptic virtual fixtures used for the haptic interface in a physical implementation of the telelocomotion task.
Teleoperated Platform
In this work, the 18DOF Trossen PhantomX hexapedal robot, as shown in Figure 1, was used as the telerobotic platform in both simulation and physical implementation. The hexapod's many points of contact with the ground alleviate concerns of balance. Furthermore, the device is light-weight and servo driven, leading to negligible dynamic analysis of walking gait and locomotion. The remaining mechanics are left to robot kinematics and the physics simulation engine for experiments conducted in the simulated environment.
Hexapod Legged Mechanism
The PhantomX hexapod comprises 18 joints: six kinematically identical limbs with three joints each. The robot exhibits sagittal symmetry, with each symmetrical half consisting of three legs. These legs are denoted the prothoracic, mesothoracic and metathoracic. In this work they are enumerated as Legs 0-5, as depicted in Figure 1a.
Each of the six legs consists of three links, the coxa, femur and tibia, as depicted in Figure 2a. The servo-driven joints that actuate these links afford 3DOF rotary motion for each leg. These joints are the thorax-coxa, coxa-trochanter, and femur-tibia joints, as illustrated in Figure 2b. Joint 0, the thorax-coxa, has a vertical axis of rotation, while joints 1 and 2, the coxa-trochanter and femur-tibia respectively, present parallel horizontal axes of rotation. The PhantomX ROS package comes prepackaged with a general position-plus-yaw controller. The gait mechanism allows for forward and backward translation as well as pivoting about the vertical z-axis around its center. These predefined gait trajectories are well-studied alternating tripod gait sequences, similar to that of the common cockroach. More explicitly, for forward and backward translation (see Figure 1a), odd-enumerated legs move in concert while the contact points of even-enumerated legs remain fixed for balance, and vice versa.
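As an illustration of this kinematic chain, the sketch below computes the foot position of a single leg from the three joint angles; the link lengths are hypothetical placeholders, not the PhantomX's actual dimensions.

import numpy as np

# Hypothetical link lengths (metres); the real PhantomX values differ.
L_COXA, L_FEMUR, L_TIBIA = 0.05, 0.08, 0.12

def foot_position(theta_T, theta_C, theta_F):
    """Forward kinematics of one 3DOF leg in the leg's base frame.

    theta_T: thorax-coxa yaw (vertical axis)
    theta_C: coxa-trochanter pitch (horizontal axis)
    theta_F: femur-tibia pitch (horizontal axis, parallel to theta_C)
    Zero pose: femur and tibia both parallel to the ground.
    """
    # Radial reach and height within the vertical plane selected by theta_T
    r = L_COXA + L_FEMUR * np.cos(theta_C) + L_TIBIA * np.cos(theta_C + theta_F)
    z = L_FEMUR * np.sin(theta_C) + L_TIBIA * np.sin(theta_C + theta_F)
    return np.array([r * np.cos(theta_T), r * np.sin(theta_T), z])

print(foot_position(0.0, 0.0, 0.0))   # fully extended pose: [0.25, 0.0, 0.0]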
Peripherals and Augmentations
The PhantomX hexapod Arbotix Robocontroller lacks flexibility in software development as well as seamless integration of the physical peripherals necessary for telelocomotion control. Desired augmentations namely include a high-definition streaming RGB camera and long-range wireless connectivity. Thus, the Arbotix Robocontroller was replaced with the NVIDIA Jetson Nano. Additional peripherals were then incorporated: an eight-megapixel Raspberry Pi Camera V2 RGB camera, power distribution (buck converter), the Dynamixel U2D2 USB adapter, and a Netis AC1200 IEEE 802.11a/b/g/n/ac wireless adapter. These additional hardware components were mounted to the physical platform via 3D-printed plastic mounts, as depicted in Figure 3a. 3D printing was performed using the Cetus MKIII with eSun PLA+ at 0.2 mm layer height, 0.35 mm line thickness, and 15% infill. The modified PhantomX is illustrated alongside in Figure 3b. Figure 3. Hexapod physical augmentations and peripherals: (a) CAD rendering of physical mounts for peripherals; (b) modified PhantomX platform with Jetson Nano controller, Raspberry Pi Camera V2, buck converter, U2D2, and Netis AC wireless adapter.
Visual Feedback
The operator interface included visual feedback, captured either from a simulated monocular camera mounted to the PhantomX hexapod base or from the Raspberry Pi Camera V2. This was presented to the human user via a standard LCD display. This constituted the only visual feedback provided to the operator, and no 3D visualization was performed. Figure 4 shows typical examples of the two different visual feedback sources.
Figure 4. Typical visual feedback received by the operator, rendered either from (a) a simulated streaming RGB camera mounted to the robot base or (b) the Raspberry Pi Camera V2 mounted to the physical robot. In both cases, feedback is rendered on a standard LCD display.
Haptic Interface
The physical operator station also consisted of Sensable PHANToM Omni 3DOF haptic devices and a desktop computer. In the presented method, two Omni haptic devices were utilized, one for each set of three legs per phase of the walking tripod gait. Kinesthetic force feedback from the haptic devices encouraged the users to command efficient hexapod movement. In terms of system hardware, the operator interface machine ran a dual-core 64-bit Intel Core i7-640M at 2.8 GHz under the Microsoft Windows 10 operating system. The machine was equipped with 4 GB of system memory and a graphics engine consisting of an NVIDIA NVS 3100M with 512 MB of GDDR3 64-bit memory. With regard to baseline software, haptic device setup files for calibrating and interfacing with the devices were necessary. The Computer Haptics and Active Interface (CHAI3D) SDK was implemented to both gather the haptic device configuration and generate appropriate force feedback. This is in contrast to whole-body methods [54][55][56]. Communication between the operator console and the simulation platform was achieved via a rosbridge node using TCP/IP.
For forward and backward motion, each device is paired with one set of the alternating tripod gait groups. Specifically, the left haptic device controls odd enumerated legs, while the right even enumerated legs. This is shown in Figure 5.
Let p_h = (x_h, y_h, z_h) denote the haptic stylus location. A momentary button press on the haptic stylus activates a mode switch and enables the user to command pivoting. The positioning of the alternating tripod gait sequences is governed by the haptic device configurations. First consider the raw joint limits Ĵ for each leg, where the joint state for leg l is J_l = (θ_T, θ_C, θ_F), with θ_F the angle of the femur-tibia joint, θ_C the angle of the coxa-trochanter joint, and θ_T the angle of the thorax-coxa joint (see Figure 2b). The zero state for θ_T is as depicted in Figure 1, while the zero states for θ_F and θ_C are such that the femur and tibia are both parallel to the ground. Each of the two Sensable PHANToM Omni devices returns a 6DOF input, i.e., 3DOF position and 3DOF orientation. In this work, only the 3DOF position readings are used: the haptic device stylus position p_h, depicted in Figure 5, is tracked as joint 3 of the Omni device.
The goal is then to map p_h to a resultant J_l that conveys intuitive robot locomotion from user commands. Two such mappings are developed for two different modes: one for forward/backward translation, and one for pivoting motion. Users can initiate a mode switch with a simple momentary push button on the haptic device stylus. For translational forward and backward locomotion, the amplitude and stride length of the tripod gait cycle are determined from the x_h, z_h position coordinates of haptic stylus h. Specifically, x_h and z_h are mapped directly to the commanded values for θ_T and θ_C respectively. In this proof-of-concept implementation, any user-provided command position is mapped to joint angles of the form θ̃_T = sgn(φ_hl) α_x x_h and θ̃_C = α_z z_h, where θ̃_F = 0° is the fixed angle for the femur-tibia joint, α_z, α_x are heuristically tuned real scalars, and sgn(φ_hl) are direction flags for the phase of the alternating tripod gait associated with each haptic device h and leg l. In this way, the gait cycles were forced to occur one at a time in software, thus enforcing at least three points of contact. Walking could thus be commanded with periodic, alternating cyclic motion in the XZ plane. This is summarized as pseudocode in Algorithm 1.
Algorithm 1 Translational Motion.
1: define k_z, k_x as positive scalar thresholds; these determine the minimum movement to register a motion command
2: for haptic device h controlling leg l do
3:   if z_h > k_z then
4:     check J_l limits and whether leg l is in contact
5:     update θ̃_T = sgn(φ_hl) α_x x_h and θ̃_C = α_z z_h
6:   end if
7:   note that θ denotes the previous joint angle
8:   note that θ̃ denotes the updated joint angle
9:   note that φ_hl is assigned to each leg consistent with the phase of the tripod gait
10: end for

Turning motion is achieved similarly to forward and backward locomotion, except that the amplitude and stride length controlled by haptic device h are governed by y_h, x_h respectively. That is, the joint angles θ_T and θ_C are determined directly from x_h, y_h respectively, as θ̃_T = α_x x_h and θ̃_C = sgn(ψ_l) α_y y_h, where α_y, α_x are heuristically tuned real scalars, and sgn(ψ_l) is a direction flag for the phase of the pivoting gait associated with each leg l. The femur-tibia joint is again fixed at θ̃_F = 0° (in both translation and pivoting, this fixed-angle constraint is imposed to simplify the mapping from low-dimensionality inputs). This pivoting action is summarized as pseudocode in Algorithm 2; a compact code sketch of both mappings is given below.

Algorithm 2 Pivoting Motion.
1: while in pivot mode do
2:   if y_h > β_y then
3:     check J_l limits and update θ̃_T = α_x x_h, θ̃_C = sgn(ψ_l) α_y y_h
4:   end if
5:   for each transition between y_h > T_y and y_h < T_y do
6:     toggle phase state of pivot, γ
7:   end for
8: end while

Figure 6 depicts the switching of phase state γ, which determines which set of three legs is raised for each phase of the pivot motion. Haptic feedback is then generated separately for each of the two modes (translational and pivoting) via simple haptic virtual fixtures. In particular, for forward and backward motion, the operator is encouraged to stay in the XZ vertical plane of the haptic device, with force rendered as a spring and haptic proxy (god object) movement defined as the projection of the user position onto the XZ plane. Once the mode switch for pivoting is activated, haptic feedback is rendered similarly, now with the horizontal XY plane serving as the virtual fixture. It was hypothesized that forward or pivoting gaits are best controlled when the reaches of circular input motion commands are mapped to the XZ or XY planes. When motions deviate from said planes, desired user input commands are not only suppressed but also distorted depending on the angular deviation of the commands. See Figure 7 for a visualization of the planar virtual fixtures used.

Figure 6. Pivoting phase is determined solely by transitions from outside to within the shaded region, determined by T_y. As the commanded position coordinates shift from p_1 to p_2, the cursor enters the shaded region and the y position is less than T_y. When the cursor next leaves the shaded region, for example to p_3, the phase of the rotation is toggled, which alternates which set of three legs are raised.
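A minimal Python sketch of the two mappings described above (the gains, thresholds, and sign-flag handling are illustrative assumptions consistent with the text, not the authors' implementation):

import numpy as np

ALPHA_X, ALPHA_Y, ALPHA_Z = 0.4, 0.4, 0.6   # heuristic gains (assumed values)
K_Z = 0.01                                   # minimum z displacement to register a step

def translational_command(p_h, phase_sign):
    """Map stylus position p_h = (x, y, z) to leg joint targets for walking."""
    x_h, _, z_h = p_h
    if z_h <= K_Z:                           # below threshold: no new step commanded
        return None
    theta_T = phase_sign * ALPHA_X * x_h     # stride length from x
    theta_C = ALPHA_Z * z_h                  # step amplitude from z
    theta_F = 0.0                            # femur-tibia joint held fixed
    return np.array([theta_T, theta_C, theta_F])

def pivot_command(p_h, pivot_sign):
    """Map stylus position to joint targets for pivoting about the vertical axis."""
    x_h, y_h, _ = p_h
    theta_T = ALPHA_X * x_h
    theta_C = pivot_sign * ALPHA_Y * y_h
    return np.array([theta_T, theta_C, 0.0])

print(translational_command(np.array([0.05, 0.0, 0.03]), phase_sign=+1))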
The haptic device user input processing and resultant joint-level commands for the PhantomX hexapod were executed identically in both the simulation environment and the physical implementation. More specifically, alternating tripod gait amplitude and stride length were executed as described in Algorithms 1 and 2. Each leg within a tripod undergoes the same joint-level commands for translational movement, and tripod leg selection is maintained as either odd- or even-enumerated legs, as shown in Figure 1a. For experiments with the haptic virtual fixtures, the kinesthetic force feedback was calculated and rendered identically in either simulation or physical implementation, namely as the planar guidance fixtures depicted in Figure 7.

Figure 7. Haptic feedback is rendered as a simple spring force via the proxy method to encourage users to stay within a 2D plane: (a) for forward and backward motion, (b) for turning. The proxy location is simply the user position projected onto the rendered plane.
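A minimal sketch of such a planar guidance fixture (the spring constant and the use of a plane through the origin are illustrative assumptions):

import numpy as np

K_SPRING = 200.0    # N/m, assumed stiffness for the guidance spring

def plane_fixture_force(pos, normal):
    """Spring force pulling the stylus toward a plane through the origin.

    pos:    3D stylus position
    normal: unit normal of the virtual-fixture plane
            (XZ plane -> normal = y-axis; XY plane -> normal = z-axis)
    """
    normal = normal / np.linalg.norm(normal)
    proxy = pos - (pos @ normal) * normal    # projection of pos onto the plane
    return K_SPRING * (proxy - pos)          # Hooke's law toward the proxy

# Translation mode: constrain the stylus to the XZ plane
f = plane_fixture_force(np.array([0.02, 0.01, 0.05]), np.array([0.0, 1.0, 0.0]))
print(f)    # force acts only along -y, pushing back into the plane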
Experimental Protocol
In this work, the proposed haptic telelocomotion was evaluated with two different user study experiments: (I) simulation based comparison of competing interface types with varying task complexity; (II) physical implementation of the haptic interface evaluating efficacy of the virtual fixtures.
Simulation Environment
ROS Gazebo was used to simulate the kinematics and physical collisions of the simulated PhantomX device with the environment. The 3D simulated environments were generated using Autodesk Fusion 360 software, and were imported and simulated in ROS Gazebo. The simulation environment was built on a Ubuntu 16.04 system in ROS Gazebo, and simulation contact mechanics were handled by Gazebosim's multiphysics engine. In terms of hardware, the simulation system ran a quad-core 64-bit Intel Core i7-7700 at 3.6 GHz. The machine was equipped with 16 GB of DDR4 3000 MHz system memory and a graphics engine consisting of an NVIDIA GeForce GTX 1070 with 8 GB of GDDR5 256-bit dedicated memory.
Experimental Task
In this experiment, subjects were tasked with navigating the hexapod through two different courses. The first course consisted of a flat surface and two ninety-degree turns, while the second consisted of a series of ascending and descending staircases and one ninety-degree turn. The task courses are shown in Figure 8. Starting lines and locomotion end-goal areas were clearly demarcated in both of the task courses.
Evaluated Operator Interface Types
In terms of operator input, three different hardware platforms were evaluated: (i) standard computer keyboard, K; (ii) standard gaming controller, J; (iii) Sensable PHANToM Omni 3DOF haptic device, H.
Operator interface mode H is that described in detail in Section 2.2.2, while the keyboard K and controller J are commonly used alternatives in teleoperation architectures. In all cases, the hexapod was maneuvered with the alternating tripod gait.
Computer Keyboard, K
This operator interface is the most basic of the three tested, and involves just the use of a common computer keyboard. The user input mechanism relies solely on six keystrokes, as depicted in Figure 9a. With keys A and S, the operator is able to modulate step size for the predefined, alternating tripod gait sequence, and thus effectively change walking speed. Furthermore, the operator is able to translate the hexapod forward and backward with I and K respectively, and pivot left and right with J and L.
Gaming Controller, J
This interface incorporates the use of a handheld gaming controller. The components utilized are the left and right joysticks, as depicted in Figure 9b. In this setup, the step size, or speed, is modulated by how far the user pushes the left joystick forward or backward. The further the joystick is displaced, the faster the robot walks. The pivoting motion is controlled by the right joystick. Similar controllers are used to control Endeavor's PackBot [57] and the Auris Monarch surgical robot [58].

Figure 9. Evaluated operator interfaces to be compared against the proposed haptic interface H: (a) keyboard layout for operator interface K; (b) game controller mapping for operator interface J.
Subject Recruitment
In this study, recruitment was performed on the university campus and subjects consisted of undergraduate students. Participants were recruited through word of mouth only, and no compensation or rewards were provided or advertised. A total of 21 subjects were tested in this within-user study, with ages ranging from 18 to 29 years (average of 20). A total of 15 male subjects and six female subjects were tested, and all subjects used computers regularly (more than 10 hours per week). Nineteen of the subjects were right handed, and the remaining two subjects were left handed. No personally identifiable information was gathered, and the study thus did not require approval by the Trinity College Institutional Review Board. In Experiment I, a total of three test conditions were utilized, namely the three telelocomotion interfaces K, J, and H, and all participants were tasked with using each of the three experimental interface conditions.
Metrics
In this experiment, two quantitative metrics of efficiency were of interest: (i) time taken to complete the navigation task (s); (ii) number of steps taken to complete the navigation task. While the measures are hypothesized to be correlated, the latter may be more indicative of the energy consumed in the navigation task. Both metrics were measured for each trial, starting from the first step past the starting line and ending with the first step into the goal area.
Procedure
Prior to the test trials, each subject was allowed fifteen minutes, or until satisfied (whichever came first), to practice each interface on an open flat surface; in all cases the subject was satisfied prior to the fifteen-minute mark. Once the experiment began, subjects were equipped with noise-isolating ear protection to eliminate auditory distractions and cues. Each subject was asked to complete each of the two courses using all three different interfaces, and each interface was used in four trials per course, i.e., 24 total trial runs per user. The trial sequence was randomized prior to each subject. Additionally, users were allowed to take short breaks between trials when requested, and were reminded frequently that they could end the experiment at any time. A mean score across the four trials for each of the six conditions (combinations of course and interface) resulted in six data points per subject.
Experiment II-Physical Implementation
In this experiment, the modified PhantomX hexapod shown in Figure 3b was teleoperated. Users were presented with visual feedback from the Raspberry Pi Camera V2 and allowed to provide input motion commands via haptic devices, while the physical robot was placed out of view in another room.
Experimental Task
In this experiment, subjects were tasked with navigating the physical PhantomX hexapod through a physical staircase and obstacle traversal course. This course features a set of ascending and descending stairs, uneven surfaces, and a ninety-degree turn. The staired navigation task is shown in Figure 10. The entire course length is approximately 4.5 m. The starting point, end goal, and desired path were clearly indicated. The obstacle course was constructed from 1/2-inch construction-grade plywood.
Experimental Conditions
Two experimental conditions for this study were evaluated: (i) haptic virtual fixtures disabled, D; (ii) haptic virtual fixtures enabled, E.
In both modes D and E, users' motion commands issued with the Sensable PHANToM Omni 3DOF haptic devices are translated into physical robot motion as described in Section 2.2.2. However, the haptic virtual fixtures described in Figure 7 provide kinesthetic force feedback only in mode E, while no force feedback is presented in mode D.
Subject Recruitment
In this study, recruitment was performed on the university campus and subjects consisted of undergraduate students. Participants were recruited through word of mouth only, and no compensation or rewards were provided or advertised. A total of 10 subjects were tested in this within user study, with ages ranging from 18 to 22 years of age (average of 20). A total of eight male subjects and two female subjects were tested, and all subjects used computers regularly (more than 10 hours per week). Nine of the subjects were right handed, and one subject was left handed. No personally identifiable information was gathered and the study thus did not require approval by the Trinity College Institutional Review Board.
Metrics
The same metrics that were recorded in Experiment I were used for Experiment II, namely time to completion and number of steps to completion.
Procedure
Prior to the test trials, each subject was allowed fifteen minutes, or until satisfied (whichever came first), to practice both mode D and mode E on a flat surface; in all cases the subject was satisfied prior to the fifteen-minute mark. Subjects could also view the obstacle course ahead of time. Once the experiment began, subjects were equipped with noise-isolating ear protection to eliminate auditory distractions and cues. Each subject was asked to complete the stair navigation task with each mode two times. The order of trials was pre-randomized for each subject to mitigate the effects of learning. Additionally, users were allowed to take short breaks between trials when requested, and were reminded frequently that they could end the experiment at any time. A mean score for each metric in each of the two experimental conditions, D and E, was reported for each participant, resulting in four data points per subject. Figure 11 depicts a high-level flowchart of the telerobotic architecture utilized in Experiment II.

Results

The mean values of each metric across tasks and operator modes, in addition to omnibus non-parametric Kruskal-Wallis one-way analysis of variance p values, are shown below in Table 1. p values < 0.05 indicate that pairwise comparisons are warranted for the corresponding row; each row warrants further analysis. Raincloud plots better visualize the performance drops with increasing task complexity for each operator interface, task and metric, as shown in Figure 13. Since the Kruskal-Wallis omnibus results in Table 1 indicate statistical significance in both metrics for both courses, post-hoc pairwise comparisons using Tukey Honest Significant Difference (HSD) were conducted; the results are illustrated in Table 2, and an illustrative sketch of this analysis pipeline is given below. Figure 14 shows the distribution of scores for each of the two experimental conditions, D and E, on the physical stair and obstacle course. The mean values of each metric across both interface modes, virtual fixtures disabled D and enabled E, in addition to the mean improvement from the addition of the proposed haptic virtual fixtures, are shown below in Table 3.
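A sketch of this analysis using SciPy's kruskal and tukey_hsd (available in recent SciPy versions); the arrays are random placeholders for the per-subject mean scores, not the study's data:

import numpy as np
from scipy.stats import kruskal, tukey_hsd

rng = np.random.default_rng(3)
# Placeholder per-subject mean completion times (s) for interfaces K, J, H
times_K = rng.normal(120, 15, 21)
times_J = rng.normal(110, 15, 21)
times_H = rng.normal(95, 15, 21)

# Omnibus non-parametric test across the three interfaces
stat, p = kruskal(times_K, times_J, times_H)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.4f}")

# Post-hoc pairwise comparisons only if the omnibus test is significant
if p < 0.05:
    print(tukey_hsd(times_K, times_J, times_H))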
Discussion
Teleoperation has been shown to be effective in task spaces too dangerous or otherwise non-ideal for human intervention. Furthermore, when combined with haptic feedback and real-time sensing, performance and safety can be improved. These teleoperated architectures have seen success in controlling robot manipulators, but they are only useful if the robot is able to reach the task at hand. This becomes an issue for difficult or challenging terrain unsuitable for wheeled or tracked navigation systems. Such obstacles are frequent in disaster response or space exploration, for example.
Articulated legs, as opposed to wheels and tracks, offer practical flexibility and maneuverability to potentially traverse such terrains. The problem is real-time automation of such an unstructured task, as evidenced by the trials at the DARPA Robotics Challenge [59,60]. Instead, human-in-the-loop control may offer better performance and adaptability in uncertain task environments. This project provides a proof-of-concept implementation of such an architecture. Overcoming challenging obstacles in space exploration might be achieved with human-controlled legged walking. Likewise, in lieu of risking the lives of human responders in dangerous tasks, human-controlled legged robot proxies may be able to traverse a variety of real-world terrains and execute real tasks. This potential to save lives is of particular interest to the authors, and the promising preliminary results show that haptic feedback has the potential to be useful in future interfaces for control of robot gait execution. In particular, Experiment I showed that for remote control of legged locomotion, haptic joysticks with the proposed virtual fixtures resulted in significantly better performance for traversing complex terrain compared to widely used interface types. Furthermore, Experiment II produced preliminary results that suggest performance enhancements using the proposed virtual fixtures in a physical implementation of telelocomotion to traverse a complex obstacle course. Performance was observed to be more consistently distributed across trials and users with the proposed method.
Beyond life-critical tasks, robotic devices may need to handle very unpredictable everyday traversal tasks. For example, stairs, unfinished pathways, construction sites, or otherwise loose terrain may be encountered. This work thus has the potential to extend telepresence to domains where wheeled or tracked robots cannot reach.
Conclusions
This paper introduces and implements telelocomotion, or real-time remotely operated walking robots. While autonomous locomotion may suffice in well-known terrains, it relies on assumptions such as predictable contact mechanics. Humans, on the other hand, are well adept at responding to unanticipated scenarios. Combining the maneuverability of legged robots with the high-level decision making of humans can allow robots to negotiate challenging terrain. Offline approaches, as shown in the DRC, can be used to plan footholds but do not achieve desirable robustness in overall gait execution. This exploratory work proposed a haptic-enabled telelocomotion interface. The proof-of-concept method mapped user input configuration to amplitude and reach of the hexapod gait; a sketch of one possible form of this mapping is given below. Haptic feedback in the form of a spatial virtual fixture encouraged users to remain within a 2D spatial plane, and periodic circular motion resulted in effective forward and backward motion.
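The following sketch is one possible reading of that mapping, with hypothetical names and gains: the radius of the circular stylus motion sets the gait amplitude, and the direction of phase advance selects forward or backward stepping. It is not the paper's implementation.

```python
import numpy as np

# Hypothetical mapping from in-plane stylus motion to hexapod gait commands.
# alpha_x, alpha_y scale the raw stylus displacement before the mapping.
ALPHA_X, ALPHA_Y = 10.0, 9.0

def stylus_to_gait(x, y, prev_phase):
    """Map in-plane stylus displacement (x, y) to (stride, phase, direction)."""
    sx, sy = ALPHA_X * x, ALPHA_Y * y
    stride = np.hypot(sx, sy)               # gait amplitude from circle radius
    phase = np.arctan2(sy, sx)              # gait phase from angle on the circle
    dphase = (phase - prev_phase + np.pi) % (2 * np.pi) - np.pi
    direction = np.sign(dphase)             # +1 forward, -1 backward, 0 stationary
    return stride, phase, direction

print(stylus_to_gait(0.03, 0.01, prev_phase=0.0))
```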
The method was validated with two different user studies. The first examined three different online telelocomotion interfaces, keyboard (K), gaming controller (J), and the haptic-enabled telelocomotion interface (H), under different simulated traversal difficulties. Despite the simple nature of interface H, results show comparable performance to modes K and J on flat surfaces, and significantly better performance on stairs. Figure 13 and Table 2 depict these results. Modes K and J show high degrees of degradation moving from the flat maze to the staired environment, while performance deviated far less using H. In the second study, the method was adapted to a physical implementation of the hexapedal robot. Users were tasked with navigating a real, physical, staired obstacle course using the interface with haptic virtual fixtures either disabled (D) or enabled (E). Results are promising and demonstrate that the planar virtual fixtures indeed reduced time to completion and steps required to complete the real-world telelocomotion task on average. While results are still preliminary, bean and box-whisker plots show that typical performance is more tightly grouped and consistent about the average in both metrics when the haptic virtual fixtures were enabled. Removing outliers strengthens this observation and enhances the observed improvements when using the haptic-enabled mode, E, as compared to D. Figure 14 and Table 3 depict these preliminary results. The maximum traversal speed of the PhantomX hexapod under ideal conditions is about 80 cm/s [61]. With stairs and ramps, traversal of the 4.5 m course here is about seven times slower than in experiments without obstacles [62].
In this work, 3DOF haptic devices were used to telelocomote an 18DOF hexapod by constraining hexapod motion via fixed joint angles and the holistic motion for an alternating tripod gait. With a simple mapping, low-dimensional user inputs translated to high-level telelocomotion of a high-dimensional legged robot. Future work will look more closely at designing efficient mappings from low-dimensional inputs to operating kinematically dissimilar devices, and precise foot placements should be analyzed via Denavit-Hartenberg parameters. This is particularly crucial for navigating more complex terrains; in this work no modifications to the gait or footholds were made to account for stairs. The ideal interface may depend on the degree of kinematic dissimilarity and types of sensory data. More extensive studies can also be conducted to further evaluate performance gains along a more granular scale of terrain traversal complexity, as well as to investigate varying the mapping gains α_x and α_y (in this study, these were heuristically tuned to 10 and 9, respectively) to affect different parameters such as speed and stride length. Disaster situations can present challenging terrains unsuited for predetermined traversal strategies, and different modes of control or levels of autonomy may be best suited for each task or environment. Flat, predictable areas with high levels of autonomous confidence may be traversed with supervised autonomy, while more delicate and unpredictable scenarios may call for some combination of online human intervention or manual planning. This work provides a seminal step towards one part of this holistic solution, efficient online human control of legged locomotion. Future work will examine the incorporation of balance controllers for bipedal telelocomotion, implementation of varying task complexity with physical hardware platforms, and direct comparisons with autonomous alternatives. Improvements to the physical implementation of the hexapedal robot include the addition of contact sensing at the terminal link of each limb. Refinements to the haptic virtual fixtures may be derived from such contact information.

Institutional Review Board Statement: Ethical review and approval were waived for this study, since no identifiable private information about living humans was obtained or used.
Informed Consent Statement: Verbal informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy restrictions.
"Engineering",
"Environmental Science",
"Computer Science"
] |
Geochronological Constraint on the Evolution of the Aktyuz Terrane, Kyrgyz North Tianshan, and the Fate of the Taldybulak Levoberezhny Gold Deposit
The Aktyuz Terrane in Kyrgyz North Tianshan is of particular interest due to the occurrence of high- and ultrahigh-pressure (HP–UHP) rocks and because it contains the third largest gold deposit in Kyrgyz North Tianshan, i.e., Taldybulak Levoberezhny (abbreviated to Taldybulak Lev.). To constrain the ages of the host Kemin Complex and its auriferous monzogranite porphyry, detailed zircon U–Pb dating [by laser ablation inductively coupled plasma-mass spectrometry (LA-ICPMS) and secondary ion mass spectrometry (SIMS)] and Lu–Hf isotopic analyses were carried out. The intensively altered auriferous monzogranite porphyry yielded two weighted mean ages of 444 ± 3 Ma (n = 14, mean squared weighted deviation (MSWD) = 0.49, by LA-ICPMS) and 440 ± 5 Ma (n = 8, MSWD = 0.82, by SIMS) that are indistinguishable within error ranges. Such ages are consistent with a previously reported sulfide Re–Os isochron age of 434 ± 18 Ma, supporting a Silurian porphyry gold mineralization. The granitic gneiss yielded a protolith age of 773 ± 7 Ma (n = 7, MSWD = 0.04) and two metamorphic ages of 514 ± 4 Ma (n = 8, MSWD = 0.09) and 483 ± 3 Ma (n = 11, MSWD = 0.04). Detrital zircons from one fuchsite schist sample yielded highly variable ages from 729 ± 13 Ma to 2,463 ± 30 Ma, with 12 data points yielding a weighted mean of 740 ± 5 Ma (MSWD = 0.95). The metamorphic overgrowth yielded a weighted mean age of 460 ± 4 Ma (n = 4, MSWD = 0.15). Detrital zircons in the migmatitic amphibolite are aged from 788 ± 7 Ma to 3,447 ± 32 Ma, with two major concentrations at 941 ± 7 Ma (n = 13, MSWD = 0.95) and 794 ± 5 Ma (n = 8, MSWD = 0.19). The metamorphic overgrowth yielded an average age of 555 ± 4 Ma (n = 8, MSWD = 0.65). The detrital and xenocryst zircons, together with evolved εHf(t) values (−20.9 to −7.8) and old two-stage Hf model ages (1,367–3,159 Ma), reveal the presence of a Precambrian basement that may date back to the Archean Eon. The two metamorphic ages may correlate with oceanic subduction and continental collision, respectively.
INTRODUCTION
The Central Asia Orogenic Belt (CAOB; Figure 1A) is the biggest accretion-type orogen of the Phanerozoic (Zhu et al., 2007; Gao et al., 2009b; Xiao et al., 2012, 2013). Its southwestern area, i.e., the Tianshan Orogen in Kazakhstan, Kyrgyzstan, and northwest China, is a composite of Paleozoic island arcs, ophiolites, metamorphic complexes, and Precambrian microcontinental terranes that were wedged together during the early Paleozoic (Kröner et al., 2012). The mechanism of Paleozoic accretion, however, remains poorly constrained (Sengor and Natalin, 1994; Charvet et al., 2007, 2011; Windley et al., 2007; Kröner et al., 2012, 2013). Critical questions, such as the geometry, extent, and source of individual terranes or sutures and also the age and origin of medium- to high-grade metamorphic complexes, are still unsolved (Alexeiev et al., 2011; Kröner et al., 2012; Rojas-Agramonte et al., 2013). In this context, the high- and ultrahigh-pressure (HP-UHP) metamorphic belt that characterizes ancient subduction zones and sutures is of particular interest due to the important information it can provide for the reconstruction of tectonic evolution (Gao and Klemd, 2003; Klemd et al., 2005; Orozbaev et al., 2007, 2010; Gao et al., 2009a).
The Aktyuz terrane is one of two eclogite and garnet amphibolite occurrences in Kyrgyz North Tianshan (Kröner et al., 2012; Rojas-Agramonte et al., 2013). It is composed of two litho-tectonic assemblages, named the Aktyuz and Kemin Complexes. The former crops out in the northern area and is well studied because it contains HP-UHP rocks that can be used as a powerful tool for geodynamic reconstruction (Kröner et al., 2012; Rojas-Agramonte et al., 2013). The latter is in the southern area of the terrane and, by contrast, is poorly studied. Kröner et al. (2012) and Rojas-Agramonte et al. (2014) carried out sensitive high-resolution ion microprobe (SHRIMP) zircon U-Pb studies of granitoid gneisses and paragneisses that precisely constrained their protolith ages. Nevertheless, their metamorphic timing remains to be elucidated, preventing correlation with the metamorphism and exhumation of HP-UHP rocks in this region.
Although it does not contain UHP-HP rocks, the Kemin Complex has a giant gold endowment. The Taldybulak Levoberezhny (abbreviated to Taldybulak Lev. hereafter) deposit is the third largest gold deposit in Kyrgyz North Tianshan, with a reserve of 130 t Au and an average grade of 6.9 g/t (Zhao et al., 2015; Xi et al., 2018). Since its discovery in the 1960s, the genesis of the deposit has been hotly debated. The proposed genetic models include porphyry (Djenchuraeva et al., 2008; Trifonov, 2016), orogenic (Goldfarb et al., 2014; Xue et al., 2014), or multiple mineralization involving both orogenic and porphyry events (Zhao et al., 2015, 2017). Recently, Xi et al. (2018) recognized massive sulfide ores that are typical of volcanogenic massive sulfide (VMS) or sedimentary exhalative (SEDEX) systems rather than of the above-mentioned porphyry or orogenic deposits. Hence, a comprehensive geochronological study of the host rocks and the ore-causative granitic porphyry is invaluable for an improved understanding of the deposit formation and regional tectonic evolution.
In this article, we report the results obtained from an integrated in situ U-Pb and Hf isotope analysis of zircon for an auriferous monzogranite porphyry and metamorphic rocks of the Kemin Complex hosting the Taldybulak Lev. deposit. These data provide a well-constrained temporal framework of regional magmatic and metamorphic events. In combination with the previous work, they also shed light on regional tectonic evolution and ore formation.
GEOLOGICAL BACKGROUND
The CAOB is sandwiched between blocks including the Siberian, Eastern European, Tarim, Karakum, and North China plates (Figure 1A). It is composed of island arcs, seamounts, ophiolite suites, Precambrian micro-continents, and accretionary wedges and is attributed to the collision between Siberia and the Tarim-North China Block (Xiao et al., 2009; Chen et al., 2012). The Tianshan Orogen in Kyrgyzstan, in the southwestern part of the CAOB, has been traditionally grouped into three fault-bounded tectonic zones (Figure 1B), from south to north: South Tianshan, Middle Tianshan, and North Tianshan, with the Atbashi-Inylchek suture and Nikolaev Line as the main boundaries (Seltmann et al., 2011; Kröner et al., 2012). Kyrgyz North Tianshan is characterized by several Precambrian microcontinents (or fragments), ophiolite belts representative of former oceanic basins (e.g., Djalair-Naiman and Kyrgyz-Terskey), and HP-UHP eclogite-facies metamorphic rocks (Glorie et al., 2010; De Grave et al., 2013). There are abundant Andes-type early Paleozoic magmatic rocks, with the Ordovician granitoids considered as products of subduction and the subsequent Silurian granitoids as products of post-collision. The Kyrgyz Middle Tianshan contains Devonian to Pennsylvanian passive margin carbonate and siliciclastic facies rocks. It is considered as the deformed southern margin of the Kazakhstan continent (Alexeiev et al., 2019). On its southwestern margin, there are well-developed continental arcs of Silurian, Devonian, and Pennsylvanian ages (Rojas-Agramonte et al., 2014; Alexeiev et al., 2016). The Kyrgyz South Tianshan is dominated by sedimentary assemblages, followed by minor volcanic, metamorphic, and ophiolitic rocks. They were stacked together by south-facing thrusts in an accretionary and collisional setting in the Late Carboniferous and Permian (Han et al., 2011; Alexeiev et al., 2019).
The Aktyuz terrane in Kyrgyz North Tianshan is considered as part of the North Tianshan microcontinent that rifted from the Tarim craton (Rojas-Agramonte et al., 2014). It is bound by the Late Cambrian-Early Ordovician Dzhalair-Naiman ophiolite-bearing suture in the north, and a Grenvillian-age granitic gneiss in the south (Kröner et al., 2012; Rojas-Agramonte et al., 2014; Alexeiev et al., 2019). In this region, there is an extensively exposed Precambrian metamorphic basement including Aktyuz and Kemin Complex rocks. The Aktyuz Complex in the north has an outcropped length of more than 20 km and a width up to 5 km (Orozbaev et al., 2010). It mainly consists of well-foliated pelitic and granitic gneisses enclosing layers or boudins of eclogite, garnet amphibolite, and amphibolite (Bakirov et al., 2003). The gneisses were previously considered to be Archean and Paleoproterozoic in age (Bakirov and Korolev, 1979; Kiselev et al., 1993), although Kröner et al. (2012) reported protolith ages of 541-562 and 778-834 Ma for granitoid gneisses. Orozbaev et al. (2010) emphasized that the Aktyuz eclogites are unusual in that they occur as remnants in mafic dykes that previously intruded the sedimentary protolith, instead of in rocks derived from oceanic crust. The eclogites underwent peak metamorphism at 550-670 °C and 1.6-2.3 GPa (Orozbaev et al., 2010). Tagiri et al. (1995) obtained a mineral/whole-rock Rb-Sr isochron age of 749 ± 14 Ma for an eclogite sample, but this age is controversial. Rojas-Agramonte et al. (2013) and Klemd et al. (2014) reported Sm-Nd, Ar/Ar, and Lu-Hf ages of 462 ± 7 Ma, 481 ± 2 Ma, and 474.3 ± 2.2 Ma, respectively (Supplementary Table 4). Such ages are comparable with the muscovite Ar/Ar plateau ages for the country rock gneiss (475.7 ± 5.5 and 470.6 ± 5.3 Ma; Kröner et al., 2012) and thus may be reasonable for the HP-UHP metamorphic event.

[Figure 1 caption fragment: (A) ... (modified from Gao et al., 2009b); (B) Sketch tectonic map of Kyrgyz Tianshan showing the location of the Aktyuz terrane, modified from Djenchuraeva et al. (2008) and Seltmann et al. (2011); (C) Geological map of the Aktyuz metamorphic terrane where the Taldybulak Levoberezhny gold deposit is located (modified from Kröner et al., 2012); geochronological data from Kröner et al. (2012), Rojas-Agramonte et al. (2013), and this study.]
The Kemin Complex in the south of the terrane is distinguished from the Aktyuz Complex by the absence of HP-UHP metamorphic rocks. It was subdivided into three formations named Kopurelisai, Kapchigai, and Kokbulak by Bakirov et al. (2003) and Orozbaev et al. (2010) or two formations named Kopurelisai and Tegermenty (Djenchuraeva et al., 2008). In this article, the second classification scheme was adopted. The Kopurelisai Formation consists of massive but foliated isotropic hornblende gabbro, greenschist-facies metabasalt, locally with well-preserved but flattened pillows. It is in thrust contact along a serpentinite mélange zone with tectonically overlying gneiss of the Aktyuz Complex and interpreted as constituents of a dismembered ophiolite (Bakirov et al., 2003). Kröner et al. (2012) obtained a SHRIMP zircon U-Pb age of 531.2 ± 3.7 Ma for a foliated metagabbro. The Tegermenty Formation mainly consists of migmatitic paragneisses with subordinate granitoid gneisses. The matrix of some migmatites consists of amphibolitic gabbro, ultramafic rocks, and cherty schists, considered as a metamorphosed ophiolite by Bakirov et al. (2003). Previous α-Pb zircon dating yielded ages of 2,550 ± 250 and 2,050 ± 200 Ma for migmatitic gneisses (Bakirov and Korolev, 1979), whereas conventional multigrain U-Pb zircon dating of migmatitic tonalitic-trondhjemitic gneisses yielded ages of ∼2,050 and 1,850 ± 10 Ma (Kiselev et al., 1993). More precise geochronological data reported by Kröner et al. (2012) revealed protolith ages of 799 − 844 Ma for granitic gneiss and 503 − 2,460 Ma for paragneisses.
The Ordovician volcanic-sedimentary rocks unconformably overlying the Kemin Complex are the oldest undeformed rocks in the region (Figure 1C). One rhyolite sample from the Burubai Formation yielded a SHRIMP zircon U-Pb age of 452.2 ± 2.8 Ma, and one porphyritic basaltic andesite from the Cholok Formation is nearly contemporaneous, aged 448.9 ± 5.6 Ma (Kröner et al., 2012). Devonian volcanic-sedimentary rocks in the southwest of the terrane are distributed around the metamorphic basement. Together they constitute the Taldybulak-Boordu dome (with an area of about 10 km × 20 km) that was subsequently intruded by Late Paleozoic subvolcanic rocks.
DEPOSIT GEOLOGY AND SAMPLE DESCRIPTION
The Taldybulak Lev. gold deposit is located in the Taldybulak shear zone, which is an overthrust complex of highly deformed lithologies (Malyukova, 2001). The exposed strata include the Kopurelisai and Tegermenty Formations of the Kemin Complex and the volcanic-sedimentary rocks of the Devonian Barkol Formation (Figure 2A). The Kopurelisai Formation here is dominated by amphibolite, biotite amphibolite, schist, and migmatite, and the Tegermenty Formation consists mainly of mica schist with layers of gneiss and chert. The Devonian Barkol Formation unconformably covers the Tegermenty Formation metamorphic rocks. It is dominated by andesite, basaltic andesite, volcanic breccia, tuff, and sandstone (Djenchuraeva et al., 2008).
There are three shear and thrust zones in the mining area, successively named the Upper zone, the Middle zone, and the Taldybulak zone from west to east, with a total thickness of about 700 m (Figures 2A,B). In the shear zones, there is a chaotic assemblage of schist, amphibolite, and gneisses that have undergone intensive quartz-calcite and quartz-sericite alteration. They were further offset by NE-trending or nearly E-W-trending faults and intruded by numerous diorite-monzodiorite-monzogranite or dolerite dykes (Zhao et al., 2015, 2017). Thus far, 11 gold ore bodies have been discovered, which are confined to the shear zones or around the granitic dykes (Figures 2A,B). The No. 1 orebody is a prominent entity located at the junction of the Taldybulak shear and thrust zone with a nearly E-W-trending fault. Its eastern part is vein- or pipe-shaped and narrows gradually to the west. It extends for ca. 1,300 m, with a width of 60-350 m and a thickness of 20-80 m (Xi et al., 2018). Except for Nos. 1 and 2, the other orebodies occur mainly as veins or lenses.
In this study, four samples, namely one auriferous porphyry and three wall rock samples from the Kopurelisai Formation, were selected for zircon U-Pb dating and Lu-Hf isotopic studies (Figure 3). Sample 15KT12 is a fuchsite schist collected from the No. 1582 ramp. It is dominated by quartz and fuchsite, where auriferous pyrite is associated with secondary sericite or calcite. Sample 15KT34 has little or no mineralization. It is a pink, medium-grained, finely layered granitic gneiss collected from outcrops at the ventilation room of the deposit. It mainly consists of plagioclase, K-feldspar, biotite, quartz, and secondary sericite and epidote. Sample 15KT15 is a migmatitic amphibolite collected from the No. 1582 ramp. The matrix is gray or gray-green colored and consists mainly of hornblende, plagioclase, and biotite with volumetric percentages of 55, 40, and <5%, respectively. The hornblende was partially replaced by epidote, chlorite, calcite, and quartz. The leucosome is pink colored and contains mainly quartz, plagioclase, and K-feldspar. Sample 15KD66 is a medium-grained, reddish-colored biotite monzogranite porphyry intruding the fuchsite schist. The sample was collected from drill hole ZK15S0502 (at a depth of 68.9 m). It has a typical porphyritic texture, with plagioclase and K-feldspar phenocrysts in a groundmass of K-feldspar, plagioclase, quartz, biotite, and apatite. The auriferous porphyry was intensively altered by sericite and calcite and hosts abundant quartz-tourmaline-pyrite veins or veinlets (Figure 3D).
In situ Zircon U-Pb Dating and Hf Isotopic Analyses
Zircon grains were extracted by magnetic techniques and conventional heavy liquids and then purified by handpicking under a binocular microscope. Representative zircon grains were mounted in epoxy resin and polished down to expose the grain centers. Cathodoluminescence (CL) images of zircons were taken at the Beijing Zircon Linghang Technology Company, Beijing. The images were obtained with a MonoCL 3+ CL spectroscope (Gatan Company, Abingdon, England) attached to a Quanta 400 FEG scanning electron microscope. The operating voltage during CL imaging was 15 kV.
In situ zircon laser ablation inductively coupled plasma-mass spectrometry (LA-ICPMS) U-Pb dating, Hf isotope, and trace element analyses were conducted at the State Key Laboratory of Continental Dynamics at Northwest University, Xi'an. The laser ablation system was coupled to both a quadrupole ICP-MS (Q-ICP-MS) and a multi-collector ICP-MS (Nu Plasma, Nu Instruments Ltd., Wrexham, United Kingdom), which were used for simultaneous acquisition of trace elements, Hf isotopes, and U-Pb isotopes. The zircon aerosol generated by the laser was split into two transmission tubes by a Y-shaped connection device after the ablation unit and introduced into the two mass spectrometers at the same time. The hafnium isotopes were measured by MC-ICP-MS, and U-Pb ages and trace element compositions were measured by Q-ICP-MS. An Edwards E2M80 source rotary pump was used in the interface area to improve sensitivity. The spot size was 30 µm for U-Pb dating and 40 µm for Lu-Hf analysis. Helium was used as the carrier gas to provide effective aerosol transport to the ICP and to minimize aerosol deposition. The detailed analytical procedure can be found in Yuan et al. (2008).
Standard zircons GJ-1, 91500, NIST SRM 610, and Monastery were used as reference materials for calibration and for monitoring analytical conditions. Uranium, thorium, lead, and trace element concentrations were calibrated using 29Si as an internal standard and NIST SRM 610 as an external reference standard. The 202Hg signal in the Q-ICP-MS gas blank was usually <30; therefore, the contribution of 204Hg to 204Pb is negligible and was not corrected. The 206Pb/238U, 207Pb/206Pb, 208Pb/232Th, and 207Pb/235U ratios were calculated using GLITTER 4.4 (GEMOC, Macquarie University, Sydney, Australia), and zircon 91500 was used to correct instrumental mass bias and depth-dependent elemental and isotopic fractionation. Zircons 91500 and GJ-1 were used as external standards. The obtained 206Pb/238U weighted mean ages of 91500 and GJ-1 were 1,062 ± 5.6 Ma (2σ) and 604.8 ± 5.6 Ma (2σ), respectively, highly consistent with the recommended ages. Common lead was corrected according to the method of Andersen (2002). The results are reported with 1σ errors. Concordia plots and weighted average U-Pb ages (95% confidence) were obtained using ISOPLOT 3.0 (Ludwig, 2003).
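For reference, the weighted mean ages and MSWD values quoted throughout this paper follow the standard error-weighted formulas implemented in ISOPLOT; the short sketch below reproduces that calculation on illustrative (invented) ages.

```python
import numpy as np

# Error-weighted mean age and MSWD for a group of concordant analyses.
ages = np.array([444.0, 441.5, 446.2, 443.1, 445.0])   # Ma, illustrative
sigma = np.array([4.0, 5.0, 4.5, 3.8, 4.2])            # 1-sigma, Ma, illustrative

w = 1.0 / sigma**2
mean = np.sum(w * ages) / np.sum(w)
mean_err = np.sqrt(1.0 / np.sum(w))                    # 1-sigma of the mean
mswd = np.sum(w * (ages - mean)**2) / (len(ages) - 1)

print(f"weighted mean = {mean:.1f} +/- {2*mean_err:.1f} Ma (2-sigma), MSWD = {mswd:.2f}")
```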
The interference of 176Lu on 176Hf was corrected by measuring the intensity of the interference-free 175Lu and calculating the 176Lu/177Hf ratio using the recommended 176Lu/175Lu ratio of 0.02669 (Horn and von Blanckenburg, 2007). In the same way, the recommended 176Yb/172Yb ratio of 0.5586 (Hazarika et al., 2018) was used, together with the measured 172Yb intensity, to correct the interference of 176Yb on 176Hf and to calculate the 176Yb/177Hf ratio. During the analytical session, the measured 176Hf/177Hf ratios were 0.282296 ± 50 for 91500 and 0.282019 ± 15 for GJ-1, consistent with the recommended ratios of 0.2823075 ± 58 and 0.282015, respectively. The εHf(t) values were calculated relative to the chondritic uniform reservoir (CHUR) (Vervoort and Patchett, 1996; Vervoort and Blichert-Toft, 1999). The f_Lu/Hf ratio equals (176Lu/177Hf)_s/(176Lu/177Hf)_CHUR − 1, where (176Lu/177Hf)_CHUR = 0.0332 and (176Lu/177Hf)_s is the value obtained from analysis of the sample.
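The εHf(t) values and two-stage Hf model ages reported later can be reproduced from the measured ratios with the standard equations; the sketch below is a minimal implementation. The 176Lu decay constant and the CHUR, depleted-mantle, and average-crust reservoir parameters are commonly used literature values quoted here as assumptions, and the example input ratios are illustrative rather than data from this study.

```python
import numpy as np

# Assumed decay constant and reservoir parameters (common literature values).
LAMBDA_LU = 1.867e-11          # 176Lu decay constant, 1/yr
CHUR_HF, CHUR_LUHF = 0.282772, 0.0332
DM_HF, DM_LUHF = 0.28325, 0.0384
CC_LUHF = 0.015                # assumed average continental crust 176Lu/177Hf

def eps_hf_and_tdm2(hf_meas, luhf_meas, t_ma):
    t = t_ma * 1e6
    growth = np.exp(LAMBDA_LU * t) - 1.0
    hf_i = hf_meas - luhf_meas * growth                 # initial 176Hf/177Hf at t
    chur_i = CHUR_HF - CHUR_LUHF * growth
    eps_hf = (hf_i / chur_i - 1.0) * 1e4
    # One-stage depleted-mantle model age from the measured ratios.
    t_dm1 = np.log(1.0 + (hf_meas - DM_HF) / (luhf_meas - DM_LUHF)) / LAMBDA_LU
    # Two-stage model age assuming average-crust Lu/Hf before time t.
    f_s = luhf_meas / CHUR_LUHF - 1.0
    f_cc = CC_LUHF / CHUR_LUHF - 1.0
    f_dm = DM_LUHF / CHUR_LUHF - 1.0
    t_dm2 = t_dm1 - (t_dm1 - t) * (f_cc - f_s) / (f_cc - f_dm)
    return eps_hf, t_dm2 / 1e6                          # (dimensionless, Ma)

print(eps_hf_and_tdm2(hf_meas=0.281900, luhf_meas=0.001, t_ma=773.0))
```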
SIMS Zircon U-Pb Dating
To better constrain the ages of the rims and cores, zircon U-Pb dating of samples 15KT15 and 15KD66 was further conducted using a Cameca IMS-1280HR SIMS at the Institute of Geology and Geophysics, Chinese Academy of Sciences in Beijing. For comprehensive descriptions of the instrument and the analytical procedure, refer to Li et al. (2009); a brief summary is provided here. The primary O2− ion beam spot is about 20 µm × 30 µm in size, and positive secondary ions were extracted with a 10-kV potential. In the secondary ion beam optics, an energy window of 60 eV was used, together with a mass resolution of ca. 5,400 (at 10% peak height), to separate Pb+ peaks from isobaric interferences. A single electron multiplier was used in ion-counting mode to measure secondary ion beam intensities by peak jumping. Analyses of the standard zircon Plesovice were interspersed with those of unknown grains. Each measurement consists of seven cycles. Pb/U calibration was performed relative to the zircon standard Plesovice (206Pb/238U age = 337 Ma; Slama et al., 2008); U and Th concentrations were calibrated against the zircon standard 91500 (Th = 29 ppm and U = 81 ppm; Wiedenbeck et al., 1995). A long-term uncertainty of 1.5% [1σ relative standard deviation (RSD)] for 206Pb/238U measurements of the standard zircons was propagated to the unknowns (Li et al., 2010), even though the measured 206Pb/238U error in a specific session is generally ≤1% (1σ RSD). The measured compositions were corrected for common Pb using non-radiogenic 204Pb. The corrections are sufficiently small to be insensitive to the choice of common Pb composition, and an average present-day crustal composition (Stacey and Kramers, 1975) was used for the common Pb, assuming that the common Pb is largely surface contamination introduced during sample preparation. Data reduction was carried out using ISOPLOT 3.0 (Ludwig, 2003). Uncertainties in individual analyses in the data tables are reported at the 1σ level; Concordia U-Pb ages are reported with 95% confidence intervals, unless otherwise stated.
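A minimal sketch of the 204Pb-based common-lead correction described above is given below; the assumed present-day crustal 206Pb/204Pb value and the example measured ratios are illustrative, not values from this study.

```python
# 204Pb-based common-Pb correction: estimate the fraction of measured 206Pb
# that is non-radiogenic and strip it from the 206Pb/238U ratio.
COMMON_206_204 = 18.7   # assumed present-day crustal 206Pb/204Pb (Stacey-Kramers-like)

def correct_206_238(pb206_u238_meas, pb204_pb206_meas):
    """Return the common-Pb-corrected 206Pb/238U ratio."""
    f206 = pb204_pb206_meas * COMMON_206_204   # fraction of 206Pb that is common
    return pb206_u238_meas * (1.0 - f206)

print(correct_206_238(pb206_u238_meas=0.0712, pb204_pb206_meas=2.0e-5))
```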
To monitor the external uncertainties of SIMS U-Pb zircon dating calibrated against a Plesovice standard, a standard zircon GBW04705 (Qinghu) was alternately analyzed as an unknown together with other unknown zircons. Nine measurements on Qinghu zircon yielded a Concordia age of 160 ± 1 Ma, which is identical within error to the recommended value of 159.5 ± 0.2 Ma (Li et al., 2013).
Zircon Morphology and U-Pb Ages
The zircon CL images and the LA-ICPMS and SIMS U-Pb data are available in Supplementary Tables 1, 2 and Figures 4-6. It is commonly accepted that precise dating of zircon crystals younger than 1,000 Ma is best achieved by using concordant 206Pb/238U ages, whereas older grains are more precisely dated using 207Pb/206Pb ages due to the decreasing amount of radiogenic lead available for measurement (Kröner et al., 2012; Rojas-Agramonte et al., 2013; Zhang et al., 2020).
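The two age equations behind this choice are standard; the sketch below evaluates both, assuming the usual 238U and 235U decay constants and a present-day 238U/235U ratio of 137.88 (the input ratios are illustrative).

```python
import numpy as np
from scipy.optimize import brentq

L238, L235 = 1.55125e-10, 9.8485e-10     # decay constants, 1/yr
U238_U235 = 137.88                        # present-day 238U/235U

def age_206_238(r206_238):
    """206Pb*/238U age in Ma."""
    return np.log(1.0 + r206_238) / L238 / 1e6

def age_207_206(r207_206):
    """Solve (207Pb*/206Pb*) = (1/137.88)*(exp(L235*t)-1)/(exp(L238*t)-1) for t (Ma)."""
    f = lambda t: (np.expm1(L235 * t) / np.expm1(L238 * t)) / U238_U235 - r207_206
    return brentq(f, 1e6, 4.6e9) / 1e6

print(age_206_238(0.1220), age_207_206(0.0620))
```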
Detrital zircons collected from sample 15KT12 are mainly light black and colorless, with lengths ranging from 80 to 280 µm and length/width ratios from 1.3:1 to 3:1. They tend to have near-euhedral to subhedral shapes, indicative of rather short transport and source areas not far from the depositional sites. The CL images illustrate that the zircons generally have complex inner textures, with a bright, oscillatory-zoned core overgrown by a dark, unzoned rim (Figure 4B). Nineteen of 23 LA-ICPMS analyses were performed on the central parts. They yielded highly variable U (36-1,393 ppm) and Th (30-250 ppm) contents as well as Th/U ratios (0.07-1.51) and provide an array of concordant or near-concordant data corresponding to ages between 729 and 2,463 Ma (Figure 4A). The most prominent concentration emanates from grains with oscillatory zoning (Figure 4B), with 12 data points yielding a weighted mean of 740 ± 5 Ma (MSWD = 0.95). Analyses on the dark, unzoned rims yielded a weighted mean age of 460 ± 4 Ma (n = 4, MSWD = 0.15). The combined evidence (Figures 4A-C) suggests a Neoproterozoic-Cambrian age (most probably between 729 and 463 Ma) of deposition, and the source terrane ranges in age from Neoproterozoic to Paleoproterozoic. The rock underwent metamorphism at ca. 460 Ma.
In sample 15KT34, the zircon crystals are gray-white-colored, short prismatic or subhedral crystals, with a common length of 100-180 µm and length/width ratios of 1.3:1 to 2.5:1. The CL imaging revealed that most zircon crystals contain inherited cores that can either be oscillatory zoned or exhibit faint zonation. Seventeen LA-ICPMS analyses on the interiors reveal three populations. The older two populations have similar oscillatory zonation and overlapping U (105-299 ppm), Th (68-289 ppm), and Th/U ratios (0.46-1.16). Two analyses yielded an average of 823 ± 12 Ma (n = 2, MSWD = 0.007), and seven analyses yielded a younger age of 773 ± 7 Ma (n = 7, MSWD = 0.04). These are interpreted as the ages of zircon xenocrysts and of the granitic protolith, respectively. The third population is easily distinguished by its unzoned or patch-zoned CL response and high U (388-558 ppm) but low Th (14-52 ppm) contents and Th/U ratios (0.03-0.11). These analyses collectively give a weighted average age of 514 ± 4 Ma (n = 8, MSWD = 0.09) (Figure 4D and Supplementary Table 1), which may represent an important metamorphic event. Outward from the core is a dark overgrowth with little or no zonation. Eleven analyses reveal extremely low Th contents and Th/U ratios of 2-7 ppm and 0.01-0.03, respectively. The obtained 206Pb/238U ages vary from 481 ± 8 to 485 ± 5 Ma (1σ error), with a weighted average of 483 ± 3 Ma (n = 11, MSWD = 0.04). Furthermore, there is a very bright rim that is too thin to be dated precisely (Figure 4D).
In sample 15KT15, the zircon crystals have highly variable morphologies and colors (Figures 5B-D). Some grains are dark-colored, subrounded columns, or almost spherical, whereas others are brown in color and subrounded to near-euhedral in shape. They range from 100 to 120 µm in length and 70 to 100 µm in width, yielding length/width ratios of 1.2:1 to 2:1. The CL images show that most zircons exhibit a distinct core-rim texture. In some cases, an inherited core is identified on the basis of its bright and well-rounded appearance, with oscillatory or planar zoning indicative of a magmatic origin. Interior domains mantling the cores tend to be dark and unzoned in the CL response. They are further overgrown by thin, bright, and unzoned rims. A total of 50 LA-ICPMS analyses were conducted and provided mostly concordant, but highly diverse ages (Figure 5A). Eight core analyses yielded 206Pb/238U dates of 788-797 Ma and combine to a weighted average of 794 ± 5 Ma (n = 8, MSWD = 0.19). A second population on the cores comprises 13 dates ranging from 931 to 948 Ma, with a weighted average of 941 ± 7 Ma (n = 13, MSWD = 2). Other spots on the cores yielded ages ranging from 975 to 3,447 Ma (Supplementary Table 1). The youngest 206Pb/238U dates, averaging 555 ± 4 Ma (n = 8, MSWD = 0.65), emanate from the thin zircon overgrowths that have low U (245-1,384 ppm, mostly <500 ppm), extremely low Th (15-122 ppm, mostly <35 ppm), and low Th/U ratios (0.04-0.09). This could be regarded as the best estimate of an unbiased age for a metamorphic event. Furthermore, there is a rim which is too thin to be dated precisely. The above result is further verified by an additional seven SIMS analyses that yielded consistent ages of 1,021-1,102 Ma, 895, 689, and 556 Ma (Supplementary Table 2). The youngest two ages, averaging 558 ± 12 Ma (MSWD = 0.03), may represent the metamorphic event, whereas the other, older ages represent the ages of the sedimentary protolith.
Most zircon crystals collected from sample 15KD66 have a euhedral, elongated shape with sharp facets and pointed tips. The zircon crystals are mainly colorless and transparent, with lengths ranging from 140 to 270 µm and length/width ratios from 1.5:1 to 3:1. Occasionally, the prisms are irregular, suggesting chemical corrosion or physical abrasion. The CL investigation (Figures 6A,B) revealed that most zircon crystals have oscillatory zoning. Some grains have irregular cores mantled by oscillatory-zoned magmatic rims, and the core-rim boundary is often irregular and marked by a CL-bright thin band (Figure 6B). The cores are small (50-70 µm in length), and only a few points on them were analyzed by LA-ICPMS. Three analyses on the inherited cores yielded ages of 766 ± 13, 770 ± 8, and 2,567 ± 35 Ma, respectively (Supplementary Table 1). In an attempt to date the crystallization age of the porphyry, the LA-ICPMS analyses were mainly concentrated on the zircon rims. A total of 14 analyses yielded consistent 206Pb/238U ages varying from 434 ± 7 to 448 ± 4 Ma, with a weighted mean of 444 ± 3 Ma (n = 14, MSWD = 0.49, Figure 6A). The same sample was further analyzed by SIMS. Nine analyses on the thin overgrowths yielded a weighted mean of 440 ± 5 Ma (n = 8, MSWD = 0.82, Figure 6B) that is indistinguishable from the LA-ICPMS result within error. Thus, the emplacement of the monzogranite porphyry is constrained to ca. 442 Ma.

In sample 15KT34, the 773-Ma-aged zircons have 176Lu/177Hf ratios within a range between 0.000568 and 0.002407, 176Hf/177Hf ratios between 0.281681 and 0.281943, εHf(t) values between −22.8 and −12.8, and T_DM2 ages between 2,469 and 3,088 Ma, respectively (Figure 7 and Supplementary Table 3).
Evidence for Porphyry-Type Gold Mineralization
Since its discovery in 1963, the ore genesis of the Taldybulak Lev. deposit has been hotly debated. Some scholars regard the deposit as a complex porphyry system (Trifonov, 1987; Malyukova, 2001; Djenchuraeva et al., 2008; Seltmann et al., 2014) based on the spatial association between the porphyritic dykes and gold mineralization revealed by extensive drilling in Soviet times (Figure 2B). Emplacement ages of these intrusions, however, are poorly constrained, with available ages varying considerably from 183 to 403 Ma (without dating errors, K-Ar dating). By contrast, Xue et al. (2014) and Goldfarb et al. (2014) proposed that the deposit may be an orogenic gold system considering that it is spatially controlled by the shear and thrust zones. Zhao et al. (2015, 2017) suggested a multistage mineralization involving both orogenic and porphyry events and provided two sulfide Re-Os isochron ages of 434 ± 18 Ma (n = 6, MSWD = 24) and 511 ± 18 Ma (n = 5, MSWD = 2.0). Thus, the emplacement age of the granitic dyke and its contribution to gold mineralization become a key point.
In this article, one monzogranite sample with crosscutting quartz-tourmaline-pyrite veinlets was collected. The rock was intensively altered, with typical porphyry-type alterations including silicic, potassic, and sericite alterations. Djenchuraeva et al. (2008) further proposed a vertically zoned hydrothermal alteration surrounding the diorite-monzonite intrusion, including the argillic zone corresponding to the apical part, the quartz-carbonate, and quartz-sericite zones around the intrusion, grading at depth to potassic alteration and the quartz-tourmaline alteration. Detailed petrological studies revealed native gold and electrum (with gold contents of 76.51%) inclusions in pyrite, and the pyrite itself contains considerable invisible gold up to 0.26% (Xi et al., 2018). Our new U-Pb ages provide a robust geochronological constraint for the emplacement of the auriferous monzogranite porphyry (Supplementary Table 1), with overlapping ages of 444 ± 3 Ma (n = 14, MSWD = 0.93) and 440 ± 4 Ma (n = 9, MSWD = 6.5) revealed by LA-ICPMS and SIMS U-Pb dating, respectively. These ages are consistent with the sulfide Re-Os isochron age of 434 ± 18 Ma (n = 6, MSWD = 24, Zhao et al., 2015) within error, providing another line of evidence for porphyry mineralization.
The Age and Reworking of the Precambrian Basement
The Kyrgyz North Tianshan, including the Aktyuz terrane, is one of the oldest orogenic domains of the CAOB. It has a Precambrian block mainly consisting of Grenvillian-aged granitoids, whereas the subordinate Paleoproterozoic and Archean rocks crop out in its western extension to southern Kazakhstan. Based on detrital and xenocryst zircon ages and comparison with published ages from the Chinese Tianshan and the Tarim Craton, Rojas-Agramonte et al. (2014) suggested that the Kyrgyz North Tianshan rifted off the Tarim craton.
In this study, the inhomogeneous population of zircons from the fuchsite schist and migmatitic amphibolite supports a detrital origin of the zircons. These zircons are mostly near-euhedral to subhedral in shape, with a few exhibiting slight rounding at their terminations, suggesting relatively short transport and source areas not far from the depositional site. As shown in the age probability plot (Figure 8), the most prominent age frequency is bracketed between 700 and 1,200 Ma. Such a major phase of Neo- to Mesoproterozoic magmatism is not only prevalent in North Tianshan, but is also well documented in surrounding areas such as Kyrgyz Middle Tianshan, southern Kazakhstan, the Chinese Tianshan, and the Tarim Craton (Kiselev and Maksumova, 2001; Kröner et al., 2007; Zhou et al., 2017, 2018). Another main cluster of single-grain U-Pb ages between 2,300 and 2,700 Ma accords well with the Paleoproterozoic to Neoarchean peaks revealed by detrital and xenocryst zircons from the North Tianshan, South Tianshan, and the Tarim craton (Rojas-Agramonte et al., 2014). A minor peak between 1,500 and 1,700 Ma is difficult to interpret because it corresponds to the global magmatic hiatus (Condie and Aster, 2010). The oldest zircon age is 3,447 ± 32 Ma, suggesting the presence of Archean crust.
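Age-probability curves such as the one in Figure 8 are typically built by summing a Gaussian for each analysis, centred on its age and scaled by its analytical uncertainty; the sketch below illustrates this with invented ages and errors.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative detrital ages (Ma) and 1-sigma errors; not the data of this study.
ages = np.array([740, 760, 795, 940, 955, 1680, 2450, 2560, 3447])
errs = np.array([10, 12, 9, 11, 14, 25, 30, 28, 32])

grid = np.linspace(500, 3600, 2000)
pdf = np.zeros_like(grid)
for a, s in zip(ages, errs):
    pdf += np.exp(-0.5 * ((grid - a) / s) ** 2) / (s * np.sqrt(2 * np.pi))
pdf /= len(ages)   # normalise so the summed curve integrates to ~1

plt.plot(grid, pdf)
plt.xlabel("Age (Ma)")
plt.ylabel("Relative probability")
plt.savefig("detrital_age_spectrum.png", dpi=150)
```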
The old zircon cores or xenocrysts also carry a record of older geological history, signifying the existence of a Precambrian basement at depth. In the monzogranite porphyry, three zircon cores yielded concordant ages of 766, 770, and 2,567 Ma, respectively. In the case of the granitic gneiss, there are two xenocrysts aged at ca. 823 Ma. The younger ages grouped at 770-823 Ma fit well with the Neoproterozoic peak for detrital zircons, whereas the old age of 2,567 Ma corresponds to the Archean peak around 2,500 Ma (Figure 8). Zircon xenocrysts of similar ages are also reported for other rocks in this region. For instance, Zhao et al. (2017) reported two inherited zircons aged 813 and 808 Ma from a diorite outcropping in the deposit. Kröner et al. (2012) recognized zircon xenocrysts dated at 1,180 Ma in a Neoproterozoic migmatite of the Kemin Complex and at 1,263 Ma in a para-migmatite sample.
More evidence for an early Archean crustal component is presented based on our zircon Lu-Hf isotope studies of the monzogranite porphyry and other detrital zircons. In the former case, 26 out of 27 zircons yielded negative εHf(t) values between −22.8 and −0.8 (Figure 7), with one exceptional positive εHf(t) value of 2.6. Zhao et al. (2017) carried out zircon Lu-Hf isotopic studies for one diorite (435.3 ± 3.8 Ma) and one monzogranite (427.7 ± 1.9 Ma) sample of similar ages. Their results yielded slightly higher εHf(t) values (−8.2 to 12), with data varying between negative and positive. The Neoproterozoic granitoids outcropping in the Aktyuz terrane yielded even lower negative εHf(t) values indicative of crustal remelting. Kröner et al. (2012) obtained zircon εHf(t) values of −19.5 to −15.7 for a gray granodioritic gneiss (844 ± 9 Ma) of the Kemin Complex, and −15.4 to −2.6 for a well-banded leucogneiss (834 ± 8 Ma) of the Aktyuz Complex. Such highly evolved εHf(t) values, in combination with geochemistry and whole-rock Sr-Nd isotopic data (Kröner et al., 2012; Zhao et al., 2017), reveal that at least the Paleozoic and Neoproterozoic magmatic rocks outcropping in the Aktyuz terrane mainly resulted from partial melting of continental crust or its mixing with limited juvenile or short-lived material. This supports the conclusion proposed by Kröner et al. (2013) that the production of mantle-derived or juvenile continental crust in the CAOB has been overstated. The results also provide clues that there is an extensive Precambrian basement at depth, more extensive than the surface outcrops suggest; the Aktyuz terrane consists of reworked crustal material, some of it with a long history, possibly dating back to the Archean Eon.
The Timing of Metamorphism and Its Geological Significance
No geochronological constraint had been available for the metamorphism of the Kemin Complex until the present study. Our work reveals zircon overgrowths with low Th/U ratios (generally <0.1, Supplementary Table 1) and unzoned or patch-zoned CL images. They generally grow around an oscillatory-zoned core and are attributed to a metamorphic origin according to Corfu et al. (2003) and Wu and Zheng (2004). Such overgrowths yielded a weighted mean of 460 ± 4 Ma for the fuchsite schist, indicating a Middle Ordovician metamorphic event. The migmatitic amphibolite, by contrast, exhibits Neoproterozoic metamorphism at 555 ± 4 Ma, although a subsequent metamorphic event cannot be excluded since the ca. 555-Ma-aged overgrowth still has a distinct bright, thin rim (Figure 5D). Interestingly, the granitic gneiss experienced two metamorphic events at ca. 514 Ma and 483 Ma that can even be recorded by a single zircon grain (the left one in Figure 5D). Hence, the above data reveal that both a Neoproterozoic-Cambrian (514-555 Ma) and an Ordovician (460-483 Ma) metamorphic event were accompanied by zircon growth. The latter matches well with the timing of HP-UHP metamorphism of the Aktyuz Complex (462-481 Ma, by Sm-Nd, Lu-Hf, and Ar/Ar dating, Supplementary Table 4), but no precursor metamorphic event has been reported for the Aktyuz Complex. Instead, the Neoproterozoic-Cambrian metamorphic event is in good agreement with the HP-UHP metamorphism at other complexes such as Kokchetav, Makbal, and Anrakhai.
At the Kokchetav Terrane, northern Kazakhstan, the HP-UHP metamorphism and subsequent exhumation are well confined to 530-540 Ma and 505-530 Ma by biotite Ar/Ar and zircon U-Pb dating (Claoué-Long et al., 1991; Shatsky et al., 1999; Hermann et al., 2001; Dobretsov et al., 2005, 2006; Buslov et al., 2010). The migmatites and gneisses within the terrane (Katayama et al., 2001; Ragozin et al., 2009) yielded zircon U-Pb ages of 505-540 Ma, with the oldest and youngest being interpreted as the timing of peak metamorphism and the timing of UHP exhumation, respectively. The subsequent collisional deformation occurred at 470-490 Ma, inducing folding, shearing, and retrograde metamorphism (De Grave et al., 2006; Dobretsov and Buslov, 2007; Zhimulev et al., 2011). At the Makbal block of the Kyrgyz North Tianshan, the HP-UHP event was constrained to 480-509 Ma by monazite and zircon U-Pb dating (Togonbaeva et al., 2009; Konopelko et al., 2012). In the case of the Anrakhai Complex, SHRIMP zircon U-Pb dating yielded ages of ca. 490 Ma for garnet pyroxenite, constraining a Late Cambrian HP metamorphism and exhumation (Alexeiev et al., 2011). These lines of geochronological evidence support a possible link among these terranes and support a Kokchetav-Kyrgyz North Tianshan belt (Kröner et al., 2012).
Tectonic Evolution and the Fate of the Gold Deposit
The current geodynamic model for the Aktyuz terrane (Alexeiev et al., 2011; Kröner et al., 2012; Rojas-Agramonte et al., 2013) favors the North Tianshan as a microcontinent sandwiched between the Dzhalair-Naiman basin in the north and the Kyrgyz-Terskey basin in the south during the Cambrian and Early Ordovician. Our new data, in combination with previous results (Kröner et al., 2012; Rojas-Agramonte et al., 2013), reveal a Precambrian basement that shows affinity to other small fragments in the Kokchetav-North Tianshan belt. The Aktyuz terrane itself, however, is not a coherent continental domain but instead is composed of tectonic slivers including Precambrian continental crust and Early Paleozoic ophiolite. This is also evident in the variable Hf isotopes of the Kemin and Aktyuz Complexes (Figure 7).
The southwest-directed subduction of the Paleo-Asian Ocean beneath the North Tianshan induced the opening of the Dzhalair-Naiman back-arc basin. The zircon U-Pb age of 531.2 ± 3.7 Ma (Kröner et al., 2012) for a leucogabbro sample from the Kopurelisai ophiolite suite confirms an Early Cambrian age (Ryazantsev et al., 2009). Although there are no available geochronological data, the massive sulfide ores of the Taldybulak Lev. deposit (Xi et al., 2018) may be the product of back-arc extension (Figure 9). They are hosted by the Kopurelisai Formation and underwent orogenic deformation. The sulfide Re-Os isochron age of 511 ± 18 Ma that was interpreted as the timing of orogenic-type mineralization (Zhao et al., 2015, 2017) may alternatively record an initial massive sulfide precursor. The large uncertainty in this date prevents a comprehensive understanding of its geological meaning. Another response to the oceanic subduction is the granitoids with continental arc affinity. Kröner et al. (2012) reported zircon U-Pb ages of 541-562 Ma for the youngest granitoid gneisses at the Aktyuz terrane, which are comparable to that of a metadacite sample from the Anrakhai area. The Cambrian metamorphic zircon overgrowths recognized in our study may evidence this convergence. These rocks represent exotic slivers that share a similar metamorphic history with the Makbal or Kokchetav terranes. Subsequent northwards subduction of the Dzhalair-Naiman basin and its final closure led to the convergence of the Anrakhai and Aktyuz microcontinents (Kröner et al., 2012). The previous passive margin of the North Tianshan microcontinent was involved in subduction underneath the Anrakhai microcontinent. Klemd et al. (2014) suggested that the HP rocks of the Aktyuz Complex represent subducted continental crust, and that the eclogite and hosting metasedimentary rocks underwent metamorphism at variable depths of the subduction zone and were subsequently juxtaposed during exhumation in the subduction channel. Our present study, in combination with previous stratigraphic, structural, and isotopic data (Kröner et al., 2012 and references therein), constrains the timing of the collision to the Early Ordovician (most probably between 480 and 460 Ma). The Silurian (428-440 Ma) diorite and monzogranite outcropping in the Taldybulak Lev. deposit may have formed by partial melting of continental crust in a post-collisional setting (Zhao et al., 2017). They intruded into the metamorphic rocks and induced porphyry-type gold mineralization that was superimposed on earlier VMS and/or orogenic gold mineralization.
CONCLUSIONS

(1) Zircon U-Pb dating of the auriferous monzogranite porphyry yielded indistinguishable weighted mean ages of 444 ± 3 Ma (LA-ICPMS) and 440 ± 5 Ma (SIMS), consistent with the previously reported sulfide Re-Os isochron age of 434 ± 18 Ma and supporting a Silurian porphyry gold mineralization.

(2) Three wall rock samples of the Kemin Complex yielded two episodes of metamorphic ages at 514-556 and 460-483 Ma, which may represent the ages of oceanic subduction and continental collision, respectively.

(3) The detrital and xenocryst zircons (689-3,447 Ma), highly evolved εHf(t) values (−20.9 to −7.8), and old two-stage Hf model ages (1,367-3,159 Ma) reveal the presence of a Precambrian basement that may date back to the Archean Eon.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author/s.
AUTHOR CONTRIBUTIONS
WX provided initial data as part of a Ph.D thesis. NL and XX designed the project, and took the lead on writing the manuscript. XL and YW helped with the SIMS and LA-ICPMS dating. All authors contributed to the article and approved the submitted version.
"Geology"
] |
$B-L$ as a Gauged Peccei-Quinn Symmetry
The gauged Peccei-Quinn (PQ) mechanism provides a simple prescription to embed the global PQ symmetry into a gauged $U(1)$ symmetry. As it originates from the gauged PQ symmetry, the global PQ symmetry can be protected from explicit breaking by quantum gravitational effects once appropriate charge assignment is given. In this paper, we identify the gauged PQ symmetry with the ${B-L}$ symmetry, which is obviously attractive as the ${B-L}$ gauge symmetry is the most authentic extension of the Standard Model. As we will show, a natural $B-L$ charge assignment can be found in a model motivated by the seesaw mechanism in the $SU(5)$ Grand Unified Theory. As a notable feature of this model, it does not require extra $SU(5)$ singlet matter fields other than the right-handed neutrinos to cancel the self and the gravitational anomalies.
I. INTRODUCTION
The strong CP problem is longstanding and probably one of the most puzzling issues in particle physics. Although the Peccei-Quinn (PQ) mechanism [1][2][3][4] provides a successful solution to the problem, it is not very satisfactory from a theoretical point of view, as it relies on a global Peccei-Quinn U(1) symmetry. The Peccei-Quinn symmetry is required to be almost exact but explicitly broken by the QCD anomaly. Even tiny explicit breaking terms of the PQ symmetry spoil the PQ mechanism. It is, on the other hand, conceived that any global symmetries are broken by quantum gravity effects [5][6][7][8][9][10]. Thus, the PQ mechanism brings up another question, the existence of such an almost but not exact global symmetry.
In [11], a general prescription to achieve a desirable PQ symmetry is proposed in which the PQ symmetry originates from a gauged U(1) symmetry, U(1)_gPQ. The anomalies of U(1)_gPQ are canceled between the contributions from two (or more) PQ charged sectors, while the inter-sector interactions between the PQ charged sectors are highly suppressed by an appropriate U(1)_gPQ charge assignment. As a result of the separation, a global PQ symmetry exists in addition to U(1)_gPQ as an accidental symmetry. The accidental PQ symmetry is highly protected from explicit breaking by quantum gravitational effects as it originates from the gauge symmetry. The gauged PQ mechanism is a generalization of the mechanisms which achieve the PQ symmetry as an accidental symmetry resulting from (discrete) gauge symmetries [12][13][14][15][16][17][18][19][20].
In this paper, we discuss whether the B − L symmetry can play the role of the gauged PQ symmetry. The B − L gauge symmetry is the most authentic extension of the Standard Model (SM), which explains the tiny neutrino masses via the seesaw mechanism [21][22][23] (see also [24]). Therefore, the identification of the gauged PQ symmetry with B − L makes the gauged PQ mechanism more plausible. An intriguing coincidence between the right-handed neutrino mass scale appropriate for thermal leptogenesis [25] (see [26][27][28] for a review) and the PQ breaking scale which avoids astrophysical constraints also motivates this identification [29].
As will be shown, we find a natural B − L charge assignment motivated by the seesaw mechanism in the SU(5) Grand Unified Theory (GUT), with which the gauged PQ mechanism is achieved. Notably, the charge assignment we find does not require extra SU(5) singlet matter fields other than the right-handed neutrinos to cancel the [U(1)_gPQ]^3 and the gravitational anomalies.
The organization of the paper is as follows. In section II, we discuss an appropriate B − L charge assignment so that it plays the role of U(1)_gPQ. In section III, we discuss the properties of the axion and the global PQ symmetry. In section IV, we briefly discuss the domain wall problem. In section V, we discuss the supersymmetric (SUSY) extension of the model. The final section is devoted to our conclusions.

Having the SU(5) GUT in mind, it is more convenient to consider "fiveness", 5(B − L) − 4Y, instead of B − L, as it commutes with the SU(5) gauge group. The fiveness charges of the matter fields are given by

10_SM : +1 ,  5̄_SM : −3 ,  N̄_R : +5 ,

while the Higgs doublet, h, has a charge +2 (i.e., B − L = 0). Here, we use the SU(5) GUT representations for the matter fields, i.e., 10_SM = (q_L, ū_R, ē_R) and 5̄_SM = (d̄_R, L), while N̄_R denotes the right-handed neutrinos.
The seesaw mechanism is implemented by assuming that the right-handed neutrinos obtain Majorana masses from the spontaneous breaking of fiveness. In this paper, we assume that the Majorana masses are provided by the vacuum expectation value (VEV) of a gauge singlet scalar field, φ, with fiveness −10, which couples to the right-handed neutrinos,

L ⊃ y_N φ N̄_R N̄_R + h.c.

Here, y_N denotes a coupling constant, with which the Majorana mass is given by M_N = y_N ⟨φ⟩. By integrating out the right-handed neutrinos, the tiny neutrino masses are obtained,

m_ν ≃ y² ⟨h⟩² / M_N ,

where y also denotes a coupling constant.
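For orientation, the sketch below evaluates the seesaw relation for a few illustrative right-handed neutrino masses, solving for the Dirac Yukawa coupling that reproduces an atmospheric-scale neutrino mass; the numbers are back-of-the-envelope inputs, not values from this paper.

```python
import math

# Seesaw arithmetic: y such that m_nu ~ y^2 <h>^2 / M_N gives ~0.05 eV.
v = 174.0              # GeV, electroweak VEV
m_nu = 0.05e-9         # GeV (~0.05 eV)
for M_N in (1e9, 1e10, 1e12):                    # GeV, illustrative
    y = math.sqrt(m_nu * M_N) / v
    print(f"M_N = {M_N:.0e} GeV  ->  y ~ {y:.1e}")
```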
Now, let us identify the gauged PQ symmetry with B − L, i.e., fiveness. Following the general prescription of the gauged PQ mechanism in [11], let us introduce extra matter multiplets which obtain a mass from the VEV of φ,

L ⊃ y_K φ† 5_K 5̄_K + h.c. ,   (5)

with y_K being a coupling constant (the reason why the extra multiplets couple not to φ but to φ† will become clear shortly). Here, the extra multiplets (5_K, 5̄_K) are assumed to form the 5 and 5̄ representations of the SU(5) gauge group, respectively. As in the KSVZ axion model [30,31], the Ward identity of the fiveness current, j_5, obtains an anomalous contribution from the extra multiplets,

∂_μ j_5^μ ⊃ −10 × Σ_a (g_a²/32π²) F_a F̃_a .   (6)

Here, F_a (a = 1, 2, 3) are the gauge field strengths of the Standard Model, F̃_a their duals, and g_a the corresponding SM gauge coupling constants. The Lorentz indices and the gauge group representation indices are suppressed. The factor −10 corresponds to the charge of the bi-linear 5_K 5̄_K (see Eq. (5)).
In the gauged PQ mechanism, the $U(1)_{gPQ}$ gauge anomalies are canceled by a contribution from another set of PQ charged fields. For that purpose, let us also introduce 10 flavors of extra matter multiplets $(5'_K, \bar{5}'_K)$. We assume that they obtain masses from the VEV of a complex scalar field $\phi'$ whose fiveness charge is $+1$, where the charge of the bilinear $5'_K\bar{5}'_K$ is set to be $+1$. With this choice, the anomalous contributions to the Ward identity in Eq. (6) are canceled by the one from $(5'_K, \bar{5}'_K)$. The fiveness charges of the respective extra multiplets are chosen as follows. To avoid stable extra matter fields, we assume that $\bar{5}_K$ and $\bar{5}'_K$ can mix with $\bar{5}_{SM}$. As a notable feature of this charge assignment, it cancels the $[U(1)_{gPQ}]^3$ and the gravitational anomalies automatically, without introducing additional SM singlet fields.

Footnote 2: The reason why the extra multiplets couple not to $\phi$ but $\phi^*$ will become clear shortly.
The anomaly cancellation without singlet fields other than the right-handed neutrinos is a clear advantage over the previous models [11,12,32]. The singlet fields required for the anomaly cancellation tend to be rather light and long-lived, which makes the thermal history of the universe complicated [32]. The anomaly cancellation of the present model is, therefore, a very important success, as the model is partly motivated by thermal leptogenesis, which requires a high reheating temperature after inflation, i.e., $T_R \gtrsim 10^9$ GeV [26][27][28].
Under the fiveness symmetry, the interactions are restricted accordingly. Here, $\bar{5}$ collectively denotes $(\bar{5}_{SM}, \bar{5}_K, \bar{5}'_K)$, and $V(\phi, \phi', h)$ is the scalar potential. The coupling coefficients are omitted for notational simplicity. At the renormalizable level, the above Lagrangian possesses a global $U(1)$ symmetry, which is identified with the global PQ symmetry. The global PQ symmetry corresponds to a phase rotation of a gauge invariant combination, $\phi\,\phi'^{10}$, while the other fields are rotated appropriately. The global PQ charges of the individual fields, for $\{\mathrm{SM}, \bar{N}_R, 5'_K, \bar{5}\}$ and $\{5_K\}$ respectively, are generically determined by the fiveness charge $q_5$ of each field together with $Q_\phi$ and $Q_{\phi'}$, the global PQ charges of $\phi$ and $\phi'$, where $Q_\phi/Q_{\phi'} \neq -10$.
The global PQ symmetry is broken by the QCD anomaly. In fact, under the global PQ rotation with a rotation angle $\alpha_{PQ}$, the Lagrangian shifts by an anomalous term (Eq. (14)). It should be noted that the normalization factor of Eq. (14) is independent of the choice of the global PQ charge assignment for the individual fields.
Since the global PQ symmetry is just an accidental one, it is also broken explicitly by Planck-suppressed operators. However, due to the gauged fiveness symmetry, no PQ-symmetry breaking operators such as $\phi^n$ or $\phi'^n$ ($n > 0$) are allowed. As a result, the explicit breaking terms of the global PQ symmetry are highly suppressed; the lowest-dimensional one is $\phi\,\phi'^{10}/M_{PL}^7$ (plus its Hermitian conjugate), where $M_{PL} \simeq 2.44 \times 10^{18}$ GeV is the reduced Planck scale. As we will see in the next section, the breaking terms are acceptably small, not spoiling the PQ mechanism in a certain parameter space.
III. AXION AND GLOBAL PQ SYMMETRY
To see the properties of the accidental global PQ symmetry, let us decompose the axion from the would-be Goldstone boson of fiveness. Both of them originate from the phase components of $\phi$ and $\phi'$, where $f_{a,b}$ are the decay constants and we keep only the Goldstone modes, $a$ and $b$. The domains of the phase components are $2\pi$-periodic, respectively.
In terms of $\theta_{a,b}$, fiveness is realized as a shift symmetry. Here, $\alpha(x)$ denotes a gauge parameter field, with $q_a = -10$ and $q_b = +1$, $Y_\mu$ the gauge field, and $g$ the coupling constant, respectively. The gauge invariant effective Lagrangian of the Goldstone modes is written in terms of the corresponding covariant derivatives. The gauge invariant axion, $A$ ($\propto q_b\theta_a - q_a\theta_b$), and the would-be Goldstone mode, $B$, are obtained by an orthogonal rotation. In terms of $A$ and $B$, the effective Lagrangian contains the Stückelberg mass term of the gauge boson, with $m_Y$ being the gauge boson mass. Through the mass term, the would-be Goldstone mode $B$ is absorbed into $Y_\mu$ by the Higgs mechanism. The effective decay constant $F_A$ of the axion $A$ is then fixed and, given $F_A$, the domain of the gauge invariant axion is $2\pi F_A$ when $|q_a|$ and $|q_b|$ are relatively prime integers [11].
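The decomposition elided in this paragraph can be reconstructed along the lines of [11]; the normalizations below are a hedged sketch rather than the paper's verbatim expressions:

```latex
% Hedged reconstruction of the axion / would-be Goldstone decomposition.
% a = f_a theta_a and b = f_b theta_b are the canonical phase modes,
% with gauge shifts theta_{a,b} -> theta_{a,b} + q_{a,b} alpha(x).
\begin{align}
  A &= \frac{q_b f_b\, a - q_a f_a\, b}
            {\sqrt{q_a^2 f_a^2 + q_b^2 f_b^2}}\,, &
  B &= \frac{q_a f_a\, a + q_b f_b\, b}
            {\sqrt{q_a^2 f_a^2 + q_b^2 f_b^2}}\,, \\
  m_Y &= g\sqrt{q_a^2 f_a^2 + q_b^2 f_b^2}\,, &
  F_A &= \frac{f_a f_b}{\sqrt{q_a^2 f_a^2 + q_b^2 f_b^2}}\,,
\end{align}
% with the identification A ~ A + 2*pi*F_A when |q_a| and |q_b| are
% relatively prime.
```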
The global PQ symmetry defined in the previous section is realized by a shift of the axion, where $\alpha_{PQ}$ ranges from $0$ to $2\pi$. In fact, the phase of the gauge invariant combination $\phi\,\phi'^{10}$ rotates accordingly, as in Eq. (13).
After integrating out the extra multiplets, the axion obtains anomalous couplings to the SM gauge fields.

[FIG. 1: The constraint on the VEVs of $\phi$ and $\phi'$. The gray shaded region is excluded by $\Delta\theta < 10^{-10}$ for the non-SUSY model (see Eq. (31)). The orange lines are the contours of the effective decay constant $F_A$. In the blue shaded region, $\langle\phi'\rangle > \langle\phi\rangle$.]
Here, we have used the fact that the numbers of extra multiplets coupling to $\phi$ and $\phi'$ are given by $N_a = q_b = 1$ and $N_b = -q_a = 10$. By substituting Eq. (22), the anomalous coupling is reduced to a form which reproduces the axial anomaly of Eq. (14) under the shift of the axion in Eq. (27). Through this term, the axion obtains a mass from the anomalous coupling below the QCD scale, with which the QCD vacuum angle is erased.
In the presence of the explicit breaking terms in Eq. (15), the QCD vacuum angle is slightly shifted by an amount $\Delta\theta$, where $m_a$ denotes the axion mass. Such a small shift should be consistent with the current experimental upper limit on the $\theta$ angle, $\theta \lesssim 10^{-10}$ [33].
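Given the lowest-dimensional breaking operator $\phi\,\phi'^{10}/M_{PL}^7$, the size of the shift follows from dimensional analysis; a hedged order-of-magnitude sketch ($O(1)$ phases and coefficients dropped):

```latex
% Hedged estimate of the vacuum-angle shift from Planck-suppressed
% PQ breaking; dimensions check: mass^11 / (mass^7 mass^2 mass^2) = 1.
\begin{align}
  \Delta\theta \sim
    \frac{\langle\phi\rangle\,\langle\phi'\rangle^{10}}
         {M_{PL}^{7}\; m_a^{2}\, F_A^{2}}\,.
\end{align}
```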
In Fig. 1, we show the resulting constraint on the VEVs of $\phi$ and $\phi'$. It should also be noted that the "inter-sector" interactions via $\bar{5}$ do not lead to explicit breaking of the global PQ symmetry. To see this, it is most convenient to choose $Q_\phi = 0$ and $Q_{\phi'} = 1$ (see Eq. (12)), which fixes the global PQ charges of the individual fields. The non-vanishing couplings to the neutrinos can also be understood from the fact that the axion in the present model also plays the role of the Majoron [35], which is obvious in the limit $\langle\phi\rangle \ll \langle\phi'\rangle$. However, it seems very difficult to test the direct couplings between the axion and the neutrinos in laboratory experiments.

Footnote 4: Note that $\phi\,\phi'^{10}$ is the lowest-dimensional operator among all the global PQ breaking operators. In this case, no larger explicit breaking terms are generated by radiative corrections other than the anomalous breaking terms given in Eq. (30).
IV. DOMAIN WALL PROBLEM
Here, let us briefly discuss the domain wall problem and axion dark matter. As discussed in [32], the model suffers from the domain wall problem when the global PQ symmetry breaking takes place after inflation. To avoid the domain wall problem, we assume either one of the following possibilities: (i) both phase transitions, $\langle\phi\rangle \neq 0$ and $\langle\phi'\rangle \neq 0$, take place before inflation; or (ii) the phase transitions take place after inflation, in which case the string-domain wall networks collapse since the domain wall number is unity.
The latter possibility is available since the fiveness charges of $\phi$ and $\phi'$ are relatively prime, with $|q_a| : |q_b| = 10 : 1$. For the first possibility, the cosmic axion abundance is given by the misalignment contribution for an initial misalignment angle $\theta_a = O(1)$ [36]. Thus, in the allowed parameter region in Fig. 1, i.e., $F_A \lesssim 10^{10}$ GeV, the relic axion abundance is a subdominant component of dark matter. It should also be noted that the Hubble constant during inflation is required to be small enough to avoid the axion isocurvature problem (see Refs. [37,38]). For the second possibility, the cosmic axion abundance is dominated by the contribution from the decay of the string-domain wall networks [39], $\Omega_a h^2 \simeq (0.035 \pm 0.012) \times (F_A/10^{10}\,\mathrm{GeV})$. Thus, the relic axion from the string-domain wall networks can be the dominant component of dark matter at the corner of the parameter space in Fig. 1.

Footnote 5: The domain wall problem might also be solved for $\langle\phi\rangle \sim \langle\phi'\rangle$ even if both phase transitions take place after inflation. To confirm this possibility, detailed numerical simulations are required.

Footnote 6: Here, we do not assume that the axion is the dominant component of dark matter but use the axion relic abundance in Eq. (33) to derive the constraint.

To avoid symmetry restoration
after inflation, we also require that the maximum temperature during reheating [40] does not exceed $\langle\phi'\rangle$, which leads to the condition of Eq. (37). Here, we use the effective massless degrees of freedom $g_* \simeq 200$, though the condition does not depend on $g_*$ significantly.
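The elided expression for the maximum temperature presumably follows the standard perturbative-reheating estimate of [40]; a hedged reconstruction:

```latex
% Hedged sketch of the maximum temperature during reheating; the O(1)
% prefactor is omitted, and T_max < <phi'> then bounds T_R from above.
\begin{align}
  T_{\max} \sim g_*^{-1/4}
    \left(H_{\mathrm{inf}}\, M_{PL}\, T_R^{2}\right)^{1/4}
    \;<\; \langle\phi'\rangle\,.
\end{align}
```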
V. SUPERSYMMETRIC EXTENSION
The SUSY extension of the present model is straightforward. The SM matter fields, the right-handed neutrinos, and the extra multiplets are simply promoted to the corresponding supermultiplets with the same fiveness charges given in Eqs. (1) and (9). Under the fiveness symmetry, the superpotential is restricted accordingly. Here, $X$ and $Y$ are gauge singlets introduced to make $\phi$ and $\phi'$ obtain non-vanishing VEVs, and they are neutral under fiveness. The coupling coefficients are again omitted for notational simplicity.
The SUSY extension again possesses the global PQ symmetry as in the case of the non-SUSY model.
Footnote 7: More generally, the Higgs bilinear, $H_u H_d$, also couples to $X$ and $Y$. We assume that the soft masses of the Higgs doublets are positive and larger than those of the $\phi$'s and $\phi'$'s, so that the Higgs doublets do not obtain VEVs from the couplings to $X$ and $Y$. We may also restrict those couplings by some symmetry.

Footnote 8: See [32] for details of the SUSY extension of the gauged PQ mechanism.

[TABLE I: The charge assignment of the fiveness symmetry and the gauged $Z_{4R}$ symmetry. Here, we fix the $Z_{4R}$ charges of the Higgs doublets to $0$, which is motivated by the pure gravity mediation model [41]. An extra multiplet $(5_E, \bar{5}_E)$ is introduced to cancel the $Z_{4R}$-$SU(5)^2$ anomaly [42].]

In addition to fiveness, we also assume that a discrete subgroup of $U(1)_R$, $Z_{NR}$ ($N > 2$),
is an exact discrete gauge symmetry. This assumption is crucial to allow the VEV of the superpotential, and hence a supersymmetry breaking scale much smaller than the Planck scale. In the following, we take the simplest possibility, $Z_{4R}$, with the charge assignment given in Tab. I, which is free from the $Z_{4R}$-$SU(5)^2$ anomaly and the gravitational anomaly. It should be noted that the mixed anomalies of $Z_{4R}$ and fiveness do not put constraints on the charges, since they depend on the normalization of the heavy spectrum [45][46][47][48][49][50][51][52][53][54]. Under fiveness and the gauged $Z_{4R}$ symmetry, the lowest-dimensional operators which break the global PQ symmetry involve the gravitino mass, $m_{3/2}$. Compared with Eq. (15), the explicit breaking is suppressed by a factor of $(m_{3/2}/M_{PL})^2$. Accordingly, the shift of the QCD vacuum angle is suppressed correspondingly.

Footnote 9: R-symmetry is also relevant for SUSY breaking vacua to be stable [43,44].

Footnote 10: It should be noticed that there is no need to add extra $SU(5)$ singlet fields to cancel the anomalies.

Footnote 11: GUT models consistent with the $Z_{4R}$ symmetry are discussed in, e.g., [55,56].

[FIG. 2: The constraint on the VEVs of $\phi$ and $\phi'$ for the SUSY model (see Eq. (41)). The orange lines are the contours of the effective decay constant $F_A$. In the blue shaded region, $\langle\phi'\rangle > \langle\phi\rangle$. The gray shaded lower regions are excluded as the gauge coupling constants become non-perturbative below the GUT scale. The thin green region is excluded by the Axion Dark Matter eXperiment (ADMX) [57], where the dark matter density is assumed to be dominated by the relic axion.]
where we assume $\langle\phi\rangle = \langle\bar{\phi}\rangle$ and $\langle\phi'\rangle = \langle\bar{\phi}'\rangle$ for simplicity. In Fig. 2, we show the constraints on the VEVs of $\phi$ and $\phi'$ from the experimental upper limit on $\Delta\theta$. Here, we take the gravitino mass $m_{3/2} \simeq 100$ TeV, which is favored to avoid the cosmological gravitino problem for $T_R \gtrsim 10^9$ GeV [58][59][60]. For $m_{3/2} \simeq 100$ TeV, the scalar partner and the fermionic partner of the axion also do not cause cosmological problems, as they obtain masses of the order of the gravitino mass and decay rather fast [61].
In the figure, the gray shaded region is excluded by the constraint $\Delta\theta \lesssim 10^{-10}$. Due to the suppression of the breaking term in Eq. (40), higher values of $\langle\phi'\rangle$ are allowed compared with the non-SUSY model. A higher $\langle\phi'\rangle$ is advantageous to avoid symmetry restoration after inflation (see Eq. (37)), with which the domain wall problem is avoided in possibility (ii) (see section IV). Accordingly, the decay constant can also be as high as about $10^{11-12}$ GeV, which also allows the axion to be the dominant dark matter component (see Eq. (35)). Therefore, we find that the SUSY extension of the model is more successful.

Footnote 12: The following argument can be easily extended to the cases with $\langle\phi\rangle \neq \langle\bar{\phi}\rangle$ and $\langle\phi'\rangle \neq \langle\bar{\phi}'\rangle$.

Footnote 13: As in [32], we will discuss a possibility where SUSY and $B-L$ are broken simultaneously elsewhere.

It should be noted that the 11 flavors of extra multiplets at the intermediate scale make
the renormalization group running of the MSSM gauge coupling constants asymptotically non-free. Thus, their masses are bounded from below so that perturbative unification is achieved. In the figure, the gray shaded lower region shows the contour of the renormalization scale $M_*$ at which at least one of $g_{1,2,3}$ becomes $4\pi$. Here, we use the one-loop renormalization group equations, assuming that the extra quarks obtain masses of the order of $\langle\phi\rangle$ and $\langle\phi'\rangle$, respectively. The result shows that perturbative unification can easily be achieved for VEVs $\gtrsim 10^{9-10}$ GeV, even in the presence of the 11 flavors of extra multiplets.
VI. CONCLUSIONS AND DISCUSSIONS
In this paper, we consider the gauged PQ mechanism where the gauged PQ symmetry is identified with the B − L symmetry (fiveness). As the B − L gauge symmetry is the most plausible extension of the SM, the identification of the gauged PQ symmetry with B − L is very attractive. An intriguing coincidence between the B − L breaking scale appropriate for the thermal leptogenesis and the favored PQ breaking scale from the astrophysical constraints also motivates this identification.
We found a natural $B-L$ charge assignment motivated by the seesaw mechanism in the $SU(5)$ GUT, with which the gauged PQ mechanism is achieved. There, the global PQ symmetry breaking effects are suppressed by the gauged fiveness symmetry, so that the successful PQ mechanism is realized. As a notable feature, the fiveness charge assignment does not require extra $SU(5)$ singlet matter fields other than the right-handed neutrinos to cancel the $[U(1)_{gPQ}]^3$ anomaly and the gravitational anomaly. This feature is advantageous since the singlet fields required for anomaly cancellation tend to be rather light and long-lived, and hence often cause cosmological problems. As a result, we find that the gauged PQ mechanism based on the $B-L$ symmetry is successfully consistent with thermal leptogenesis.
We also discussed the SUSY extension, where the $Z_{4R}$ symmetry is also assumed. As has been shown, a larger effective decay constant is allowed in the SUSY model, as the explicit breaking of the global PQ symmetry is more suppressed.

Footnote 14: The masses of the sfermions, the heavy charged/neutral Higgs bosons, the Higgsinos, and $(5_E, \bar{5}_E)$ are at the gravitino mass scale, $m_{3/2} \simeq 100$-$1000$ TeV. The gaugino masses are, on the other hand, assumed to be at the TeV scale, as expected from anomaly mediation [62,63]. This is motivated by the pure gravity mediation model in [41] (see also Refs. [64][65][66][67] for similar models), where the Higgsino mass is generated from the R-symmetry breaking [68].

Resultantly, the upper limit on
the effective decay constant is extended (see Eq. (41)), which corresponds to the axion mass $m_a \gtrsim 1.9\,\mu\mathrm{eV}$. The dark matter axion in this mass range can be detected by the ongoing ADMX-G2 experiment [69] and the future ADMX-HF experiment [70].
In the SUSY model, it should also be noted that $Z_{4R}$ is spontaneously broken down to the $Z_{2R}$ symmetry. Thus, the lightest supersymmetric particle (LSP) in the MSSM sector also contributes to the dark matter density. Therefore, the model predicts a wide range of dark matter scenarios, from axion-dominated dark matter to LSP-dominated dark matter, which can be tested by future extensive dark matter searches.
As emphasized above, the fiveness anomalies are canceled without introducing singlet fields other than the right-handed neutrinos. Although this feature is advantageous, the charge assignment is tightly constrained, as the following argument shows. The anomaly-free charge assignment of the $5$'s is fixed in the following way. For all the $n$ flavors of $(5, \bar{5})$ to have masses at the intermediate scale, they need to couple to the order parameters of fiveness. As we assume the seesaw mechanism, we have a natural candidate for such an order parameter: a complex scalar field, $\phi$, with a fiveness charge $-10$. In order to make all the $n$ flavors of $(5, \bar{5})$ massive while achieving anomaly-free fiveness, however, it is required to introduce one more complex scalar, $\phi'$, with fiveness charge $q_{\phi'}$.
In the presence of $\phi$ and $\phi'$, the mass terms of $(5, \bar{5})$ are generated from $\mathcal{L} = \phi\,5\bar{5} + \phi^*\,5\bar{5}' + \phi'\,5\bar{5}'' + \phi'^*\,5\bar{5}'''$, where the coupling coefficients are again omitted. Here, the $\bar{5}$'s are divided into $\{\bar{5}, \bar{5}', \bar{5}'', \bar{5}'''\}$, whose fiveness charges are given by $\bar{5}(+13)$, $\bar{5}'(-7)$, $\bar{5}''(-q_{\phi'}+3)$, $\bar{5}'''(q_{\phi'}+3)$, respectively. We allocate $N_{\bar{5}}$, $N_{\bar{5}'}$, $N_{\bar{5}''}$, and $N_{\bar{5}'''}$ flavors to $\{\bar{5}, \bar{5}', \bar{5}'', \bar{5}'''\}$ with $N_{\bar{5}} + N_{\bar{5}'} + N_{\bar{5}''} + N_{\bar{5}'''} = n$. By solving the anomaly-free conditions of fiveness, we find only two sets of solutions: $N_{\bar{5}} = 0$, $N_{\bar{5}'} = 1$, $N_{\bar{5}''} = 0$, $N_{\bar{5}'''} = 10$ with $q_{\phi'} = 1$, or $N_{\bar{5}} = 7$, $N_{\bar{5}'} = 1$, $N_{\bar{5}''} = 3$, $N_{\bar{5}'''} = 0$ with $q_{\phi'} = 20$, both of which correspond to $n = 11$. The first charge assignment is nothing but the fiveness charge assignment assumed in this paper, while the latter is another possibility. In this sense, we find that the number of flavors, $n = 11$, is a unique choice within $n < 22$, and the fiveness charge assignment in this paper is one of the only two possibilities, where the second possibility is not suitable for the gauged PQ mechanism.

Footnote 16: The choice of $-3$ just defines the normalization of fiveness.

Footnote 17: Here, we restrict ourselves to $n < 22$.
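The anomaly-free conditions quoted here are simple enough to scan numerically. The sketch below is a hedged brute-force check, assuming (as the surrounding text suggests) that all the $5$'s carry fiveness $-3$ and that "anomaly free" means vanishing linear (gravitational/mixed) and cubic $[U(1)]^3$ charge sums over the new pairs; any extra tuples it finds beyond the two quoted would reflect conditions not captured in this simplification.

```python
# Brute-force scan of anomaly-free flavor allocations of the extra
# (5, 5bar) pairs for n < 22, as discussed in the text. Assumptions:
# all 5's carry fiveness -3; the 5bar classes carry +13, -7, 3-q, 3+q;
# anomaly freedom = vanishing linear and cubic charge sums; q > 0.
from itertools import product

solutions = []
for q in range(1, 41):                        # fiveness charge of phi'
    charges = [13, -7, 3 - q, 3 + q]          # the four 5bar classes
    for N in product(range(22), repeat=4):    # flavors per class
        n = sum(N)
        if not 0 < n < 22:
            continue
        linear = -3 * n + sum(Ni * c for Ni, c in zip(N, charges))
        cubic = (-3) ** 3 * n + sum(Ni * c ** 3 for Ni, c in zip(N, charges))
        if linear == 0 and cubic == 0:
            solutions.append((n, N, q))

for n, N, q in solutions:
    print(f"n = {n}, (N5, N5', N5'', N5''') = {N}, q_phi' = {q}")
# Expected from the text: (0, 1, 0, 10) with q = 1 (this paper's model)
# and (7, 1, 3, 0) with q = 20, both with n = 11.
```
| 5,898.2 | 2018-05-25T00:00:00.000 | [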
"Physics"
] |
Continuous Particle Separation in Microfluidics: Deterministic Lateral Displacement Assisted by Electric Fields
Advances in the miniaturization of microelectromechanical systems (MEMS) [...]
Advances in the miniaturization of microelectromechanical systems (MEMS) [1] are revolutionizing the possibilities of sample analysis. For example, microfluidic devices enable the manipulation of tiny amounts of liquids, minimizing the consumption of samples and reagents and paving the way to portable Lab-on-a-Chip (LOC) devices for point-of-care diagnosis [2]. Pretreatments such as separation and concentration of target analytes are an essential step before assays on biological or environmental samples [3], which usually consist of a mixture of components including particles of microscopic dimensions. Therefore, the development of efficient and reliable particle separation techniques is a major challenge in the progress of LOC technologies.
An innovative strategy for continuous microparticle sorting in microfluidics is the use of deterministic lateral displacement (DLD) devices [4][5][6]. Several recent publications demonstrate their potential for the separation of bioparticles [7][8][9]. DLD devices consist of microchannels containing an array of pillars with diameters around tens of microns or smaller. The pillars are arranged in rows that are slightly tilted with respect to the lateral channel walls. When a liquid with suspended particles flows through the array, particles bigger than a critical size ($D_c$) bump into the pillars and deviate. Repetition of this bumping results in a net lateral displacement of the particles by the time they reach the end of the channel. This is in contrast to the behavior of particles smaller than $D_c$, which zigzag around the pillars and keep moving in the direction of the fluid flow [10]. DLD devices perform binary separation mainly by size (separation based on particle deformability has also been demonstrated [11,12]). $D_c$ is determined by geometrical factors such as the pillar radius, the gap between neighboring pillars, and the tilting angle of the rows, which cannot be tuned once the microchannel is fabricated.
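Since $D_c$ is fixed by geometry, a designer can estimate it before fabrication. The sketch below uses the widely cited empirical correlation $D_c = 1.4\,g\,\varepsilon^{0.48}$ attributed to Davis; this formula is not taken from the text above, so treat it as an illustrative assumption.

```python
# Estimate the critical diameter D_c of a DLD pillar array from its
# geometry, using the empirical fit D_c = 1.4 * g * eps**0.48 (Davis),
# with g the inter-pillar gap and eps = 1/N the row-shift fraction.
# Illustrative sketch only, not a validated design tool.

def critical_diameter(gap_um: float, period: int) -> float:
    """Approximate critical particle diameter (micrometers)."""
    eps = 1.0 / period        # row-shift fraction (~ tan of tilt angle)
    return 1.4 * gap_um * eps ** 0.48

if __name__ == "__main__":
    # Hypothetical geometry: 10 um gaps, period-20 array (~2.9 deg tilt)
    print(f"Estimated D_c = {critical_diameter(10.0, 20):.2f} um")
    # Particles larger than D_c "bump" and displace laterally;
    # smaller ones zigzag and follow the flow direction.
```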
A current topic of research is the effect of electric fields on the motion of particles in aqueous suspensions within DLD devices. The first work on electric-field-assisted DLD separation was by Beech et al. [13], who inserted electrodes at the inlet and outlet of the microchannel and demonstrated that particles smaller than $D_c$ can be displaced upon application of ac electric fields with a frequency of 100 Hz. Thus, the particle deviation can be externally controlled via an electrical signal. More recent work is based on the application of an electric field perpendicular to the flow direction [14,15]. This is accomplished by integrating electrodes on the sides of the microchannel, reducing the electrode gap and, consequently, the amplitude of the applied voltages. Among the benefits of using lower voltages, we point out the possibility of increasing the frequency range of the ac signals up to hundreds of kHz; these frequencies are not achievable by standard voltage amplifiers if they are required to generate thousands of volts. Using this electrode configuration, Calero et al. [16] recently demonstrated that particle deviation at high frequencies ($f > 1$ kHz) is caused by dielectrophoresis (DEP), i.e., the movement of particles in a nonuniform electric field caused by polarization effects [17,18]. However, for low frequencies of the applied voltages, the particles undergo electrophoresis and oscillate perpendicular to the flow direction. In this case, the particles are also deflected, and, significantly, the threshold electric field magnitude for particle deviation is much lower than for high frequencies (DEP deviation). The physical mechanism behind particle separation at low frequencies has not been clarified yet. The smaller values of the applied electric fields suggest that a phenomenon different from DEP is responsible for this. Recent experimental work has shown the appearance of stationary flow vortices around the pillars for ac signals around hundreds of Hz and below [16,19,20]. Future work will focus on the effect of these flows on particle separation, not only in DLD channels but also in related problems such as insulating-DEP (iDEP) devices, where constrictions create non-homogeneous electric fields leading to dielectrophoretic trapping of particles [21,22].
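For reference, the deviation at high frequencies relies on the standard time-averaged DEP force on a spherical particle; the textbook expression below is an aside for orientation and is not reproduced from the cited works:

```latex
% Time-averaged DEP force on a sphere of radius r in a medium of
% permittivity eps_m; K(omega) is the Clausius-Mossotti factor and
% starred quantities are complex permittivities, eps* = eps - i sigma/omega.
\begin{align}
  \langle \mathbf{F}_{\mathrm{DEP}} \rangle
    &= 2\pi\varepsilon_m r^{3}\,
       \mathrm{Re}\!\left[K(\omega)\right]
       \nabla\left|\mathbf{E}_{\mathrm{rms}}\right|^{2}\,, &
  K(\omega) &= \frac{\varepsilon_p^{*}-\varepsilon_m^{*}}
                    {\varepsilon_p^{*}+2\varepsilon_m^{*}}\,.
\end{align}
```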
Funding:
The authors acknowledge financial support by ERDF and Spanish Research Agency MCI under contract PGC2018-099217-B-I00.
Conflicts of Interest:
The authors declare no conflict of interest. | 967.4 | 2021-01-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
The Effect of Human Resource Competence, Organizational Commitment, and System Quality on Individual Use of an Accrual-Based Accounting System Application at Statistics Indonesia (BPS)
According to Indonesian Government Regulation No. 71/2010 and Minister of Finance Regulation No. 270/2014, all government institutions must use an accrual-based system for their financial reports from 2015. Therefore, Indonesian government institutions need adequate human resource competence, high organizational commitment, and quality information systems to produce financial statements that are reliable, accurate, comprehensive, and relevant to decision making. This study examines the effect of human resource competence, organizational commitment, and system quality on the use of the accrual-based accounting system application at Statistics Indonesia (BPS). The population of this study was 513 SAIBA application users at BPS. Using a paper-based survey, data were gathered from 129 respondents who use the accrual-based institutional accounting system application. The data analysis technique was multiple linear regression analysis. Results showed that human resource competence and organizational commitment have a positive effect on individual use of the accrual-based accounting system application. The implications for stakeholders and further research are discussed.
INTRODUCTION
The implementation timeframe of the accrual-based Government Accounting Standard is regulated in Government Regulation Number 71 Year 2010 on the Government Accounting Standard and Minister of Finance Regulation Number 270 Year 2014 on the Application of the Accrual-Based Government Accounting Standard in the Central Government, making 2014 the last year in which the government was allowed to use the cash-towards-accrual basis of treasury accounting. In 2015, the central and regional governments had to use an accrual basis in the presentation of financial reports. Financial reports produced with the accrual-based application are intended to give more comprehensive and better information to financial report users compared with the cash-based treasury accounting used until then. This is in line with one of the accounting principles, namely full disclosure. The accrual-based accounting system is able to produce financial reports that are more accurate, comprehensive, and relevant for decision making (Mardiasmo, 2002).
The Ministry of Finance developed an integrated application to be used by national ministries and institutions in supporting the implementation of accrual-based accounting in the central government. The application development is expected to integrate the implementation and accountability processes in correspondence with the budget cycle. The Ministry of Finance developed the accounting application from the cash-based Sistem Akuntansi Instansi (SAI) (Institution Accounting System) into the Sistem Akuntansi Instansi Berbasis Akrual (SAIBA) (Accrual-Based Institution Accounting System) to be used by every national ministry and institution. This system is applied in parallel with the implementation of the integrated finance application system in corresponding phases. The SAIBA application is developed in accordance with the current government accounting system. Nevertheless, some factors are specifically regulated so that it can be implemented more easily by working units, or so that it is able to produce better information. The development is done to adapt to the accrual-based government accounting standard.
Statistics Indonesia is one of the governmental institutions that applied the SAIBA application to support the implementation and accountability processes in correspondence with the budget cycle. The SAIBA application eases the work of financial administrators compared with finishing it manually. Based on early observations by the researcher, there are still obstacles in using the SAIBA application, among others: (1) there are mistakes in the input of financial data, which means that the competence of the financial administration human resources is not yet adequate; (2) the organizational commitment of the financial administrators is low; (3) the SAIBA application software is not yet compatible with the information processes that the organization needs, because financial administrators have to wait for a software update of the SAIBA application to finish financial administration tasks. The presence of obstacles in the use of the SAIBA application causes its use to be suboptimal. This is reinforced by an interview by the researcher with a SAIBA operator at Statistics Indonesia for Bengkulu Province, who stated: "The presence of the SAIBA application actually eases us in making financial reports, but I feel difficulties in using this application because my education is not in accounting. Our working period is not yet lengthy and, in addition, we have to update the application often, so our work is not able to be finished on time." The statement of the SAIBA operator indicates that human resource competence that is not yet adequate, low organizational commitment of the SAIBA users, and incompatibility between the SAIBA application software and the financial administration process affect the use of the SAIBA application. If users experience obstacles in the use of the SAIBA application, this will impact the completion of financial administration work, so that it cannot be finished effectively, efficiently, and economically.
An information system is a system that is interconnected and has the function to gather, save, and process data, whether manually or with computer aid, to produce information that is useful in decision making (Laudon and Laudon, 2000). The government has begun to develop and give special attention to information technology as a resource that facilitates gathering, processing, saving, and using information effectively. One of the forms of government concern is the use of the computer-based accounting information system (accounting software) SAIBA, which is used by the central government to accelerate information flow.
The problem that usually occurs in the use of accounting software is the incompatibility of the system with the business and information processes that the organization needs (Janson and Subramanian, 1996; Lucas, Walton and Ginzberg, 1988). The incompatibility between accounting application software and the business process in an organization creates difficulties for users of accounting applications, and thus impacts the completion of financial administration work. When such problems arise because the business and information needs of the organization are not yet matched, a software update is needed to solve them. In 2015, the SAIBA software was updated as many as five times, beginning from SAIBA version 2.2 up to the SAIBA update version 2.8 (www.djpbn.kemenkeu.go.id). Software updates greatly help employees in their work, whereas flaws in the system, or a system that is not yet of good quality, considerably slow employees in finishing their work.
The success of an organization in maintaining its existence begins with the effort of its people, so that effectiveness and efficiency can increase maximally. In other words, the success of an organization is highly influenced by the competence of the human resources it has, in the form of knowledge, skill, and behavioral attitude. According to Wibowo (2007), competence is the ability to implement or do work or tasks based on knowledge and skill, supported by the work attitude demanded by the work. Adequate competence in using the SAIBA application helps finance administrators finish their work effectively, efficiently, and economically. Moreover, financial administrators who have a non-accounting educational background have to learn a new process for solving finance administration problems themselves.
The research implemented by Stevens and Campion (1994) shows that in performance analysis there needs to be a specification that has to be fulfilled by an employee, namely knowledge, skill, and ability. Finance administrators should have knowledge, understanding, and skill regarding the work that has to be done, so that the work they are charged with is able to be finished and presented on time.
Moreover, employees' organizational commitment is able to encourage the success of the accounting information system application in the company (Larsen, 2003). The core of organizational commitment is the attachment and loyalty of an employee to the company, which will encourage them to keep working in various situations in the company. Mathis and Jackson (2004) explain that the core of organizational commitment is the loyalty of an employee to their work. In tight labor market conditions, the relocation of an employee often occurs when the loyalty of an employee is low; because of that, loyalty and commitment are important aspects of work. The organizational commitment that an employee has is able to encourage the use of an information system and the successful application of the accounting information system in an organization.
Meanwhile, higher quality of IT will provide not only ease for finance administrators in producing trusted, relevant, and timely financial information, but also information that is able to be understood and tested, to aid the decision-making process of financial information users. The complexity of the accounting information system, the extent of accounting transactions, and the number of procedures in the accounting information system process demand that finance administrators have adequate knowledge and skill, supported by a behavioral attitude, for evaluating system trouble and taking adequate actions for solving the problem, so that it does not impact the accounting information system cycle as a whole. A small mistake in the accounting information system, such as a mistake in journaling transactions, will result in inaccuracies of the system information that is produced.
Finally, system quality is able to influence the level of system use. According to DeLone and McLean (1992), system quality means the quality of the hardware and software combination in the information system. It can be summarized that the better the system and output quality, for instance rapid time for accessing and using the system output, the more willing users will be to reuse the system. Therefore, the intensity of system use will increase. Repeated information system use indicates that users find the system useful.
Several previous studies have already examined the system quality issue. Surya, Astuti, and Susilo (2014) implemented research about the influence of knowledge, skill, and ability on the use of the human resource information system in the National Electricity Company, Malang, East Java Distribution. Their research shows that knowledge and skill influence the use of the information system and employee performance. The research implemented by Anwar (2012) and Witaliza, Kirmizi, and Agusti (2015) also found that organizational commitment and manager knowledge influenced the success of the accounting information system application. Mulyono (2009), in his research, an empirical test of a regional finance information system success model in increasing the transparency and accountability of regional finance, found that system quality influences intensity of use. Nevertheless, specific research that studies the context of information system use in government institutions is relatively limited, especially related to the accrual-based accounting system application. Therefore, this research has the purpose of studying the influence of human resource competence, organizational commitment, and system quality on SAIBA application use at Statistics Indonesia.
This paper is divided into three primary parts: literature review, research method, and discussion of research findings. The final part of this paper summarizes the research findings and suggestions for interested managers, including implications for further research.
LITERATURE REVIEW

Information System Use
Laudon and Laudon (2000) state that an information system is a set of interconnected components that function to gather, process, save, and distribute information to support decision making and supervision in an organization. A computer-based information system is a group of hardware and software designed to change data into useful information (Bodnar and Hopwood, 2000). The use of the hardware and software has the purpose of producing information rapidly and accurately. The use of the right system will be able to increase employee performance because employees no longer work manually.
According to Gelderman (1998), the success of an information system application is reflected in the intensity of use of the accounting information system in everyday work and user satisfaction with the accounting information system. Comprehensive models that can be referred to for the success dimensions of the accounting information system application are: (1) the information system success model of DeLone and McLean (1992); and (2) the hierarchical structural model of Drury and Farhoomand (1998). Besides these two models, Laudon and Laudon (2000) provide five dimensions for measuring the success of the accounting information system application: (1) a high level of system use; (2) user satisfaction with the system; (3) a favorable attitude; (4) achievement of the objectives of the information system; and (5) a financial payoff. Livari (2005) uses the concept of information system use as mandatory usage in the public sector, observed from an actual use perspective. The actual use items employed are daily use time and frequency of use. The use of an information system refers to how often the user uses the information system; the more often users use the information system, the more they usually learn about it (McGill et al., 2003). Meanwhile, DeLone and McLean (2003) measure system use as an indicator of information system success. According to Jogiyanto (2007), the concept of system use can be observed from several perspectives, namely actual use and perceived or reported use. Davis (1989) stated that perceived ease of use is the degree to which a person believes that using a certain system would be free of effort, meaning that when employees use the system, they need less time to learn it because the system is simple, not complicated, easy to understand, and familiar. Ease of use concerns not only the ease of learning and using a system but also the ease of doing a task with it. The use of such a system provides ease for employees in working compared with working manually; users regard an information system that is more flexible, easy to understand, and easy to operate as having the characteristics of ease of use.
Factors that Influence Information System Use
Organizations have to operate effectively, efficiently, and in a controlled manner through improvements in human resources, product and service quality, and the use of information technology to be able to compete at the local and national levels (Susanto, 2008). According to Romney and Steinbart (2009), the accounting information system is a part of corporate infrastructure which, together with human resources and technology, becomes a support activity in the creation of value for customers. As one of the support activities, the accounting information system has a role in providing financial information that is useful for the primary corporate activities, through improvements in: (1) the quality and cost reduction of products and services; (2) efficiency; (3) knowledge sharing; (4) the efficiency and effectiveness of the value chain; (5) the internal control structure; and (6) decision making. The accounting information system is therefore a necessity for an organization to operate effectively, efficiently, and in a controlled manner.
The success of an organization in maintaining its existence begins with its own people's efforts to increase effectiveness and efficiency optimally. In other words, organizational success is highly influenced by the quality and competitive ability that it has. Surya et al. (2014) stated that employees' knowledge and skill are factors that are able to influence information system use. Employees with high knowledge, specifically abilities in the information technology field, tend to have an adequate ability to operate a program. Employees with high skill are able to access information system programs faster, using a work method that is considered more effective and efficient. According to Anwar (2012), organizational commitment and manager knowledge are factors that influence the success of the accounting information system application, observed from user satisfaction and intensity of use (intended use). The success of the application of the accounting information system can be reached by optimally increasing organizational commitment and manager knowledge, and this has an impact on corporate financial performance.
According to Choe (1996), the success of the accounting information system application in a company is influenced by several factors, among others: (1) user involvement; (2) leadership support; (3) user training and education; (4) organizational workgroup factors; and (5) other organizational factors such as size, task characteristics, and others. Essex et al. (1998), in their research, found that the factors that influence the success of an information center in an organization are: (1) staff quality (competence, training, and knowledge) and (2) user knowledge about technology and business. Saunders and Jones (1992) stated that organizational commitment is an organizational factor that influences the success of accounting information system (AIS) use, besides AIS-related factors such as AIS integration with corporate planning, output quality, AIS operational efficiency, user/management attitude, competence of AIS implementation staff, and others.
Based on the study implemented by DeLone and McLean (1992), the success of an information system is able to be represented by: (1) the qualitative characteristics of the information system itself (system quality); (2) the quality of the output of the information system (information quality); (3) consumption of the output (use); (4) the user response to the information system (user satisfaction); (5) the influence of the information system on user habits (individual impact); and (6) its influence on organizational performance (organizational impact). From the explanation above, it can be summarized that there are many factors that are able to influence information system use. In this research, human resource competence, organizational commitment, and system quality will be used by the researcher as factors that influence information system use.
Human Resource Competence
Competence is the ability to implement or do a task or work based on skill and knowledge, supported by the work attitude demanded by the work (Wibowo, 2007). Competence, as a person's ability to be productive at a satisfactory level, also reflects the knowledge and skill characteristics that every individual has or needs, which make them capable of doing tasks and responsibilities effectively and efficiently and of raising the professional quality standard of work.
The Decision of the Head of the National Civil Service Agency Number 46A Year 2003 on the guidelines for establishing competence standards for civil servant structural positions affirms that competence is the ability and characteristics that a civil servant has in the form of the knowledge, skill, and behavioral attitude needed to carry out the tasks of his or her position, so that the civil servant is able to perform these tasks professionally, effectively, and efficiently. Employee competence is measured by knowledge, ability, and skill. A competent person is a person who has the knowledge, skill, and attitude for implementing a task or work. An employee who has sufficient knowledge will increase organizational efficiency (Robbins, 2003). Competence describes what people do at the workplace at various levels and itemizes the standards of each level; it identifies the characteristics, knowledge, and skill needed by individuals that make it possible to carry out their tasks and responsibilities effectively, so as to reach the professional quality standard of work; and it includes all notable aspects of management performance, such as skill, knowledge, attitude, communication, application, and development (Wibowo, 2007). Spencer and Spencer (1993) stated that competence is a base of personal characteristics that indicates ways of thinking and behaving, applies across situations, and endures for a long period. There are five types of competence characteristics, as follows: (1) Motive is something that a person consistently thinks about or wants and that causes action; motive pushes, directs, and selects behavior toward certain actions or purposes. (2) Traits are physical characteristics and consistent responses to situations or information; reaction speed and sharp eyes are physical trait competences of a fighter pilot. (3) Self-concept is the attitude, values, or self-image of a person; self-confidence, the belief that one is able to be effective in almost every situation, is part of a person's self-concept. (4) Knowledge is the information a person has in a specific field. (5) Skill is the ability to do certain physical or mental tasks; mental competence or cognitive skill includes analytical and conceptual thinking.
Competence is a behavioral dimension behind competent work. It is often called behavioral competence because it explains how people behave when they carry out their roles well (Armstrong and Baron, 1998). Competence is not an ability that cannot be influenced. Michael Zwell (2000) stated that there are several factors that are able to influence a person's competence, encompassing beliefs and values, skill, experience, personal characteristics, motivation, emotional issues, and intellectual ability. Competence is defined as the knowledge, expertise, ability, or personal characteristics of an individual that directly influence work performance (Brian E. Becker, Mark Huselid et al., in Sudarmanto, 2009). Competence is mastery of the tasks, skills, attitudes, and appreciation needed to support the success of a system.
From the explanations of the competence concept above, it can be summarized that competence is the knowledge, skill, and attitude underlying a person's behavior in implementing the tasks and obligations with which he or she is charged, so that finance administrators are able to carry out their tasks professionally, effectively, and efficiently. This competence model is an instrument on which organizations base human resource management, such as human resource planning. In this research, the indicators of competence that will be used in the analysis are knowledge, skill, and behavioral attitude.
Organizational Commitment
According to Mathis and Jackson (2004), organizational commitment is the level of trust in and acceptance by employees of the organization's purpose, and their willingness to stay in the organization. Organizational commitment is a relative power of individuals in identifying their involvement in parts of the organization (Mathis and Jackson, 2000). Organizational commitment is employees' membership in the organization and their readiness to exert high effort for the organization's purpose (Bashaw and Grant, 1994). In other words, organizational commitment is a form of orientation toward the organization in terms of loyalty in implementing tasks, identification with the values and purposes of the organization, and the involvement of members in making achievements.
The success of an organization is determined by the commitment of its employees to reaching the organization's purpose. Luthans (2011) mentions several definitions and measures of organizational commitment, the most frequent of which are: (1) a strong will to remain a member of a certain organization; (2) the readiness to direct all one's abilities toward organizational purposes; and (3) self-confidence in, and strong acceptance of, organizational purposes and values.
In other words, this concerns the attitude that reflects employee loyalty to the organization, and the sustainable process through which organization members express concern for the organization and its continued success and progress. The success of an organization is determined by the commitment of its employees to reaching the organization's targets or purposes. An employee who has low commitment will, of course, produce results that are not optimal for the organization.
Mowday et al. (1979) and Allen and Meyer (1993) stated that there are three aspects of organizational commitment: (1) Affective commitment is commitment related to the presence of a desire to be bonded to the organization: how far a person has an emotional relationship, self-identification, and a feeling of involvement in the organization, staying in the organization by their own will. (2) Continuance commitment is commitment based on rational needs; in other words, this commitment is formed based on the costs that would arise from leaving the organization and considerations of what would have to be sacrificed by staying. (3) Normative commitment is commitment based on the moral obligations within employees themselves: an individual's conviction of responsibility to the organization and the feeling of having to stay out of loyalty. Allen and Meyer (1990) stated that every component of organizational commitment has a different basis. Employees with a high affective component join the organization because of the desire to remain members of the organization. Meanwhile, employees with a high continuance component remain members of the organization because they need the organization. Employees with a high normative component stay in the organization because they feel they have to. Based on these components of organizational commitment, every employee has a different basic behavior: an employee whose organizational commitment has an affective basis behaves differently from an employee with continuance commitment. Meanwhile, the normative component, as the result of socialization experiences, depends on how far the employee feels a sense of obligation; it induces a feeling of obligation in the employee to respond to what has been received from the organization.
System Quality
In the information system success model (DeLone and McLean, 2003), system quality is a technical measure of success, information quality is a semantic measure of success, and user satisfaction, which describes individual and organizational influence, is a measure of effectiveness. This information system success model has six dimensions, among others: system quality, information quality, user satisfaction, use intensity, individual impact, and organizational impact. System quality shows the product quality of the information system application. System quality determines the attitude of its user as the receiver of the information. The higher the system quality, the higher the level of user satisfaction and use.
Accounting software is expected to ease finance administrators in implementing their tasks. The problem that usually occurs in using accounting software is the incompatibility of software features with the business and information processes the organization needs (Janson and Subramanian, 1996; Lucas et al., 1988). The incompatibilities that occur include software problems, system interfacing issues, and difficulties with hardware, which can cause significant problems for the user. Technical difficulties that interfere with the software, interfacing problems in the system, and difficulties with the hardware make users reluctant to use it.
Indicators used to measure system quality include: ease of use, response time, reliability, flexibility, and security. Research using the usefulness and ease-of-use variables to measure information system success has already been implemented by McGill, Hobbs, and Klobas (2003). They stated that quality indicators of the information system produced are: (1) the accounting software is able to increase data processing capacity significantly; (2) the accounting software is able to be used on other computers; (3) the accounting software is able to be used in an organizational environment without further modification; (4) the accounting software has a security system; (5) there are facilities for data correction (a help function) in the accounting software; (6) errors are easy to identify and correct; (7) every part of the system accommodates information; (8) the accounting software is easy to use; (9) the accounting software is easy to learn; and (10) the accounting software is able to be used in all organizations. The indicators used for measuring system quality have been developed by several researchers. Swanson (1974) mentioned that the items used to measure information system quality are: reliability of the computer system, online response time, and ease of terminal use. Emery (1971) uses a system characteristics concept for measuring information system quality that covers: content of the database, aggregation of details, human factors, response time, and system accuracy. Hamilton and Chervany (1981), in measuring information system quality, use: data currency, response time, turnaround time, data accuracy, reliability, completeness, system flexibility, and ease of use.
SAIBA (Sistem Akuntansi Instansi Berbasis Akrual) (Accrual-Based Institution Accounting System) Application Program
SAIBA (Sistem Akuntansi Instansi Berbasis Akrual, Accrual-Based Institution Accounting System) is an accounting information system application based on computer or web technology. In terms of implementing accrual-based accounting in the central government, the Ministry of Finance developed an integrated application to be used in national institutions and ministries. The development of the application is expected to integrate the implementation and accountability processes in correspondence with the budget cycle. This accounting information system has the purpose of motivating people to face change, and it produces quality accrual-based finance reports in support of good governance.
The SAIBA application was developed from the cash-based SAI application. The development of the SAIBA application is guided by SAP (the government accounting standard), accounting policies, accounting systems, and the standard chart of accounts. The SAIBA application uses an accounting information system that is integrated with the National Budget and Treasury System, the Work Unit Application System, and the National Goods Accounting System. Regulations in accounting standards are followed in correspondence with the determined accounting policies, and the process corresponds with the Accounting System of the Central Government by using the standard chart of accounts. The SAIBA business process can be seen in Figure 1. Source documents consist of internal and external documents. Internal documents are made by accounting entities (working units) for recording data sourced from internal or external parties. Source documents in SAIBA are mostly no different from the accounting documents used under the cash-towards-accrual (CTA) basis, such as DIPA Petikan Satker, Revisi DIPA, SPM/SP2D, non-tax deposit letters, and accrual-based accounting deposit letters. When data are processed by the SAIBA application, the journal is posted to the ledger and recapitulated in the financial report. The data processed cover: the early balance deposit data or realization data of the previous period; budget data, consisting of the income budget and spending budget allocated to every working unit; current year transactions, covering treasury income and spending; accrual transactions, covering accrual income and expense; and other transactions. Other transactions include financial transactions that do not influence expense and income, such as the reclassification of items in the balance sheet.
The SAIBA application produces output in the form of: the Budget Realization Report; the Operational Report; the Balance Sheet; the Equity Change Report; and the Notes to the Financial Report. The output produced by the SAIBA application is expected to constitute a quality financial report (relevant, reliable, comparable, and understandable for financial report users).
RESEARCH METHOD
This research is quantitative research with a survey method. The variables measured in this research are human resource competence, organizational commitment, and system quality as independent variables, and SAIBA application use as the dependent variable. All variables are measured with five-point Likert scales from strongly disagree to strongly agree. The research population is the SAIBA application users at Statistics Indonesia in all regional offices in Indonesia, 513 persons in total, all of whom were chosen as research samples. The data were gathered by email questionnaire during May-June 2016, with a 25 percent response rate. The data analysis technique is multiple linear regression, preceded by construct validity and reliability tests.
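As a sketch of the analysis pipeline, the regression step can be reproduced with standard tooling; the file and column names below are hypothetical placeholders rather than the study's actual variable labels, and each column is assumed to be a composite score averaged from the validated Likert items.

```python
# Minimal sketch of the study's multiple linear regression:
# SAIBA use ~ HR competence + organizational commitment + system quality.
# File and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("saiba_survey.csv")   # one composite score per construct

X = sm.add_constant(df[["competence", "commitment", "system_quality"]])
y = df["saiba_use"]

model = sm.OLS(y, X).fit()
print(model.summary())                 # coefficients, t-statistics, R^2
```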
ANALYSIS AND DISCUSSION
The respondents in this research are 129 SAIBA administrators at Statistics Indonesia. Respondent characteristics are based on demographic characteristics, namely sex, age, field of study, working period, position, working unit, office level, and SAIBA administrator experience, as generally explained in Table 1. Table 1 shows that the respondent demographics in this research are 52 male respondents (40.3%) and 77 female respondents (59.7%). Based on age, most of the respondents are in the 23-35 year old group (48.1%). This indicates that SAIBA operators are in a mature and productive period, so they are expected to have high mobility in implementing their tasks as SAIBA operators.
Most SAIBA operators hold a Bachelor's degree (51.2%). This indicates that respondents already have an adequate education level, which makes it possible for them to finish their work well. An adequate education level is essential for SAIBA operators, because education shapes the knowledge, skill, and attitude needed to finish tasks and to understand the accounting information system and the government accounting standard.
In contrast, by field of study, 86.8 percent of SAIBA operators have a non-accounting background. However, most operators are senior in terms of service (82 percent have a working period of 5-10 years). Employees are therefore expected to have the high organizational commitment and experience that support their use of SAIBA, even though most are still junior as SAIBA operators.
Multiple Linear Regression Analysis
This research uses multiple linear regression analysis to examine the influence of human resource competence, organizational commitment, and system quality on SAIBA application use at Statistics Indonesia. Human resource competence has a positive influence on SAIBA application use: the higher the level of employee competence, the higher the level of SAIBA use at Statistics Indonesia. This also indicates that the success of SAIBA is determined by adequate operator competence, supported by proper education and experience in using SAIBA.
Based on the respondent characteristics, most SAIBA operators at Statistics Indonesia have non-accounting backgrounds and less than 3 years of experience with SAIBA. Operator competence should therefore be improved continuously through training or socialization. Competent human resources, supported by an accounting education background and experience in the field, drive success in using the accounting information system.
The complexity of the SAIBA application, the breadth of accrual-based accounting, and the many procedures in the institutional accrual-based accounting process, from the occurrence of a transaction until a financial report is produced, demand that SAIBA operators at Statistics Indonesia have adequate competence and experience. This is consistent with Shaberwal et al. (2006), who stated that the complexity of the accounting information system process demands personal experience with, and training in, the accounting information system, both of which are constructs that determine the success of its application.
SAIBA operators at Statistics Indonesia must have adequate competence in the form of knowledge of accounting information systems and accrual-based accounting; skill in using hardware and software; and a responsible attitude and integrity toward their work, so that they can evaluate system trouble and take the right action to solve problems that appear. This is expected to minimize errors in the financial statement process. Surya et al. (2014) stated that knowledge and skill have an important influence on system use. Employees with a high level of specific knowledge and skill in information technology tend to have adequate ability to operate a program and can increase their work productivity.

Organizational commitment has a positive influence on SAIBA application use at Statistics Indonesia. This indicates that the higher the level of employee commitment, the higher the level of operators' use of the SAIBA application.
Most SAIBA operators at Statistics Indonesia have a working period of 5-10 years (63.6%). This shows that the working period of SAIBA operators is long enough that their organizational commitment can be considered adequately high. By age, most respondents are in the 25-35 year range (48.1%), which shows that SAIBA operators are of a mature and productive age. At such an age, SAIBA operators have high mobility in doing their tasks and are able to cooperate with their colleagues and superiors.
Employee organizational commitment is a factor that drives the success of the accounting information system application in an organization, because the core of organizational commitment is an employee's loyalty to the occupation. A higher level of employee organizational commitment produces more optimal output for the organization. Therefore, increasing organizational commitment is a critical issue for SAIBA application use. Larsen (2003) stated that organizational commitment is the primary factor influencing the success of the accounting information system application. The results of this research are also supported by Anwar (2012), who found that success of the accounting information system application is reached through a high level of employee organizational commitment.
Based on the respondent profile, operators' organizational commitment is fairly high. This indicates that SAIBA operators already have an emotional connection with the organization, a willingness to stay, and a sense of obligation to the organization. SAIBA operators feel emotionally attached to Statistics Indonesia because they feel they belong to and are part of Statistics Indonesia as the place where they work and socialize with colleagues and superiors.
SAIBA operators perceive Statistics Indonesia as their workplace and source of income; if an operator were to leave Statistics Indonesia, it would be hard to choose another organization. SAIBA operators also perceive that they have obligations and responsibilities to Statistics Indonesia. These affective, continuance, and normative commitments are the forms of employee organizational commitment (Allen and Meyer, 1993) found at Statistics Indonesia. Based on this explanation, it can be concluded that organizational commitment influences SAIBA application use at Statistics Indonesia. SAIBA application use will be optimal if SAIBA operators have a high level of organizational commitment, which in the end supports employees in reaching the organization's goals.
CONCLUSION
The success of an accounting information system application is influenced by individual and system factors. The individual factor covers employee motivation and satisfaction in using the accounting information system. According to Romney and Steinbart (2009), an accounting information system is a system that gathers, records, stores, and processes data to produce information for decision making. A good accounting information system will produce a good financial report.
The complexity of the accounting information system, the breadth of the accounting cycle, and the number of procedures in the accounting process, from the occurrence of a transaction until a financial report is produced, demand that SAIBA operators have adequate competence and high organizational commitment to evaluate system errors and solve problems in SAIBA application use.
The research results show that human resource competence influences SAIBA application use. The competence of SAIBA operators therefore has to be improved continuously, because most SAIBA operators have a non-accounting education background and are not yet experienced enough in SAIBA operation. It is thus important to increase SAIBA operator competence through more intensive training or socialization on the accounting cycle, accrual-based accounting, financial reporting, and accounting administration. An accounting education background is more suitable for SAIBA operators.
Furthermore, Statistics Indonesia can increase SAIBA operators' organizational commitment by involving employees in every activity, and by strengthening employees' sense of membership through designing and implementing Standard Operating Procedures (SOPs) and a merit system.
Moreover, the use of the SAIBA application at Statistics Indonesia is not yet fully optimal. Proper use of the SAIBA application can increase the performance of SAIBA operators, because employees no longer work manually. Statistics Indonesia should therefore provide adequate facilities in the form of hardware and software. The purpose of the hardware and software is to produce trusted, relevant, and timely information that helps the decision making of financial information users, and to produce a quality accrual-based financial report in support of good governance.
Finally, this research still has limitations that call for improvement and development in future research, as follows. a. The respondents in this research are only SAIBA operators at a certain point in time, namely when the survey was implemented, so future research should also examine financial administrators such as the expenditure treasurer and the
Figure 1
Figure 1. SAIBA Business Process (Source: SAIBA Business Process Module). Based on Figure 1, SAIBA begins with recording documents, manually or electronically, to form transaction journals. Accounting documents are the inputs to the accounting process; the source documents consist of internal and external documents, as described above.
Table 1
Respondent Characteristics
Influence of Human Resource Competence on SAIBA Application Use
Table 2 presents the output of the statistical examination. Based on Table 2, human resource competence and organizational commitment influence SAIBA application use at Statistics Indonesia. The following sections explain the research findings.
Table 2
Results of Multiple Linear Regression Analysis

Employees with a high level of skill in operating software are considered to have a faster, more effective, and more efficient work method. An employee with higher expertise is able to increase work efficiency, accelerate the flow of information, and minimize task errors. SAIBA operators at Statistics Indonesia produce financial reports in accordance with Government Regulation Number 71 of 2010 on the Government Accounting Standard, consisting of the Budget Realization Report; Report of Changes in the Excess Budget Balance; Balance Sheet; Operational Report; Cash Flow Report; Equity Change Report; and Notes to the Financial Report. The whole process of producing the financial report needs competent SAIBA operators. The adequate competence of SAIBA operators at Statistics Indonesia helps them finish their work and tasks professionally, effectively, and efficiently, so that a quality financial report is produced. Based on this explanation, it can be concluded that competence is one of the important factors in increasing SAIBA application use. | 9,121 | 2017-08-10T00:00:00.000 | [
"Business",
"Computer Science",
"Economics"
] |
Comparison of Representations of Named Entities for Document Classification
We explore representations for multi-word names in text classification tasks, on Reuters (RCV1) topic and sector classification. We find that: the best way to treat names is to split them into tokens and use each token as a separate feature; NEs have more impact on sector classification than topic classification; replacing NEs with entity types is not an effective strategy; representing tokens by different embeddings for proper names vs. common nouns does not improve results. We highlight the improvements over state-of-the-art results that our CNN models yield.
Introduction
This paper addresses large-scale multi-class text classification tasks: categorizing articles in the Reuters news corpus (RCV1) according to topic and to industry sectors. A topic is a broad news category, e.g., "Economics," "Sport," "Health." A sector defines a narrower business area, e.g., "Banking," "Telecommunications," "Insurance." We use convolutional neural networks (CNNs), which take word embeddings as input. Typically word embeddings are built by treating a corpus as a sequence of tokens, where named entities (NEs) receive no special treatment. Yet NEs may be important features in some classification tasks: companies, e.g., are often linked to particular industry sectors, and certain industries are linked to locations. Thus company and location names may be important features for sector classification.
RCV1 is much smaller than corpora typically used to build word embeddings. Thus we utilize external resources: a corpus of approximately 10 million business news articles, collected using the PULS news monitoring system (Pivovarova et al., 2013). While nominally RCV1 contains general news, it is skewed toward business; many of the topic labels are business-related ("Markets", "Commodities", "Share Capital," etc.). Thus, we expect our business corpus to help in learning features for the Reuters classification tasks.
We compare several NE representations to find the most suitable name features for each task. We use the PULS NER system (Grishman et al., 2003; Huttunen et al., 2002a,b) to find NEs and their types (company, location, person, etc.). We compare various representations of NEs by building embeddings and training CNNs to find the best representation. We also compare building embeddings on the RCV1 corpus vs. using much larger external corpora.
Data and Prior Work
RCV1 (Lewis et al., 2004) is a corpus of about 800K Reuters articles from 1996-1997 with manually assigned sector and topic labels. Both classifications are multi-label: each document may have zero or more labels. While all documents have topic labels, only 350K have sector labels.
While RCV1 appears frequently in published research, few authors tackle the full-scale classification problem. Typically they use subsets of the data: (Daniely et al., 2017; Duchi et al., 2011) use only the four most general topic labels; (Dredze et al., 2008) use 6 sector categories to explore binary classification; (Daniels and Metaxas, 2017) use a subset of 6K articles. Even when the entire dataset is used, the training-test split varies across papers, because the "original" split (Lewis et al., 2004) is impractical for most purposes: 23K instances for training, and 780K for testing.
Another problem that complicates comparison is the lack of consistency in the evaluation metrics used to assess classifier performance. The most common measures for multi-class classification are macro- and micro-averaged F-measure, which we use in this paper. However, others use different metrics. For example, (Liu et al., 2017) use precision and cumulative gain at top K, measures adopted from information retrieval. This is not comparable with other work, because these metrics are used not only to report results, but also to optimize the algorithms during training. The notion of the best classifier differs depending on which evaluation measure is used. Thus, although RCV1 is frequently used, we find few papers directly comparable to our research, in the sense that they use the entire RCV1 dataset and report micro- and macro-averaged F-measure.
To the best of our knowledge, our previous work (Du et al., 2015) was the only study of the utility of NEs for RCV1 classification. We demonstrated that using a combination of keyword-based and NE-based classifiers works better than either classifier alone. In that paper we applied a rule-based approach for NEs, and did not use NEs as features for machine learning.
Model
The architecture of our CNN is shown in Figure 1. The inputs are fed into the network as zero-padded text fragments of fixed size, with each word represented by a fixed-dimensional embedding vector. The inputs are fed into a layer of convolutional filters with multiple widths, optionally followed by deeper convolutional layers. The results of the last convolutional layer are max-pooled, producing a vector with one scalar per filter. This is fed into a fully-connected layer with dropout regularization, with one sigmoid node in the output layer for each of the class labels. For each class label, a cross-entropy loss is computed. Losses are averaged across labels, and the gradient of the loss is backpropagated to update the weights. This is similar to the model (Kim, 2014) used for sentiment analysis. The key differences are that our model uses an arbitrary number of convolutions, and that we use sigmoid rather than softmax on output, since the labels are not mutually exclusive.
To train the model we used a random split: 80% of the data for training, 10% for the development set, and 10% for the test set. The development set is used to determine when to stop training, and to tune a set of optimal thresholds {θ_i} for each label i: if the output probability p_i is higher than θ_i, the label is assigned to the instance, otherwise it is not. To find the optimal threshold, we optimize the F-measure for each label. The test set is used to obtain the final reported performance scores.
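A sketch of this per-label threshold search follows, assuming development-set probabilities and gold labels are available as arrays; the paper does not specify its search procedure, so the simple grid search here is our assumption.

```python
# Per-label threshold tuning on the development set: for each label,
# pick the threshold that maximizes that label's F1. The grid is ours.
import numpy as np
from sklearn.metrics import f1_score

def tune_thresholds(probs, y_true, grid=np.linspace(0.05, 0.95, 19)):
    # probs, y_true: (n_docs, n_labels) arrays from the development set
    thresholds = np.empty(probs.shape[1])
    for i in range(probs.shape[1]):
        scores = [f1_score(y_true[:, i], probs[:, i] > t) for t in grid]
        thresholds[i] = grid[int(np.argmax(scores))]
    return thresholds  # assign label i when p_i > thresholds[i]
```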
Our focus in this paper is data representation; thus we defer the tuning of hyper-parameters to future work. All experiments use the same network structure: 3 convolution layers with filter sizes {3,7,11}, {3,7,11}, and {3,11}, with 512, 256, and 256 filters of each size, respectively. The runs differ only in the input embeddings they use.
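The following Keras sketch illustrates one plausible reading of this architecture; the exact wiring between the stacked multi-width layers, the dropout rate, and the size constants are our assumptions, not the authors' released code.

```python
# Multi-width CNN for multi-label text classification, per the description
# above; VOCAB, MAX_LEN, N_LABELS, and the dropout rate are assumed values.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, EMB_DIM, MAX_LEN, N_LABELS = 100_000, 200, 500, 350

def multi_width_conv(x, widths, filters):
    # Parallel convolutions of several widths, concatenated channel-wise.
    return layers.Concatenate()(
        [layers.Conv1D(filters, w, padding="same", activation="relu")(x)
         for w in widths])

inp = layers.Input(shape=(MAX_LEN,))
x = layers.Embedding(VOCAB, EMB_DIM)(inp)   # tuned during training
x = multi_width_conv(x, [3, 7, 11], 512)
x = multi_width_conv(x, [3, 7, 11], 256)
x = multi_width_conv(x, [3, 11], 256)
x = layers.GlobalMaxPooling1D()(x)          # one scalar per filter
x = layers.Dropout(0.5)(x)
out = layers.Dense(N_LABELS, activation="sigmoid")(x)  # non-exclusive labels

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy")
```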
Data Representation
We train the embeddings using GloVe (Pennington et al., 2014). As features we use lower-cased lemmas of all words. The rationale for this is that our corpora are relatively small, so the data are sparse and not sufficient to build embeddings from surface forms. We tune the embeddings while training the CNN, updating them at each iteration.
We explore several name representations, using our NER system:
• type: each entity is represented by a special token denoting its type (C-company, C-person, C-location, etc., and C-name if the type is not determined). The model learns one embedding for each of these tokens.
• name: each name gets its own embedding; multi-word names are treated as a single token.
• split-name: multi-word names are split into tokens, and each token has its own embedding. The motivation is that some company names contain informative parts (e.g., Air Baltic, Delta Airlines) which may indicate that these companies operate in the same field; these name parts may be more useful than the name as a whole.
• split-name+common: similar to the above, but tokens inside names and in common context are distinguished. The motivation is that some words may be used in names without any relation to the company's line of business (e.g., Apple, Blackberry), and their usage inside names should not be mixed with their usage as common nouns.
In the experiments, we build GloVe embeddings from two corpora: RCV1 only, and RCV1 plus our external corpus. For comparison, we also use 200-dimensional embeddings trained on a 6-billion-token general corpus (glove-6B), provided by the GloVe project. This corresponds to our split-name representation mode. A sketch of these four representations is shown below.
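To make the four modes concrete, here is an illustrative preprocessing function; the NER output format (token-index spans with a type string) is our assumption.

```python
# Tokenize one sentence under the four NE representations compared above.
def represent(tokens, entities, mode):
    # entities: list of (start, end, type) token spans from an NER system
    out, i = [], 0
    spans = {s: (e, t) for s, e, t in entities}
    while i < len(tokens):
        if i in spans:
            end, etype = spans[i]
            name = tokens[i:end]
            if mode == "type":
                out.append(f"C-{etype}")             # e.g., C-company
            elif mode == "name":
                out.append("_".join(name))           # whole name, one token
            elif mode == "split-name":
                out.extend(name)                     # shared with common nouns
            elif mode == "split-name+common":
                out.extend(f"{w}_NE" for w in name)  # separate NE embeddings
            i = end
        else:
            out.append(tokens[i])
            i += 1
    return out

print(represent(["delta", "airlines", "reported", "profits"],
                [(0, 2, "company")], "split-name+common"))
# -> ['delta_NE', 'airlines_NE', 'reported', 'profits']
```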
To illustrate the effect of the different token representations, Table 1 shows ten words nearest to the sample lemmas: apple and airline. When the name representation is used, the token apple is ambiguous: its nearest neighbors are both fruit words (pear) and computer words (apple computer). In the type representation, the "computer" meaning disappears, since all mentions of Apple as a company are represented by the special token C-company. When using glove-6B, the fruit meaning is absent, and all neighbors are computer-related words. The token airline does not exhibit such ambiguity, and all representations produce similar nearest neighbors. In the split-name+common representation mode, each lemma may produce two vectors, one for a common noun and one for a proper noun (inside a name). As the table shows, apple as a common noun has a clear "fruit" meaning; the one company appearing among the neighbors is a juice producer, Odwalla. The nearest neighbors for apple NE, in name context, include IT companies. The tokens airline and airline NE have no clear semantic distinction, with similar nearest neighbors. In such cases there is no clear advantage in using two embeddings rather than one.
We test all of the above name representations experimentally to determine which is most useful in the document classification tasks.
Results and Discussion
Experimental results are presented in Tables 2 and 3. We compare our results with those found in related work, described in Section 2, focusing on micro- and macro-averaged F-measure (µ-F1 and M-F1, respectively). The experimental settings differ in the various papers, which makes precise comparison difficult. For example, several previous papers use the "standard split" (proposed in (Lewis et al., 2004)), which contains only 23K training instances, which is not sufficient for learning word embeddings.

Table 2: Prior results on Sector Classification.
Algorithm (prior) | M-F1 | µ-F1
SVM (Lewis et al., 2004) | 29.7 | 51.3
SVM (Zhuang et al., 2005) | 30.1 | 52.0
Naive Bayes (Puurula, 2012) | - | 70.5
Bloom Filters (Cisse et al., 2013) | 47.8 | 72.4
SVM + NEs (Du et al., 2015) | 57 |

Compared to the reported state-of-the-art results on Sector Classification (Table 2), our best model yields a 10% gain in µ-F1 (Cisse et al., 2013) and a 6% gain in M-F1 (Du et al., 2015). The best µ-F1 and M-F1 results are obtained by the same model. On Topic Classification (Table 3), our µ-F1 results show a modest improvement of 0.5% in F-measure, or a 3.5% (averaged) error reduction, over the state of the art (Johnson and Zhang, 2015). As seen in Table 2, the best data representation for Sector Classification is split-name, where each token has the same embedding regardless of whether it is used in a proper-name or a common-noun context. The worst performing name representation is type, where names are mapped to special "concepts" (C-company, C-person, etc.), and each concept has its own embedding. This indicates the importance of the tokens inside the named entities for Sector Classification, and supports the notion that company names mentioned in text correlate with sector labels.
Results for Topic Classification are in Table 3. The best data representation is again split-name, though the difference between representations is less pronounced than in the case of Sector Classification, and using type does not lead to a significant drop in model performance. This suggests that proper names are less important for Topic (event) classification, and supports the intuition that entity names (e.g., companies) are less correlated with the types of events in which the entities participate in business news. However, there may be correlations between industry sectors and topics/events: e.g., mining or petroleum companies rarely launch new products. This may explain why the split-name representation appears to be better for Topic Classification as well. One possible next step is to build CNNs that jointly model Topics and Sectors; we plan to explore this in future work.

Table 3: Prior results on Topic Classification.
Algorithm (prior) | M-F1 | µ-F1
SVM (Lewis et al., 2004) | 61.9 | 81.6
ANN (Nam et al., 2014) | 69.2 | 85.3
CNN (Johnson and Zhang, 2015) | 67. |
Surprisingly, using external corpora did not improve the models' performance, as indicated by both Sector and Topic results (Tables 2 and 3, respectively). This may mean that the genre and the time period of the news corpus are more relevant for building embeddings than the size of the corpora. However, other factors may contribute as well, e.g., our hyper-parameter combination may not be optimal for these embeddings. Nevertheless, the results follow the same pattern: the best name representation is split-name and the difference between representations is more pronounced for Sector than for Topic classification.
In conclusion, our contribution is two-fold. On one classic large-scale classification task, sectors, our proposed CNNs yield substantial improvements over the state of the art; on topics, a modest improvement in µ-F1. Further, to the best of our knowledge, this is the first attempt at a systematic comparison of NE representations for text classification. More effective ways of representing NEs should be explored in future work, given their importance for the classification tasks, as demonstrated by the experiments we present in this paper. | 2,917.6 | 2018-07-01T00:00:00.000 | [
"Computer Science"
] |
Active Fault Tolerance Control Based on Consistent Matrix for Multimotor Synchronous System
This paper presents an active fault-tolerant method to mitigate sensor failures in multimotor synchronous control. First, inspired by the construction of the coupling matrix in complex network synchronous output, a consistent matrix is designed based on structural redundancy in synchronous control. This consistent matrix has two advantages: one is that it can reflect different sensor output similarities and the other one is that it can detect, locate, and estimate the sensor fault. Then, the fault information is integrated into the design of tolerance control with an improved mean feedback mechanism. The proposed method is suitable for both single and multiple fault situations, and its effectiveness is finally verified by both MATLAB simulation and the ABB semiphysical experimental platform.
Introduction
Due to their great load-driving capability and more flexible motion modes, multimotor synchronously driven systems have been widely applied in numerous industrial domains, including robotics, paper making, and belt conveyors [1][2][3][4]. The objective of synchronous control is to ensure synchronization of speed or displacement between different motors under different loads or disturbances [5][6][7]. In multimotor synchronous control systems, the sensors used to measure speed or position easily malfunction due to ageing, collision, or electromagnetic interference. Once a fault occurs, the synchronization control performance is seriously affected and disastrous consequences may follow. Therefore, identifying such faults and taking effective measures in time is of great significance for ensuring system safety, reliability, and product quality [5,6]. The development of fault detection and diagnosis (FDD) and fault-tolerant control (FTC) based on analytical redundancy provides strong support for ensuring the security of the system. Many researchers have studied FTC for sensor faults in motion control systems for decades [8][9][10][11][12][13][14][15]. These methods can be roughly divided into two categories, namely, mathematical model-based and data-driven [9]. The core of the mathematical model-based methods is the design of various types of observers. For example, Mao et al. [10] adopted active disturbance rejection control (ADRC) and an extended state observer (ESO) to estimate the speed in a current loop, which realized FTC for a speed sensor fault and improved multimotor synchronization accuracy. Najafabadi et al. [11] solved the problem of diagnosing and isolating three sensor faults, for current, voltage, and speed, in induction motors by designing an adaptive current observer for rotor resistance estimation. For a speed sensor failure in an induction motor, Marino et al. [12] designed an adaptive observer to detect sensor faults online, and fault tolerance was realized based on indirect field orientation control. In recent years, to address the difficulty of modelling, the data-driven method has gradually become a new research hot spot and has received extensive attention [13][14][15]. Based on the large amount of online and offline monitoring data existing in the system, this method characterizes the normal and fault modes of the system from the useful information hidden in the data, using data mining and processing technologies. Consequently, it has been recognized as a practical diagnostic technology. As an application, by collecting line voltages and performing a fast Fourier transform, a data-driven method using a two-layer Bayesian network was proposed in [13] to obtain fault characteristics; as a result, fault diagnosis of an inverter in a PMSM was realized.
Reviewing the existing results in the literature, most are directed at single-motor drives. Although some of them are also applicable to multimotor synchronous driving modes, they still have limitations, since the presented algorithms are complicated and difficult to implement. To overcome such limitations, in this paper, after exploring the characteristics and shortcomings of mathematical model-based and data-driven methods, we devise a combined method based on a complex network. The complex network model has been widely used in many fields, such as power grids and aerospace [16][17][18][19]. A typical complex network system consists of a number of subsystems that are usually coupled with each other; a multimotor synchronous control system is exactly such a complex network. The consistency study of complex networks, that is, complex network synchronization control, is an important research topic in the field of complex networks.
Redundancy is the basis of fault diagnosis and fault tolerance. In this paper, inspired by the construction of the coupling matrix in complex network systems, a consistent matrix is designed to characterize the similarities of the output data of different sensors, using the structural and information redundancy in multimotor synchronous control systems. Based on online analysis and judgment of the matrix elements and eigenvalues, the detection, location, and estimation of faulty sensors are realized. On this basis, a novel improved mean feedback strategy using structural redundancy and fault information is presented to achieve fault tolerance. It is shown through simulation and experiment that system security and reliability are greatly improved.
Deviation-Coupled Synchronous Control Structure
The research on multimotor synchronous control strategies mainly covers two aspects: the synchronous control structure and the synchronous control algorithm. In terms of the control structure, the main options are serial master-slave control [20], virtual spindle control [21], cross-coupling control [22,23], and deviation-coupling control [24]. Among them, deviation coupling adopts a compensation control strategy, and its overall effect is more positive than the others in terms of starting characteristics, disturbance suppression, applicable range, and convenience of engineering realization. Accordingly, it has found practical application in a wide range of fields. In terms of control algorithms, which mainly address load uncertainty and unknown interference, various robust control algorithms have been devised [10, 24-27], such as sliding mode control, internal model control, and active disturbance rejection control. Since this paper focuses on fault diagnosis and tolerance in multimotor synchronous control systems, the deviation-coupled synchronous control structure in [10] is adopted. The control principle is shown in Figure 1, where ADRC is a kind of controller and S.C. is the synchronous compensator or controller, such as PI or PID.
Sensor Fault FDD and FTC Design
Based on graph theory, a complex network system with identical dynamic systems as nodes that satisfies the dissipation conditions can be described as follows [28]:

ẋ_i(t) = f(x_i(t)) + σ Σ_{j=1}^{N} a_ij H(x_j(t)), i = 1, 2, . . . , N, (1)

where f ∈ C[R^n, R^n] is a known function (often a nonlinear function); σ > 0 is the coupling strength of the network; A = (a_ij)_{N×N} is the coupling matrix of the network, which satisfies Σ_j a_ij = 0, i = 1, 2, . . . , N; and H ∈ C[R^n, R^n] is a coupling function. The coupling matrix A can be used to describe an undirected topology. If there is a connection between node i and node j, then a_ij = a_ji = k, k > 0; otherwise a_ij = a_ji = 0, i ≠ j. The diagonal elements of matrix A satisfy

a_ii = −Σ_{j=1, j≠i}^{N} a_ij, i = 1, 2, . . . , N. (2)

Definition 1. In the dynamic network (1), the network is said to be asymptotically synchronized for any initial conditions [29] if lim_{t→∞} ‖x_i(t) − x_j(t)‖ = 0 for all i, j.

The coupling matrix A determined by the complex network structure reflects the synchronous state, or consistency attribute, of the network. Inspired by the nature of the coupling matrix, this paper constructs a consistent matrix with the abovementioned properties to characterize the synchronization or consistency of a complex network. When the system is in an abnormal state due to faults, the consistency is destroyed, and the resulting abnormality is also reflected in the matrix. Thus, from the location and size of the matrix elements that change, the fault can be detected and located. Figure 1 shows that multimotor synchronous control is a typical complex network system. Considering the control objectives, this paper pays particular attention to judging the consistency of the complex network output.

Data-Driven Consistent Matrix Construction. To evaluate the consistency or similarity between different nodes in a complex network, a consistent matrix, derived from the coupling matrix, is introduced. The process of constructing the consistent matrix can be summarized as follows. The outputs of the N nodes of the multimotor synchronous complex network system shown in Figure 1 are measured by N sensors and constitute a set Y = [y_1, y_2, . . . , y_N]. The consistency between y_i and y_j is represented by the coefficient a_ij: as a_ij gets larger, y_j and y_i become more consistent. The consistency between two different nodes is mutual, which means a_ij = a_ji; thus, the matrix A is symmetric. The construction of A is very important for effectively distinguishing the consistency between y_i and y_j. Generally, it is set as an exponential function of the distance between y_i and y_j, for example

a_ij(k) = exp(−|y_i(k) − y_j(k)|), i ≠ j, with a_ii(k) = −Σ_{j≠i} a_ij(k), (4)

where y_i(k) is the output data from the sensor on node i at sampling instant k. The consistent matrix also has the same properties as the coupling matrix.
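A numerical sketch of this construction is given below; the exact exponential decay form and the scale constant sigma are our assumptions, chosen so that a_ij approaches 1 when two outputs agree.

```python
# Build the data-driven consistent matrix from one sample of N sensor
# outputs; the decay form exp(-d/sigma) is an assumed concrete choice.
import numpy as np

def consistent_matrix(y, sigma=1.0):
    y = np.asarray(y, dtype=float)
    d = np.abs(y[:, None] - y[None, :])    # pairwise output distances
    A = np.exp(-d / sigma)                 # a_ij -> 1 as outputs agree
    np.fill_diagonal(A, 0.0)
    np.fill_diagonal(A, -A.sum(axis=1))    # rows sum to zero, as in Eq. (2)
    return A

A = consistent_matrix([100.0, 100.1, 99.9, 100.0])
print(np.linalg.eigvalsh(A))  # healthy case: close to [-N, -N, -N, 0]
```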
Remark 1.
The matrix is generated entirely from the output data of the sensors, so it can be considered data-driven. At the same time, the structure of A reflects the distance relationship between two different sensor outputs, so it also has the character of a model. The design of A can therefore be considered a combination of data-driven and model-based methods, which has the advantages of convenient data acquisition while reflecting the mechanistic characteristics of the network model.
Remark 2.
For network synchronization, the abnormal or faulty output of a certain node will be far from the outputs of the other, normal nodes, and will accordingly decrease the consistency between them. This change is directly reflected at the corresponding position in the matrix, which is the basis for the follow-up fault diagnosis studies.
Consistent Matrix Analysis under Normal/Fault Condition of the Sensor
Based on the design of the consistent matrix described above, for the multimotor synchronous control system shown in Figure 1, the following assumptions are made.
Assumption 1. There are N motors to be synchronized, and the output of each subsystem can be regarded as a node of a complex network.
Assumption 2.
The system has good synchronous control performance under normal conditions: it reaches steady state with good synchronization accuracy, which means that y_i ≈ y_j.
Assumption 3. When two or more sensors fail at the same time, the sizes or magnitudes of faults are different.
Under the previous assumptions, we obtain a_ij ≈ 1 and a_ii ≈ −(N − 1). The eigenvalues of A satisfy λ_1 = λ_2 = · · · = λ_{N−1} = −N and λ_N = 0. When a sensor (taking the ith sensor as an example) fails, its output will inevitably deviate from the other, normal sensor outputs. The elements in the ith row and ith column of A (except the diagonal) will show a_{i*} < 1, a_{*i} < 1, and as the magnitude of the fault increases, a_{i*} ≪ 1, a_{*i} ≪ 1. The eigenvalues of matrix A will also change accordingly.
Fault Detection and Location Based on Consistent Matrix Judgement
From the above analysis of the different characteristics of the consistent matrix A before and after a fault, fault detection can be performed by judging the element sizes of A. Generally, three types of fault detection method can be selected:

(1) Elements in A(k) are compared with the elements at the corresponding positions of A(k − l). When a row or column of elements in the deviation matrix exceeds the predefined threshold, the fault can be detected, and the faulty sensor can be located according to the row or column position of the elements exceeding the threshold.
(2) Elements in A(k) are compared with 1. From the analysis above, when the sensors are normal, a_ij ≈ 1.
When the ith sensor fails, the elements in the ith row and ith column of A(k) become much smaller than 1 (a_{i*} < 1, a_{*i} < 1), and the faulty sensor can be identified and located accordingly. (3) The eigenvalues of A(k) are monitored: a deviation of the eigenvalues from their normal values indicates a fault. The three methods above have different characteristics from the perspective of detection speed and reliability. Method (1) uses l samples of data for the test; as the number of samples increases, the reliability of detection improves, but a certain delay occurs. Therefore, considering both real-time performance and reliability, l should not be too large. Method (2) has fast diagnostic speed and good real-time performance, but its anti-interference performance is poor, possibly leading to false alarms. Method (3) is essentially the same as Method (2): both are based on A(k)'s own elements or eigenvalues. However, the limitation of Method (3) is that it cannot locate the fault.
Combining the characteristics of the three methods above, to improve the efficiency and reliability of the diagnosis, it is a good idea to combine Methods (1) and (2). When the diagnosis program is implemented, Method (2) is mainly used; after a fault is indicated, Method (1) is applied to the subsequent samples for further confirmation. Diagnosis speed, reliability, and computing resources can thus be balanced simultaneously.
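The combined rule can be sketched as follows; the threshold th and window length l are illustrative settings, not values prescribed by the paper beyond the th = 0.1 used in its simulations.

```python
# Combined detection: Method (2) as the quick per-sample check, then
# Method (1)-style confirmation over the last l consistent matrices.
import numpy as np

def detect_fault(A_history, th=0.1, l=5):
    # A_history: list of consistent matrices up to the current sample
    A_k = A_history[-1]
    N = A_k.shape[0]
    mask = ~np.eye(N, dtype=bool)
    faulty = []
    for i in range(N):
        if np.all(A_k[i][mask[i]] < th):                  # quick check
            recent = A_history[-l:]
            if all(np.all(Am[i][mask[i]] < th) for Am in recent):
                faulty.append(i)                          # confirmed
    return faulty
```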
It is worth mentioning that the diagnostic method above is also applicable to multiple-fault situations: when multiple sensors fail simultaneously, the element values in the corresponding rows and columns of the matrix will satisfy a_ij < 1.
For example, when M sensors fail simultaneously in a multimotor synchronous control system composed of N motors (M < N) under Assumption 3, there are C_N^M possible fault combinations. The diagnostic process is similar to the single-fault scenario.
However, as the number of faulty sensors increases, the types of faults diversify and the number of elements to be judged increases greatly; accordingly, the diagnosis time is prolonged. At the same time, the distances between the outputs of two different sensors, especially between two faulty sensors, vary widely. All of these factors decrease the overall diagnostic reliability of the method to some extent, especially when the detection threshold is set to a fixed value.
Fault Estimation Based on Eigenvalues of the Consistent Matrix
In addition to detecting and locating the fault, the fault size can be estimated by further studying the behavior of matrix A. The specific process is as follows.
When the ith sensor fails, the consistent matrix A can be approximated (ignoring noise and synchronization error) by a matrix A_f, where β is the consistency coefficient between the faulty sensor and the other, normal sensors, β < 1. To estimate the fault size, this matrix is transformed into the form A_f*. That is, except for a_{1i} = a_{i1} = β, the diagonal elements remain unchanged from the original rule, and the elements in all other positions are set to 1. It is easy to obtain the eigenvalues of A_f*, which are λ_1 = · · · = λ_{N−2} = −N, λ_{N−1} = −(N − 2 + 2β), and λ_N = 0. Compared with the matrix A in normal situations, the only changed eigenvalue of A_f* is a fault-related quantity, i.e., λ_{N−1} = −(N − 2 + 2β). Therefore, the consistency coefficient, and hence the fault size, can be estimated as β = −(λ_{N−1} + N − 2)/2. Considering that A_f is a singular matrix, to obtain A_f*, a generalized inverse of A_f can be used to obtain a transform matrix, P = A_f* · (A_f)^+.
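As a numerical sketch (assuming the transformed matrix A_f* has already been formed as described), the coefficient β can be read off the second-largest eigenvalue:

```python
# Recover beta from the eigenvalues of A_f*: sorted ascending they are
# [-N, ..., -N, -(N - 2 + 2*beta), 0], so the second-largest carries beta.
import numpy as np

def estimate_beta(A_f_star):
    N = A_f_star.shape[0]
    eig = np.linalg.eigvalsh(A_f_star)   # ascending for a symmetric matrix
    return -(eig[-2] + N - 2) / 2.0
```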
Remark 3.
Compared with some traditional model-based methods, which involve a complicated process of modelling, observer design, parameter optimization, and so on, the proposed method realizes the detection, location, and estimation of sensor faults simply by judging the elements and eigenvalues of the consistent matrix. It is completely driven by the sensor output data and is applicable to both single and multiple fault situations. Therefore, the proposed method has some significant advantages, such as simple calculation, convenient implementation, and clear physical meaning.
Fault Tolerance Based on Improved Weighted Mean Feedback
After the fault is diagnosed, timely and effective isolation and tolerance of the faulty sensor are essential to ensure the safe and stable operation of the system. The signal collected by each sensor is used as an input to both the mean feedback and the self-feedback; therefore, after a fault is detected, both signals should be isolated and reconstructed to ensure system security. For the mean feedback part, the traditional mean is modified into the weighted form

ȳ = Σ_{i=1}^{N} α_i y_i / Σ_{i=1}^{N} α_i, (8)

where y_i is the output signal of each sensor and α_i is the reliability coefficient of the ith sensor. Based on the fault diagnosis result, α_i = 1 when the sensor is normal, that is, completely reliable, and α_i = 0 when the sensor is faulty. When all sensors are normal, the feedback signal is the traditional average (1/N) Σ_{i=1}^{N} y_i; when a sensor is faulty, the introduced reliability coefficient α_i = 0 cuts off the corresponding sensor signal, and the faulty sensor is automatically isolated. As a result, fault sensor isolation and system fault tolerance are realized, without affecting the generation of the mean signal or changing the topology of the original network.
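A minimal sketch of this gating, assuming Eq. (8) normalizes by the sum of the reliability coefficients:

```python
# Improved weighted mean feedback: alpha_i = 0 removes sensor i from the
# mean without changing the network topology.
import numpy as np

def weighted_mean_feedback(y, alpha):
    y, alpha = np.asarray(y, float), np.asarray(alpha, float)
    return float(alpha @ y / alpha.sum())

print(weighted_mean_feedback([100.2, 250.0, 99.8, 100.0], [1, 0, 1, 1]))
# -> 100.0: the faulty sensor reading (250.0) is cut off automatically
```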
For the self-feedback part, the mean signal ȳ is used to replace the faulty sensor output y_i; in this situation, the system output keeps its current value unchanged.
Based on the analysis above, for the sensor failure problem in a multimotor synchronous control system, the deviation-coupling synchronous control structure augmented with the fault diagnosis and fault tolerance functions is shown in Figure 2.
The outputs of the FDD and FTC module are the modified outputs ŷ_1, ŷ_2, . . . , ŷ_N of sensors 1 to N, the fault indicator c, and the mean feedback ȳ. When a sensor is normal, ŷ_i = y_i; when it is faulty, the corresponding output is ŷ_i = ȳ.
Remark 4.
The original mean value is improved by introducing the reliability coefficient, and the modified mean feedback is designed accordingly. The proposed method does not affect the mean feedback output or the system topology, and it implements fault-tolerant control to assure the reliability and safety of the system.
Sensor Fault Detection, Isolation, and Tolerance Steps.
Based on the analysis above, for a single sensor fault, the fault diagnosis, isolation, and fault tolerance processes are as follows: Step 1. Initialization: set the sampling period T_s and the diagnostic threshold th for system operation; label the sensors in the multimotor synchronous system from 1 to N; and let α_i = 1, i = 1, 2, . . . , N, and the fault indication output c = 0.
Step 2. Generating matrix A(k): the consistent matrix A(k) is generated according to (4) from the output of each subsystem.
Step 3. Fault diagnosis: judge whether the elements of A(k) satisfy a_ij < th. If so, a fault has occurred and the algorithm proceeds to Step 4; otherwise, it returns to Step 2 to generate the matrix A(k + 1) at the next instant and judges again.
Step 4. Fault tolerance: according to the diagnosis result of Step 3, if the jth sensor has failed, set α_j = 0, calculate ȳ according to (8), and set ŷ_j = ȳ and c = j.
It should be noted that, to improve the efficiency of diagnosis, only the first row or the first column of matrix A(k) needs to be judged, not all the elements. The result of the judgment falls into one of only three cases: no element of the first row is below th, so there is no fault; all off-diagonal elements of the first row are below th, so sensor 1 is faulty; or a single element a_1j is below th, so sensor j is faulty.
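Steps 1-4 can be tied together in a single per-sample routine, sketched below using the consistent_matrix, detect_fault, and weighted_mean_feedback helpers from the earlier sketches; the settings are illustrative.

```python
# One FDD/FTC cycle per sample k (Steps 2-4); alpha persists across calls
# and is initialized as np.ones(N) in Step 1.
import numpy as np

def fdd_ftc_step(y, history, alpha, th=0.1, l=5):
    # y: current sensor outputs; history: list of past consistent matrices
    y = np.asarray(y, dtype=float)
    history.append(consistent_matrix(y))            # Step 2
    for j in detect_fault(history, th=th, l=l):     # Step 3
        alpha[j] = 0.0                              # Step 4: isolate sensor j
    y_bar = weighted_mean_feedback(y, alpha)        # Eq. (8)
    y_hat = np.where(alpha == 1.0, y, y_bar)        # repair self-feedback
    return y_hat, y_bar
```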
Simulation Results and Analysis
To verify the effectiveness of the fault diagnosis and fault-tolerant method proposed above, simulations are carried out for both a single sensor fault and simultaneous multisensor faults in the MATLAB/Simulink environment, based on the control structure shown in Figure 2.
Single Sensor Fault Simulation.
For the single sensor fault situation, the synchronous control of three permanent magnet synchronous motors (PMSMs) is taken as an example. To verify the advantages of the proposed method, under different speed inputs, the sensors in subsystems 1-3 are given three different types of fault (constant deviation, stuck, and constant gain) at different times. The PMSM models are given in (10)-(13), of which the surviving electromagnetic torque equation is T_e = 1.5P_nφ_f i_q (13), and the parameters of each motor are given in Table 1. The simulation time is 35 s, the simulation step size is T_s = 0.001 s, and the threshold is th = 0.1. The simulation results are shown in Figures 3-6. First, to reflect the impact of the fault, the three types of fault shown in Table 2 are first applied to the No. 1 sensor only. Figure 7 shows the result when no action is taken after the fault occurs: a fault of the No. 1 sensor seriously affects the outputs of subsystems 2 and 3, and the whole system clearly cannot stay synchronized during the period from the occurrence to the end of a fault.
Second, to verify the effect of the proposed fault diagnosis method, different types of faults, as shown in Table 2, are applied to the three sensors at different times. Figure 3 shows the fault indicator results, and Figure 4 shows the results of fault estimation. The maximum estimation error shown in the detail figure is only about 3 rad/s, so the estimation is accurate. Third, with the diagnosis results above, the improved weighted mean feedback based on the reliability coefficient was used to isolate the faulty sensor and achieve fault-tolerant control. The results are shown in Figures 5 and 6.
Figure 5 shows that the system still maintains satisfactory synchronization accuracy after the fault occurs, apart from slight short-lived fluctuations caused by the adjustment of the controller. Furthermore, the synchronous output error in Figure 6 shows that the error caused by fault tolerance is much smaller than the control error that arises when the desired rotational speed is changed. The maximum tolerance error is about 0.1 rad/s.
Multisensor Fault Simulation.
As mentioned above, the proposed method is also suitable for multifault situations. Therefore, another simulated synchronous system, composed of four motors, is presented, considering the simultaneous failure of two sensors. The parameters of the four motors are shown in Table 3. The simulation time is 35 s, th = 0.1, and the simulation step size is T_s = 0.001 s.
For the synchronous control system consisting of four motors, when two sensors fail simultaneously there are six combinations, of which three are taken into consideration here. Three types of sensor failure (constant gain, stuck, and constant deviation) are considered; the fault information is shown in Table 4. Figure 8 shows the synchronization output of the system after two sensors fail simultaneously without any measures being taken. Similar to the single-failure scenario, the synchronization performance of the system is degraded for a period of time. Figures 9-11 show the results of FDD and FTC using the proposed method.
After the fault is diagnosed, Figure 10 shows the fault-tolerant results using the improved weighted mean feedback. When two sensors fail at the same time, only the outputs of the remaining two normal sensors are used, and satisfactory synchronization performance is achieved. Furthermore, from Figure 11, the synchronization error at t = 25 s is relatively large (about 2 rad/s) due to the slowly changing character of the constant-gain fault. The synchronization error under fault tolerance is far smaller than the control error at the expected speed changes, so the fault tolerance is satisfactory overall.
It should be pointed out that, when two sensors have faults of different amplitudes, the distances between a faulty sensor and a normal sensor, as well as between two faulty sensors, become complicated and diverse, making it difficult to estimate the fault size. In addition, considering the existence of the synchronization error and the randomness of the distances between different sensors' outputs, the threshold selection becomes more sensitive.
Experimental Results and Analysis
To test the engineering applicability of the proposed method, a multimotor synchronous control experimental platform composed of 4 motors was established. The hardware mainly includes ABB's AC500-eCo PLC; the input and output modules DX561, DC562, and AX561; ACS355 frequency converters and permanent magnet synchronous motors (see Table 5 for parameter information); and 360-line photoelectric encoders. The software mainly includes Control Builder Plus (CBP, integrated with CoDeSys), the OPC configurator, and MATLAB/Simulink. Because the AC500-eCo PLC contains only two high-speed counting channels, the four motors are controlled by two PLCs through four frequency converters. The established experimental platform is shown in Figure 12.
Sensors No. 2-4 are set to have three kinds of fault (constant deviation, constant gain, and stuck), with fault intervals of 1-1.5 s, 2.5-3 s, and 4-4.5 s, respectively. The expected speed is 2000 rpm. The results of fault diagnosis and fault tolerance are shown in Figures 13 and 14.
The fault diagnosis is timely and reliable, and the improved weighted mean feedback mechanism designed from the diagnosis results realizes fault-tolerant control. The synchronization precision of the system after fault tolerance is satisfactory in the semiphysical experiment, so the effectiveness of the method is verified once again on the platform. This also demonstrates that the fault diagnosis method based on the complex network consistent matrix, together with the improved weighted mean feedback tolerance design, has good engineering applicability.
Conclusion
A fault diagnosis and fault-tolerant control method based on complex network synchronization has been presented to mitigate sensor fault problems in multimotor synchronous control. Based on the concept of distance, and inspired by the idea of the coupling matrix in complex network synchronization, a consistent matrix has been devised that reflects the similarity of different sensors' output data. Through online judgment of the elements and eigenvalues of this matrix, sensor faults can be detected, located, and estimated. Based on the fault diagnosis information, a fault-tolerant mechanism has been designed by introducing an improved weighted mean feedback to achieve effective isolation of the faulty sensor. Compared with existing theories and techniques, the proposed method has several advantages, such as simple principles, a small computational burden, no need to change the topology of the original network, easy engineering realization, and suitability for both single and multiple failures.
Simulations and experiments show that, to implement satisfactory synchronous control under load interference, noise, and uncertainty, a strongly robust control algorithm is necessary when the sensors are normal. Meanwhile, the proposed method can be superimposed on the system as a further guarantee of the synchronous output. The reason is that, when the synchronization error is large at a certain time due to interference or other factors, the method automatically rejects the outputs that deviate most and adopts the average of the closer outputs as the feedback, which brings the mean feedback closer to the true value. Therefore, this method can achieve better performance than conventional mean feedback: it realizes fault tolerance when there is a fault and improves synchronization accuracy when there is none.
In future work, the focus will be on robust and adaptive threshold selection for multisensor fault situations.
Data Availability

The simulation and experimental data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare no conflicts of interest. | 6,475 | 2020-04-21T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Possible impacts of the predominant Bacillus bacteria on the Ophiocordyceps unilateralis s. l. in its infected ant cadavers
Animal hosts infected and killed by parasitoid fungi become nutrient-rich cadavers for saprophytes. Bacteria adapted to colonization of parasitoid fungi can be selected and can predominate in the cadavers, actions that consequently impact the fitness of the parasitoid fungi. In Taiwan, the zombie fungus, Ophiocordyceps unilateralis sensu lato (Clavicipitaceae: Hypocreales), was found to parasitize eight ant species, with preference for a principal host, Polyrhachis moesta. In this study, ant cadavers grew a fungal stroma that was predominated by Bacillus cereus/thuringiensis. The bacterial diversity in the principal ant host was found to be lower than the bacterial diversity in alternative hosts, a situation that might enhance the impact of B. cereus/thuringiensis on the sympatric fungus. The B. cereus/thuringiensis isolates from fungal stroma displayed higher resistance to a specific naphthoquinone (plumbagin) than sympatric bacteria from the environment. Naphthoquinones are known to be produced by O. unilateralis s. l., and hence the resistance displayed by B. cereus/thuringiensis isolates to these compounds suggests an advantage to B. cereus/thuringiensis to grow in the ant cadaver. Bacteria proliferating in the ant cadaver inevitably compete for resources with the fungus. However, the B. cereus/thuringiensis isolates displayed in vitro capabilities of hemolysis, production of hydrolytic enzymes, and antagonistic effects to co-cultured nematodes and entomopathogenic fungi. Thus, co-infection with B. cereus/thuringiensis offers potential benefits to the zombie fungus in killing the host under favorable conditions for reproduction, digesting the host tissue, and protecting the cadaver from being taken over by other consumers. With these potential benefits, the synergistic effect of B. cereus/thuringiensis on O. unilateralis infection is noteworthy given the competitive relationship of these two organisms sharing the same resource.
Fungi and bacteria often live in close proximity and share the same microhabitats. The inevitable competition for limited resources promotes the selection of partners tolerant to the presence of each other. Ecological interactions that are either antagonistic or synergistic can develop. Competition for the same resources enhances the antagonistic relationship in many fungus-bacterium associations 1 . However, fungi and bacteria can co-occur synergistically, enhancing their partner's adaptations and consequently forming a co-evolving metaorganism 2 .
The ant-pathogenic fungus, Ophiocordyceps unilateralis sensu lato, is a well-known parasitoid that causes manipulated behaviors and subsequent death of the ant hosts 3 . Spores invade the ant hosts by attaching, germinating, and penetrating the cuticles of foraging ant workers. The fungus lives in a yeast-like state as single cells in the host hemolymph and causes a series of host behavioral manipulations, which is the origin of the name "zombie ant." These include convulsion, erratic walking, and finally, the host dying after biting onto leaf veins or twigs. The fungus produces hyphae to form a branching network of cells throughout the ant cadaver 4 , exploits resources by lysing the host tissue, and ends the parasitic life cycle with a stroma sprouting from the intersegmental membrane between the head and prothorax of the host, forming the perithecial plates for spreading spores 5 .
Materials and methods
Sample collection. Samples were collected from an evergreen broadleaf forest in central Taiwan (Lianhuachi Experimental Forest, Nantou County, 23°55′7″N 120°52′58″E) from January 2017 to March 2018. Permission to collect plants for the study was obtained from the Lianhuachi Research Center, Taiwan Forestry Research Institute, Council of Agriculture, Executive Yuan, Taiwan (Permission no.: 1062272538). The present study complies with the International Union for Conservation of Nature Policy Statement on Research Involving Species at Risk of Extinction and the Convention on the Trade in Endangered Species of Wild Fauna and Flora. Ant cadavers with fungal growth were collected from understory plants with a height of less than 3 m. Ant cadavers infected with O. unilateralis s. l. were removed carefully by cutting the leaf and placing it into a 50-mL conical centrifuge tube, which was then transported to the laboratory. Only cadavers in which the fungal growth stage preceded the development of perithecia, which theoretically has the highest biological activity, were collected (Fig. 1). In total, 24 infected P. moesta and 20 infected P. wolfi samples were collected.

Isolation and cultivation of bacteria. Ants on the leaves were first identified to species and then, using tweezers, each ant was placed carefully into a sterilized 1.5-mL microcentrifuge tube [see details in Lin et al. (2020)] 15. Samples were shaken one by one in 600 μL of sterilized water for a few seconds at 3000 revolutions/min (rpm) using a vortex mixer (AL-VTX3000L, CAE Technology Co., Ltd., Québec, Canada), and were then soaked in 600 μL of 70% ethanol to sterilize the ant's surface. The ethanol on the samples was washed off twice with 600 μL of sterilized water, and the samples were then vortexed in 400 μL of sterilized water. Next, 200 μL of the supernatant was spread homogeneously onto a Luria-Bertani (LB) agar plate (25 g Luria-Bertani broth and 15 g agar per liter) to confirm the absence of live surface bacteria. Bacteria from inside the ant host were released by homogenizing the ant host in 200 μL of water and were cultured on LB agar plates at 28 °C for 2 days. Bacteria from each ant individual were cultured independently, and approximately equal numbers of the isolates were picked randomly with sterile toothpicks, suspended in LB medium supplemented with 15% v/v glycerol, and maintained at −80 °C until the time of examination. In total, 247 bacterial isolates from P. moesta and 241 bacterial isolates from P. wolfi were collected.
In addition to the bacterial isolates from the ant bodies, 60 bacterial isolates from soil, leaves, and air in the same forest were collected to compare their resistance to naphthoquinones (see below), using the aforementioned procedure but without the initial cleaning and surface sterilization of the samples.
Bacterial identification. Bacteria collected from the ant hosts were identified by gene marker sequencing.
Bacterial isolates were cultured in LB medium at 28 °C overnight to reach log phase, and genomic DNA was extracted following the methods described in Vingataramin and Frost (2015) 20 . Conspecific strains of the bacterial isolates from the same host were determined using the randomly amplified polymorphic DNA (RAPD) method with the primer 5′-GAG GGT GGC GGT TCT-3′. PCR amplification was performed as follows: initial denaturation at 95 °C for 5 min, 40 cycles of amplification including denaturation at 95 °C for 1 min, annealing at 42 °C for 30 s, and extension at 72 °C for 1 min, followed by a final extension at 72 °C for 10 min. PCR products were run on a 2% agarose gel and bacterial isolates were characterized by fragment patterns. For each of the ant hosts, bacterial isolates with the same RAPD pattern were considered to be the same strain. In total, 106 and 178 strains were found from P. moesta and P. wolfi, respectively. One of the bacterial isolates was selected at random to represent the strain and coded with "JYCB" followed by a series of numbers (e.g., JYCB191). The taxonomic status of each strain was determined to species by using the V3/V4 region of the 16S rDNA gene. PCR amplification with the primer set (8F: 5′-AGA GTT TGA TCC TGG CTC AG-3′ and 1541R: 5′-AAG GAG GTG ATC CAG CCG CA-3′) 21,22 was performed under the following conditions: initial denaturation at 95 °C for 5 min, 40 cycles of amplification including denaturation at 95 °C for 1 min, annealing at 55 °C for 30 s, and extension at 72 °C for 1 min 45 s, followed by a final extension at 72 °C for 10 min. PCR products were first checked by running a gel, and were then sequenced at Genomics, Inc. (New Taipei City, Taiwan).
The sequences of the bacterial strains from each of the ant hosts were first analyzed by the unweighted pair group method with arithmetic mean (UPGMA) and clustered into clades according to sequence dissimilarity (< 0.01) by using MEGA X 23 . The species of each clade was determined by the basic local alignment search tool (BLAST) against nucleotide sequences in the National Center for Biotechnology Information (NCBI) nucleotide database (https://ftp.ncbi.nlm.nih.gov/blast/db/), updated through 17 May 2021. Because each of the clades contains one to several bacterial strains, each of the strains was first labeled by the species of the sequence with the highest BLAST identity, which was ranked by expected value, percentage of identical matches, and alignment length (https://www.ncbi.nlm.nih.gov/BLAST/tutorial/Altschul-1.html). If multiple sequences from the database were tied on these identity indexes, the bacterial strain was labeled by the species that appeared most frequently. Finally, the species of each clade was assigned as the bacterial species found most frequently among the strains belonging to that clade.
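For readers who prefer a scriptable equivalent of the MEGA X clustering step, the same average-linkage (UPGMA) grouping can be sketched in R; the input file name below is hypothetical, and the 0.01 dissimilarity cutoff follows the text.

```r
# Minimal sketch of UPGMA clustering of aligned 16S sequences at < 0.01 dissimilarity.
# The study used MEGA X; this R version is an illustrative equivalent, not the authors' script.
library(ape)

aln <- read.dna("16S_strains.fasta", format = "fasta")        # hypothetical aligned 16S sequences
d <- dist.dna(aln, model = "raw", pairwise.deletion = TRUE)   # proportion of differing sites

upgma <- hclust(d, method = "average")   # average linkage = UPGMA
clades <- cutree(upgma, h = 0.01)        # cut the tree at 0.01 dissimilarity
table(clades)                            # number of strains per clade
```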
The 60 bacterial isolates collected from the environment were examined using the RAPD method and a Bacillus-specific primer set (5′-CTT GCT CCT CTG AAG TTA GCG GCG-3′ and 5′-TGT TCT TCC CTA ATA ACA GAG TTT TAC GAC CCG-3′), with PCR conditions suggested in Nakano et al. (2004) 24 . Twenty of the bacterial isolates (10 Bacillus and 10 non-Bacillus) with different RAPD patterns were collected for further experiments.
Bacterial diversity of the two ant host species. Three biodiversity indexes (Chao1 richness, exponential of Shannon entropy, and inverse Simpson concentration) of bacterial species were estimated by the sample size-based rarefaction/extrapolation sampling curve using the abundance of bacterial isolates from each of the two ant host species 25 . The calculation was conducted using R 26 with the "iNEXT" package 27 .
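A minimal sketch of this estimation with the "iNEXT" package follows; the abundance vectors are illustrative stand-ins for the real isolate counts, and the orders q = 0, 1, 2 correspond to the three indexes named above.

```r
# Sketch of sample-size-based rarefaction/extrapolation of the three diversity indexes.
# The abundance vectors below are hypothetical placeholders, not the study's counts.
library(iNEXT)

abund <- list(
  P_moesta = c(111, 15, 9, 7, 5, rep(1, 20)),  # isolates per bacterial species (illustrative)
  P_wolfi  = c(94, 34, 15, 10, 8, rep(1, 25))
)

# q = 0 (richness), q = 1 (exponential Shannon), q = 2 (inverse Simpson)
out <- iNEXT(abund, q = c(0, 1, 2), datatype = "abundance")
out$AsyEst                 # asymptotic diversity estimates with sample coverage
ggiNEXT(out, type = 1)     # rarefaction/extrapolation curves
```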
Biological properties of bacterial isolates from infected ants. Selected strains. For examining the biological properties of the most predominant species, B. cereus/thuringiensis (see Results), 11 of 47 B. cereus/thuringiensis strains from P. moesta and 10 of 63 B. cereus/thuringiensis strains from P. wolfi were selected. The strains were selected according to the UPGMA analysis of the sequences; one to three strains grouped in the same cluster were selected (Fig. S1). In addition, 6 of 15 strains of the second-most predominant Bacillus species (B. gibsonii) in P. wolfi were also selected randomly for examination, because B. gibsonii accounted for approximately 20% of the individuals within the Bacillus isolates.
All the selected strains were used to examine biological properties, including (1) the potential capability of the isolate to lyse host tissue (hydrolytic enzymes); (2) defense of the ant cadaver against fungal competition, involving the presence of pathogenic and antibiotic genes; and (3) resistance to naphthoquinone derivatives. In addition to the repellence against entomopathogenic fungi, one strain of B. cereus/thuringiensis and one strain from the secondarily predominant Bacillus clade from each of the hosts were selected at random for examining the potential impact on the invasion and consumption of ant cadavers by scavenger nematodes.

Hemolysis reaction. Hemolysis tests were conducted on tryptic soy agar (TSA) plates (15 g pancreatic digest of casein, 5 g soybean meal, 5 g NaCl, and 15 g agar, with final pH of 7.3) mixed with 5% defibrinated sheep blood, which was added to the TSA after it had cooled down to approximately 50 °C. One 3-µL drop of the log-phase bacterial suspension was placed onto each TSA plate and incubated at 28 °C for 1-2 days. The hemolysis reaction was determined by the formation of clear (β-hemolysis) or greenish (α-hemolysis) hemolytic zones, or no such zone (γ-hemolysis, non-hemolytic), around the bacterial colonies 28 .
Production of hydrolytic enzymes. The production of hydrolytic enzymes was examined by culturing a 3-µL drop of the exponential-phase bacterial suspension on four different types of plated media: chitinase detection medium (solid medium with 0.3 g MgSO4·7H2O, 3 g (NH4)2SO4, 2 g KH2PO4, 1 g citric acid monohydrate, 0.15 g bromocresol purple, 200 μL Tween 80, 4.5 g colloidal chitin, and 1 L deionized water with 1.5% [w/v] agar, with final pH of 4.7); skim milk agar (solid medium with 2% [w/v] agar, 28 g skim milk powder, 5 g casein enzymic hydrolysate (tryptone), 2.5 g yeast extract, 1 g dextrose, and 1 L deionized water); lipase agar (solid medium with 2% [w/v] agar, 0.1 g phenol red, 1 g CaCl2, 10 mL olive oil, and 1 L deionized water, with final pH of 7.4); and esterase agar (solid medium with 2% [w/v] agar, 0.1 g phenol red, 1 g CaCl2, 10 mL tributyrin, and 1 L deionized water, with final pH of 7.4). The chitinase detection medium was used to examine purple zones, indicating chitinase activity 29,30 ; the skim milk agar medium was used to examine clearance zones, indicating protease activity 31 ; and the lipase and esterase agar media were used to examine yellow zones, indicating lipase and esterase activity, respectively 32 .
Pathogenic and antibiotic genes. The total genomic DNA of Bacillus strains was extracted by using an AccuPrep genomic DNA extraction kit (Bioneer, Daejeon, Korea) for PCR amplification. The specific screening primers for amplifying the genes, including cry, cyt, Iturin, Chitinase, Bacillomycin, Fengycin, Surfactin, vip, and Zwittermicin A, were used under PCR conditions suggested in previous studies [33][34][35][36] . The primer sets used for the amplifications are listed in Table S2.
Lethal effects on Caenorhabditis elegans. Antagonistic effects of B. cereus/thuringiensis isolates on the model nematode, C. elegans, were examined by estimating the potential of hemolytic B. cereus/thuringiensis to prevent competition by scavengers for the resource-rich insect cadavers 37 . Daily mortality of C. elegans strain N2 was compared between randomly selected B. cereus/thuringiensis strains (B. cereus/thuringiensis JYCB227 in clade m4 from P. moesta and B. cereus/thuringiensis JYCB302 in clade w4 from P. wolfi) and Bacillus species of secondary predominance (Bacillus sp. JYCB252 in clade m8 from P. moesta and B. gibsonii JYCB395 in clade w30 from P. wolfi).
Synchronized fourth-stage larval (L4) nematodes were grown on nematode growth medium (NGM) (3 g NaCl, 2.5 g peptone, 17 g agar, 5 mg cholesterol, 1 mL 1 M CaCl2, 1 mL 1 M MgSO4, 25 mL 1 M KH2PO4, and H2O to 1 L) agar plates seeded with Escherichia coli OP50. The Bacillus isolates were prepared by inoculation in 3 mL LB liquid broth at 20 °C (the mean annual temperature in the Lianhuachi Research Center, where the infected ants were collected) overnight, and then adjusted to an optical density (O.D.) of 0.2 at 600 nm.
To test the survival rate of C. elegans in the presence of various bacteria, L4 nematodes were co-cultured with (1) a hemolytic bacterial strain; (2) a non-hemolytic bacterial strain; (3) a hemolytic strain + E. coli OP50; (4) a non-hemolytic strain + E. coli OP50; and (5) E. coli OP50 only (control). We added 20 μL of bacterial culture to a 35-mm NGM agar plate and spread evenly with a glass rod. For each treatment, 30 L4 larvae were cultured on the NGM agar plate and their survival was monitored daily for 7 days. Each treatment was replicated three times.
Survival curves were compared using a survival analysis with treatment as the fixed effect. The significance of the fixed effect was assessed by model reduction and the likelihood ratio test. Post-hoc multiple comparisons were conducted with Tukey's all-pair comparisons. The model building and hypothesis tests were conducted by using the "survival" and "multcomp" packages in R.
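Because the text does not name the exact survival model, the following minimal sketch assumes a Cox proportional hazards model; the data file and its columns (day, status, treatment) are hypothetical.

```r
# Sketch of the survival comparison: likelihood ratio test for the treatment effect
# by model reduction, then Tukey all-pair comparisons. Data layout is assumed.
library(survival)
library(multcomp)

d <- read.csv("celegans_survival.csv")   # hypothetical: one row per worm (day, status, treatment)
d$treatment <- factor(d$treatment)

full    <- coxph(Surv(day, status) ~ treatment, data = d)
reduced <- coxph(Surv(day, status) ~ 1, data = d)
anova(reduced, full)                     # likelihood ratio test by model reduction

summary(glht(full, linfct = mcp(treatment = "Tukey")))  # Tukey all-pair comparisons
```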
Growth inhibition of entomopathogenic fungi. A piece of mycelium (approximately 5 × 5 mm) was seeded in the center of a TSA plate and surrounded by three equidistant 3-μL drops of exponential-phase bacterial suspension. Plates were incubated at 20 °C for 7-10 days. After incubation, the area of the mycelium occupying the plate surface was photographed and measured using ImageJ. Each pair of bacterium and entomopathogenic fungus, plus the control (a piece of mycelium not surrounded by the bacterial suspension), was replicated 3-4 times.
Antagonism was estimated based on the percentage of mycelial growth inhibition (MGI), which was calculated using the formula ([Rc − Rexp]/Rc) × 100%, where Rc is the mean area of the control fungus and Rexp is the mean area of the examined entomopathogenic fungus co-cultured with each of the Bacillus strains 38 . The MGI value of each bacterium co-cultured with each fungus was first tested by using Student's t test, with the Holm-Bonferroni method to adjust P values. The MGI values among all entomopathogenic fungi were compared using a beta regression model with the Bacillus species as the fixed effect. The significance of the Bacillus species effect was tested by comparing the full model with a model without the fixed effect term, using a likelihood ratio test. Post-hoc tests were conducted using Tukey-adjusted pairwise comparisons. The statistical analysis was conducted using the R packages "betareg," "emmeans," "lmtest," and "multcomp."

Resistance of bacterial isolates to naphthoquinones. To examine the resistance of bacterial isolates to naphthoquinones, the growth of 11 predominant B. cereus/thuringiensis strains isolated from the principal ant host was compared with the growth of 20 environmental bacterial isolates (10 Bacillus and 10 non-Bacillus) against each of two naphthoquinones. Because fungal naphthoquinones are currently not purified and commercialized, the two naphthoquinones prepared for the experiment, plumbagin 39 and lapachol 40 , were plant-derived.
They were dissolved in a 30% dimethyl sulfoxide (DMSO) water solution 39 . Naphthoquinone concentrations were determined from the serial dilutions in which three randomly selected bacteria from the ant host and three from the environment showed the most distinct differences in growth rate. In this experiment, the bacterial isolates were first inoculated in LB medium at 20 °C overnight and were then refreshed to the exponential phase with LB medium for 3 h. The bacterial concentration was adjusted to ~1.5 × 10⁸ cells/mL. Next, 10 μL of the bacterial suspension and 180 μL of the Mueller Hinton broth medium (Sigma-Aldrich, St. Louis, USA) were added to either 10 μL of the naphthoquinone solution or 10 μL of the 30% DMSO water solution for the control. The growth of bacterial isolates at 20 °C was monitored by measuring the O.D. value at 600 nm with a Multiskan GO microplate spectrophotometer (Thermo Scientific, Waltham, USA) every hour for 12 h. Four bacterial isolates (one B. cereus/thuringiensis from the ant host, plus two Bacillus and one non-Bacillus from the environment) were omitted from the analysis due to low growth rate in the media with DMSO (O.D. value lower than 0.05 at the end of 12 h). Thus, ten predominant Bacillus from the ant, eight Bacillus from the environment, and nine non-Bacillus bacteria from the environment were used to represent the naphthoquinone tolerances of each group. Each combination of bacterial isolate and naphthoquinone or control was replicated twice.
The resistance index of each bacterial isolate was calculated as the normalized difference of the O.D. value in the naphthoquinone-treated medium versus the control medium ([naphthoquinone − DMSO]/[naphthoquinone + DMSO]). Values closer to 1 represent higher resistance to the presence of naphthoquinone. The resistance index was compared among the bacterial isolates from different sources (B. cereus/thuringiensis from the ant host, Bacillus from the environment, and non-Bacillus from the environment) using a linear mixed model with source as the fixed effect, bacterial isolate as a random effect, and growth time (5-12 h) as a nested effect. The significance of source as a fixed effect was assessed by model reduction and the likelihood ratio test. Post-hoc multiple comparisons were made using Tukey's all-pair comparisons. The model building and hypothesis tests were conducted using the "lme4" and "multcomp" packages in R.
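One plausible R sketch of this model follows; the data frame, its columns, and the exact nesting structure are assumptions, since the text does not fully specify them.

```r
# Sketch of the resistance index and linear mixed model with Tukey post-hoc tests.
library(lme4)
library(multcomp)

resistance <- function(od_naph, od_dmso) (od_naph - od_dmso) / (od_naph + od_dmso)

d <- read.csv("naphthoquinone_growth.csv")  # hypothetical: resistance, source, isolate, hour (5-12)
d$source <- factor(d$source)                # ant B. cereus/thuringiensis vs. environmental groups
d$hour   <- factor(d$hour)

full    <- lmer(resistance ~ source + (1 | isolate/hour), data = d, REML = FALSE)
reduced <- lmer(resistance ~ 1 + (1 | isolate/hour), data = d, REML = FALSE)
anova(reduced, full)                        # likelihood ratio test by model reduction

summary(glht(full, linfct = mcp(source = "Tukey")))  # Tukey all-pair comparisons
```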
Results
Relative abundance and diversity of cultivated bacteria in infected ant hosts. In total, 247 and 241 bacterial isolates were obtained, with 106 and 178 strains identified by the RAPD patterns, from infected P. moesta and P. wolfi, respectively. The 16S rDNA partial sequences for each of the strains were uploaded to the NCBI database, with the GenBank accession numbers provided in Supplementary files 1 and 2. According to the UPGMA results, the bacterial strains from the two hosts were clustered into 31 and 37 clades, respectively (Fig. S2).
All the bacteria identified from the ant hosts belonged to the phyla Firmicutes and Actinobacteria. Ten genera were identified in P. moesta and 17 genera were identified in P. wolfi, while six genera were found commonly in both hosts (Table S1, Fig. 2). Bacillus was the most predominant genus in both species of ants, comprising 56.68% (140/247) of total bacterial isolates from P. moesta and 65.98% (159/241) of total bacterial isolates from P. wolfi. We note that the bacteria identified as Bacillus might not be a monophyletic group based on the UPGMA results. Five (m7, m8, m11, m13, m18) of the nine "Bacillus" clades from P. moesta, and four (w11, w12, w20, w30) of the eight from P. wolfi, were not clustered into the same clade with other Bacillus. Despite this, excluding these clades from Bacillus did not change the predominance of Bacillus in the bacterial community, because most of the Bacillus clades (except m4, w4, w8, w30) were low in abundance, with fewer than 10 individual isolates. Two Bacillus clades (m4 and w4), one from each of the ant hosts, were the most abundant, comprising 44.94% and 39.00% of the total bacterial isolates in P. moesta and P. wolfi, respectively, whereas w8 (B. subtilis) and w30 (B. gibsonii) comprised 6.22% and 14.11% of the isolates. The species of the predominant clades (m4 and w4) was considered to be B. cereus/thuringiensis. Most of the bacterial strains belonging to these two clades were labeled as B. thuringiensis (22/47 in m4, 42/69 in w4) according to the BLAST results; however, these clades also contained some strains labeled as B. cereus (16/47 in m4, 17/69 in w4). Although most of the strains were suggested to be B. thuringiensis, 16S rRNA gene sequences amplified with universal primers show high similarity (> 99%) between B. cereus and B. thuringiensis 41 , and it is difficult to differentiate B. cereus from B. thuringiensis in routine diagnostics. The identification methods are expensive and laborious because current species designation is linked to specific phenotypic characteristics or the presence of species-specific genes 42 . Because identification in this study was based on the 16S rRNA gene sequence only, these clades were considered to be B. cereus/thuringiensis rather than one or the other.
The number of bacterial clades in infected P. wolfi (37 clades, estimated sample coverage: 94.63%) was higher than that in infected P. moesta (31 clades, estimated sample coverage: 93.93%). Sample size-based rarefaction and extrapolation curves also showed a higher diversity of microbiota in infected P. wolfi in all three biodiversity indexes, although the difference in species richness was less obvious than that in the other two indexes (Figs. 2, S4).
Production of hydrolytic enzymes by B. cereus/thuringiensis and B. gibsonii. B. cereus/thuringiensis isolates from both ant hosts displayed protease, lipase, and esterase activities. Chitinase activity was detected in B. cereus/thuringiensis isolates from P. wolfi, but was not confirmed in the B. cereus/thuringiensis isolates from P. moesta because none of these isolates grew on the chitinase detection medium. Lipase activity was detected in all of the B. gibsonii isolates, but none of these isolates grew on the chitinase detection medium, skim milk agar (for protease activity), or esterase agar plates (Table S5).

Figure 2. Species diversity and genus abundance of bacteria isolated from ant cadavers infected with Ophiocordyceps unilateralis sensu lato, estimated by using R software with the "iNEXT" package (https://www.r-project.org/).

Lethal effects on Caenorhabditis elegans. Hemolytic B. cereus/thuringiensis isolates reduced the survival of co-cultured nematodes (Fig. 3b). The addition of E. coli OP50 slightly increased the survival rate of nematodes co-cultured with hemolytic B. cereus/thuringiensis isolated from P. wolfi (Fig. 3b), but this impact was not seen when using hemolytic isolates from P. moesta (Fig. 3a).
Growth inhibition of entomopathogenic fungi by Bacillus isolates.
Most of the bacterial strains examined in this study significantly inhibited the growth of the co-cultured entomopathogenic fungi; the inhibition was not statistically significant for only a few strains, including JYCB395 and JYCB403 co-cultured with A. nomius, and JYCB395 and JYCB398 with T. asperellum. The strains that did not affect fungal growth significantly belonged to clade w30, identified as B. gibsonii. The intensity of the growth inhibition of the three entomopathogenic fungi differed among the three bacterial species (A. nomius: χ² = 87.21, d.f. = 2, P < 0.001; T. asperellum: χ² = 51.06, d.f. = 2, P < 0.001; P. lilacinum: χ² = 76.33, d.f. = 2, P < 0.001). The B. cereus/thuringiensis from each of the ant hosts inhibited growth of the entomopathogenic fungi noticeably, whereas B. gibsonii inhibited growth to a significantly lesser degree (Fig. 4).
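For orientation, here is a minimal R sketch of the beta regression comparison and Tukey post-hoc tests described in the Methods; the data file, column names, and the rescaling of MGI to a proportion are assumptions, not the authors' script.

```r
# Sketch of the MGI calculation and beta regression with Tukey-adjusted contrasts.
library(betareg)
library(lmtest)
library(emmeans)

mgi <- function(Rc, Rexp) (Rc - Rexp) / Rc * 100   # percentage of mycelial growth inhibition

d <- read.csv("mgi.csv")            # hypothetical: mgi_prop (MGI as a proportion in (0, 1)), species
full    <- betareg(mgi_prop ~ species, data = d)   # Bacillus species as the fixed effect
reduced <- betareg(mgi_prop ~ 1, data = d)
lrtest(reduced, full)               # likelihood ratio test for the species effect

emmeans(full, pairwise ~ species)   # pairwise comparisons, Tukey-adjusted by default
```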
Resistance of B. cereus/thuringiensis to naphthoquinones.
In a pretest with three randomly selected bacterial strains, the differences in naphthoquinone tolerance between the bacteria isolated from the ants and those isolated from the environment were most obvious at concentrations of 45 µg/mL plumbagin and 64.5 µg/mL lapachol (Fig. S3). These two concentrations were used in the following experiment. Bacterial isolates of all three categories (B. cereus/thuringiensis from the ant hosts, Bacillus from the environment, and non-Bacillus from the environment) displayed similar resistance to lapachol (χ² = 1.87, d.f. = 2, P = 0.392, Fig. 5a), growing similarly to bacteria cultured in the control medium (resistance indexes close to 1). In contrast, bacterial isolates from the environment, particularly the Bacillus, grew much more slowly in the presence of plumbagin than in the control medium, whereas B. cereus/thuringiensis from the ant host displayed a higher resistance to plumbagin (χ² = 6.91, d.f. = 2, P = 0.0316, Fig. 5b).
Discussion
We found that B. cereus/thuringiensis, which occupied nearly 40% of the total bacterial counts, predominated in the bacterial community in O. unilateralis s. l.-infected ant cadavers. In a study of the closely related fungus, O. sinensis, the Ophiocordyceps-associated microbiomes improved the development and formation of fungal metabolites 13,14 . However, the main bacterial taxa found in O. sinensis-infected host cadavers were not Bacillus 13,14 . In the present study, the proliferation of Bacillus in ant cadavers could stem from an invasion of the naphthoquinone-resistant population from soil. The genus Bacillus is one of the main bacterial groups in soil 43 . Furthermore, the predominant B. cereus/thuringiensis from ant cadavers displayed higher resistance to a specific naphthoquinone (plumbagin) than the bacteria isolated from the surrounding environment. Although such a difference in the naphthoquinone effect was not seen for lapachol, unequal antimicrobial activity among different naphthoquinones has been reported.

The B. cereus/thuringiensis isolates from both principal and alternate sympatric hosts are broadly aggressive to potential saprophytic invaders, including entomopathogenic fungi and nematodes. One possible factor is their hemolytic ability 45,46 . In comparison with non-hemolytic Bacillus, hemolytic B. cereus/thuringiensis isolates displayed stronger lethal effects on free-living nematodes and stronger growth inhibition of co-cultured entomopathogenic fungi. Collaborative bacteria have been known to assist parasites in occupying the host body by excluding potential invaders, such as nematode scavengers or insect predators 37,47,48 . For these collaborative bacteria, some of the antibiotic activity against invaders may be moderated by the need to compete with invaders for the necessary nutrition 49 , while not obviously affecting the host's survival 50 . In contrast, for the B. cereus/thuringiensis isolates, for which host survival is no longer a concern, the antibiotic activity could be more intensive and efficient. The B. cereus/thuringiensis isolates might play a role resembling that of the symbiotic bacteria, Xenorhabdus and Photorhabdus, in entomopathogenic nematodes. Invasion of the insect host by the nematode rapidly causes the insect's death, and septicemia occurs with the proliferation of symbiotic bacteria. The nematodes then colonize the insect cadaver with the symbiotic bacteria, which now serve as "bodyguards" to defend against invasion by saprophytic or parasitic organisms 48 . As in the B. cereus/thuringiensis isolates, hemolysis is also detected in X. nematophila and likely plays a critical role in killing and preserving the insect host 51 . The outbreak of hemolytic B. cereus/thuringiensis isolates can be lethal to the ant host, and might explain the lack of endotoxin-related and biosynthetic genes in O. unilateralis s. l. due to overlapping functions. Another explanation for the lack of endotoxin-related genes is that the proliferation of B. cereus/thuringiensis occurs after the host death. However, the precise timing of the outbreak is currently unknown. Regardless of the timing of B. cereus/thuringiensis proliferation, B. cereus/thuringiensis is potentially beneficial to O. unilateralis s. l. in protecting and consuming the host cadaver. Bacillus species produce several extracellular enzymes 52,53 , including alkaline proteases, which have been used commercially 54 .
The ant cadaver contains nutritious and protein-rich niches for microbiota, but these are only made available in the presence of proteolytic enzymes, which digest the macromolecular proteins into smaller peptides and free amino acids 55 . The protease activity detected in all of the examined B. cereus/thuringiensis isolates suggests that the symbiotic bacteria are advantageous to O. unilateralis s. l. in digesting the host tissue. In addition to improving consumption of the host, proteases are important for host epidermal decomposition and enhanced virulence, assisting O. unilateralis s. l. in causing host death 56,57 . Chitin is another major component of insects that functions as a scaffolding material 58 . In addition to releasing nutrients, digesting chitin can also be fatal to the insect host and its invaders. Chitin is a necessary component of the peritrophic matrix secreted by the entire midgut in most insects, and it functions as a protective barrier against abrasive particles and microbial infections 59 . Chitinase secreted by bacteria weakens the insect's peritrophic membrane, and consequently promotes the penetration of bacterial toxins to the gut epithelia during pathogenesis 60,61 . Chitinase produced by bacteria has also been found to present antagonistic activity against fungi, given that chitin is a primary component of fungal cell walls 62 . Digestion of lipids, in contrast to proteases and chitinase, might be less harmful to the living ant because lipids function mainly as storage structures. Nevertheless, lipids are a concentrated source of energy and primary nutrient reserves for fungal spores 63 . Extracellular lipid digestion by B. cereus/thuringiensis suggests that this bacterium efficiently harvests the energy from the ant cadaver and can symbiotically benefit O. unilateralis s. l. The hydrolytic enzymes, secreted to the microenvironment, may enhance the utilization of host resources. The importance of B. cereus/thuringiensis in consuming the host might be further supported by phylogenetic analysis of enzyme sequences. Unlike the genes for producing secondary metabolites, genes for producing hydrolytic enzymes in O. unilateralis s. l. display lower species specificity, which suggests a lack of positive selection among host species 64 . Sharing of the task by sympatric microorganisms reduces the indispensability and selective pressure on fungal hydrolytic enzymes.
The biological properties associated with an outbreak of B. cereus/thuringiensis suggest potential benefits to O. unilateralis s. l. The sympatric bacteria have recently been noted to play crucial roles in the growth of parasitoid fungi [65][66][67] . However, the proliferation of B. cereus/thuringiensis appears coincidental rather than the result of long-term coevolution. The bacterial communities in non-infected ants were not examined in this study. The predominance of B. cereus/thuringiensis may have originated from a small population in the living ant, the invasion of the bacteria from soil, or bacteria carried by the fungal spore. Currently we do not have data supporting these hypotheses, but the outbreak of B. cereus/thuringiensis is still significant ecologically because it was detected in the two infected host species. In addition, the net effect of the inevitable competition with B. cereus/thuringiensis for limited resources may prove to be antagonistic rather than synergistic. Co-occurrence of fungi and bacteria can promote the growth of both entities 70 , but it can also accelerate the depletion of resources. Tradeoffs have been reported in fungi between growth and tolerance toward bacteria 1 . In this study, we demonstrated the predominance of B. cereus/thuringiensis in the bacterial community associated with ant cadavers infected by O. unilateralis s. l. These bacterial isolates displayed the capabilities of hemolysis and production of hydrolytic enzymes, antagonistic effects toward co-cultured nematodes and entomopathogenic fungi, and higher tolerance toward naphthoquinones. At present we still do not have evidence to conclude whether the outbreak of B. cereus/thuringiensis is antagonistic or synergistic toward O. unilateralis s. l. However, study of sympatric bacteria will improve our understanding of the parasitic life history and potential selective pressures in O. unilateralis s. l. In addition, the antibiotic activity of B. cereus/thuringiensis isolates has potential as a biocontrol agent. With antagonistic effects on entomopathogenic fungi and nematodes, B. cereus/thuringiensis also has potential agricultural applications in controlling pathogenic fungi 71 and root-knot nematodes 72 . Together with the behavioral manipulation seen in O. unilateralis s. l.-ant parasitic associations, the bacterial diversity revealed in this study is a step forward in understanding the impact of microbial communities on parasitic life cycles. | 7,712.6 | 2021-11-22T00:00:00.000 | [
"Biology"
] |
Does a Mind Need a Body?
This question of whether the mind needs a body is a long-standing philosophical dispute, so I do not imagine that we are going to settle it once and for all today; nevertheless, I am going to argue that a mind does, in fact, need a body. Some of these remarks are reflected in my paper “What Do We Owe to Novel Synthetic Being and How Can We Be Sure?” which will be appearing in the July issue of the Cambridge Quarterly. In that paper, I argue that any account of our obligations to Novel Synthetic Beings (NSBs), whether they are machine-based or (synthetic) biologically based, will be incomplete and faulty if we only consider them in terms of how “intelligent” they are, in the narrow cognitive sense in which we often tend to use the word. It is easy to overlook the role of embodiment and assume that the only component that is essential for moral status is mental sophistication, since it is this that enables a being to self-reflect, have a conception of its own future, plans, values, an awareness of its own desires, and so on. But in fact, as I argue in the paper, doing this excludes the role that embodiment has to play in the having of those capacities. In the paper, I defend the view that we will have an incomplete account of what we owe to NSBs, what their rights are, and what they are entitled to, if we do not take into account the terms of their embodiment, what it enables and forbids them to do. My argument is that the physical aspect of their existence cannot be disaggregated from their mental capacities, and so to properly understand the nature of minds, in artificially intelligent beings in the case of the paper, we must also take into account how the putatively exclusively mental processes are physically instantiated. Those thoughts are the basis for where we stand now. I am arguing for the purposes of this debate that a mind does indeed need a body. As a starting point, I think there are two interpretations of “need” at play here. The first is whether or not the processes of mind are intelligible in the absence of the body; that is to say, whether we can even conceive of them as occurring in the absence of the body, or whether we can get a sense of what the mental is without taking into account the physical vehicle in which it is instantiated. The second interpretation of “need” is not just about intelligibility but whether, as a matter of logic, there must be body for there to be mind. Since, for reasons I will defend, the answer to this second interpretation is “yes,” insofar as mind cannot logically exist without body, the other question, of intelligibility, is a red herring. This is because if we hold that the mental logically requires the physical, then the notion of disembodied mind cannot be properly intelligible. My claim that there is no mind without body is grounded on an underlying ontological physicalist naturalism, and more specifically Strawsonian physicalist naturalism.1 It is fair to say that if you dispute the coherence of the underlying ontological picture, you will find the arguments that I make unpersuasive, and I flag this as a caveat and potential weakness of the arguments that I am going to make. Nevertheless, I find it a persuasive ontological account, and according to this account, the physical is coextensive with all there is. Everything exists within the physical universe. There is nothing beyond it; the substrate of everything is physical. As such, all phenomena are grounded in the physical. Because
nothing exists beyond the physical, anything and everything that exists is necessarily physically instantiated, including processes of mind. Even though the association between the two can seem mysterious, nevertheless, there are no processes in mind that are not physically instantiated.
There is a second and more difficult challenge to my argument that I also need to flag. This is the difference between embodiment and mere instantiation. I accept this challenge. All bodies are physical, but not everything physical is going to count as body-or at least there can be legitimate dispute about what a body is. For example, it does not appear to make sense to say that the air could count as "a body," just because it is physical.
In response to this challenge, however, one could say that equally we do not have the privilege of defining what definitely is and is not a body. We know that bodies are the delimited seats of conscious experience in humans and animals, but they do not necessarily have to conform rigidly, in perpetuity, to the way that we currently use the word "body." And if our technological trajectory is one where at some point we will be able to create novel forms of consciousness-true AI, for example-then at some point, questions about the meaning and self-understanding of their physical form are likely to arise.
If both the development of AI and synthetic biology were to succeed in this way in future, then we may be on the path to widening the scope of what count as bodies. We already know there is more than one kind of body: human bodies vary, within certain limits, and there are numerous kinds of animal bodies. Therefore, the scope is wide for what other physical forms could legitimately count as bodies. A useful intuition pump here for starting to think about this is Chrisley's fourfold typology of embodiment, which has been developed in the context of AI but is valuable for thinking more generally about different ways that embodiment might be legitimately understood. 2 In addition, what it is to have a mind is integrated with what it is like to exist and experience the world. And the only way to experience the world is by the physical operators that enable you to do that. So, the notion of mind breaks down and becomes unintelligible without that physical coupling being part of the conceptual picture. 3 This is not true only for relatively cognitively sophisticated beings such as humans, which not only have mental processes but are also consciously aware of having them. It is easy to forget that what we count as "mental" properties are not necessarily only a set of higher-order processes that enable us to have conversations like the one we are having in this debate, but also much more primitive, overtly "physical"-that is to say, rather than overtly intellectual-processes. 4 We often tend to think of the mental in terms of the former-like the ability to reflect on one's values, or doing mathematics, or whatever it might be. But of course, even basic functions necessary for getting by in the most simple way 5 -such as the ability to perceive and be aware of one's environment or negotiate one's environment via a particular form of locomotion-require both the capacity (realized in a brain or comparable organ or component) to interpret the information coming in and the mediation of this information by physical sensing apparatus. 6 Indeed, given the underlying ontological position from which I am starting, going down to an even lower level of sophistication, mental processes are necessarily physically instantiated, or embodied, and as such, the idea of mind existing without an interface with the world outside is incoherent.
So, to summarize briefly: since mind is mediated by the physical, there are no minds which are not grounded in the physical, and since bodies are necessarily physical, a mind requires a body. A reasonable objection to this final claim is that not everything physical is a body, but the response to that is, again, to say that the notion of a nonphysical body is incoherent, so whatever else a body is or is not, it is definitely physical. Therefore, I think what is really up for grabs and at the core of the dispute here is what does and does not count as a body.
David Lawrence: Opening Argument in the Negative

I have a difficult task, I admit, in making a case for the negative. Somewhat controversially I will begin by telling you first why I doubt my own position, and why I had a hard time constructing this stance. You will all be familiar, if not previously so then certainly following Alex's well-made argument, with the fundamentals of the "mind-body problem" discourse-(physicalist) Monism versus Dualism. An impasse between those who claim that the mind, the self, perhaps consciousness, are intrinsic to the brain and its biology 7 ; and those who cannot accept that, who see the mind as something other, who hold that there is some distinction between mind and matter. 8 In this debate, it might be more useful to talk about monism in terms of the physical, Alex being so concerned with the material reality of the brain as being all that there is, and so I will use both physicalism and materialism somewhat interchangeably from now on, the distinction between the two not seeming to matter too much for our question. 9 An easy approach to this argument would have been to decry a purely physicalist approach, to just wholeheartedly endorse dualism, but I do not see it as my place to try to convince you of some ephemeral, invisible other. I would struggle with that position, and I have always considered myself a materialist-I struggle to accept arguments for which, by their nature, we cannot present observable evidence. Ironically, I have faith in science explaining everything, so I cannot blindly endorse a classic dualism. Unfortunately for me, that faith, or the desire for that to be true, at least, does not prevent a niggling doubt I have, which I want to explore here.
The question "does a mind need a body?" presupposes a couple of things. It requires us to know what a mind is, and it requires us to know what a body is. The formulation of the question implies, perhaps, that a body is a host for a mind, a place it resides in some way. That works very nicely for my opponent's physicalist viewpoint-if "mind" is just a word we use for whatever biological processes we do not yet entirely understand (and lest we forget, cannot yet identify)-then those processes need meat, and they need anatomy in which to occur. A body, of course, provides that. We tend to think of a human body when we say the word "body," but since there is not really a platonic idea of what a body is or should be, we know at least that it does not have to be a plantigrade biped with cranium uppermost, like us. Format does not matter much, and is in many ways banal-so long as there is a material structure to support the processes that we tend to call "mind." Whatever those might prove to be, the body for this view could just be a host for these processes.
But of course, we do not actually tend to have such an open perspective, particularly in general conversation. When we talk about the body as a physical object in this discussion on the mind-body problem, we tend to be actually using it as what Peter Wolfendale refers to as an "index of authenticity," a "stand-in for whatever it is that supposedly enables actual human cognition…[with] little reason offered for this beyond its centrality to the current form of life we share." 10 In other words, we are only really using the term because it is what we recognize, and not because we actually mean something specific by it. Alex, really, has kindly made this argument for me, much better than I could. "Real meat" is, as Wolfendale puts it, our common denominator, so that is what we tend to think of. To me, that tendency makes us miss something important in this debate, which is that we do not really have any good reason to pick some specific thing as representing "body" here. It also leads us to certain assumptions about what a mind is.
Of course, we can observe some cognitive processes, seeing where and how they take place and connect within our neuroanatomy. 11 We can observe similar processes in other carbon-based, organic bodies. We can see how those processes might differ slightly in different animals, in different body plans, and in different body complexities. We sometimes use what we understand about one kind of body to deduce the biological processes in another kind of body. We use animal testing for a reason, after all. And I accept some examples of bodies have more complex architecture, more capable hardware, so to speak. They can perform mental processes that others cannot. Clearly, some examples of bodies have the capacity for higher levels of cognition than others. I would not stand here and suggest to you that, in that respect, the body of a mouse is equal to that of an orangutan. But… they are all "A Body." I will come back to this point.
The Subjective Mind
The other issue raised by the debate question was that we are assumed to know what a "mind" actually is. As I have said, I find it hard to accept the idea of some nebulous "other" substance interposed on or occupying the same physical space as our body. Most sensations we have are rooted in some observable physical way in our biology. I do not presume to dispute that, but here is the thing I mentioned that creates doubt within me. There is a lot about our physical bodies that might well be physically instantiated, but which we do not experience in any meaningful way. There is nothing much you can describe about the workings of your kidneys, your gall bladder, or your liver-they all fulfil vital, important functions, but the only time we become aware of them is when some malady causes other bodily systems to draw our attention to them. The flipside of all this, of course, is that there is a lot of experience that we do perceive, but that is not obviously physically instantiated. Relevantly here, the experiences of our mind are, or at least include, subjective experiences. These would be by nature unobservable, at least in terms of their character.
According to Thomas Nagel, who I accept is very commonly invoked in this debate, this subjectivity defeats reduction, because the subjective character of experience cannot be explained by a system of functional or intentional states. 12 How can every possible subjective experience-and degree of such-be tied to an individual physical property? It simply cannot be proved, because it is impossible to have an objective answer. Subjective experiences cannot be objective, as they cannot be verified-by nature, they are one point of view. We could not know what it is like for a bat to be a bat, because we can never be a bat ourselves-only imagine it, as a human and from our experience as humans. It is not possible for us to shed that experiential bias.
Daniel Dennett's argument against Nagel is that the "interesting or theoretically important" 13 elements of mind or consciousness could be observed, and that subjective experience, therefore, does not matter-because the mindset of a bat does not matter. In the grand scheme of things, I suppose this is true; bats are not important. But this seems to me to willfully miss something valuable. A copy of a mind-if such a thing is possible-would not be a copy if it was missing some element. Without the subjective experience, a copy would be a fundamentally impoverished account of that individual and specific mind. In the best case, it would be a blank slate-a mind in function and form, perhaps, but crucially not the one you tried to copy. This is sometimes called a philosophical zombie. 14 Similarly to the bat, it is not possible for me to know what it is like to be Alex. I cannot know his experience, or at least how he perceives experience. I cannot know how he perceives color-maybe it is the same way as I do, but maybe not, and we just have no way to prove it one way or the other. Even if we were to scan our cognitive activity, and the records seemed comparable, nothing much could arise from that save to demonstrate that our substrate is the same and we operate according to the same laws of physics and biochemistry (which would be some relief, even if not useful). It could not say anything significant about what he experiences, or whether our experiences are the same. You cannot reduce things to a pure physicalist view without ignoring that subjectivity.
We have developed various ways of trying to discuss these subjectivities and label them, isolate them, and package them off. We often use the term qualia 15 to denote a single "unit" of subjective experience, but this only does a partial job of capturing what that experience IS. Qualia is a nice way of saying that something is a subjective experience, but it cannot evoke that experience, and it cannot tell us what that experience is. You see the color blue. There is a physical instantiation of this experience. A wavelength of light is hitting your retina, and signals are sent to the brain conveying that. The signals will say something about the shade of the blue, maybe its intensity and its brightness-lots of characteristics that can be explained in terms of color physics-but there is also a subjective experience here: the blueness of the blue. The blueness is the quale in this example, and it is something that cannot be verified. I simply cannot know what it is you are experiencing when presented with "blue." We can both understand what we see to be blue, but there is no indication that what we see is actually the same. You cannot describe to me this experience of perception in a way that would tell me what it is to see what you consider "blue." Frank Jackson's classic "knowledge argument" goes that you could not explain blue to a person raised in a black-and-white room, without showing them-even if they know everything there is to know about the color physics, they still could not understand or imagine the experience of seeing blue until they have actually done so. 16 Furthermore, Moreland Perkins argues that qualia do not necessarily link to objective causes: a smell does not bear any obvious resemblance to the molecule we inhale. Physical sensations do not necessarily actually take place where we perceive them to; we have referred sensation. 17 You could not look at a molecule or a site of pain and know what the experience of that sensation would be to an individual.
All of this is to say that qualia only go so far as descriptors. They can tell us that there is a sensation, but not what that sensation is. Further still, it does not appear that there actually is any way to communicate that sensation, or, indeed, that it is possible to know what the sensation as perceived by another even is. So, despite my desire for, and faith in, science to explain the brain and explain the mind, to explain what, who, and how we are, it does not seem possible for science to answer this problem-to codify the blueness of blue. Per Nagel, again, "if we acknowledge that a physical theory of mind must account for the subjective character of experience, we must admit that no presently available conception gives us a clue about how this could be done." 18 So, there is some aspect of our lived experiences-some aspect of our minds-that cannot be reduced to the physical, at least not in a useful or observable way.
My opponent could jump in here and say that I am endorsing some plain dualism, just as I said I would not do. But I would like to occupy a more subtle ground-presumably, whatever this subjectivity is, it must be in some way a product of its physical basis. Without a physical basis, just as Alex has said, there could be no capacity to perceive qualia, and there could be no experience. This leaves me with an irreconcilable problem-I do not accept the mind as some aetheric force, it must, in some way, exist in our universe, but at the same time, it is not an ontologically simple existence.
Irreducibility
In some ways, one could argue that this comes close to ideas of an "emergent materialism"; that mind is a novel nonphysical property born of a complex system, and it cannot be reduced to a given physicality in itself. 19 Mind may be only metaphysically dependent on the brain. 20 If I were to endorse this, I would find myself dangerously close to giving ground to Alex. Even if body is only metaphysically necessary, that may be enough to say that a mind needs one. However, I believe we can introduce sufficient separation.
David Chalmers holds that consciousness, or mind, is entailed by information, and information is an ontologically separate fundamental property of the universe, explaining that: "In physics, it occasionally happens that an entity has to be taken as fundamental. Fundamental entities are not explained in terms of anything simpler. Instead, one takes them as basic, and gives a theory of how they relate to everything else in the world." 21 This greatly resembles the ways in which we often talk about minds. Chalmers suggests that qualia are informational-the blueness of blue is information of which we are aware, although which we cannot convey to others. Qualia do not follow logically from the physical facts of the brain or body, and so they are what is sometimes referred to by, for example, Derek Parfit as further facts. 22,23 Qualia, then, exist only as information-they cannot be reduced any further.
Information, or subjective experience, means nothing in and of itself. It is also metaphysically possible that it does not exist without being observed, per the famous "If a tree falls in a forest and no one is around to hear it, does it make a sound?" thought experiment. The process of observation, of receipt of that information, is what I think we refer to as mind. There are cases in which we have a human patient, say in a permanent vegetative state, where the brain architecture exists, but there is no perception, receipt, or processing of information, and correspondingly, we see no evidence of a mind. The brain may be the engine, but without informational fuel, what use is that engine?
The corollary of this is that without instantiation of some kind, there is not anything present to receive, to interpret, or to experience the information regarding the blueness of blue. We need a tank in which to pour that fuel. Mind must, therefore, have some physical property-it must be instantiated within the universe. It must exist somewhere in order to accept inputs of information. But it is also nonphysical-it is a gestalt of, for want of a better term, functional informational capacities such as the ability to understand, or to perceive qualia. The mind is inhabiting an odd, quantum space; higher mental processes seem to be somehow phenomenologically different from physical cognitive processes. I am, therefore, forced to admit that my views on this debate may lie closer to property dualism (or at best a nonreductive physicalism) than I perhaps previously thought, with nonphysical mental properties supervening on physical substance. And, this is as close as I will come to saying that my opponent has a point-it appears our minds do need a host or substrate to function, and if you choose to call that a body, then very well. But there is no reason to say that this host need be any specific body.
Function over Form?
The chick is not the eggshell. But without the shell, the chick could not form. It is conceivably possible to remove the embryo from the shell and place it in another shell, where it will grow. It may now be a slightly different chick, but it will still be a chick. So, it may be with the mind. I do not dispute that modes of instantiation-or to return to the language of our initial question, embodiment-will affect the mind, or rather, that they will affect our subjective experience. It is almost undeniable-if the body is a conduit through which inputs are filtered, the incoming sensations, whatever they are, are going in some way to be colored by the medium through which they are transferred. There are wavelengths of electromagnetism that our Homo sapiens eyes cannot receive. As such, looking at a light source does not transfer to us the experience of seeing infrared-although our retinas are being impacted by infrared wavelengths of light. An eye that could see infrared would include that input, and we would presumably subjectively experience it as… something. As I have tried to point out, we cannot know what that experience would be.
If the functionality of the mind is the core of the matter, as I believe, it will presumably proceed to enact these functions-to receive information and to have subjective experiences-no matter its mode of instantiation (or its shell), if we can assume a similar degree of complexity for that embodiment. It may not have these experiences in the same ways, the informational input may be inflected or colored or even expanded by the conduit, but an experience would nonetheless be had. The mind would have performed its role in perceiving that input, and interpreting it as an experience. The brain-in-a-vat concept 24 is frequently misused, but it seems apt here to demonstrate this idea of the body as a conduit that does not, in itself, matter. Assuming, for a moment, that such a thing is possible, then the vat's various, presumably electronic and chemical, inputs would act in the same way as our senses. The mechanism by which these operate does not seem to be significant, but it does not seem unrealistic to suggest that we could, in future, emulate the types of electrochemical activity that enter our brains "naturally." The mind residing in that vat would still take that activity as an input and experience it subjectively, although the information would be transferred not through ears or eyes and neurons, but through a chemical soup and wires. So, rather than the mode or medium of ingress, the significant element would be the information contained in that electrochemical activity. The process of perception, of mind, would be unchanged. The mind, then, needs a body, but it could be any sufficiently complex body, and anything that we could conceive of as a body.
Concluding Remarks
The question "Does a Mind Need a Body" is, essentially, flawed. Each is reliant on the other to constitute themselves. A mind may be metaphysically dependent on a substrate, a host. But a substrate is not a body without a mind-it is just an interchangeable shell. If a mind cannot be reduced to a physicality, then specific embodiment does not matter. It has an effect, and it matters in as much as it does shape the mind, gives it structure, and colors our experiences. But that is a byproduct, not the main event. It seems to me that it is not what a mind is but what a mind does that matters, and what a mind does is to subjectively experience information, to interpret embodied inputs, and to have purpose in and of itself.
If these capacities are emergent properties that we cannot reduce solely to their underlying biology, properties with some nonphysical element that we cannot objectively measure, about which there is an unsolvable epistemological gap, then this says that the first assumption of our central question is problematic. We do not know, objectively, what a mind is and we cannot know, because we cannot escape our own perspective and subjectivity. We are limited by an inability to be objectively introspective; so, there is a systemic and epistemic barrier to any attempt to understand the basis of our minds. 25 Just as we cannot know what it is like to be a bat, we can only know what it is like to be a human, and that tells us nothing of the nature of a mind in and of itself, removed from a given host. One method may be to attempt to build a mind for someone or something else, perhaps an artificial general intelligence-but the very inability we have to escape ourselves will likely cause us to create their minds in our own image.
If we cannot know mind beyond its abstract functionality, then we cannot say definitively what it is or where it resides. If we cannot say where it resides, then we cannot say that it must reside in a given body. If we cannot definitively say it must reside within our body, then we cannot definitively say that a mind needs a body. The claim that the mind needs a given body relies wholly on the idea that a mind can be entirely quantified and reduced. Because that argument fails-it cannot be proved, however much I may wish it could be-we cannot in good faith say that a mind needs a body in any way that we should care about.
Alex McKeown's Rebuttal: Does David Really Disagree with Me?
One thing that I am concerned by is the extent to which David actually disagrees with me. Perhaps in the process of discussing this, we will tease out what it is he is arguing with me about, but it seems to me there was a lot that he accepted. And on my reading of it, what he accepts should lead him to accept my conclusion rather than his.
For example, David accepts that there needs to be some kind of physical host for mind. He accepts that it has to be somewhere. But then, he says it does not have to be any particular host and, you know, I suppose I understand what he means by that. But he does not dispute that mind has to be instantiated in some way and so he does not dispute the underlying physicalist picture. And once you have accepted that the mind needs to be extended in space somewhere, it does not seem to me that there is enormous scope for disagreement.
David also accepts my argument that humans do not necessarily have any privileged account of what counts as a body. There could be many different kinds of bodies, so we are not entitled to point to instances of different kinds of physical instantiations of mind and say that is definitely not a body. I flag that as a kind of anthropomorphic uncertainty, and David accepts that as well.
So David accepts the physicalist picture, and he accepts that we do not have a privileged account of what a body is or is not. Really what he is disagreeing about is whether I am right in saying that there are lots of things that count as bodies, given that this move can be construed as claiming a kind of anthropomorphic certainty to which I am not entitled. However, just because something is embodied in a way that we could not possibly imagine does not mean it is not entitled to say that it is embodied.
David Lawrence's Rebuttal: Embodiment Does Not Matter
Alex, in his argument, rightly gave a great deal of attention to the idea of embodiment. This is not the first forum in which he and I have taken opposing sides on that subject, and I dare say it will not be the last either.
To rebut, I will return to my characterization of the body-whatever we are understanding that to be-as a "conduit." I want to make two points about this; the first being to reiterate that the specific body does not seem to me to matter very much. I contended that while embodiment surely does matter in as much as any medium will exert an effect on whatever passes through it, this does not fundamentally change the fact that experiences would still be had, and mind would continue to exist and function. Whatever color or inflection the conduit imparts, the signal still arrives and is still processed, as it were.
The Banality of Substrate
A hardline, reductive physicalist might liken the brain to a machine, some piece of complex electronics. This is, ironically, a helpful analogy here-electronic circuitry imparts resistance to the electrons flowing through it, but the signals travel. A slightly different alloy used in the circuit would provide a different resistance, and that altered resistance might affect the character of the signal slightly-less strong, say-but it would still travel, and still incite the response at the point of receipt. Providing it is capable of transmitting the signal, the alloy-the conduit-does not matter.
Let us say you were born an able-bodied, neurotypical person, who then suffers some accident that alters your body plan, such as the loss of an appendage. Your embodiment is now altered, and your embodied experience is changed. You would, undoubtedly, experience things differently, even once you had adapted to the physical differences. However, your mind would remain your mind, and you would still experience things by the very same processes that you would have but for the accident. The changing of your embodied shape or self does not self-evidently change the function or purpose of the mind. If we accept that this function is, in part, to experience subjectivity, and we accept that subjectivity is a, or even the, distinguishing feature of a conscious mind, then we could go further and say that the perception of these subjective experiences is us in all the ways which matter.
Here, I would like to invoke John Harris, who discusses the continuation of narrative identity and the possibility of "successive selves" in radical life extension. As he puts it: "Suppose [an individual] has three identities, A, B, and C, descending vertically into the future… A will want to be B, who will remember being A; B will want to become C, who will remember being B but possibly not remember being A." 26 Although Harris uses this to discuss an extreme life span, it holds true for an ordinary one. We do not generally remember the subjective experience of being a baby, beyond possible flashes of certain physical sensations that made a great impression on us, or maybe broad emotional states. We certainly do not recall qualia-and in many ways, this simply does not matter. We are having experiences as who we are now, and who we were does not seem to be significant beyond that it led us to the present. Furthermore, our embodiment has changed-no element of our bodies is the same as it was when we were small children, or "A." This is true in a literal, cellular sense, but also true in as much as we are physically different, larger, a different shape. We have, necessarily, a very different embodied experience of life, and as "C" you likely do not remember-and cannot know-what it is like for a young child to be a young child, just as we cannot know what it is like for a bat to be a bat. 27 Despite this, we do not consider that that change in embodiment has made us some new person, something significantly different from who we were as "A." This ought also to be true of the mind. A different shell, or substrate-a different embodiment, here-does not prevent subjective experiences taking place in and of themselves, so what seems to be at stake in our original question-does a mind need a body-is whether or not that body has much to do with the actual function of mind.
One subject of much discussion within transhumanist-leaning literature is the idea of whole-brain emulation, or mind upload. Anders Sandberg and Nick Bostrom suggest: "The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain." 28 We can more simply think of this as a perfect copy of the type sometimes considered as a means of digital immortality. 29 As discussed in my opening argument, a perfect copy seems to require copying the element of subjectivity as well, so let us imagine that our copy achieves this. Once you have sat in the copy machine, the you (you-X) that was copied can get up and walk away (assuming a nondestructive process) and continue living and having subjective experiences. You-X's mind continues to operate, and you-X's life is generally unchanged. The copied you, you-Y, branches off. For a single instant, the minds of you-X and you-Y will be identical-until you-Y begins to have its own subjective experiences the moment it comes "online." You-Y would be having different experiences to you-X, by virtue of its inputs being different, despite being processed by the same mental architecture. The inputs would enter through a different conduit, dependent on whatever the copied mind resides in-that is, its body.
The distinction between you-X and you-Y, though, does not arise because they are instantiated in different ways. It is simply that once these minds are no longer exactly identical, it is not possible for one to know the experience of the other. Their mode of embodiment would affect their experience, just as losing a hand would affect it. However, for both you-X and you-Y, the fundamental fact is that they would still be having subjective experiences, in parallel but in the same manner as the predivergent you. Both you-X and you-Y would remain you, although on a new path and unable to know each other's mind. Their housing does not seem to be significant.
Malleability
The second point I would like to make is that even if you disagree on the primacy of the function of mind as having experiences, embodiment seems to be malleable. If it is malleable, it cannot be fundamentally significant for the existence of mind.
There is an increasing amount of research into how to manipulate the neurological basis of the hypothetical body schema-our internal "map" of our bodies-into accepting new embodiments. 30 This is primarily of use in the field of prosthetics, wherein it is desirable to help a user adapt to their prosthesis, consider it "part of their body," and reduce phenomena such as "phantom limb." Bioengineers also present a "soft embodiment," 31 where neural and cognitive body mechanisms are repurposed to allow the embodiment of nonorganic additions, perhaps even things that are nonbiomimetic. A similar effect can be achieved externally-via application of the body transfer illusion, best recognized as the "rubber hand illusion." Here, the subject's organic hand is hidden, and a rubber hand is placed within sight. The hidden organic hand is stroked, but the subject experiences the sensation as though it were in the rubber hand. Perceptual mechanisms can override our knowledge about the material reality of our bodies and give an illusion of a different embodiment than the strict truth. 32 If such a simple experiment can induce a new embodied experience-even if only temporarily-how fundamental can our true embodiment be to our mind and mental processes?
Furthermore, we induce this in ourselves frequently and without intending to, without the use of illusion. With experience and practice, we frequently describe tools or other objects as being "a part of my body." A snooker player's cue becomes an extension of their arm, and they experience the strike of the ball at the tip and not in their hand. 33 A heavy plant operator need not think about the levers they pull to extend the hydraulic arm, to draw the scoop. The machine moves as though it is an extension of the operator. There are countless other examples-driving being the most common. After an adjustment period, we are simply aware of the bounds of our vehicle, we know the spaces in which it can fit-without needing to get out and measure.
Far from being trite examples, these demonstrate that our embodiment is far from concrete. In all the ways that appear to matter, these nonorganic additions to our bodies function and are experienced as though they are part of our bodies. I experience the qualia of the strike on the ball; however, it is mediated, and I go on to have whatever resultant emotion or thought or experience that stems from that. I draw away from the needle threatening "my" rubber hand, because I want to avoid the pain I instinctively think will ensue. 34 If our embodiment is so malleable, and our experiences continue through these new body parts that are so easily incorporated into our schema, then it does not seem to me that embodiment matters in any significant way.
The Flawed Language of Mind
The final, brief point I would make perhaps serves both Alex and myself. Our debate has relied on circumlocution, and we have both struggled to articulate entirely accurately what we think mind to be. This is borne out across the literature of the mind-body problem and more widely in philosophy of mind-it is extremely difficult to effectively discuss something which is, by its nature, entirely nebulous and unknown. Furthermore, we are limited by the language we have available to us. All the terminology we use-"mind," "consciousness," and even "body"-can be understood in myriad ways. We invoke possessive terms constantly, and in so doing necessarily suggest that "my" mind is "me," when this view is not shared-for instance, by my learned opponent. We cannot describe qualia, because we utterly lack the words for it, in any language. We analogize-and I have done so extensively here-because we are trying to describe an invisible and quite possibly nonphysical process, but the analogies themselves are limited to things that we can observe, things that, therefore, fundamentally cannot entirely represent such a process.
Until we can solve some of these issues-until we can agree some definitions, some limits-it does not seem likely that this is a debate we can conclude one way or the other. I maintain, then, that even if my arguments in these statements fail in themselves, there must remain a reasonable doubt that mind "needs" body.
Alex McKeown: David Lawrence's Best Argument
The strongest argument David makes relates to the use of the definite article when talking about mind, given that we do not know what "a" mind is and that it is probably better understood as a "functional Gestalt." A key difference between mind and body is that the body, which in humans and other animals includes the brain, can be seen and observed by empirical investigation, whereas even though processes of mind are physically instantiated, you are not going to open up a brain and find "a mind." Rather, "a" mind is shorthand for describing a collection of functions that are characteristic of the kinds of beings that we are. 35 This of course admits a degree of uncertainty about what mind is and is not, so David's skepticism about this is reasonable, and there is a risk that I am in fact over-anthropomorphizing the picture. So, to me, that is probably David's strongest argument, because there is some legitimate skepticism about what "a mind" is and where it resides. There is also the general skepticism about knowledge of other minds because of the radical uncertainty about what other people's subjective experience is like.
As an aside, of course, if one were a panpsychist-we all know that the panpsychist debate has been raging recently, and let's not get into that here-one could just say, "well, the mental is an intrinsic feature of the physical," since this sidesteps the hard problem of emergence. Not all experience will be as sophisticated as our mental events, and matter in general might just have an unimaginably primitive form of experience; that is, one that is not self-aware or self-referential and so on. Nevertheless, if you buy the conclusion that matter is experiencing, because this is a more parsimonious explanation for the existence of mind than the dualist picture in which mind and body are separate, then you might have some sympathy with the view that there is necessarily no non-physically instantiated mind.
Having said this, and to finish off, I think David articulated his strongest point well at the end there, which is that in spite of these arguments it is possible to have reasonable doubt that mind and body are inseparable. There really does seem to be something different about mental experiences from physical ones; we experience those things differently, and this phenomenological aspect is a challenge that might instill reasonable doubt as to whether, in fact, you cannot have one without the other.
David Lawrence: Alex McKeown's Best Argument
For all my contention that the mode of embodiment does not matter, I have to accept-and I was forced to admit during my own arguments-that I find it nearly impossible to deny the necessity of physicality, even if that necessity is purely instrumental. All phenomena must be grounded in the physical in some way, even those we cannot easily reduce to it. There is no aetheric substance experiencing subjectivity; the mind, whatever it proves to be, must be in some way instantiated. It must be subject to the fundamental laws of physics, even if we do not yet entirely understand how. If subjectivity is information, and if information can be considered a fundamental component of physics-as Chalmers might have it-then perhaps subjectivity, too, has a rational scientific explanation to be discovered. Physicality does not have to mean we can touch something, merely that it be subject to physics, and materialism such as that which Alex relies on merely requires that there is matter and material interaction in the process of mind. In this regard, I cannot deny him.
This problem returns us to one of the flaws of our central debate topic-what we understand to count as a body. It is a reasonable argument to make to say that whatever host, substrate, or conduit mind resides in could be called its body. If so, then that is impossible to repudiate-even if it makes "body" so vague a concept as to be useless.
"Philosophy"
] |
Observer-based sliding mode control for a subsonic piezo-composite plate involving time-varying measurement delay
In this study, an observer-based sliding mode control (SMC) scheme is proposed for vibration suppression of a subsonic piezo-composite plate in the presence of time-varying measurement delay, using a piezoelectric patch (PZT) actuator. First, the state-space form of the subsonic piezo-composite plate model is derived by Hamilton's principle with the assumed mode method. Then a state observer involving time-varying delay is constructed, and a sufficient condition for the asymptotic stability of the state estimation error dynamics is derived using a Lyapunov-Krasovskii functional, the descriptor method, and linear matrix inequalities (LMIs). Subsequently, a sliding manifold is constructed on the estimation space, and an observer-based controller is synthesized using SMC theory. The proposed SMC strategy ensures the reachability of the sliding manifold in the state estimate space. Finally, simulation results are presented to demonstrate that the proposed observer-based control strategy is effective for active aeroelastic control of a subsonic piezo-composite plate involving time-varying measurement delay.
Introduction
Aeroelastic phenomena are a kind of harmful vibration resulting from the interaction between aerodynamics, inertial forces, and structural dynamics. When experiencing aerodynamic loads, flexible structures such as wings, helicopter and wind-turbine blades, beams, plates, and shells may vibrate strongly, resulting in structural fatigue failure. To suppress these adverse vibrations, various control theories have been proposed or developed, including PID control, 1 robust control, 2,3 adaptive control, 4,5 sliding mode control, 6,7 LQG regulators, [8][9][10] and adaptive nonlinear optimal control. 11 Instead of full state measurements or observer design, Singh et al. 12 proposed a feedback controller based on partially available measurements to achieve flutter suppression.
However, in the above studies, time delay (TD) is ignored. In practice, time delay is unavoidable; it may result from the data collection system and the actuation system. 13 TD may cause the controller to fail and even cause the aeroelastic system to switch from a stable state to an unstable one. 14,15 In Ramesh and Narayanan, 16 the dynamics of a two-dimensional airfoil with a constant TD is investigated by a PID control strategy using a single state feedback signal. Yuan et al. 17 focused on the nonlinear dynamical character of a two-dimensional supersonic lifting surface with constant TD. They found that TD has a significant effect on the bifurcating motion; for example, it could turn subcritical Hopf bifurcations supercritical. Similarly, Zhao 18,19 proved that TD has a huge impact on the flutter boundary of the controlled aeroelastic system. Thus, TD should not be ignored in the design of an active aeroelastic control system. Almost all of the above studies focus on the effect of TD on the dynamic characteristics of the system; few researchers have addressed the design of active aeroelastic control systems involving TD. By using a Lyapunov-Krasovskii functional, free-weighting matrices, and LMIs, Zhao et al. 20 presented H∞ control of a flexible plate with control input delay, theoretically and experimentally. Luo et al. 13 adopted the model transformation method to deal with the flutter of a 2D airfoil using SMC, considering the control input delay. Ming-Zhou and Guo-Ping 21 adopted a finite-time H∞ adaptive fault-tolerant control technique to suppress the vibration of a 2D wing airfoil. Recently, robust passive adaptive fault control for a 2D airfoil model with control input delay was studied by Li et al. 22 In their study, a fault-tolerant observer was designed to estimate the wing flutter states for the control system.
However, all these studies concern 2D airfoil aeroelastic systems with a wing flap as the control surface. To the best of the authors' knowledge, there is no literature report on controller design for a high-dimensional aeroelastic system in the presence of time-varying delay. Furthermore, piezoelectric materials can replace the wing flap for sensing and actuating functions, avoiding the complexity of mechanical systems and improving system reliability. Inspired by this, the purpose of this study lies in the design of an observer-based sliding mode controller for the vibration suppression of a high-dimensional aeroelastic system with time-varying measurement delay, using a piezoelectric actuator.
SMC is a typical variable structure control method, famous for its robustness and insensitivity to uncertainty. SMC has been used to achieve trajectory tracking of nonlinear robotic manipulators under varying loads, 23 under uncertainties and external disturbances, 24 and with backlash hysteresis. 25 An adaptive fractional-order non-singular fast terminal SMC law was designed for a lower-limb exoskeleton system. 26 Using SMC, a hybrid robust tracking control scheme was proposed for an underwater vehicle in the dive plane. 27 Researchers have also studied observer-based SMC for systems with unmeasurable states. A disturbance observer-based super-twisting SMC was proposed for formation maneuvers of multiple robots. 28 An extended state observer-based SMC strategy was proposed for an under-actuated quadcopter UAV. 29 However, the literature concerning observer-based SMC for aeroelastic systems is sparse. 30 In our recent work, using the PZT actuator, we studied an observer-based SMC scheme for suppressing bending-torsion coupled flutter motions of a wing aeroelastic system with constant measurement delay. 31 In this paper, the model of the subsonic plate is chosen as the high-dimensional aeroelastic system, and we focus on observer-based controller design for the system with time-varying measurement delay. Hamilton's principle with the assumed mode method is applied to establish the aeroelastic model. Then, by using a Lyapunov-Krasovskii functional and the descriptor method, an observer is designed, and the sufficient condition for the asymptotic stability of the observer is guaranteed in terms of linear matrix inequalities (LMIs). Then, sliding mode control is employed on the estimation space to achieve the observer-based controller design. Lastly, the controller performance is verified by numerical simulation.
Aeroelastic model and solution methodology
Mathematical model of the piezoelectric plate subjected to subsonic aerodynamics
A uniform, simply supported rectangular plate with a piezoelectric patch actuator bonded on its top surface and subjected to subsonic aerodynamics is considered. As shown in Figure 1, the plate has thickness $t_b$, and its dimensions along the x and y directions are $a$ and $b$, respectively. The piezoelectric layer has thickness $t_p$, and its location coordinates along the x and y directions are $l_{1x}$, $l_{2x}$ and $l_{1y}$, $l_{2y}$, respectively.
The stress-strain relations of the base plate are presented as follows (refer to page 64 in Jalili 32), where $\sigma_1$, $\sigma_2$, $\tau_6$ and $S_1$, $S_2$, $S_6$ are the normal and shear stresses and strains, $c^D_{*}$ are the stiffness coefficients, and $w$ is the transverse displacement of the plate. Under the thin-plate (Kirchhoff) assumptions used here, the strains take the standard form $S_1 = -z\,\partial^2 w/\partial x^2$, $S_2 = -z\,\partial^2 w/\partial y^2$, $S_6 = -2z\,\partial^2 w/\partial x \partial y$.
It is noted that where the piezoelectric patch is attached ($l_{1x} < x < l_{2x}$, $l_{1y} < y < l_{2y}$), the neutral surface shifts to $z_n$, which depends on the respective Young's moduli of the plate and piezoelectric materials. The corresponding strain equations are modified accordingly, and the constitutive equations become those of the piezoelectric laminate, where $c^P_{*}$ are the elastic constants of the piezoelectric material, $e_{31}$ is the piezoelectric constant, $D_3$ is the electric displacement, $\varepsilon_{33}$ is the dielectric constant, $E_3 = V(t)/t_p$ is the electric field intensity, and $V(t)$ is the input voltage.
The total potential energy and the total kinetic energy follow from these fields, where $\rho(x, y)$ is the variable density of the combined piezoelectric and plate materials, defined through the piezoelectric/plate section indicator function $G(x, y) = [H(x - l_{1x}) - H(x - l_{2x})][H(y - l_{1y}) - H(y - l_{2y})]$, with $H(x)$ the Heaviside function and $\rho_b$, $\rho_p$ the respective plate and piezoelectric volumetric densities.
The virtual work done by the aerodynamic pressure can then be written down. For low-Mach-number subsonic flow, the aerodynamic pressure is approximately given by Dowell and Ashley, 33 where $A_0 = \frac{1}{\pi}\int_{-x/a}^{1-x/a} \ln|\eta|\, d\eta$. Substituting equations (4), (5), and (8) into Hamilton's principle, $\int_{t_0}^{t_f} (\delta T - \delta U + \delta W)\, dt = 0$, where $t_0$ and $t_f$ denote two arbitrary instants of time.
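As a quick numerical aside, the coefficient $A_0$ above can be evaluated directly. The following Python sketch (the sample values of $x$ and $a$ are illustrative, not from the paper) checks adaptive quadrature against the closed-form antiderivative of $\ln|\eta|$:

```python
import numpy as np
from scipy.integrate import quad

def A0(x, a):
    """A0(x) = (1/pi) * integral of ln|eta| over [-x/a, 1 - x/a].

    The integrand has an integrable singularity at eta = 0, which is
    flagged to quad via `points`."""
    lo, hi = -x / a, 1.0 - x / a
    val, _ = quad(lambda eta: np.log(abs(eta)), lo, hi, points=[0.0])
    return val / np.pi

# Closed form for a cross-check: an antiderivative of ln|eta| is
# eta*ln|eta| - eta (extended by 0 at eta = 0).
def A0_exact(x, a):
    F = lambda eta: eta * np.log(abs(eta)) - eta if eta != 0 else 0.0
    return (F(1.0 - x / a) - F(-x / a)) / np.pi

print(A0(0.2, 1.0), A0_exact(0.2, 1.0))  # the two values should agree
```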
In order to use the assumed mode method, $w$ is written as the linear combination $w(x, y, t) = \sum_m \sum_n \psi_{mn}(x, y)\, q_{mn}(t)$. For a uniform plate hinged at all four edges, the mode shape function can be chosen as $\psi_{mn}(x, y) = \sin(m\pi x/a)\sin(n\pi y/b)$. Substituting (11) into (10), we obtain the aeroelastic model in ordinary differential form, whose state-space form is $\dot{x}(t) = Ax(t) + Bu(t)$. The time-varying delayed output measurement is assumed to be $y(t) = Cx(t - \tau(t))$, where $\tau(t)$ is the time-varying measurement delay and $C$ is the known output matrix.
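To make this structure concrete, here is a minimal Python sketch of the assumed-mode expansion and the delayed measurement. The plate dimensions are placeholders, and the paper's $A$, $B$, $C$ matrices (which come from the Hamiltonian derivation above) are not reproduced:

```python
import numpy as np

a, b = 0.4, 0.3          # plate dimensions along x and y (placeholders)
M, N = 4, 1              # number of assumed modes, as in the simulations
MODES = [(m, n) for m in range(1, M + 1) for n in range(1, N + 1)]

def mode_shape(m, n, x, y):
    """Hinged-plate mode shape: sin(m*pi*x/a) * sin(n*pi*y/b)."""
    return np.sin(m * np.pi * x / a) * np.sin(n * np.pi * y / b)

def deflection(q, x, y):
    """Transverse displacement w(x, y) = sum over modes of psi_mn * q_mn."""
    return sum(mode_shape(m, n, x, y) * qi for (m, n), qi in zip(MODES, q))

def delayed_output(C, history, t, tau):
    """Delayed measurement y(t) = C x(t - tau(t)); `history` returns x(s)."""
    return C @ history(t - tau(t))

print(deflection(np.ones(M * N), a / 2, b / 2))  # w at the plate centre
```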
At the end of this section, the following lemmas about matrix inequalities, needed in the derivation of our results, are introduced.

Lemma 1 (Jensen's inequality, 34 page 87). For any $n \times n$ matrix $R > 0$, scalars $a, b$ with $0 < a < b$, and a vector function $f : [a, b] \to \mathbb{R}^n$ such that the integrations concerned are well defined, the following matrix inequality holds:
$(b - a)\int_a^b f(s)^T R f(s)\, ds \ \ge\ \left(\int_a^b f(s)\, ds\right)^{T} R \left(\int_a^b f(s)\, ds\right).$

Lemma 2 (page 97 34). Let $R_1 \in \mathbb{R}^{n_1 \times n_1}, \ldots, R_N \in \mathbb{R}^{n_N \times n_N}$ be positive definite matrices. Then for all $e_1 \in \mathbb{R}^{n_1}, \ldots, e_N \in \mathbb{R}^{n_N}$, all $\alpha_i > 0$ with $\sum_i \alpha_i = 1$, and all $S_{ij} \in \mathbb{R}^{n_i \times n_j}$, $i = 1, \ldots, N$, $j = 1, \ldots, i-1$, such that $\begin{bmatrix} R_i & S_{ij} \\ S_{ij}^T & R_j \end{bmatrix} \ge 0$, the following inequality holds:
$\sum_i \frac{1}{\alpha_i}\, e_i^T R_i e_i \ \ge\ \sum_i e_i^T R_i e_i + \sum_{i \ne j} e_i^T S_{ij} e_j.$
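Jensen's inequality is easy to spot-check numerically. The following Python sketch (random $R$ and a sampled test function are illustrative choices) verifies the reconstructed Lemma 1 by Riemann summation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, a, b, steps = 3, 0.5, 2.0, 20000
Q = rng.standard_normal((n, n))
R = Q @ Q.T + n * np.eye(n)                     # random R > 0

s = np.linspace(a, b, steps)
f = np.stack([np.sin((k + 1) * s) for k in range(n)])  # f: [a,b] -> R^n
ds = s[1] - s[0]

# lhs = (b - a) * integral of f^T R f; rhs = (integral f)^T R (integral f)
lhs = (b - a) * np.sum(np.einsum('it,ij,jt->t', f, R, f)) * ds
F = f.sum(axis=1) * ds
rhs = F @ R @ F
print(lhs >= rhs)                               # True, per Lemma 1
```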
Observer-based sliding mode controller design and analysis

Observer design and stability analysis
In this section, an observer-based sliding mode controller is designed, and sufficient conditions for the asymptotic stability of the observer and controller systems are derived in terms of LMIs. For the delayed output system (14), the state observer is constructed in the form $\dot{\hat{x}}(t) = A\hat{x}(t) + Bu(t) + BL\,[y(t) - C\hat{x}(t - \tau(t))]$, where $\hat{x}$ represents the estimate of the system state $x$, and $L$ is the observer feedback matrix to be designed later.
In view of (13) and (18), define $e(t) = x(t) - \hat{x}(t)$; the state estimation error dynamics are then given by $\dot{e}(t) = Ae(t) - BLC\,e(t - \tau(t))$. The following theorem gives a sufficient condition for the asymptotic stability of the state estimation error dynamical system (19).
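For intuition, the following Python sketch integrates these error dynamics with a simple Euler scheme and a stored history; the matrices and delay profile are illustrative placeholders, not the paper's values (there, the gain $BL$ comes from the LMI feasibility problem (20) and (21) below):

```python
import numpy as np

# Euler simulation of e_dot(t) = A e(t) - B L C e(t - tau(t)).
A = np.array([[0.0, 1.0], [-4.0, -0.4]])        # placeholder dynamics
B = np.array([[0.0], [1.0]])
L = np.array([[2.0, 0.5]])                      # placeholder observer gain
C = np.eye(2)

dt, T = 1e-3, 10.0
n_steps = int(T / dt)
tau = lambda t: 0.05 + 0.02 * np.sin(2.0 * t)   # bounded time-varying delay

e = np.zeros((n_steps + 1, 2))
e[0] = [0.1, -0.1]
for k in range(n_steps):
    t = k * dt
    kd = max(0, int(round((t - tau(t)) / dt)))  # index of delayed sample
    e_delayed = e[min(kd, k)]
    e[k + 1] = e[k] + dt * (A @ e[k] - (B @ L @ C) @ e_delayed)

print(np.linalg.norm(e[-1]))   # small if the estimation error converges
```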
Theorem 1. If there exist a scalar $\varepsilon$ and matrices $P_1 > 0$, $R_2 > 0$, $Q_3 > 0$, $Q_4 > 0$, $Z$, $S_{12}$, $P_2$ satisfying the two LMIs (20) and (21), where $*$ denotes the symmetric terms in a symmetric matrix, then the error dynamical system is asymptotically stable.

Proof 1. Choose the Lyapunov-Krasovskii functional candidate $V$ as in (22). Taking its time derivative and applying Jensen's inequality (Lemma 1) to both integral terms in equation (24) yields the bound (26). Substituting equation (26) into equation (23), we obtain (27). Then, adopting the descriptor method, 34 the right-hand side of the descriptor identity, with matrices $P_2$, $P_3$ of proper dimensions, is added to the right-hand side of equation (27).
Collecting terms in an augmented vector $\xi(t)$, we can rewrite equation (27) in the matrix form $\xi(t)^T \Omega\, \xi(t)$, with $\Omega$ as given in (29). Hence, for $\xi(t) \ne 0$, it follows from (29) that $\dot{V} < 0$ if $\Omega < 0$. This implies that the error dynamical system is asymptotically stable. By defining $Z = P_2^T BL$ and $P_3 = \varepsilon P_2$, we obtain the linear matrix inequality (20).
Sliding mode controller design and stability analysis
For the state estimate system (18), a sliding manifold $s(t)$ can be constructed on the estimation space, where the auxiliary matrix $P \in \mathbb{R}^{8 \times 8}$ is symmetric positive definite. According to sliding mode control theory, setting $\dot{s}(t) = 0$ yields the equivalent control law (32). Combining this with a robust reaching term, our proposed terminal sliding mode controller is designed as in (33), where $k, h > 0$ and $p, q$ are positive odd integers with $p > q$. Substituting equation (33) into equation (18) yields the closed-loop estimate dynamics (34). The finite-time reachability and stability analysis of the sliding mode control is given in the Appendix.
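A minimal Python sketch of the reaching phase under such a terminal law follows; the gains are illustrative, and the reaching gain written here as `eta` corresponds to the paper's $h$:

```python
import numpy as np

# Terminal reaching law s_dot = -k*s - eta*sign(s)*|s|^(q/p); with p, q
# positive odd integers and p > q, s reaches zero in finite time.
k, eta, p, q = 2.0, 2.0, 5, 3

def smc_input(s, u_eq):
    """Total control: equivalent control plus the terminal robust term."""
    robust = k * s + eta * np.sign(s) * np.abs(s) ** (q / p)
    return u_eq - robust

# Reaching phase of the manifold variable itself (scalar illustration):
s, dt, t = 1.0, 1e-4, 0.0
while abs(s) > 1e-6:
    s += dt * (-k * s - eta * np.sign(s) * abs(s) ** (q / p))
    t += dt
print(f"s reached zero at t = {t:.3f} s")       # finite reaching time
```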
Similar to our previous work, 32 the auxiliary state feedback matrix $P$ can be obtained by solving an inequality in the variables $X$ and $F$, as in (35), where $X = P^{-1}$. See the Appendix for the detailed derivation process.
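This change of variables is the standard LMI trick for feedback synthesis. The sketch below, using cvxpy with placeholder matrices (the paper's come from equation (13)), shows the pattern under the assumption that $F$ plays the role of $KX$; the paper's exact inequality (35) is not reproduced in this excerpt:

```python
import cvxpy as cp
import numpy as np

# With X = P^{-1} and F = K X, the Lyapunov inequality
#   (A + B K)^T P + P (A + B K) < 0
# becomes linear in (X, F):  A X + X A^T + B F + F^T B^T < 0.
A = np.array([[0.0, 1.0], [2.0, -0.5]])   # placeholder open-loop matrix
B = np.array([[0.0], [1.0]])

X = cp.Variable((2, 2), symmetric=True)
F = cp.Variable((1, 2))
eps = 1e-6
lmi = A @ X + X @ A.T + B @ F + F.T @ B.T
prob = cp.Problem(cp.Minimize(0),
                  [X >> eps * np.eye(2), lmi << -eps * np.eye(2)])
prob.solve(solver=cp.SCS)

P = np.linalg.inv(X.value)   # recover P from X = P^{-1}
K = F.value @ P              # recover the gain from F = K X
print(P, K)
```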
Numerical simulations
In this section, numerical simulations are carried out to illustrate the effectiveness of our proposed control strategy. The lay-up configuration of the plate and material properties of the piezo-actuator are displayed in Table 1.
In the simulation, we choose M = 4, N = 1 in equation (11); the modal variable $x(t)$ is then an eight-dimensional vector. Additional structural modes only affect the precision of the system description and do not affect the effectiveness of our proposed method. The initial condition is assumed to be $x(0) = [0.1\ 0.1\ 0.1\ 0\ 0\ 0\ 0]^T$. Figure 4 studies the sensitivity of the vibration response to the y-direction with fixed $x/a = 0.5$. As can be seen from Figure 4, the vibration responses at all coordinates have converged. Increasing or decreasing $y/b$, with the coordinate $(x/a = 0.5,\ y/b = 0.5)$ as the reference, the response amplitude gradually decreases, as evidenced in Figure 4. Comparing Figure 3 with Figure 4, we find that the vibration response is sensitive to both the $x/a$ and $y/b$ coordinates, while the $x/a$ coordinate has the more significant effect on system instability.
To achieve vibration suppression, active control strategies are implemented. In the closed-loop system, Figure 5 shows the time history of the time-varying measurement delay $\tau(t)$ in equation (14), with $h = 0.7$. The output matrix $C$ is assumed to be the identity matrix. The observer gain matrix $BL$ and the controller auxiliary feedback matrix $P$ are obtained by solving the LMIs (20) and (35). The other control parameters are selected as $m = 0.01$, $h = 2$, $p = 5$, $q = 3$. Input saturation is also considered, with $u(t) \in [-600\,\mathrm{V}, 600\,\mathrm{V}]$. Furthermore, an LMI-based control method is used as a comparative study. 35 (Remark: for clarity, we only drew the result curves with obvious differences.) Figures 6 and 7 show the simulation results of the closed-loop system; the states converge under the proposed controller. Figure 8 shows the control signals of the piezo-actuator; the maximum input voltage is ±600 V. From the figure, we can see that the chattering phenomenon is well suppressed, and the curve is much smoother than with the LMI-based control method. Figure 9 shows the vibration response along the x-direction with fixed $y/b = 0.5$. It can be seen from Figure 9 that all of the vibration responses converge and have the same settling time. Comparing Figure 9 with Figure 3, under the control law the originally divergent states (at $x/a = 0.2$, $x/a = 0.6$, $x/a = 0.9$) become convergent, and the originally convergent state (at $x/a = 0.5$) remains convergent. From these simulation results, it can be concluded that the proposed controller and observer are effective in dealing with the active aeroelastic control problem with time-varying delayed output.
Conclusion
In this article, an observer-based sliding mode controller was proposed for active aeroelastic control of a subsonic piezo-composite plate subject to time-varying measurement delay. A piezoelectric patch was bonded on the top surface of the plate as an actuator. Making use of simplified unsteady aerodynamics and adopting two-dimensional piezoelectric actuation theory, the coupled dynamical model of the aeroelastic system was formulated by means of Hamilton's principle with the assumed mode method. The instability bound was calculated by solving for the system eigenvalues. The vibration responses at different coordinates were investigated. To achieve vibration suppression, the observer-based sliding mode controller was designed, and the corresponding gain matrices were obtained by solving the LMIs. Asymptotic convergence was guaranteed by Lyapunov stability theory. The major conclusions are as follows:
1. In the open-loop system, the vibration response depends on both the x/a and y/b coordinates of the plate, while the x/a coordinate has the more significant effect on system instability.
2. The proposed observer-based sliding mode controller is effective in eliminating unstable responses and stabilizing the system in finite time.
The results presented in this article indicate that our proposed control strategy can effectively deal with the active aeroelastic control of a subsonic piezo-composite plate with time-varying measurement delay.
Authors' Note
Na Qi is also affiliated with KeiHin Electronic Device Research and Development (Shanghai) Co., Ltd.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
"Engineering"
] |
ENGAGING YOUTH THROUGH SPATIAL SOCIO-TECHNICAL STORYTELLING, PARTICIPATORY GIS, AGENT-BASED MODELING, ONLINE GEOGAMES AND ACTION PROJECTS
The main goal of this paper is to present the conceptual framework for engaging youth in urban planning activities that simultaneously create locally meaningful positive change. The framework for engaging youth interlinks the use of IT tools such as geographic information systems (GIS), agent-based modelling (ABM), online serious games, and mobile participatory geographic information systems with map-based storytelling and action projects. We summarize the elements of our framework and the first results gained in the program Community Growers, established in a neighbourhood community of Des Moines, the capital of Iowa, USA. We conclude the paper with a discussion and future research directions.
INTRODUCTION
Current procedures and activities in urban planning almost completely neglect youth and their involvement in the planning processes and co-creation of our cities and environments. In our research, we contribute to a better understanding of the methods that can be used for empowering youth as community leaders in urban planning and local community action. The main goal of this paper is to present the conceptual framework for engaging youth in planning through an innovative integration of IT tools and methodologies as storytelling mechanisms that support the creation of online geogames and action projects. The framework concentrates on engaging youth from underrepresented populations as co-creators in, and co-designers of, their community. By working with youth on visible community projects, we not only generate "community agency" for the youth but also enhance the potential for local change, aiming to make youth community leaders in community capacity building.
We are conducting a community engagement action project involving GIS-based socio-technical storytelling and serious geogames with youth in a resource-vulnerable neighbourhood in the East Bank of Des Moines, Iowa. This project begins with an 8-week program, called Community Growers, that involves a leadership-minded group in the Boys & Girls Club at the local middle school, using primarily geographic information systems (GIS) with some inclusion of agent-based modelling (ABM), both to create and tell the youth's story of the assets of their neighbourhood and then to devise an action project the group will plan-in this case, one associated with their school's community garden.
The ideas and maps the youth generate through the program will play a key role when they share their process and action project with neighbourhood residents, the local neighbourhood coalition Viva East Bank!, and City of Des Moines officials at community events. At these events, researchers will collect further data from residents that will, along with the geogames the youth help shape, serve as input for the models and further action for the neighbourhood and city-with the aligned goal of making the East Bank, and Des Moines overall, a more sustainable, equitable, and connected community.
Our Community Growers program empowers youth as fellow knowledge-producers alongside both researchers and community leaders. For the research community, this work aims to optimize the narrative capabilities in GIS and ABM to calibrate lived experience with systems-level science, creating a practical strategy for better-aligned community-researcher partnerships and outcomes. As a longer-term goal, this work, by empowering youth, helps foster a new generation of community leaders and practitioners who see the utility of GIS and models as interfaces that can join personal experience with systems thinking, creating a socio-technical culture of action and decision making.
As researchers, we are interested in testing the development and implementation of IT methods and tools that are unique in their combination and have not often been used in the civic engagement of youth. We will:
- experiment with using GIS, ABM, and online geogames
- observe how the youth utilize these tools
- assess whether these tools empower the youth in formulating their visions for their neighbourhood and community
- evaluate how these tools foster the youth's leadership skills as they conduct action projects in the community garden.
The methods we are suggesting are combined in an engagement framework presented in this paper. This paper introduces the conceptual model for engaging youth with a combination of IT tools and presents our engagement approach implemented in Des Moines, Iowa, together with our initial results. We summarize the main research concepts, outline our future research directions, and conclude the article with a discussion.
Engaging underrepresented youth
Involving marginalized, underrepresented populations, and additionally youth, in urban planning has long been a challenge, yet these individuals are among the most crucial to engage as cities seek to foster the next generation of decision-makers as well as create more resilient, connected communities. Due to such factors as work-life stresses and language barriers, these populations are the most difficult to engage in feedback and data-collection mechanisms and, therefore, often remain absent from municipal and community decision-making (Maloff 2000).
When researchers seek to engage these populations, the typical strategy is to begin with local power-brokers such as landlords, business owners, public officials, developers and other stakeholders. This approach affirms hierarchical relationships that privilege leaders and public officials more than residents. Conversely, some researchers take a less-travelled path and place youth at the centre of community engagement; however, much work is still done for youth, rather than with youth (Cherry 2011, Derr et al. 2013, International Institute for Child Rights and Development 2015). Youth are rarely included in decision making at either the city or neighbourhood level: "Many youth's voices are absent from community-building processes, deepening the gaps of miscommunication and contributing to community exclusion" (Blanchet-Cohen and Salazar 2009, p. 5-6). Integrating the voices of youth into city and neighbourhood decision-making as well as transdisciplinary scientific research is unusual but also crucial for developing forward-thinking planning and research related to sustainability.
Spatial socio-technical storytelling
As a central component of our methodology, we take as a starting place the notion of storytelling-a longstanding cornerstone of community engagement practices in design, urban planning, and the social sciences. Sharing personal stories and experiences authorizes underrepresented populations and non-credentialed stakeholders as fellow knowledge-producers in the creation of new policies and practices (Goldstein et al. 2015). The language of story is also emerging in data science, often as the bridge between data and action (Fuller 2015). Mohanty et al. (2013, p. 255) describe the data scientist as one who brings "scenarios to life by using data and visualization techniques: this is nothing but storytelling." Digital storytelling implements digital media to help create stories, and this focus on the personal works well with youth. The youth initially share their personal experiences but then, using GIS and ABM, begin to "scale up" those stories, using these technologies to help them place their lived experience within larger systems. In particular, work using GIS in this capacity is already underway.
Recent advancements in GIS-based storytelling focus on the creation of map-based stories, which combine multimedia such as geo-located pictures and videos presented on the map, offered online to everybody with internet access.
In our team, Shenk is developing a methodology-socio-technical storytelling-that uses the narrative, collaborative, and systems-level capacities of the data analytics platforms GIS, VGI, and agent-based modelling (ABM) to help communities, and particularly youth, tell larger stories that single-teller, personal reflections of traditional storytelling cannot capture. The focal term of storytelling for this methodology emphasizes the integral role of personal experience and narrative as the foundation, while "socio" introduces the way these elements, through the use of data technologies, get situated within larger systems that have social impact.
For the Community Growers program, working with GIS will help the youth situate their action projects for the garden within issues of food access, personal values, and cultural diversity in their community-issues that expand and deepen the garden beyond a place solely for food production. Likewise, through use of the ABM, they can explore the social dynamics that lead to powerful community change, giving the youth a space to consider how these dynamics suggest ways of conceiving their work as community leaders. In each case, the data technologies empower new stories that suggest systems-level impacts that personal stories alone may not-at least so swiftly-uncover.
Participatory GIS (PPGIS) and Volunteered Geographic Information (VGI)
Public Participatory GIS (PPGIS) as a research area was established and extensively discussed in the mid-90s (Schroeder 1996). It mostly concentrated on desktop GIS applications and further development of GIS platforms by adding participatory functions and operations (Kingston et al. 1999, Carver 2001). These applications were first implemented and tested for their technical capabilities and also for their newly included participatory functions (Steinmann, Krek et al. 2004). Later, many researchers stressed the importance of the user interface and usability of PPGIS (Haklay and Tobón 2003, Poplin 2012b, Poplin 2015) for non-GIS experts and technicians. The development of new technologies, and especially mobile devices, led to novel ways of collecting geographic data on a volunteered basis; in such situations, citizens may act as sensors (Goodchild 2007a, 2007b, 2007c), contributing data to GIS-based systems that are often freely available online and/or on a variety of mobile devices. Goodchild coined the term volunteered geographic information (VGI) for such applications. In a VGI environment, the user contributes her knowledge about the environment and is able, through a user-friendly interface, to enter this information/knowledge/data into the system, which stores the data in a geographic database.
Research on favourite places and place attachment
Storytelling is connected to memories, perception, and the way people feel and remember. How do people perceive and experience places in their neighbourhood? How, especially, do youth experience these places? What is important to them? Knowing more about places and perceptions of places may enable urban planners and designers to design more sustainable, pleasant, and happy places. Korpela and Hartig studied favourite places (Korpela 1992, Korpela and Hartig 1996, Korpela 2012), which are defined as places that afford restorative experiences and may aid emotional and self-regulation processes. In their initial studies, they worked with young people, especially adolescents, whom they asked to compose an essay describing their favourite places. The adolescents reported going to their favourite places to relax, calm down, and clear their minds. They also described the experience of beauty, freedom, and escape from social pressures. Their favourite places were described as aesthetically pleasing and engaging. Natural settings such as parks, places near water, and green areas were overrepresented among favourite places and underrepresented among unpleasant places. The adolescent participants reported reduced anxiety, fears, and social pressures while at their favourite places. This research indicates a link between favourite places and restorative experiences.
Agent-based modelling research
Agent-based modelling (ABM) is a type of simulation modelling wherein multiple autonomous agents with intelligence (internal logic) have the ability to make complex decisions and engage in complex interactions with other agents and objects within their environment to achieve one or more identifiable goals. Such interactions may lead to dynamic agent adaptations (i.e., learning): an agent may collect information about an interaction and then apply this new knowledge to future decisions and behaviours (Gilbert and Troitzsch 2005). The system properties that emerge through these agent interactions and adaptations over time often result in a system that exhibits behaviour that cannot be predicted by examining the behaviour of its individual parts (Pathak et al. 2007). One of the most important advantages of ABM is that it enables humans to be realistically characterized and modelled as boundedly rational agents with complex psychologies that are capable of making subjective choices via explicit decision rules (Bonabeau 2002). Additionally, ABM is a bottom-up modelling approach in which individuals can be represented as discrete heterogeneous agents, which is far more realistic than using a statistical aggregate to model diverse human attributes, behaviours, and decentralized decision making (Epstein and Axtell 1996, p. 2). ABM also allows individual agents to learn and adapt their behaviours based on accumulated experiences, which is difficult to do with other modelling approaches. ABM is therefore a particularly appropriate tool for capturing emergent system behaviour as neighbourhood residents interact with each other and their environment; a minimal sketch of this style of model appears below.
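The following Python sketch illustrates the core ABM mechanics described above-heterogeneous agents with individual thresholds adapting through local interactions. The adoption rule, thresholds, and network are illustrative assumptions, not our project's model:

```python
import random

# Each resident decides whether to adopt a community practice (e.g.,
# joining the garden) based on how many neighbours have adopted;
# micro-decisions aggregate into macro-level change.
random.seed(1)

class Resident:
    def __init__(self):
        self.adopted = random.random() < 0.05   # a few early adopters
        self.threshold = random.uniform(0.1, 0.5)
        self.neighbours = []

    def step(self):
        if self.adopted or not self.neighbours:
            return
        share = sum(n.adopted for n in self.neighbours) / len(self.neighbours)
        if share >= self.threshold:
            self.adopted = True                 # adaptation via local interaction

residents = [Resident() for _ in range(200)]
for r in residents:                             # random social network
    r.neighbours = random.sample([x for x in residents if x is not r], 6)

for t in range(30):
    for r in residents:
        r.step()
print(sum(r.adopted for r in residents), "of", len(residents), "adopted")
```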
Participatory online serious games
Recently, games, and particularly online serious games, have been utilized for engaging stakeholders in participatory activities for urban planning and design. Serious games are games designed for more than just entertainment and fun (Ritterfeld et al. 2009); they are sometimes called games for change or learning games. Serious online games may be used in planning and civic engagement for the digital representation of urban plans in a virtual environment merging physical and virtual interaction (Gordon and Manosevitch 2010; Yamu, Poplin et al. 2017), enabling civic engagement and public participation in novel ways (Krek 2008, Poplin 2011, Poplin 2012a, Poplin 2014) in order to overcome what is called the rational ignorance of citizens (Krek 2005). Rational ignorance is a concept according to which citizens decide that it is not worthwhile to participate in urban planning due to the high cost of acquiring information and getting informed about the planned activities.
Games (Koster 2004, Salen and Zimmerman 2004) may also be used for collective reflection on urban planning issues (Devisch et al. 2016), enabling citizens to express their opinions and to co-create and co-design the environments in which they live. They can be designed as geogames: games that represent the environment in a realistic or close-to-realistic way (Schlieder et al. 2005, Ahlqvist et al. 2012), often based on maps, satellite images, or aerial photographs. We hypothesize that games, online games, and game-based simulations may provide environments in which players/citizens can be immersed and experience flow, enjoying the engagement and participatory activities.
The conceptual framework for engaging youth
Inspired by these research areas, we envision a mixed-method engagement framework that aims at finding the combination of methods that will result in engaged and empowered youth and, consequently, in better-connected communities in the targeted neighbourhoods where underrepresented and marginalized populations live. Figure 1 presents the conceptual framework we envision for engaging youth in the co-creation of their neighbourhood. Our proposed conceptual framework for engaging youth in reflections about their neighbourhood and co-creation of places and spaces includes modelling tools such as geographic information systems (GIS), agent-based modelling (ABM), storytelling, and mobile volunteered geographic information (VGI).
Figure 1. The framework for engaging youth in co-creation of their neighbourhood using modelling tools

The framework integrates the data analytics platforms GIS, VGI, and agent-based modelling (ABM) directly into community-engaged work through socio-technical storytelling. We adopt an asset-based (strengths-oriented) approach responsive to the community's expressed interests-interests that, significantly, emphasize youth. ABMs are natural tools for socio-technical community engagement: these quantitative computational simulation models show how the individual micro-decisions of agents can lead to macro-level collective action in complex adaptive systems. Research in ABMs is becoming more data-driven and participatory, involving larger ranges of empirical data.
We build on our previous work (Poplin 2012, Passe et al. 2016, Shenk et al. 2016, Shenk, Krejci et al. 2017, Poplin 2017) and include GIS, ABM, and online games, which are well suited to joining individual, lived experience in a defined spatial area with scalable, systems-level interactions. Their use in combination for creating and fulfilling action at multiple aligned scales has not yet been explored. In our proposed conceptual model, the youth create socio-technical stories that help plan and document local, tangible, positive action, placing their work within widening circles of systems-level significance. In turn, such telescoping between the personal and the larger systems helps integrate the views of youth as leaders into the even larger systems of city and community leaders, thus allowing, we anticipate, the formation of more youth-official-researcher partnerships and collaboration.
Research focus: A broader research agenda
Our project includes attention to engaged research as yet another step in mentoring future leaders, citizens, and scientists for community-connected, sustainable cities, even as we, as credentialed experts, benefit from the knowledge the youth and their community bring to our work. In our research we concentrate on the following questions:
- Which IT methods and techniques can be optimally used to engage and empower youth? In which phases of planning can these methods be used? How can these different IT tools be best combined in an engagement process?
- How will the use of geographical information systems (GIS) expand youth understanding of their neighbourhood? Is GIS an appropriate, stimulating, intriguing tool for youth engagement? How can GIS be best presented, taught, and implemented in the phases of youth engagement?
- Can youth feel empowered by using GIS, VGI, ABM, and online geogames?
- How can ABM be presented and utilized to engage youth?
- Will youth feel engaged and empowered by being able to design and use GIS and design their GIS-based stories about their neighbourhood?
We are in the phase of testing the presented conceptual framework for engaging youth in collective reflection and co-creation in selected neighbourhoods in Des Moines, the capital of Iowa.
Des Moines neighbourhood: Our study area
Our study area is located in Des Moines, the capital of, and most populous city in, Iowa, USA, with a population of 203,433 inhabitants as of the 2010 census. Des Moines is a major centre of the U.S. insurance industry and is located along the Des Moines River (Figure 2). Our test site is located in the city's East Bank neighbourhoods, which were selected as the main focus of our study because they have among the highest levels of underrepresented, resource-vulnerable populations in the city. In these neighbourhoods, the median income is about half that of Des Moines overall ($46,290), and all neighbourhoods have a dramatically higher percentage of minority residents, nearly double that of Des Moines overall. The total population of all three neighbourhoods is 8,673. Over 34% of the population is under 18, 30% of adults do not have a high school diploma or other higher educational attainment, and nearly 30% of the population lives below the poverty line.
Several of these neighbourhoods are mostly Spanish-speaking, and the inhabitants experience challenges integrating with English-speaking neighbours. The inhabitants, and especially the youth, of these neighbourhoods are at the centre of our study.
East Bank neighbourhoods in Des Moines: A common focus on youth
All three of the East Bank neighbourhoods have high populations of young people, ages 5-17, with percentages averaging 8% higher than Des Moines overall, and all three neighbourhood plans emphasize the priority of supporting their youth. Our partnership with the Boys & Girls Club aims to support the community's already-established goal to create opportunities for youth engagement-a situation that both establishes credibility with adult community members and, as Cherry's work (Cherry 2011) on youth and climate change has demonstrated, encourages adults themselves to participate in sustainable practices. In our work, we aim at empowering youth as a crucial entry-point to working with residents in these target neighbourhoods (Shenk et al. 2016).
We begin this work with youth by working with them to create socio-technical stories that help plan and document local, tangible, positive action, placing their work within a layer of systems-level significance. Twenty-two boys and girls signed up to be part of our 8-week program. We meet with them twice a week, on Thursdays and Fridays, for 45 minutes each session, involving them in experimenting with GIS, ABM, storytelling, and the creation of geogames. An exciting adventure!
Community Growers: Spatial storytelling with GIS
We have called this program "Community Growers" to acknowledge several key aspects: the local community garden connected to the middle school, which will form the centre of the action project; the youth's own leadership skills; and the youth's role in nurturing community capacity building. Within this program, the youth will learn about GIS and how to use ArcGIS Online, and will experiment with map creation and online mapping tools. They will help generate GIS maps that first enable visioning work about their favourite places and the strengths of the neighbourhood, and then begin to integrate how the garden fits into that vision. These explorations will enable them to see their environment from several different perspectives.
One perspective is their own, individual perspective (my favourite places) and the other is that of the neighbourhood (we, our garden, our community), a more systemic perspective that embeds their personal perspectives into the vision for the community. With these ideas in mind, the youth will explore how they can conduct an action project that tells, and adds to, the story of what the community garden does and can mean to the community. Part of this work, with its focus on the Hiatt garden, will involve exploring with the youth a more holistic design approach to local food systems and how food systems are related to community values.
Using GIS, they will map how revitalizing the garden connects to neighbourhood systems (community-building, food, health).
Using the integrated capacity of GIS to include layers and narrative (through image, text, and video), the youth will generate a story about the garden that situates it within the larger systems of food, community, spaces, and equity, all of which matter to decision-makers at the neighbourhood coalition level, to the city, and to aspects of our larger research team's models.
Mapping favourite places with VGI
The initial activities with the youth include mapping their favourite places, taking pictures of these places, and describing them with words.In the next stage, this data will be inserted into a GIS and into a VGI platform.We plan on using Maptionnaire (maptionnaire.com) for the discussions about their experience of the environment in which they live.
Maptionnaire is an online GIS-based participatory platform that enables crowdsourcing and citizen engagement in discussions based on map representations of the area under discussion. It also includes analytical capabilities and the ability to store the comments in a common geodatabase. Maptionnaire will be designed, developed, and implemented for the needs of this project, and especially for the needs of the youth involved in the project. The implementation will first be tested with ISU students in a pilot study and then used to test the mobile participation and VGI component in the pilot project with the Boys & Girls Club in Des Moines. The advantage of Maptionnaire is that it stores the entered data in a geographical database; these data can then be used for a variety of spatial analyses, spatial queries, and statistics, as in the sketch below.
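As an illustration of this kind of downstream use, the sketch below loads a hypothetical export of the favourite-place points into GeoPandas and runs a simple proximity query. The file name, attribute names, CRS choices, and garden coordinate are placeholders for illustration, not Maptionnaire's actual export schema.

```python
# Sketch of post-processing crowdsourced "favourite place" points exported from the
# VGI survey. File name, field names, and the garden coordinate are assumptions.
import geopandas as gpd
from shapely.geometry import Point

# Load exported points (assumed GeoJSON in WGS84) and a placeholder garden location.
places = gpd.read_file("favourite_places.geojson")               # hypothetical export
garden = gpd.GeoSeries([Point(-93.6, 41.6)], crs="EPSG:4326")    # placeholder point near Des Moines

# Reproject to a metric CRS (UTM zone 15N covers central Iowa) so distances are in metres.
places_m = places.to_crs("EPSG:26915")
garden_m = garden.to_crs("EPSG:26915").iloc[0]

# Simple spatial query: favourite places within a 500 m walk of the garden.
places_m["dist_to_garden_m"] = places_m.geometry.distance(garden_m)
nearby = places_m[places_m["dist_to_garden_m"] <= 500]

# Aggregate for discussion with the youth: favourite places per (assumed) category field.
print(nearby.groupby("category").size())
```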
Community Growers, Growing Community: ABM
We will introduce a prototype agent-based model (ABM) that shows how community connectedness, collaboration, and knowledge sharing can magnify the impact of individual residents' decisions to improve a community's overall quality of life. Introducing youth to the concept of ABM and to a developed ABM prototype aims to underline the significance of the youth's work in generating projects that create and foster community.
The prototype ABM is a neighbourhood-level model that conveys possibilities for individual decisions and interactions to improve community sustainability (Figure 3).
Figure 3. ABM created for weatherization related to sustainable cities
This model will suggest smaller, actionable stories that align with the strategies that decision-leaders and researchers are addressing at larger scales. Figure 3 shows the user interface of the ABM, which integrates social network analysis to determine the effects of residents' social connections on neighbourhood-level behaviour. These ABM-based discussions may encourage the youth to include and plan a community event at the garden where they can not only share their storymaps and action project but also situate that work within an event that naturally encourages further community capacity building.
Through an introduction to ABM, they will be empowered to shape and determine an action project that they will work on over the summer.
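A deliberately simplified sketch of the idea behind such a model is given below; it is not the project's prototype, and all numbers (network size, adoption rates, influence weight) are illustrative assumptions. It shows how knowledge sharing over social connections can magnify the effect of individual residents' decisions, here exemplified by a weatherization-style improvement.

```python
# Minimal agent-based sketch: adoption of an energy-saving improvement spreads faster
# when residents share knowledge through their social connections. All parameters are
# made-up illustrative values.
import random

def simulate(n_agents=100, n_steps=20, n_friends=4, influence=0.15, base_rate=0.02, seed=1):
    random.seed(seed)
    adopted = [False] * n_agents                                   # nobody has adopted yet
    friends = [random.sample(range(n_agents), n_friends)          # random social network
               for _ in range(n_agents)]
    for _ in range(n_steps):
        for i in range(n_agents):
            if adopted[i]:
                continue
            # Probability of adopting rises with the share of friends who already adopted.
            peer_share = sum(adopted[j] for j in friends[i]) / n_friends
            if random.random() < base_rate + influence * peer_share:
                adopted[i] = True
    return sum(adopted) / n_agents

# Connected, knowledge-sharing community vs. no sharing (influence = 0).
print("with sharing   :", simulate(influence=0.15))
print("without sharing:", simulate(influence=0.0))
```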
Participatory design: online serious game e-footprints
The game e-footprints was designed in an interactive process over several months by our research team. It concentrates on energy use and consumption, with a focus on the built environment, and has three main goals. First, it aims to engage citizens/inhabitants in online discussions and reflections about their homes, and in particular about the consumption of energy in their homes. Second, it enables data collection about human behaviour in relation to the consumption and saving of energy. Third, it is designed as a teaching and learning tool that aims to provide new insights into the possibilities for energy saving and other activities related to the maintenance of the home and its improved energy efficiency. The online energy game e-footprints takes the player into a room of a previously selected house (Figure 4); the room is interactive and enables the players to set the temperature inside while displaying the temperature outside, open/close the window, and turn the air conditioner on/off. In order to increase the playfulness and the "conflict" in the game, random events and wisdom puzzles are introduced on the next level of the game.
Random events appear suddenly, as a surprise, and may include extreme heat or extreme cold events, or the energy bill may go up. Wisdom puzzles are included as learning tools; they enable additional simulations, provide links to the learning material, and mostly concentrate on activities related to home energy and on the effects of such activities and of human decision-making on energy consumption and saving. We are currently testing the developed e-footprints game prototype in order to improve its user interface, graphical visualization, and aesthetics. A simplified sketch of the underlying game-state logic is given below.
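The following sketch illustrates the kind of game-state logic described above. The real e-footprints prototype is a graphical online game; the single-room state, the linear heat-exchange rule, and the energy figures here are assumptions made only for illustration.

```python
# Toy sketch of the room state and random events described in the text.
import random

class Room:
    def __init__(self, temp_inside=24.0, temp_outside=30.0):
        self.temp_inside = temp_inside
        self.temp_outside = temp_outside
        self.window_open = False
        self.ac_on = False
        self.energy_used_kwh = 0.0

    def step(self, hours=1.0):
        # Heat exchange with the outside is faster when the window is open.
        leak = 0.5 if self.window_open else 0.1
        self.temp_inside += leak * (self.temp_outside - self.temp_inside) * hours
        if self.ac_on:
            self.temp_inside -= 2.0 * hours          # assumed cooling effect
            self.energy_used_kwh += 1.5 * hours      # assumed AC power draw

    def random_event(self):
        # Surprise events add "conflict": heat waves, cold snaps, or a higher bill.
        event = random.choice(["heat_wave", "cold_snap", "bill_increase", "none"])
        if event == "heat_wave":
            self.temp_outside += 8.0
        elif event == "cold_snap":
            self.temp_outside -= 8.0
        return event

room = Room()
room.ac_on = True
for turn in range(5):
    print(turn, room.random_event(), round(room.temp_inside, 1), room.energy_used_kwh)
    room.step()
```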
Community garden: the action project
Over the summer and early fall, members of our research team will conduct the action project and a community festival with the Community Growers group.This work is based on our previous involvement with the community garden.The youth will play key roles in transforming the garden into a more welcoming space that emphasizes unity and partnership.
Figure 5. Community garden project and its leader Linda Shenk
Figure 5 shows the end result as a colourful artwork. In the continuation of this project, we intend to connect the activities in the garden with storytelling, using GIS, ABM, and online games to place the garden into a broader perspective. The broader perspective encompasses looking at the garden as part of the food system of the neighbourhood, but also as a meeting place, perhaps even a power place or a favourite place for some of the students. It may also be seen as a place that can connect the youth of the neighbourhood with their parents, mentors, teachers, and city leaders.
FUTURE RESEARCH
Our presented framework for engaging the youth of an under-represented neighbourhood is currently in the testing phase. We have started the 8-week program with the Boys & Girls Club, in which we will test the framework for engaging youth in reflections about their neighbourhood. Our future research involves additional steps in the integration of the following models.
Online geogame e-footprints and urban modelling interface (umi). The e-footprints game-based simulation aims to integrate the simulations and results gained from the urban modelling interface (umi), which is a modelling platform for designing and improving new and existing neighbourhoods via measures of energy use, daylighting, outdoor comfort, and sustainable transportation (Reinhart and Cerezo Davila 2016, Cerezo Davila et al. 2017).
umi and GIS
The integration of umi and GIS will enable the umi model to extract geometric and building property information from the city's GIS model and to store various performance indicators, such as operational and embodied energy, indoor and outdoor access to daylight, and neighbourhood walkability.
ABM and GIS
We will be working on an integration of GIS and ABM, as the two have complementary strengths. ABM is able to model dynamic, changing objects represented as agents; GIS, by contrast, represents geography and space well and can do so very precisely. We will explore how these two models can be integrated in modelling residents' energy consumption, saving, and decision-making.
ABM and e-footprints online game
Patterns of behaviour observed in the game will be explored for the possibility of creating personas, that is, agents with specific patterns of behaviour. These personas will serve as input for defining the behaviour and decision-making of the agents modelled in the agent-based model; a hypothetical sketch of this step follows.
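The sketch below illustrates one plausible way to derive such personas by clustering per-player behaviour features. The feature set, the number of personas, and the use of k-means are assumptions made for illustration, not decisions already made in the project.

```python
# Hypothetical sketch: summarise each player's logged game behaviour by a few features
# and group players into personas with k-means; the cluster centroid then becomes the
# behavioural profile of an ABM agent type. All values are made up.
import numpy as np
from sklearn.cluster import KMeans

# One row per player: [share of turns with window open, share of turns with AC on,
#                      average chosen indoor temperature, puzzles solved per session]
behaviour = np.array([
    [0.1, 0.9, 21.0, 0.5],
    [0.7, 0.2, 25.0, 2.0],
    [0.2, 0.8, 22.0, 1.0],
    [0.8, 0.1, 26.0, 3.0],
    [0.6, 0.3, 24.0, 2.5],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(behaviour)
for persona_id in range(2):
    members = behaviour[kmeans.labels_ == persona_id]
    print(f"persona {persona_id}: profile = {members.mean(axis=0).round(2)}")
```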
CONCLUSIONS
In this paper we summarize a conceptual framework for engaging youth in urban planning, especially in the process of co-creation and co-design of their neighbourhood and of spatial reflections about the places in it. The presented framework concentrates on the utilization of IT tools such as geographic information systems (GIS), volunteered geographic information (VGI), and agent-based modelling (ABM) in the process of engaging youth in reflections about their living environment. This framework is in the testing phase in selected neighbourhoods of Des Moines, the capital of Iowa.
By bringing together the narrative capabilities of data science, through GIS and ABM, with community engagement with youth, we aim to empower youth to connect personal experience with systems thinking, creating a youth-generated socio-technical culture of action and decision-making that can move from personal to neighbourhood to city scales. The initial steps for devising the action project will focus on the garden within community systems and, as those plans are formed, will expand to include other aspects of sustainability. Our work demonstrates how GIS and ABM, together, can help generate community stories in which residents and communities can "scale up" their lived experience to have an impact on larger systems. GIS will support spatial representations, visualizations, and map-based discussions. VGI will enable exploring and mapping favourite places. The ABM we will use will help reveal and test how the youth's work in community building and knowledge sharing supports and extends the impact of the action projects they create and share. The main goal of our research is to observe how youth use these IT tools, what the possibilities are for their implementation in the phases of spatial reflection and co-creation, and whether these tools can empower the youth with the skills needed to become future community leaders, citizens, scholars, and scientists poised to better our communities.
Figure 2. Des Moines, the capital of the state of Iowa
Our project partners include the City of Des Moines, the Viva East Bank! coalition, and, most crucially for the Community Growers program, the Baker chapter of the Boys & Girls Club of Central Iowa, which has its facility in Hiatt Middle School.
The youth will use GIS to discuss what their favourite places mean to them and their community, indicating what their community values, what brings people together, and what the spatial placement of their community garden could suggest about how this space could add to these networks of favourite places.
Figure 4. The room of the online serious game e-footprints | 6,977.6 | 2017-09-25T00:00:00.000 | [
"Computer Science",
"Education",
"Geography",
"Sociology"
] |
Metrics and quasimetrics induced by point pair function
We study the point pair function in subdomains $G$ of $\mathbb{R}^n$. We prove that, for every domain $G\subsetneq\mathbb{R}^n$, this function is a quasi-metric with a constant less than or equal to $\sqrt{5}/2$. Moreover, we show that it is a metric in the domain $G=\mathbb{R}^n\setminus\{0\}$ with $n\geq1$. We also consider generalized versions of the point pair function, depending on an arbitrary constant $\alpha>0$, and show that in some domains these generalizations are metrics if and only if $\alpha\leq12$.
Introduction
During the past few decades, several authors have contributed to the study of various metrics important for geometric function theory. In this field of research, intrinsic metrics are the most useful because they measure distances in a way that takes into account not only how close the points are to each other but also how the points are located with respect to the boundary of the domain. These metrics are often used to estimate the hyperbolic metric and, while they share some but not all of its properties, intrinsic metrics are much simpler than the hyperbolic metric and therefore more applicable.
Let G be a proper subdomain of the real n-dimensional Euclidean space R^n. Denote by |x − z| the Euclidean distance in R^n and by d_G(x) the distance from a point x ∈ G to the boundary ∂G, i.e. d_G(x) := inf{|x − z| : z ∈ ∂G}. One of the most interesting intrinsic measures of distance in G is the point pair function p_G : G × G → [0, 1) defined as
p_G(x, y) = |x − y| / √(|x − y|² + 4 d_G(x) d_G(y)).    (1.1)
This function was first introduced in [3, p. 685], named in [6] and further studied in [4,10,11,12,13]. In [3, Rmk 3.1, p. 689] it was noted that the function p_G is not a metric when the domain G coincides with the unit disk B².
In order to be a metric, a function needs to fulfill three properties, the third of which is called the triangle inequality. The point pair function has all the other properties of a metric but it only fulfills a relaxed version (2.1) of the original triangle inequality, as explained in Section 2. We call such functions quasi-metrics and study what is the best constant c such that the generalized inequality (2.1) holds. Namely, it was proven in [10, Lemma 3.1, p. 2877] that the point pair function is a quasi-metric on every domain G ⊊ R^n with a constant less than or equal to √2, but this result is not sharp for any domain G.
For this reason, we continue here the investigations initiated in the paper [10]. We give an answer to the question posed in [10, Conj. 3.2, p. 2877] by proving in Theorem 4.14 that, for all domains G ⊊ R^n, the point pair function is a quasi-metric with a constant less than or equal to √5/2. For the domain G = R^n \ {0} with n ≥ 1, we prove Theorem 4.6, which states that the point pair function p_G defines a metric. In Lemma 4.17, we explain for which domains the constant √5/2 is sharp. We also investigate what happens when the constant 4 in (1.1) is replaced by another constant α > 0 to define a generalized version p^α_G of the point pair function p_G as in (5.1). In particular, we prove that, for α ∈ (0, 12], this function p^α_G is a metric if G is the positive real axis R_+ (Theorem 5.2), the punctured space R^n \ {0} with n ≥ 2 (Theorem 5.11), or the upper half-space H^n with n ≥ 2 (Theorem 5.13). Furthermore, we also show in Theorem 5.15 that the function p^α_G is not a metric for any value of α > 0 in the unit ball B^n.
The structure of this article is as follows. In Section 2, we give the necessary definitions and notation. In Section 3, we study the point pair function in the 1-dimensional case, and then consider the general n-dimensional case in Section 4. In Section 5, we inspect the generalized version p^α_G of the point pair function in several domains. Finally, in Section 6, we state some open problems.
Preliminaries
In this section, we introduce some notation and recall a few necessary definitions related to metrics.
We will denote by [x, y] the Euclidean line segment between two distinct points x, y ∈ R^n. For every x ∈ R^n and r > 0, B^n(x, r) is the x-centered open ball of radius r, and S^{n−1}(x, r) is its boundary sphere. If x = 0 and r = 1 here, we simply write B^n instead of B^n(0, 1). Let H^n denote the upper half-space {x = (x_1, ..., x_n) ∈ R^n | x_n > 0}. Furthermore, the hyperbolic sine, cosine and tangent are denoted by sh, ch and th, respectively, and their inverse functions by arsh, arch, and arth.
For a non-empty set G, a metric on G is a function d : G × G → [0, ∞) such that for all x, y, z ∈ G the following three properties hold: (1) positivity: d(x, y) ≥ 0, and d(x, y) = 0 if and only if x = y; (2) symmetry: d(x, y) = d(y, x); (3) the triangle inequality: d(x, y) ≤ d(x, z) + d(z, y). If a function d : G × G → [0, ∞) fulfills (1), (2) and the inequality d(x, y) ≤ c(d(x, z) + d(z, y)) (2.1) for all x, y, z ∈ G with some constant c ≥ 1 independent of the points x, y, z, then the function d is a quasi-metric [9, p. 4307], [15, p. 603], [16, Def. 2.1, p. 453]. Note that the term "quasi-metric" has slightly different meanings in some works, see for instance [1,2,14].
The point pair function defined in (1.1) is a metric for some domains G ⊊ R^n and a quasi-metric for other domains [10, Lemma 3.1, p. 2877]. Note that the triangular ratio metric introduced by P. Hästö [7] is a metric for all domains G ⊊ R^n [7, Lemma 6.1, p. 53], [3, p. 683] and, because of the equality p_{H^n}(x, y) = s_{H^n}(x, y) [4, p. 460], the point pair function is a metric on H^n. However, the point pair function is not a metric for either the unit ball [3, Rmk 3.1, p. 689] or a two-dimensional sector with central angle θ ∈ (0, π) [10, p. 2877].
The point pair function in the one-dimensional case
In this section, we prove that, for every 1-dimensional domain G, the point pair function p G is either a metric or a quasi-metric with the sharp constant √ 5/2, depending on the number of the boundary points of G (Corollary 3.8).
To do this, we need to establish the following lemma which is also required for the proof of another important result, Theorem 3.6.
Proof. I) First we will investigate the function At every critical point of F (x, y, z), we have ∇F (x, y, z) = 0 and, therefore, From the two latter equalities above, we can deduce that By combining these expressions of A and B with the equality (3.3), we have and, consequently, x + y = x + z. This implies that y = z and we see that F (x, z, y) has no extrema in the domain D.
II) Now, let us investigate the case where (x, z, y) is a boundary point of the aforementioned domain D. If x = −1, x = 0, z = y or y = 1, then evidently F (x, y, z) ≥ 0. Thus, we only have to consider the case z = 0. Without loss of generality, we can assume that x > −1 and y < 1. Since the inequality (3.2) in the case z = 0 is equivalent to the inequality , −1 < x < 0 < y < 1. By denoting s = −x/(2 + x) and t = y/(2 − y) for 0 ≤ s, t ≤ 1, we have
and (3.2) is equivalent to
This can also be written as Now, let u = s + t and v = st. Then 0 < 4v ≤ u 2 < 4, and we can write (3.5) in the form After simple transformations, we obtain 5v + 4u + 2 + 1 5v Since v/u 2 ≤ 1/4, we only need to prove that 5v + 4u .
By denoting v^{1/2} = ζ, we can write the last inequality in the form of an inequality for a function h(ζ). It is easy to see that h′′(ζ) ≥ 0 for ζ ≥ 0, therefore h is convex for positive ζ. Consequently, the desired inequality follows from the convexity of h.
Theorem 3.6. The point pair function p_G is a quasi-metric with the sharp constant √5/2 on the domain G = (−1, 1).
Proof. We need to show that the function p_G satisfies the inequality (2.1) for all points x, y, z ∈ (−1, 1) with the constant c = √5/2. If x, y and z are all either non-negative or non-positive, then p_G(x, y) ≤ p_G(x, z) + p_G(z, y) trivially. In the opposite case, either one of the points is negative and the other two points are non-negative, or we have one positive and two non-positive points. Because of symmetry, we can consider just the first possibility. If z is negative, then the inequality p_G(x, y) ≤ p_G(x, z) + p_G(z, y) holds for all x, y ∈ [0, 1). Consequently, we can assume that x < 0 ≤ z ≤ y. In this case, our inequality can be simplified to an inequality that follows from Lemma 3.1. Since the equality holds in the case x = −1/3, z = 0 and y = 1/3, we see that the constant √5/2 is the best possible.
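The extremal configuration above is easy to check numerically. The short sketch below assumes the definition (1.1) with G = (−1, 1), so that d_G(x) = 1 − |x|, and verifies that the ratio p_G(x, y)/(p_G(x, z) + p_G(z, y)) equals √5/2 at x = −1/3, z = 0, y = 1/3.

```python
# Numerical check of the sharpness example for the point pair function on G = (-1, 1).
import math

def p(x, y):
    d = lambda t: 1.0 - abs(t)                       # distance to the boundary of (-1, 1)
    return abs(x - y) / math.sqrt((x - y) ** 2 + 4 * d(x) * d(y))

x, z, y = -1 / 3, 0.0, 1 / 3
lhs = p(x, y)                    # 1/sqrt(5) ~ 0.4472
rhs = p(x, z) + p(z, y)          # 1/5 + 1/5 = 0.4
print(lhs, rhs, lhs / rhs, math.sqrt(5) / 2)         # the ratio equals sqrt(5)/2
```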
We note that, for any 1-dimensional domain G ⊊ R, the boundary ∂G consists of either one or two points. Using this fact, we formulate:
Corollary 3.8. For every domain G ⊊ R, the point pair function p_G is a metric if card(∂G) = 1 and a quasi-metric with the sharp constant √5/2 if card(∂G) = 2.
Proof. First, we note that the point pair function is invariant under translation and stretching by a nonzero factor. If card(∂G) = 1, then for some x_0 we have ∂G = {x_0} and, with the help of the function f : x → a(x − x_0), a ≠ 0, we can map the domain G onto the positive real axis R_+. The function f preserves the point pair function, i.e. p_{R_+}(f(x), f(y)) = p_G(x, y) for all x, y ∈ G. Therefore, from the very beginning we can assume that G = R_+. Since p_{R_+}(x, y) = |x − y|/(x + y) coincides with the triangular ratio metric s_{R_+}(x, y) for all x, y ∈ R_+, we conclude that in this case the point pair function is a metric.
If card(∂G) = 2, then, with the help of a suitable affine map f, we can map the domain G onto the interval (−1, 1). We see that, as above, f preserves the point pair function, therefore we can assume that G = (−1, 1), and the result follows from Theorem 3.6.
The point pair function in the n-dimensional case
In this section, we investigate the quasi-metric property of the point pair function by considering its behaviour in n-dimensional domains, n ≥ 1. Our main results are Theorems 4.6 and 4.14. First, we will establish Lemma 4.1, which has quite complicated and technical inequalities but is necessary for the proof of Theorem 4.6. We note that some results close in spirit to those described in Lemmas 4.1 and 5.8 below are established in [8]. Moreover, Proof. It is sufficient to prove that By squaring both sides of (4.4), we have After simple transformations, we obtain To establish the inequality above, it is sufficient to prove that Squaring this inequality, we have By the inequality of arithmetic and geometric means, we have 2sh x ch y sh y ch x sh uch v sh v ch u ≤ sh 2 x ch 2 y sh 2 v ch 2 u + sh 2 u ch 2 v sh 2 y ch 2 x, therefore, sh 2 x ch 2 y sh 2 y ch 2 x + sh 2 u ch 2 v sh 2 v ch 2 u + 2sh x ch y sh y ch x sh u ch v sh v ch u ≤ [sh 2 xch 2 y + sh 2 uch 2 v][sh 2 ych 2 x + sh 2 vch 2 u].
This inequality implies (4.5), therefore, (4.2) is proved. The inequality (4.3) can be obtained by applying the function th to both sides of (4.2).
Theorem 4.6. For n ≥ 1, the point pair function is a metric on G = R n \{0}.
Proof. Because the point pair function trivially satisfies the properties (1) and (2) of a metric, we only need to prove the triangle inequality. Therefore, we will show that p G (x, y) ≤ p G (x, z) + p G (z, y) for x, y, z ∈ G = R n \{0}. Note that, for all points x, y in this domain, 1) First we consider the case n = 2. Then we can identify points of R 2 with complex numbers.
Because of homogeneity of p G (x, y), we can assume that z = 1, so that the triangle inequality becomes We can assume that R ≥ r.
First we will show that if either 0 < r ≤ R ≤ 1 or 1 ≤ r ≤ R, then (4.7) holds. Let us fix some u and v such that x = u 2 , y = v 2 . Then (4.7) is equivalent to the inequality If 0 < r ≤ R ≤ 1, then |u|, |v| ≤ 1 and Therefore, if we put then, by (4.9), we have sh p ≤ sh q + sh s. But this immediately implies p ≤ q + s and th p ≤ th q + th s, which is equivalent to (4.7). Since the inequality (4.8) does not change after replacing u and v with u −1 and v −1 , we see that for the case 1 ≤ r ≤ R the inequality (4.9) is also valid.
Thus, we only need to consider the case r ≤ 1 ≤ R. We have consequently, the inequality (4.7) can be written in the form This can be simplified to If we denote then the inequality (4.10) takes the form (4.12) Let a = | sin(φ − ψ)|, b = | sin φ|, c = | sin ψ|, u = arsh b and v = arsh c. Then a ≤ sh (u + v), since a = | sin φ cos ψ − sin ψ cos φ| ≤ | sin φ cos ψ| + | sin ψ cos φ| If A, B and C are as in (4.11), then we have A = sh (arsh B + arsh C). By applying the function th to the inequality (4.2) of Lemma 4.1 and combining this with the inequality th (u + v) ≤ th u + th v for u, v ≥ 0, we obtain (4.12).
2) Now we consider the case n ≠ 2. If n = 1, then the statement of the theorem immediately follows from the case 1). Therefore, we will assume that n ≥ 3. Consider the subspace E of R^n containing the points 0, x, y and z. Since the Euclidean distance and the function p_G are invariant under orthogonal transformations of R^n and for n ≥ 2 the triangle inequality is valid, we can assume that E coincides with R^3.
Without loss of generality we can put |z| = 1. Now consider the vectors Ox, Oy, and Oz from the origin to the points x, y, and z, respectively. Let 2α be the angle between Ox and Oz, 2β be the angle between Oz and Oy, and 2γ be the angle between Ox and Oy; α, β, γ ∈ [0, π/2). Then, by the law of cosines, where R = |x| and r = |y|. Applying the same arguments as above in the case n = 2, we see that we only need to prove the inequality (4.13) where A, B and C is defined by (4.11). Denote a = sin γ, b = sin α, c = sin β. Consider the triangular angle, formed by the vectors Ox, Oy and Oz. It has plane angles equal 2α, 2β and 2γ. Since each plane angle of a triangular angle is less than the sum of its other two plane angles, we obtain 2γ ≤ 2α + 2β, therefore, γ ≤ α + β.
Theorem 4.14. On every domain G ⊊ R^n, n ≥ 1, the point pair function p_G is a quasi-metric with a constant less than or equal to √5/2.
Proof. To prove that the point pair function p_G is a quasi-metric, we only need to find a constant c ≥ 1 such that p_G(x, y) ≤ c(p_G(x, z) + p_G(z, y)) for all points x, y, z ∈ G. Let c(x, y, z; G) := p_G(x, y)/(p_G(x, z) + p_G(z, y)), where x, y, and z are distinct points from G. Define c* := sup c(x, y, z; G) (4.15), where the supremum is taken over all domains G ⊊ R^n and triples of distinct points. We will call such domains and triples admissible. We need to prove that c* = √5/2. Let us fix a domain G ⊊ R^n and two distinct points x, y ∈ G. Since G ≠ R^n, the boundary ∂G ≠ ∅, therefore there exist points u, v ∈ ∂G such that d_G(x) = |x − u| and d_G(y) = |y − v|. In the general case, the points u and v might not be unique because there can be several boundary points on the spheres S^{n−1}(x, d_G(x)) and S^{n−1}(y, d_G(y)).
We note that the value p_G decreases as G grows, i.e. if G ⊂ G_1, then p_{G_1}(x, y) ≤ p_G(x, y) for all x, y ∈ G.
Consider the two following cases. 1) If u = v, then we can set G 1 = R n \{u}. It is clear that G ⊂ G 1 and p G (x, y) = p G 1 (x, y). Taking into account the invariance of p G under the shifts of R n , we can assume that u = 0. Then, by Theorem 4.6, we have p G 1 (x, y) ≤ p G 1 (x, z)+p G 1 (z, y) for all z ∈ G 1 . From the monotonicity of p G with respect to G, we obtain Therefore, c(x, y, z; G) ≤ 1.
2) Let now u = v. We put y). Consequently, c(x, y, z; G) ≤ c(x, y, z; G 1 ) and the supremum in (4.15) is attained on domains of the type G u,v 1 . Denote a = |x − z|, b = |z − y|, c = |x − y|, ρ = |x − u|, r = |y − v|. By the triangle inequality, we have With the help of the triangle inequality, we also have and this implies d ∆ ( z) ≥ d G 1 (z). Using the obtained inequalities and the fact that the function t → t/ t 2 + γ 2 is increasing on R + when γ is a real nonzero constant, we have and, similarly, p ∆ ( z, y) ≤ p G 1 (z, y). From this, we deduce that c(x, y, z; G 1 ) ≤ c( x, y, z; ∆).
Since the point pair function is invariant under shifts and stretchings, we can assume that ∆ = [−1, 1]. But, by Theorem 3.6, the point pair function fulfills the inequality for all points x, y, z ∈ (−1, 1) with the constant √ 5/2. Therefore, we have c * ≤ √ 5/2. To prove that c * = √ 5/2, consider ∆ = [−1, 1] ⊂ R 1 as a part of R n . Let u = −1 and v = 1 be the endpoints of ∆. Consider the domain G 1 = G u, v 1 . For all x, y ∈ ∆ we have p G 1 ( x, y) = p ∆ ( x, y). Since in (4.16) the constant √ 5/2 is sharp if we take x, y and z from (−1, 1), we obtain that it is sharp for G 1 and, therefore, for the class of proper subdomains on R n . The theorem is proved. Now, we will investigate the sharpness of the constant √ 5/2, if a proper subdomain G of R n is fixed.
Lemma 4.17. If a domain G ⊊ R^n, n ≥ 1, contains some ball B^n(z_0, r) and there are two points u, v ∈ ∂G such that the segment [u, v] is a diameter of B^n(z_0, r), then c = √5/2 is the best possible constant for which the inequality p_G(x, y) ≤ c(p_G(x, z) + p_G(z, y)), x, y, z ∈ G, is valid.
Proof. By Theorem 4.14, the point pair function is a quasi-metric with the constant √ 5/2. The sharpness of this constant follows from the fact that the equality holds for the points x = z 0 + (u − z 0 )/3, z = z 0 and y = z 0 + (v − z 0 )/3 (see Figure 1).
Figure 1.
A domain G and points x, y, z ∈ G for which the equality (4.18) holds.
It follows from Lemma 4.17 that the point pair function p G is a quasi-metric with the best possible constant √ 5/2 if the domain G is, for instance, a ball, a hypercube, a hyperrectangle, a multipunctured real space of any dimension n ≥ 1, or a two-dimensional, regular and convex polygon with an even number of vertices.
The generalized point pair function
In this section, we will consider a generalized version of the point pair function. Namely, note that, by replacing the constant 4 in (1.1) with some α > 0, we obtain the function
p^α_G(x, y) = |x − y| / √(|x − y|² + α d_G(x) d_G(y)).    (5.1)
Let us first consider the case where the domain G is the positive real axis.
Theorem 5.2. For a constant α > 0, the function p^α_{R_+}(x, y) = |x − y| / √(|x − y|² + α x y), x, y ∈ R_+, is a metric if and only if α ≤ 12.
Proof. To prove that for every fixed 0 < α ≤ 12, the function p α R + (x, y) is a metric on the positive real axis, it is sufficient to establish the triangle inequality p α y). Fix first two points x, y > 0. By symmetry, we can assume that x ≤ y. Next, we fix z such that the sum p α R + (x, z) + p α R + (z, y) is at minimum. Without loss of generality, we can assume that 0 < x ≤ z ≤ y because, for all z ∈ (0, x), , y), and, if y < z, then the triangle inequality p α R + (x, y) ≤ p α R + (x, z) + p α R + (z, y) holds trivially. Because the function p α R + is invariant under any stretching by any factor r > 0, we can assume that z = 1.
then it is easy to show that consequently, If u → u 0 or v → v 0 and either u 0 or v 0 equals 1, then G(u, v) tends to a non-negative value. Similarly, this condition holds if u 0 or v 0 equals +∞. Therefore, to prove that the inequality (5.4) holds we only need to show that G(u, u) ≥ 0, u ≥ 1 or, equivalently, This inequality can be written as By denoting t = u 2 + u −2 , we will have t ≥ 2, and the inequality (5.6) takes the form or, equivalently, The inequality (5.7) is valid for α ≤ 12 and t ≥ 2 because for such α and t we have Consequently, the inequality (5.3) holds and, for 0 < α ≤ 12, the function p α R + is a metric. It also follows that the constant 12 here is sharp because, for α > 12, we have 3t 2 − αt + 2α − 12 < 0, 2 < t < (α − 6)/3, and, therefore, the inequality (5.7) is not valid at every point of [2, +∞).
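As an informal illustration of the threshold α = 12 (not part of the proof), the sketch below evaluates the generalized function on R_+, where d_{R_+}(t) = t, over the one-parameter family of triples x = 1/s, z = 1, y = s (scaling invariance lets us fix z = 1). For α = 12 the computed ratio stays at most 1 on this family, while already for α = 13 it slightly exceeds 1, so the triangle inequality fails and p^α_{R_+} is not a metric.

```python
# Numerical sanity check of the alpha = 12 threshold for p^alpha on the positive real axis,
# assuming p^alpha(x, y) = |x - y| / sqrt((x - y)^2 + alpha * x * y).
import math

def p(x, y, alpha):
    return abs(x - y) / math.sqrt((x - y) ** 2 + alpha * x * y)

def max_ratio(alpha):
    # Scan the symmetric family x = 1/s, z = 1, y = s for s > 1.
    worst, s = 0.0, 1.01
    while s < 3.0:
        lhs = p(1.0 / s, s, alpha)
        rhs = p(1.0 / s, 1.0, alpha) + p(1.0, s, alpha)
        worst = max(worst, lhs / rhs)
        s += 0.001
    return worst

print(max_ratio(12.0))   # at most 1: triangle inequality holds on this family
print(max_ratio(13.0))   # slightly above 1: triangle inequality fails
```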
In Theorem 5.11, we prove a result about the function p α G similar to Theorem 5.2 but for the case where G = R n \{0}. See Figure 2 for the disks of the function p α G in R 2 \{0}. However, in order to prove Theorem 5.11, we need to first consider the following lemma.
Lemma 5.8. If A, B, C > 0 and a, b, c ≥ 0 are chosen so that the inequalities Proof. Since the functions t → t/ √ 1 + t 2 and t → t/(1 + t) are increasing on [0, ∞), we can assume that the equality takes place in (5.9).
Consider the function We need to prove that F (x, y) ≥ 0, x, y ≥ 0. We have F (0, 0) = 0. Assume that for some x, y ≥ 0, not equal to zero at the same time, F (x, y) < 0. Consider now the function g(t) = F (tx, ty), t ∈ [0, 1]. It is continuous and g(0) = 0, g(1) < 0. We will show that for small positive t, the inequality g(t) > 0 holds. Actually, , as t → 0. Now, we will show that (5.10) Denote Then a 1 = b 1 + c 1 < 1 and In this notation, the inequality (5.10) can be written in the form It is easy to prove that and it is therefore sufficient to show that which follows from the fact that a 1 > b 1 and a 1 > c 1 .
Theorem 5.11. For a constant α > 0, the function p^α_G(x, y) = |x − y| / √(|x − y|² + α|x||y|) on G = R^n \ {0}, n ≥ 2, is a metric if and only if α ≤ 12.
Proof. We will only outline the proof because it is similar to that of Theorem 4.6 but, instead of (4.11), we use the following values: By Theorem 5.2, such values satisfy (5.9). We also note that the parameters a, b and c, considered in both the first and second parts of the proof of Theorem 4.6, satisfy the required inequalities of Lemma 5.8.
Open questions
On the basis of numerical tests, we propose the following conjectures.
Conjecture 6.1. For a constant α > 0, the function , x, y ∈ R n \B n , n ≥ 2, (6.2) is a metric if and only if α ≤ 12. Conjecture 6.5. The point pair function p G is a metric on the domain G = R 3 \Z, where Z is the z-axis. Remark 6.6. From the proof of Theorem 5.15 it follows that, for α > 0, the generalized version p α B n of the point pair function can only be a quasi-metric in the unit ball with a constant c(α) that has the following lower bound: By differentiation, we have It can be shown that the square-root expression on the right hand side of the inequality (6.7) obtains its maximum with respect to k ∈ (0, 1) at the point k = α + 3 − √ 4α + 9 α + 2 ∈ (0, 1).
Declarations. Availability of data and material: Not applicable, no new data were generated. Competing interests: On behalf of all the authors, the corresponding author states that there are no competing interests. Funding: The work of the second author was performed under the development program of the Volga Region Mathematical Center (agreement no. 075-02-2022-882). The research of the third author was funded by the University of Turku Graduate School UTUGS. Authors' contributions: DD contributed new ideas and checked the results. SN organized this research and contributed several theorems. OR suggested research ideas and contributed several results. MV did several experiments and suggested problems. Acknowledgments: The authors are grateful to the referees for their work.
"Mathematics"
] |
GridTracer: Automatic Mapping of Power Grids using Deep Learning and Overhead Imagery
Energy system information valuable for electricity access planning, such as the locations and connectivity of electricity transmission and distribution towers, termed the power grid, is often incomplete, outdated, or altogether unavailable. Furthermore, conventional means of collecting this information are costly and limited. We propose to automatically map the grid in overhead remotely sensed imagery using deep learning. Towards this goal, we develop and publicly release a large dataset ($263km^2$) of overhead imagery with ground truth for the power grid; to our knowledge, this is the first dataset of its kind in the public domain. Additionally, we propose scoring metrics and baseline algorithms for two grid mapping tasks: (1) tower recognition and (2) power line interconnection (i.e., estimating a graph representation of the grid). We hope the availability of the training data, scoring metrics, and baselines will facilitate rapid progress on this important problem to help decision-makers address the energy needs of societies around the world.
I. INTRODUCTION
Providing sustainable, reliable, and affordable energy access is vital to the prosperity and sustainability of modern societies, and it is the United Nations' Sustainable Development Goal #7 (SDG7) [1]. Increased electricity access is correlated with positive educational, health, gender equality, and economic outcomes [2]. Ensuring energy access over the coming decades, and achieving SDG7, will require careful planning from non-profits, governments, and utilities to determine electrification pathways that meet rapidly growing energy demand.
A crucial resource for this decision-making will be high-quality information about existing power transmission and distribution towers, as well as the power lines that connect them (see Fig. 1); we collectively refer to this infrastructure as the power grid (PG). Information about the precise locations of PG towers and line connectivity is crucial for decision-makers to determine cost-efficient solutions for extending reliable and sustainable energy access [3]. For example, this information can be used in conjunction with modeling tools like the Open Source Spatial Electrification Tool (OnSSET) [4] to determine the optimal pathway to electrification: grid extension, mini/microgrids, or off-grid systems like solar.
Unfortunately, the PG information available to decision-makers is frequently limited. Existing PG data are often incomplete, outdated, of low spatial resolution, or simply unavailable [5], [6]. Furthermore, conventional methods of collecting PG data, such as field surveys or collating utility company records, are either costly or require non-disclosure agreements. The importance of this problem and the lack of PG data have recently prompted major organizations such as the World Bank [5] and Facebook [6] to investigate solutions. In this work we propose to address this problem by using deep learning models to automatically map (i.e., detect and connect) the PG towers visible in high-resolution color overhead imagery (e.g., from satellite imagery and aerial photography).
A. Mapping the grid using overhead imagery
Recently, deep learning models, namely deep neural networks (DNNs), have been shown to be capable of accurately mapping a variety of objects in color overhead imagery, such as buildings [7], [8], roads [8], [9], and solar arrays [10], [11], [12]. Since PG towers and lines are often visible in overhead imagery, these results suggest that mapping the PG may also be feasible. However, PG mapping presents several unique challenges compared to mapping other objects in overhead imagery.
The most immediate challenge of PG mapping is the structure of the desired output. The PG is generally represented as a geospatial graph, where each tower represents a graph node with an associated spatial location, and each PG line represents a connection between two nodes [5], [6]. This representation is compact, and well-suited for subsequent use by energy researchers and decision-makers. Therefore, we require that any automatic recognition model produce a geospatial PG graph as output.
A second challenge is that PG infrastructure exhibits weak and geographically distributed visual features in overhead imagery, making the problem both unique and challenging. Looking closely at Fig. 2(a), it is apparent that PG towers exhibit very few visual features (if any) aside from their shadows. Shadows are useful for detection; however, their strength and visual appearance vary, and they are not always present (e.g., depending upon the time of day). As illustrated in Fig. 2, PG lines typically appear as thin white or black lines that are only intermittently visible due to their varying contrast with the local background (e.g., white lines become faint, or disappear, as they cross a pale background). From Fig. 2 it is also notable that transmission infrastructure tends to be relatively easy to detect because it is much larger; however, it is also much rarer than distribution infrastructure.
As a result of these two major challenges, PG mapping is a unique and challenging problem for existing visual recognition models. Fig. 3(a) demonstrates the desired output structure of PG mapping, where towers are represented as boxes or nodes and lines are represented as edges. Fig. 3(b) illustrates the large-scale topology of the PG in one region, which might be leveraged for recognition. By contrast, modern DNNs rely primarily upon local visual features for object recognition [13], [14], making them poorly suited for PG mapping. Additionally, most existing DNNs do not produce output in the form of a geospatial graph. Some work has recently been conducted on inferring geospatial graphs from overhead imagery for the problem of road mapping [9], [15]. Unfortunately, however, these approaches make fundamentally different assumptions about the structure of the underlying graph, and the visual features associated with it, limiting their applicability to the PG mapping problem (see Section II for further discussion). Therefore, effective PG mapping will require the development of novel models that can address its unique challenges. This raises a third major challenge of PG mapping: the absence of any publicly available benchmark dataset to train and validate recognition models. Furthermore, it is unclear how to score a geospatial graph so that different models can be compared. Without a publicly available dataset and scoring metrics, it is not possible to effectively study this problem.
B. Contribution of this work
In this work we make two primary contributions and lay the foundations for a practical PG mapping approach with overhead imagery. First, we develop and publicly release a dataset of satellite imagery encompassing an area of 263 km² across seven cities: two cities in the U.S. and five cities in New Zealand. In this work we employ imagery with a 0.3m ground sampling distance. Our primary motive for choosing this resolution is that it is the highest resolution that is also widely employed for research on object recognition in overhead imagery [8], [9], [16], [7]. Imagery at this resolution is also commercially available across the globe, making any techniques developed with it widely applicable. Our dataset includes both imagery and corresponding hand annotations of transmission and distribution towers (as rectangles) and the power lines connecting them (as line segments), making it possible to train and validate deep learning approaches for PG mapping. We also perform additional tests to evaluate the quality of the hand annotations. To enable future benchmark testing, we describe a data handling procedure for training and validating models, and we propose metrics to score the graphical output of the models.
Our second contribution is a novel deep model, termed GridTracer, that makes a first step towards addressing the unique challenges of PG mapping in overhead imagery. GridTracer splits PG mapping into three simpler steps: tower detection, line segmentation, and PG graph inference. In tower detection, we use a deep object detection model to identify the location and size (e.g., with a bounding box) of individual PG towers. In line segmentation, we use a deep image segmentation model to generate a pixel-wise score indicating the likelihood that a PG line is present. In PG graph inference, we integrate the output of steps one and two over large geographic regions, with the goal of estimating which towers are likely to be connected by power lines. The final output of GridTracer is a geospatial graph, in which graph nodes represent PG tower locations and graph edges (node connections) represent PG lines.
As we discuss in Section II, existing DNN-based approaches are not suitable for solving the PG mapping problem, and we therefore propose GridTracer as a baseline for future work. To this end, we use our new benchmark dataset to comprehensively study the performance of GridTracer, including ablation studies to analyze its major design and hyperparameter choices. We also compare GridTracer's performance to human-level PG mapping accuracy on our benchmark to provide insight into the level of PG mapping accuracy that may ultimately be achievable, and thereby how much further GridTracer could be improved with further research. We hope the availability of these resources (e.g., data, scoring metrics, and baseline) and analyses will facilitate rapid progress on this important problem to help decision-makers address the energy needs of societies around the world.
II. RELATED WORK
Mapping the power grid in remotely-sensed data. Until recently, the majority of work related to PG analysis using remote sensing techniques focused around identifying vegetation encroachment for monitoring known transmission lines [17], [18], [19]. More recently, Facebook [6] and Development Seed (in partnership with the World Bank) [5] have both proposed approaches to map PGs in overhead imagery within the past year.
Facebook's approach uses nighttime lights imagery to identify distribution lines (medium voltage). This approach maps grid connectivity using VIIRS 750m-resolution day/night band nighttime lights imagery to identify and connect communities using electricity. Since nighttime light data are not perfectly correlated with energy access, especially in regions with lower electricity access rates, such an approach could be less accurate in the locations where it would be most beneficial for extending electricity access. Additionally, the 750m resolution of the underlying data prevents individual towers and lines from being directly observed. This approach culminated in the release of a medium-voltage transmission dataset [6], which, while global in scope, only made PG estimates at a resolution of one estimate per square kilometer and reported performance at a scale no smaller than 1 km²; both of these are coarser than what is needed for many types of electricity access planning.
A summary of the approach proposed by Development Seed is published online, along with software and their output PG mapping dataset 1 . Their approach relies on identifying high voltage transmission infrastructure in color overhead imagery, making it more similar to our proposed approach. However, because their approach uses a human in the loop, it will be costly to scale it to large geographic areas, or utilize their approach for repeated PG mapping over time.
Our work builds on the foundation laid by each of these approaches, seeking to achieve a high-resolution mapping of the PG (i.e., individual towers and connections), and to do so with a fully automated and scalable pipeline, akin to Facebook's approach. Furthermore, and in contrast to both existing approaches, we map both transmission and distribution PG infrastructure, providing more comprehensive support for electricity access planning. Our experimental results also include a comprehensive analysis of the proposed GridTracer model, providing support for its overall design, its performance under different deployment scenarios, and its sensitivity to hyperparameter settings. (1: https://github.com/developmentseed/ml-hv-grid-pub)
Object detection in overhead imagery. One of the first components of our approach is to identify the PG towers (or poles); for this step, we employ object detection. Object detection algorithms identify and localize the objects of interest within a given image, typically by placing a bounding box around the object. Recent state-of-the-art object detection algorithms employ deep neural networks (DNNs). The studies [20], [21], [22] first use DNNs to propose regions of interest (RoIs), after which a second neural network is used to (i) classify and (ii) refine the bounding boxes identified by the first-stage network. This two-stage process generally yields higher accuracy on benchmark problems [23]. Building upon this approach, [24], [25], [26] have developed detectors specifically for overhead imagery to address its particular challenges, including the varied scales of the objects as well as the arbitrary orientation of their bounding boxes [24]. However, to the best of our knowledge there has been no previous work on object detection for PG towers, possibly limited by the lack of available datasets.
Segmentation of overhead imagery. The second step of GridTracer relies on a deep learning model to automatically segment PG lines in the overhead imagery. Although a variety of segmentation models have been employed on overhead imagery, we focus on two: the U-Net model and StackNetMTL. The U-Net model employs an encoder-decoder structure with skip connections between the encoder and decoder to maintain fine-grained imagery details and support accurate pixel-wise classification [27], [28]. The U-Net [27] and subsequent variants (e.g., Ternaus models [29]) have recently yielded top performance on the Inria (2018) [16], DeepGlobe (2019) [8], and DSTL (2018) [30] benchmarks for object segmentation in overhead imagery. In addition to this benchmark success, PG lines are very small (sometimes just a single pixel wide), and we therefore hypothesize that fine-grained features will be crucial to detect them.
The StackNetMTL [31] is a recent model that achieved state-of-the-art performance specifically for the segmentation of roads in overhead imagery. Like PG infrastructure, roads exhibit weak local visual features (though to a lesser extent than transmission lines), and considering visual features over a larger context can improve recognition [31]. The StackNetMTL, for example, trains a model to predict both the label of a pixel (i.e., road or not) and its likely road orientation. This joint learning process helps the network to better learn connectivity information and can be applied to PG lines in a similar fashion. For these reasons, and its state-of-the-art performance, we explore the StackNetMTL for PG line segmentation.
Graph extraction from overhead imagery. Once the PG towers are identified, we can infer a graphical representation of the grid (i.e., a map), with PG towers as nodes and PG lines as edges; this is the final step of GridTracer. To our knowledge, the most closely related problem to ours is road mapping (e.g., [31], [32], [33]). Historically, road mapping was treated largely as a segmentation problem [33], [34], [35]; however, in recent years road mapping has also been formalized as a geospatial graph inference problem, in which road intersections are graph nodes and any intervening roadways are treated as graph edges. Two recent and well-cited models for road graph extraction have been proposed: RoadTracer [9] and DeepRoadMapper [15]. Unfortunately, neither of these methods is directly applicable to the PG mapping problem. The most immediate challenge is the way in which these two methods create graph nodes: RoadTracer places nodes at regular spatial intervals without regard for local visual cues, and DeepRoadMapper assigns graph nodes at any location where a road segment (identified by a segmentation model) has a discontinuity. This leaves both methods without any clear mechanism to enforce that graph nodes reside on PG towers, and therefore these methods are not directly applicable to PG mapping (e.g., as baselines for comparison). For these reasons, we develop GridTracer as the first approach applicable to the PG mapping problem, and we propose it as a problem-specific baseline upon which future methods can be developed and compared.
III. THE POWER GRID IMAGERY DATASET
The PG imagery dataset consists of 264 km² of overhead imagery collected over three distinct geographic regions: Arizona, USA (AZ); Kansas, USA (KS); and New Zealand (NZ). Some basic statistics of the dataset are presented in Table I, where the PG infrastructure statistics are derived from human annotations. We chose these diverse geographic regions so that we (and future users) could demonstrate the validity of any PG mapping approaches across differing geographic settings.
Although our dataset includes both 0.15m and 0.3m resolution imagery, we resampled all of the imagery to 0.3m for our experiments. This was done, in part, to maintain consistency of testing results. A second reason was to enhance the practical relevance of our results: while utilizing higher-resolution imagery would likely yield greater PG mapping accuracy, 0.15m resolution imagery is only available via aerial photography, whereas 0.3m imagery is available from satellites (e.g., the Worldview 2 and 3 satellites). Satellite-based imagery offers much greater geographic coverage and imaging frequency compared to aerial photography, while also being less expensive. Our aim here is to explore this problem for imagery that could ultimately support applications across the globe, including areas currently transitioning to electricity access. Employing 0.3m resolution imagery better supports these objectives.
A. Ground truth representation
There are two major classes of objects that we annotated in the imagery: towers and lines. For the purpose of PG mapping, we need to precisely localize each PG tower in the imagery, as well as provide information about its shape and size to support the training of object detection models. Therefore, each tower was annotated with a bounding box (i.e., a rectangle), which is parameterized by a vector t = (r, c, h, w), where (r, c) encodes the pixel location of the top-left corner of the box (the row and column of the corresponding pixel) and (h, w) encodes the height and width (again, in pixels) of the rectangle.
Let T denote the set of all t-vectors in the ground truth of the dataset, which specifies the location of each tower. Given T , the PG lines can be represented very succinctly by observing that PG lines always form straight line segments between the centroids of PG tower bounding boxes. Therefore, the precise visual extent of a particular PG line can be accurately inferred simply by knowing which two towers are connected by that line. Therefore, we can succinctly represent the PG lines in the imagery by an adjacency matrix, A, where A ij = 1 indicates that there is a connection (a power line) between the i th and j th towers in T , and A ij = 0 otherwise. Adjacency matrices are commonly used to succinctly represent graphs, and therefore the PG is naturally conceptualized as a graph. However, the nodes in the PG graph are each associated with a geospatial location, distinguishing them from generic graphs in mathematics. Therefore, we refer to the PG as a geospatial graph, which is characterized by a set of node locations as well as an adjacency matrix.
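A minimal sketch of these ground-truth containers follows; the box coordinates and connections are made-up values, not taken from the dataset.

```python
# Sketch of the ground-truth representation: T holds one bounding box per tower,
# and the symmetric adjacency matrix A marks which tower pairs a power line connects.
import numpy as np

# T: one row per tower, columns (r, c, h, w) = top-left row/col, height, width in pixels.
T = np.array([
    [120,  340, 14, 14],   # tower 0
    [118,  910, 15, 13],   # tower 1
    [455, 1480, 16, 15],   # tower 2
])

# A[i, j] = 1 means a power line connects towers i and j.
A = np.zeros((len(T), len(T)), dtype=int)
for i, j in [(0, 1), (1, 2)]:
    A[i, j] = A[j, i] = 1

def centroid(t):
    r, c, h, w = t
    return (r + h / 2.0, c + w / 2.0)

# A line's visual extent is the straight segment between the two connected tower centroids.
lines = [(centroid(T[i]), centroid(T[j]))
         for i in range(len(T)) for j in range(i + 1, len(T)) if A[i, j]]
print(lines)
```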
B. Annotation details
All ground truth labels were acquired via manual human annotation of the color overhead imagery. The imagery was split into non-overlapping sub-images, termed "tiles", approximately 5k × 5k pixels in size. Each tile was manually inspected and annotated using a software tool designed especially for the rapid annotation of overhead imagery. The tool allows users to choose between two primary types of annotation: rectangles for towers (label "T") and line segments for lines ("L"). These annotations are then used to generate the T and A ground truth matrices discussed in Section III-A. A separate T and A matrix was generated for each image tile, so that each tile could be processed and scored independently.
Rectangular annotations were drawn so that they enclosed the entire physical extent of each tower, excluding shadows. Examples of tower annotations are provided in Fig. 4 as blue squares. In a small subset of cases the annotators were uncertain about whether a tower was an electricity tower or some other type of tower, e.g., a streetlight. The annotators were instructed in those cases to assign an "Other Tower" category ("OT"). "OT" annotations are still included in the ground truth matrices as graph nodes; however, the "OT" indicator is included in the ground truth metadata so that users can decide how to use these towers. In this work we use "OT" towers for training since many of these objects look similar to PG towers and the models may benefit from the additional training imagery. However, we exclude "OT" labels from evaluation because we want to measure performance only on true PG infrastructure.
Annotators were instructed to draw line segments between any two towers that were connected by a power line. In such cases, a line segment was drawn from the center of the first tower's rectangle to the center of the second tower's rectangle. Examples of line segment annotations are provided by the green lines in Fig. 4.
It sometimes occurs that PG lines connect towers in two neighboring tiles. This potentially creates substantial additional complexity when annotating and processing the imagery. This can also substantially slow down the processing of imagery with deep learning models because large image tiles may not fit into the memory of graphics processing units. In order to circumvent these potential problems, we created artificial "Edge Nodes" (EN) that were placed at any location where a PG line crossed the boundary of an image tile. An example EN node is shown in Fig. 4. This formalism allows us to maintain a fully self-contained graph representation of each image tile so that each tile can be processed separately. Similar to the "OT" nodes, the EN nodes were included in all ground truth as graph nodes. However, because EN nodes do not actually represent PG towers, we do not use "EN" nodes for training or evaluating tower detection or PG graph inference output.
Each annotator was trained to recognize PG towers (including the OT and EN designations) and lines using two especially challenging tiles of imagery that were identified by our team. To ensure overall quality and consistency in the dataset, each annotator's training annotations were reviewed for accuracy before that annotator was permitted to annotate more tiles. We also note that annotators were asked to annotate any substations ("SS") that were present in the imagery. These substations were not included in our experiments, however for completeness, we include their annotation details in the Appendix.
C. Dataset characteristics and analysis
In this section, we present qualitative and quantitative analyses of the dataset to (i) illustrate its diversity and (ii) provide useful information for algorithm development and for the analysis of our experimental results. Basic statistics regarding the dataset are presented in Table I. From these basic statistics we can see that there are substantial differences between the three regions. For example, New Zealand has the highest density (per unit area) of PG infrastructure, by a large margin, while Kansas has a greater density than Arizona. New Zealand also has the greatest number of line connections per tower. Fig. 5 presents several other useful statistical features of the dataset, stratified by location. We briefly summarize the main observations. The first row indicates that New Zealand has substantially smaller towers than the US locations on average. All three plots in the top row also show that towers in all three regions are usually connected to two other towers, with a small number of towers having three or more connections, especially in Arizona. The second row indicates that there is a wide, but relatively similar, distribution of line lengths across all three locations. This distribution is useful for limiting the potential towers connected to a given tower (e.g., we need not consider towers further than 100 meters away). Furthermore, we observe that New Zealand has substantially more power lines compared to the other two regions. Finally, in row three, we create a crude measure of the complexity of the PG (per unit area): we count the number of unique line angles within each tile, using 18 discrete angle bins, as sketched below. A histogram of the number of unique line angles per tile is created for each location. The results indicate that the PG in New Zealand is substantially more complex, on average, than the others, since the histogram suggests that power lines in New Zealand have many different orientations. This analysis provides important information, for example that the two US regions are relatively similar, except that Kansas has more power line connections and a slightly more complex grid pattern, while the New Zealand region has a substantially more complex PG than the other two regions.
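The angle-complexity measure can be sketched as follows. The binning convention (18 bins of 10 degrees over undirected orientations) mirrors the description above, while the example line endpoints are invented.

```python
# Count how many of 18 discrete orientation bins are occupied by a tile's power lines.
import math

def angle_bins(lines, n_bins=18):
    occupied = set()
    for (r1, c1), (r2, c2) in lines:
        # Undirected line orientation in [0, 180) degrees.
        theta = math.degrees(math.atan2(r2 - r1, c2 - c1)) % 180.0
        occupied.add(int(theta // (180.0 / n_bins)) % n_bins)
    return len(occupied)

tile_lines = [((0, 0), (0, 100)),      # horizontal line
              ((0, 0), (100, 100)),    # 45-degree line
              ((50, 0), (50, 300))]    # horizontal again -> same bin as the first
print(angle_bins(tile_lines))          # 2 unique angle bins in this toy tile
```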
D. Annotation Quality Assessment
To assess the quality of our annotations, we randomly chose approximately 10% of the image tiles (distributed equally across the three regions) and produced two independent sets of annotations for each tile, one made by each of two unique annotators. We then computed the agreement between the two sets. Evaluating annotator agreement is a common strategy for assessing the quality of machine learning training data [36], [23]. Two towers were declared matches if their centroids were within 3m of one another; two lines were declared matches if both ends of one line segment matched both ends of the other, within 3m in each case. Fig. 6(a) summarizes the agreement between the annotators. Tower annotations exhibit 70-90% agreement across the three locations. Line annotations exhibit slightly lower agreement for each location because line agreement depends upon first correctly identifying the tower locations. Overall, the results show strong agreement among the annotations, suggesting that consistent PG mapping may be feasible given a sufficiently sophisticated recognition model (approximated here by a human analyst). These results also suggest that our annotations are suitable for measuring the recognition accuracy of automatic models; for example, we expect that a sufficiently sophisticated automatic recognition model should be capable of achieving 70-90% agreement with our annotations. By contrast, if our human annotators had demonstrated very little agreement with one another, it would be difficult to distinguish poor detectors from good ones, suggesting that automatic PG mapping may be infeasible.
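As a rough illustration of the matching rule used above, the sketch below pairs two annotation sets using the 3m tolerance for towers and the both-endpoints rule for lines. This is a minimal sketch, not the scoring code used for the paper: the greedy one-to-one pairing, the coordinate convention (metres), and the function names are assumptions introduced here for illustration.

```python
import numpy as np

def match_towers(towers_a, towers_b, tol=3.0):
    """Greedily pair tower centroids from two annotation sets.

    towers_a, towers_b: (N, 2) and (M, 2) arrays of centroids in metres.
    Returns a list of (i, j) index pairs whose centroids lie within `tol`.
    """
    unused_b = set(range(len(towers_b)))
    pairs = []
    for i, p in enumerate(towers_a):
        if not unused_b:
            break
        dists = {j: np.linalg.norm(p - towers_b[j]) for j in unused_b}
        j_best = min(dists, key=dists.get)
        if dists[j_best] <= tol:
            pairs.append((i, j_best))
            unused_b.remove(j_best)
    return pairs

def lines_agree(line_a, line_b, tol=3.0):
    """Two lines match if both endpoints agree within `tol`, in either order."""
    a1, a2 = np.asarray(line_a[0]), np.asarray(line_a[1])
    b1, b2 = np.asarray(line_b[0]), np.asarray(line_b[1])
    same = np.linalg.norm(a1 - b1) <= tol and np.linalg.norm(a2 - b2) <= tol
    flipped = np.linalg.norm(a1 - b2) <= tol and np.linalg.norm(a2 - b1) <= tol
    return same or flipped
```

Agreement can then be reported, for example, as the fraction of one annotator's towers (or lines) that find a match in the other annotator's set.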
IV. GridTracer: A BASELINE MODEL FOR POWER GRID MAPPING IN OVERHEAD IMAGERY
In this section we present our baseline model GridTracer for the PG inference problem. We break down the PG graph inference problem into three sub-problems: tower detection, line segmentation and graph inference. The processing pipeline of GridTracer is illustrated in Fig. 7.
A. Tower Detection
The goal of tower detection is to predict the centroid of each PG tower. However, our ground truth annotations provide richer information about each tower: full rectangles. This makes it possible to naturally apply and train state-of-the-art object detection models for tower detection. Due to the challenging nature of tower detection, we focus on maximum accuracy and employ a two-stage (as opposed to one-stage) object detector: Faster R-CNN [21]. We trained Faster R-CNN with an Inception V2 backbone [37] on our proposed PG dataset to detect towers. In Section VII-B we provide results from an ablation study using different backbone choices and find that Inception V2 generally yields the best results. Because PG towers are small in overhead imagery relative to the objects to which detectors are usually applied [23], [38], we had to significantly reduce the size of the bounding box anchors to achieve good results.
At inference time, we first divide the raw imagery into 500 × 500 sub-images. We apply the tower detector to those sub-images and keep only the boxes with a confidence higher than 0.5. Since we are interested in the location rather than the size of the towers, the centers of the bounding boxes are retained as predictions. We use Non-Maximum Suppression (NMS) [39] to remove redundant predictions for nearby bounding boxes. The result of this process is a list of estimated PG tower centroid locations, termed T̂, illustrated in Fig. 7.
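The post-processing just described (confidence filtering, NMS, and reduction of boxes to centroids) can be sketched as follows. This is a simplified stand-in for the detector's standard post-processing; the NMS IoU threshold of 0.5 and the (x1, y1, x2, y2) box format are assumptions, since only the 0.5 confidence cut-off is stated above.

```python
import numpy as np

def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def towers_from_detections(boxes, scores, conf_thresh=0.5, nms_iou=0.5):
    """Confidence filtering + greedy NMS, then reduce boxes to centroids."""
    keep = [i for i in np.argsort(scores)[::-1] if scores[i] >= conf_thresh]
    selected = []
    for i in keep:
        if all(iou(boxes[i], boxes[j]) < nms_iou for j in selected):
            selected.append(i)
    # Only the tower location matters downstream, so keep centroids.
    return np.array([[(boxes[i][0] + boxes[i][2]) / 2,
                      (boxes[i][1] + boxes[i][3]) / 2] for i in selected])
```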
B. Line segmentation
As described in Section I, the PG lines are conceptualized as edges in the PG graph. As a result, the precise location (e.g., pixel-wise segmentation) of the PG lines is unnecessary for the final output of GridTracer; however, such information is useful for determining whether a line exists. GridTracer therefore employs, as an intermediate step, a state-of-the-art segmentation model to infer all locations throughout the imagery that may correspond to PG lines. The output of this model is then used in the next stage of processing (graph inference) to infer which towers are most likely to be connected by PG lines.
To extract a segmentation map of PG lines, we employ StackNetMTL, which has recently achieved success in road segmentation [31]. As discussed in Section II, StackNetMTL incorporates a greater visual context when inferring target labels, which we hypothesize may also benefit PG line segmentation. The ablation studies in Section VII-B indicate that this is indeed the case.
In order to train our segmentation models, we must create ground truth imagery indicating which pixels belong to PG lines (pixel value of one) and which do not (pixel value of zero). To do this, we use our manual annotations to draw straight lines between the centroids of each pair of connected towers. Each line is 30 pixels wide, and all pixels in the line are set to one. This width is chosen to ensure that the ground truth labels encompass the real power lines, whose exact locations in the imagery are unknown. Once trained, StackNetMTL is applied to produce a map of pixel-wise PG line probabilities, termed Ĉ, illustrated in Fig. 7.
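A minimal sketch of this ground-truth rasterization is given below, assuming tower centroids in pixel coordinates and using OpenCV's line drawing for the 30-pixel-wide segments; the helper name and argument layout are hypothetical.

```python
import numpy as np
import cv2

def rasterize_line_labels(image_shape, tower_centroids, adjacency, width=30):
    """Draw straight 30-pixel-wide lines between connected tower centroids.

    image_shape: (H, W) of the tile; tower_centroids: (N, 2) array of (col, row)
    pixel coordinates; adjacency: iterable of (i, j) index pairs of connected towers.
    Returns a binary mask with 1 on power-line pixels and 0 elsewhere.
    """
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    for i, j in adjacency:
        p1 = tuple(int(v) for v in np.round(tower_centroids[i]))
        p2 = tuple(int(v) for v in np.round(tower_centroids[j]))
        cv2.line(mask, p1, p2, color=1, thickness=width)
    return mask
```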
C. Graph inference
The goal of this step is to infer an adjacency matrix, Â, where Â_ij = 1 indicates that there is a connection between the i-th and j-th towers in T̂, and Â_ij = 0 otherwise. To infer these connections, we rely upon the output of the PG line segmentation model, Ĉ, in which each pixel value indicates the relative likelihood that a power line exists at that pixel. GridTracer sets Â_ij = 1 if and only if two conditions are met: (i) the distance between towers i and j is less than a user-defined threshold, d; and (ii) S_ij ≥ γ, where γ is a user-defined threshold. Here S_ij is the estimated likelihood that a connection exists between towers i and j, obtained by integrating the pixel-wise output of the power line segmentation model (StackNetMTL) along the path between the two towers:

$$S_{ij} = \frac{1}{|P_{ij}|} \sum_{k \in P_{ij}} \hat{C}_k,$$

where P_ij is the set of pixels in the path between the towers, a straight-line segment of width w between the two tower centroids. This simple operation allows us to integrate visual cues well beyond the field-of-view of the segmentation model. We retain the connection between towers i and j if S_ij exceeds the threshold γ and the distance between the two towers is smaller than d. In practice, we only consider connections between towers that are within a distance d of one another, which dramatically limits the number of candidate connections. As illustrated in Fig. 7, the final output of GridTracer is a geospatial graph of the PG characterized by Â and its associated PG tower locations, T̂.
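The sketch below illustrates the graph inference rule just described: every pair of detected towers within the distance threshold d is scored by averaging the segmentation output along the straight path between them, and an edge is added when the score exceeds γ. The per-pixel sampling of a width-w band, the length normalisation of S_ij, and the metres-per-pixel conversion are assumptions made here for illustration rather than the exact implementation.

```python
import numpy as np

def infer_adjacency(towers, line_prob, gamma=0.2, max_dist=600.0, width=9,
                    meters_per_pixel=0.3):
    """Build an adjacency matrix from tower centroids and a line-probability map.

    towers: (N, 2) array of centroids in pixel coordinates (col, row).
    line_prob: 2-D array of per-pixel line probabilities (segmentation output).
    gamma, max_dist (metres), width (pixels): thresholds described in the text.
    """
    n = len(towers)
    adj = np.zeros((n, n), dtype=np.uint8)
    half = width // 2
    h, w = line_prob.shape
    for i in range(n):
        for j in range(i + 1, n):
            dist_m = np.linalg.norm(towers[i] - towers[j]) * meters_per_pixel
            if dist_m > max_dist:
                continue
            # Sample pixels along the straight path between the two centroids.
            n_steps = int(np.ceil(np.linalg.norm(towers[i] - towers[j]))) + 1
            cols = np.linspace(towers[i][0], towers[j][0], n_steps)
            rows = np.linspace(towers[i][1], towers[j][1], n_steps)
            vals = []
            for c, r in zip(cols, rows):
                r = int(np.clip(r, 0, h - 1))
                c = int(np.clip(c, 0, w - 1))
                patch = line_prob[max(r - half, 0):r + half + 1,
                                  max(c - half, 0):c + half + 1]
                vals.append(patch.mean())
            score = np.mean(vals)  # path score S_ij, normalised by path length
            if score >= gamma:
                adj[i, j] = adj[j, i] = 1
    return adj
```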
V. EXPERIMENTAL DESIGN
In this section we describe the major experimental design details. A major goal of this paper is to establish a benchmark for PG mapping, and therefore we prescribe a proposed data handling scheme for training and evaluating PG mapping models. We also propose a set of scoring metrics for evaluating models. Finally, we describe the implementation details of GridTracer.
A. Data handling for model training
We explore three data handling schemes, as illustrated in Fig. 8. In all schemes, we use the same subset of the imagery from each location for testing (the first 20% of imagery from each region), so that the exact same testing dataset is always employed. Fig. 8(a) is the "conventional" data-handling scheme, in which training imagery is available from all testing locations and models are trained on all available training imagery. We label this approach "conventional" because it is commonly employed in overhead imagery recognition benchmarks (e.g., DeepGlobe [8], DSTL [30]). For this reason we prescribe it as the primary data handling scheme for our PG mapping benchmark, and our main results in Section VI are obtained using it. Fig. 8(b,c) presents two additional data handling schemes that we utilize in Section VII for further analysis; we describe their motivation there.
B. Scoring: tower detection
As discussed in the introduction, we split the PG mapping problem into two sub-problems: tower detection and tower connection (i.e. line interconnection).
For tower detection, we adopt the mean average precision (mAP) metric because it is widely used for object detection tasks (e.g., [38], [23]). mAP is computed by first assigning a label to each predicted box, b̂ ∈ B̂, indicating whether it is a correct detection or a false detection. This label is based upon whether the predicted box achieves a sufficiently high IoU with at least one ground truth box, b ∈ B. Mathematically,

$$l_i = \begin{cases} 1, & \text{if } \max_{b \in B} \mathrm{IoU}(\hat{b}_i, b) \geq \tau \\ 0, & \text{otherwise,} \end{cases} \qquad (2)$$

where l_i is the label assigned to the i-th predicted bounding box and τ is a user-defined threshold. In this work we utilize an alternative matching criterion that depends instead on the distance between the centroids of the predicted and ground-truth boxes. Mathematically,

$$l_i = \begin{cases} 1, & \text{if } \min_{b \in B} d(\hat{b}_i, b) \leq \tau \\ 0, & \text{otherwise,} \end{cases} \qquad (3)$$

where d is the distance between the centroids of the bounding boxes. We term this modified metric distance-based mAP (DmAP). We also utilize Eq. 3 for our PG graph scoring metric (discussed next in Section V-C), since it also requires linking predicted and ground truth towers. We rely upon Eq. 3 for linking because, in PG mapping, we are primarily concerned with the accuracy of the locations (e.g., centroids) of the predicted towers rather than their precise shape and size. Furthermore, we find that our mAP scores for PG tower detection are often very low even when our DmAP scores are high, indicating that IoU may flag a prediction as poor even when the location of the predicted tower is accurate. We present results in Section VII indicating that this is indeed the case for our benchmark dataset and our models, and Fig. 9 presents a typical example of such a scenario.
As a result, we utilize the linking criterion in Eq. 3 for all of our benchmark scoring metrics unless otherwise noted. When using Eq. 3, we set τ = 3m because this reflects the variability of human annotations made over the same towers: as illustrated in Fig. 6(b), centroids of human annotations fall within 3m of each other roughly 99% of the time.
Fig. 9. A tower detector prediction example: the blue box is the annotated ground truth and the green box is the prediction, with its confidence score at the top left.
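A minimal sketch of the centroid-based linking criterion of Eq. 3 is shown below; it labels each prediction correct or false, after which a standard average-precision routine can be applied. The greedy, score-ordered assignment of ground-truth towers is an assumption, since Eq. 3 itself does not specify how competing nearby predictions are resolved.

```python
import numpy as np

def label_detections_by_distance(pred_centroids, pred_scores, gt_centroids, tau=3.0):
    """Label each prediction correct (1) or false (0) using the centroid-distance
    criterion: a prediction is correct if it lies within `tau` metres of a
    ground-truth tower not already claimed by a higher-scoring prediction.
    """
    order = np.argsort(pred_scores)[::-1]        # process confident boxes first
    unclaimed = set(range(len(gt_centroids)))
    labels = np.zeros(len(pred_centroids), dtype=int)
    for i in order:
        if not unclaimed:
            break
        dists = {j: np.linalg.norm(pred_centroids[i] - gt_centroids[j])
                 for j in unclaimed}
        j_best = min(dists, key=dists.get)
        if dists[j_best] <= tau:
            labels[i] = 1
            unclaimed.remove(j_best)
    return labels  # feed these labels into a standard AP computation
```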
C. Scoring: power grid inference
For the evaluation of tower connections, our goal is to reward true power line connections between real towers and to penalize incorrect predictions (including connections between falsely detected towers). There is no previously defined metric for assessing the accuracy of PG network predictions, so we propose one here that reflects these goals. One existing metric that captures them well is SGEN+, proposed in [40] to score predictions of graphical structures in scene understanding problems. That metric includes a "recall" measure, indicating the proportion of true graph connections (i.e., power lines) identified by a model, but no "precision" measure. The reason is that in [40] the authors study relationships between object pairs in an image, and the annotators cannot describe in language every relationship between every object pair; precision is omitted so that models are not penalized for predicting relationships that the annotators simply did not describe. In the PG mapping task, by contrast, the connection relationships are fully defined by the power lines, and we therefore propose the scoring metric

$$R = \frac{C(T) + C(L)}{N_{truth}}, \qquad P = \frac{C(T) + C(L)}{N_{pred}}, \qquad F_1 = \frac{2PR}{P+R}.$$

Here R and P represent recall and precision, respectively; C(·) is the counting operation; T and L stand for the correctly recognized nodes (towers) and edges (lines), respectively; N_pred is the total number of towers and lines given by the predictions; and N_truth is the total number of objects and relationships given by the ground truth. As discussed in Section V-B, we use the criterion in Eq. 3 to determine when detected towers match ground truth towers.
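The proposed graph scoring metric can be sketched as follows, assuming the matched tower and line counts have already been obtained with the Eq. 3 linking criterion; the pooling of towers and lines into single counts mirrors the description above, but the function signature itself is hypothetical.

```python
def pg_graph_scores(matched_towers, matched_lines, n_pred, n_truth):
    """Recall, precision and F1 for an inferred PG graph.

    matched_towers / matched_lines: numbers of predicted towers and lines that
    match the ground truth (towers linked by the centroid criterion).
    n_pred / n_truth: total numbers of predicted / ground-truth towers plus lines.
    """
    correct = matched_towers + matched_lines
    recall = correct / n_truth if n_truth else 0.0
    precision = correct / n_pred if n_pred else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return recall, precision, f1
```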
D. Implementation details of GridTracer
There are two deep learning models in GridTracer: a tower detector and a line segmentation model (used to identify power line connections). We train these two components separately. We train the Faster R-CNN tower detectors (one for each set of training data) and evaluate them on 500 × 500 sub-images extracted uniformly from our raw input imagery. We use anchors with areas of {10², 25², 50², 100², 200²} pixels and aspect ratios of {0.5, 1.0, 2.0}. These anchors are much smaller than in the original network [40], inspired by similar work with overhead imagery [41]; the first two anchor sizes were chosen specifically to capture the small towers, especially in New Zealand. During training, we augmented the training data using random horizontal and vertical flips as well as 90, 180, and 270 degree rotations. For all experiments, we trained the models with a batch size of 5 for 50,000 iterations using the aforementioned training image partition. We adopted a manual learning rate schedule that uses a learning rate of 3 × 10⁻³ for the first 10k steps and drops it by a factor of 0.1 after every subsequent 10k steps.
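For reference, the detector settings described above can be summarised as in the sketch below. The dictionary keys are illustrative rather than the configuration format of any particular detection framework, and the step-decay helper simply encodes the stated schedule (3 × 10⁻³ for the first 10k steps, then a 0.1 drop every 10k steps).

```python
# Hypothetical summary of the detector settings described in the text.
detector_config = {
    "architecture": "faster_rcnn",
    "backbone": "inception_v2",
    "input_size": (500, 500),
    "anchor_areas_px": [10**2, 25**2, 50**2, 100**2, 200**2],
    "anchor_aspect_ratios": [0.5, 1.0, 2.0],
    "batch_size": 5,
    "num_iterations": 50_000,
    "augmentation": ["horizontal_flip", "vertical_flip",
                     "rot90", "rot180", "rot270"],
}

def learning_rate(step, base_lr=3e-3, drop_every=10_000, factor=0.1):
    """Step decay: base_lr for the first 10k steps, then x0.1 every 10k steps."""
    return base_lr * factor ** (step // drop_every)
```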
At inference time, after the tower locations are predicted, we apply GridTracer's graph inference step with the predicted tower locations to infer the PG. For the hyperparameters, we set γ = 0.2, d = 600m, and w = 9. We present ablation results for selecting these parameters in Section VII-D.
E. Human-level performance estimation
In order to aid our analysis of the PG mapping problem and of GridTracer, we estimate the level of performance that a human annotator can achieve on our dataset. Human-level performance is often used as a benchmark for visual recognition tasks [42], [43], [44], [45] because humans often (though not always) achieve strong performance on such tasks, and a sufficiently sophisticated automatic approach should be able to match them. Therefore, if an automatic approach does not reach human-level performance, the recognition model may be making incorrect or incomplete assumptions, and further investment might yield greater performance. Similarly, if human performance is poor on a given task, it may indicate that the visual task is difficult or even infeasible. We use human-level performance to verify the overall feasibility of our PG mapping problem and to assess the relative performance of GridTracer.
As discussed in Section V, to estimate human-level performance we randomly sampled 20% of the imagery from each of our three geographic regions, and had them annotated by a second group of human annotators. Then we treated these annotations as predictions, and assessed their accuracy using the same tower detection and graph inference metrics that we apply to GridTracer. In Section VI we will compare GridTracer to our human annotators on the same 20% subset of imagery.
VI. PG MAPPING BENCHMARK RESULTS
In this section, we present the performance of GridTracer using the "conventional" data handling scheme illustrated in Fig. 8(a). As discussed in Section V, this scheme has been employed in numerous recent benchmarks for recognition in overhead imagery (e.g., DeepGlobe [8], DSTL [30]). Although GridTracer is composed of three steps, here we present the two metrics of our PG mapping benchmark: (i) PG tower detection and (ii) PG graph inference. We provide line segmentation results, along with other analyses, in Section VII.
The results obtained with GridTracer are the first results using an automatic recognition algorithm for mapping both transmission and distribution infrastructure in overhead imagery, and they therefore represent a baseline upon which other approaches can build. However, because this is a new problem, it is difficult to evaluate (i) the relative success of GridTracer and (ii) how much better we could expect to perform with further research. To address these important questions we estimate human-level performance for this problem and compare it with GridTracer on the same 20% subset of our testing dataset (see Section V for methodological details). Below we present the results of GridTracer and the human-level performance on this 20% subset, in addition to GridTracer's performance on the full testing dataset.
A. Tower detection
The PG tower detection results for GridTracer are presented in Table II. The DmAP score is roughly 0.61 on average across our three testing regions, indicating that almost one out of every two detected towers is false; however, the level of performance varies substantially across regions. In Arizona, for example, only one in four detected towers is false, while the ratio is closer to one in two in the other regions. We hypothesize that these regional differences may be caused by differences in the background: tower shadows, which are usually a critical cue for detectors, often blend with nearby bushes in Arizona (Fig. 10, top). This makes the shadows more difficult to recognize and therefore degrades detection performance.
To provide a reference point for judging the results of GridTracer, we can review rows two and three of Table II, which compare the performance of GridTracer and humans over the same 20% of our testing data. First, we note that the performance of GridTracer in rows one and three is similar, implying that our 20% testing subset is relatively representative of the full testing dataset. With this in mind, human annotators achieve a DmAP of 0.86 on average across the three regions, indicating that nearly nine out of ten predicted towers are correct. Although the level of performance needed to support energy-related decision-making and research will vary, these results provide both energy and computer vision researchers with an estimate of the accuracy that should be achievable with a baseline recognition model, given sufficient development.
Furthermore, these results help us understand the degree to which GridTracer can be improved, and possibly how it can be improved. Although GridTracer relies upon a state-of-the-art object detection model, human performance is substantially better, and more consistent across each of the geographic regions, suggesting significant improvements can be made. As discussed in Section I, modern DNNs rely primarily upon local visual cues to detect objects. The substantial performance advantage of humans suggests that they are therefore likely to be using additional cues to identify towers. We hypothesize that such cues may include the integration of non-local visual features, or exploitation of the known topology/structure of the PG. For example, it may be possible to infer the presence of a PG tower if its inclusion results in a more probable PG topology, even if there are limited visual cues for the tower itself.
B. PG graph inference
The PG graph inference results for GridTracer are presented in Table III. We apply a similar analysis here to the one before in Section VI-A. GridTracer's average F1 score of 0.63 indicates that approximately 63% of the underlying PG (i.e., towers plus connections) is identified, and that 63% of the inferred PG infrastructure is correct. This is roughly similar to GridTracer's performance for tower detection alone, although caution must be taken when comparing the results here to those in Table II due to differences in the scoring metrics.
Similar to the results for PG tower detection, we find that the graph inference scores for GridTracer on the 20% subset are similar to those on the full dataset, indicating that the testing subset is representative. We also again find that human annotators achieve substantially better performance than GridTracer, indicating that further improvements can be made to the PG graph inference approach. However, it is notable that human performance is lower on the graph inference problem, indicating that (given the 0.3m resolution of our imagery) PG graph inference may have a lower achievable performance ceiling than tower detection. As with tower detection, the level of accuracy in these output data that is needed to support energy-related decision-making and research will vary; these results may therefore provide valuable insights regarding the potential utility of PG mapping in overhead imagery for different applications. Fig. 10 presents a visualization of the PG inferred by GridTracer compared to the ground truth in each of our three geographic test regions. These results illustrate the various types of errors made by GridTracer, such as undetected towers, which necessarily also result in one or more (usually more) undetected PG lines. Although GridTracer finds the majority of towers and connections, the effect of these errors is that the inferred PG graph is not consistent with common PG topology: in a real PG, for example, nearly every tower is connected to two (or more) PG lines, and a single missing connection or tower does not leave disjoint subgraphs. By contrast, human annotations almost always satisfy these real-world constraints, and we hypothesize that humans leverage a priori knowledge about the PG to infer the presence of infrastructure even when visual cues are weak or absent. Given its current design, GridTracer does not exploit, or impose upon its predictions, most of these topological constraints, and we believe this is an important direction for future work.
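As a concrete example of the kind of topological cue discussed here, the short diagnostic below flags towers with fewer than two inferred connections and small disconnected fragments in a predicted graph. It is not part of GridTracer; the component-size cut-off of five towers is an arbitrary illustrative choice.

```python
import networkx as nx

def topology_report(tower_ids, adjacency_pairs):
    """Flag predictions that violate typical PG topology: towers connected to
    fewer than two lines, and small disconnected sub-graphs.
    """
    g = nx.Graph()
    g.add_nodes_from(tower_ids)
    g.add_edges_from(adjacency_pairs)
    low_degree = [n for n, d in g.degree() if d < 2]
    components = sorted(nx.connected_components(g), key=len, reverse=True)
    small_fragments = [c for c in components[1:] if len(c) < 5]
    return {"low_degree_towers": low_degree, "small_fragments": small_fragments}
```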
VII. ADDITIONAL ANALYSIS
In this section we present additional experimental results and analysis that provide further insights into the performance and design of GridTracer.
A. Object detection scoring metrics
Recall that in Section VI-A we argued that the mAP-based metric tends to disagree with our DmAP metric, justifying our adoption of DmAP for our PG mapping benchmark. In this sub-section we provide experimental results supporting this claim. In Table IV we compare the DmAP metric to two commonly used mAP-based metrics on our benchmark testing dataset: mAP_0.5 and mAP_0.75, where the subscript of each mAP score denotes the value of τ in Eq. 2. We see that the DmAP score is higher (indicating better performance) than both mAP-based scores. This suggests (as argued in Section VI-A) that GridTracer's predicted bounding boxes often satisfy the centroid-based DmAP criterion, which is our primary concern in PG mapping, even when their shape and size do not satisfy the IoU-based mAP criterion.
B. Object detector encoder comparisons
In this section we report the performance of three pretrained backbone networks, or "encoders", that we considered for GridTracer's tower detection model. It has been shown in several fields, including overhead imagery, that large pre-trained encoders can offer performance advantages [46], [29]. Here we consider three widely used encoders, in order of their size: ResNet50 [47], ResNet101 [47], and InceptionV2 [37]. In Table V we compare Faster R-CNN tower detectors, each using a different encoder, on our PG mapping benchmark task for tower detection. Among them, ResNet50 yields the worst performance, while the two relatively larger backbones yield significantly better results. This is consistent with other recent findings [46], [29] and suggests that a large backbone is beneficial for extracting visual features for PG mapping.
C. PG line segmentation performance and model comparison
In this section we report the performance of two segmentation models that we considered for inclusion in GridTracer. The first is a UNet model with a ResNet50 [47] encoder pretrained on ImageNet [48]; models of this form have recently achieved state-of-the-art performance for segmentation of overhead imagery [46], [29]. We also considered the StackNetMTL model (discussed in Section II), which recently achieved state-of-the-art performance on road segmentation. Due to the similarities between our task and road mapping, we hypothesized that StackNetMTL might yield better results. To assess the two segmentation models, we employed the intersection-over-union (IoU) metric, since it is widely used in recent segmentation benchmarks (e.g., [30], [8]). The results of this experiment are presented in Table VI. StackNetMTL provides substantially and consistently superior performance compared to the UNet, and as a result we adopted StackNetMTL in GridTracer.
D. Robustness to graph inference hyperparameter settings
The graph inference stage of GridTracer has three hyperparameters: γ, d, and w. In Table VII we show GridTracer's benchmark performance when varying each of these hyperparameters. We find that d = 600m and w = 9 yield the best performance among the settings we considered, but we also find that GridTracer is relatively robust to their exact values. Performance is somewhat more sensitive to γ: if it is set too small, we obtain large numbers of false PG line connections, reducing performance. However, performance is insensitive once larger values are used, reaching its best at γ = 0.2 and dropping only slightly for higher settings. Overall, the model is relatively insensitive to these hyperparameter settings.
E. GridTracer performance under varying testing scenarios
In this section we consider the performance of GridTracer under less conventional testing scenarios. Our benchmark testing results were based upon the data handling scheme illustrated in Fig. 8(a), which is the conventional approach used in most benchmark problems in overhead imagery. Here we consider the performance of GridTracer when tested using the data handling schemes illustrated in Fig. 8(b,c). These experiments aim to address two questions: (a) is training on geographically diverse imagery beneficial? and (b) how well does GridTracer generalize to previously unseen geographic regions?
Is geographically-diverse training data beneficial? As discussed in Section III-C, the three regions in the PG dataset have significantly different visual characteristics and somewhat distinct PG topologies. Given these unique characteristics, it is unclear whether it is beneficial to train a single model on all regions simultaneously (the "conventional" scheme), as opposed to training a separate model tailored to each geographic region. We address this question by comparing the performance of GridTracer under data handling schemes (a) and (b) in Fig. 8. In contrast to scheme (a), scheme (b) trains a model separately for each geographic region.
The results of this experiment are presented in Table VIII. In all three stages, the model trained with data handling scheme (a) generally outperforms the one trained with scheme (b). This suggests that sourcing training data only from the same visual domain as the testing data tends to under-perform a more geographically (and thereby visually) diverse pool of training data.
Generalization to unseen geographies. The conventional testing scenario in Fig. 8(a), which is typical in computer vision research, implicitly assumes that labeled training data are available in (or near) every geographic location to which we wish to apply our recognition models. In practice, however, it is cumbersome and costly to collect training imagery in each deployment location and re-train the model with that imagery. In this section we consider how well GridTracer performs when evaluated in novel geographic locations, i.e., locations for which no training imagery is available. We use the data handling scheme in Fig. 8(c) to approximate this realistic scenario by training the model on only two regions and then testing on the third (unseen) region. The results are presented in Table IX, compared to the conventional testing scenario.
In scheme (c), both tower detection and line segmentation yield poor results, indicating that the model does not generalize well to novel geographic regions. This finding is consistent with, and corroborates, other recent findings in the literature indicating that deep learning models do not generalize well to new geographic regions [49], [16].
VIII. CONCLUSION
In this work we proposed a novel approach for collecting power grid information automatically by mapping (i.e., detecting and connecting) transmission and distribution towers and lines in overhead imagery using deep learning. We developed and publicly released a dataset of overhead imagery with ground truth information for a variety of power grids. To our knowledge, this is the first dataset of its kind in the public domain and will enable other researchers to build increasingly effective transmission and distribution grid mapping algorithms.
We also took the first steps towards tackling the PG mapping problem. We developed and evaluated baseline algorithms for two problems: tower detection and identifying tower interconnections through power lines. In particular, we developed GridTracer as a baseline approach to the PG mapping problem. We also estimated the ability of human annotators to perform PG mapping, providing future researchers with an estimate of the level of PG mapping accuracy that may ultimately be achievable with a fully-automatic mapping algorithm. We found that GridTracer does not yet reach human-level PG mapping accuracy, suggesting that further improvements can be made to bridge this performance gap. Ultimately these results provide a strong foundation for the development of automatic PG mapping techniques, which offer a powerful tool to collect valuable information to support energy researchers and decision-makers. | 12,430.6 | 2021-01-16T00:00:00.000 | [
"Environmental Science",
"Engineering",
"Computer Science"
] |
Scale evolution of double parton correlations
We review the effect of scale evolution on a number of different correlations in double parton scattering (DPS). The strength of the correlations generally decreases with the scale but at a rate which greatly varies between different types. Through studies of the evolution, an understanding of which correlations can be of experimental relevance in different processes and kinematical regions is obtained.
Introduction
An increasingly relevant aspect of proton-proton collisions at high energies is double parton scattering (DPS), where two partons from each proton interact in two separate hard subprocesses. DPS contributes to many final states of interest at the LHC, constituting relevant backgrounds to precise Higgs boson coupling measurements and to searches for physics beyond the Standard Model. Our knowledge of DPS is fragmentary, and improvements are needed at both the conceptual and quantitative level (see for example 1).
Schematically the DPS cross section can be expressed as

$$\frac{d\sigma_{\mathrm{DPS}}}{dx_1\,dx_2\,d\bar{x}_1\,d\bar{x}_2} = \frac{1}{C}\,\hat{\sigma}_1\,\hat{\sigma}_2 \int d^2y\; F(x_1, x_2, y)\,\bar{F}(\bar{x}_1, \bar{x}_2, y),$$

where σ̂_i represents hard subprocess i, C is a combinatorial factor equal to two (one) if the partonic subprocesses are (not) identical, and F (F̄) labels the double parton distribution of the proton with momentum p (p̄). The DPDs depend on the longitudinal momentum fractions x_i (x̄_i) of the two partons and the transverse distance y between them. Implicit in this expression are labels for the flavors, colors, fermion numbers and spins of the four partons. This quantum-number structure is significantly more complicated in DPS than in the case of a single hard interaction, because of the possibility of interference between the two hard interactions and of correlations between the two partons inside each proton.
The correlations can be of kinematical type (between the x_i's and y) or between the quantum numbers of the two partons. While the kinematical type affects the dependence of the DPDs on the kinematical variables, the quantum-number correlations lead to a large number of different DPDs. The DPDs depend on long-distance, nonperturbative physics and thus cannot be calculated in perturbative QCD.
Including all the correlations and their DPDs in phenomenological calculations is cumbersome, and extracting all of them experimentally is unfeasible. An effective way of reducing the number of DPDs of experimental relevance is to study the scale evolution of the DPDs and of the correlations they describe. For example, it has been demonstrated for quark and antiquark DPDs that color interference terms are suppressed by Sudakov factors at large scales 2,3; combined with positivity bounds constraining the size of the correlations at low scales 4, this sets limits on the scales at which these correlations can be of experimental relevance. The scale evolution of the DPDs is described by generalizations of the usual DGLAP evolution equations. Two versions have been discussed in the literature: a homogeneous equation describing the separate evolution of each of the two partons, and an inhomogeneous equation that also includes the splitting of one parent parton into the two partons that undergo the hard scatterings 6,7,8,9,10. Which version is adequate for the description of DPS processes remains controversial in the literature 11,12,13,14,15,16,17,18.
These proceedings review our study of the effects of scale evolution on DPS correlations 19. For the numerical results presented here we have used the homogeneous evolution equation, solved numerically with a modified version of the code originally described in 9.
2. Correlations between x_1, x_2 and y
A number of arguments suggest an interplay between the dependence of DPDs on the longitudinal momentum fractions x_1, x_2 of the partons, as well as between their momentum fractions and their relative transverse distance y 20.
In this section we study the impact of evolution on the correlations between the momentum fractions and the interparton distance. For this purpose we need a model for the DPDs at the starting scale of evolution, and we take a simple ansatz with a Gaussian y dependence, motivated by studies of GPDs as explained in 19, at the starting scale Q_0^2 = 2 GeV^2. The DPDs evolve independently at each value of y, but the interplay between y and the momentum fractions x_1 and x_2 in the starting conditions has consequences for the scale evolution at different values of y. The Gaussian y dependence of our starting condition (2) is approximately preserved by evolution up to large scales, as demonstrated by figure 1. This allows us to take a closer look at the evolution of the width of the y dependence. Figure 2(a) shows the evolution of the effective Gaussian width h^eff_aa(x, x) at x = 0.01 for a = u^-, u^+ and g. The effective Gaussian width decreases under evolution for both u^- and u^+, whereas the gluon width behaves differently. As the valence combination u^- evolves to higher scales, partons move from higher to lower x values by radiating gluons. For partons at given x and Q, the width of the y distribution is therefore influenced by the smaller values of this width for partons with higher x at lower Q, leading to the decrease of h^eff_{u^-u^-} with Q in figure 2(a). The double u^+ distribution mixes with gluons, and h^eff_{u^+u^+} approaches h^eff_{gg} with increasing scale, although it does so rather slowly. The difference between the transverse distributions of gluons and quarks, which we have assumed at Q_0, remains up to large scales.
The dependence of h^eff_aa(x, x) on x is shown in figures 3(a), (b) and (c) for the different parton types. We see that the evolution is faster at smaller momentum fractions x, and at low x there is a rapid decrease of h^eff_aa(x, x) with Q^2 for all parton types. For u^+ this results in a region of intermediate x where h^eff_{u^+u^+}(x, x) increases with x at high Q^2. For u^- and g the curves for h^eff_aa(x, x) are approximately linear in ln(x) as long as we stay away from the large-x region. This allows us to extract an effective shrinkage parameter α^eff_a by fitting the effective Gaussian width in an appropriate region of x. The scale dependence of α^eff_a is shown in figure 2(b): α^eff_a decreases quite rapidly for a = g and more gently for a = u^-. In summary, we find that a Gaussian y dependence at the initial scale is approximately preserved under evolution, with a noticeable but relatively slow change of the effective Gaussian width. Despite the mixing between gluons and quarks in the singlet sector, the differences between their distributions remain up to high scales.
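For illustration, the extraction of the shrinkage parameter described above can be sketched as a linear least-squares fit, assuming the approximately linear behaviour h^eff_aa(x,x) ≈ b + α^eff_a ln(1/x) at small x; the fit range x ≤ 0.1 and the precise functional form are assumptions made here for the sketch, not the choices of ref. 19.

```python
import numpy as np

def fit_shrinkage(x_values, h_eff_values, x_max=0.1):
    """Least-squares fit of h_eff(x, x) ~ b + alpha_eff * ln(1/x) at small x.

    x_values, h_eff_values: arrays of momentum fractions and effective Gaussian
    widths; x_max restricts the fit to the region where the ln(x) behaviour holds.
    Returns (alpha_eff, b).
    """
    x = np.asarray(x_values)
    h = np.asarray(h_eff_values)
    sel = x <= x_max
    log_inv_x = np.log(1.0 / x[sel])
    alpha_eff, b = np.polyfit(log_inv_x, h[sel], deg=1)
    return alpha_eff, b
```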
Evolution of Polarized Double Parton Distributions
We next investigate the evolution of spin correlations between two partons inside a proton. For this purpose we assume a multiplicative y dependence of the DPDs,

$$F_{a_1 a_2}(x_1, x_2, y) = f_{a_1 a_2}(x_1, x_2)\, G(y).$$

As our focus is on the degree of parton polarization rather than on the absolute size of the DPDs, we set G(y) = 1. For the unpolarized DPDs we take a simple factorizing ansatz at the starting scale (Q_0^2 = 1 GeV^2),

$$f_{a_1 a_2}(x_1, x_2) = f_{a_1}(x_1)\, f_{a_2}(x_2),$$

where the single parton densities are taken from either of the two LO sets MSTW 2008 21 and GJR 08 22. Modeling the polarized DPDs is more difficult: there is no reason to believe that a decomposition of a polarized DPD, which describes the spin correlations between two partons, into polarized PDFs, which describe the spin correlation between the proton and a parton, should be suitable even as a starting point. Instead, in the scenario presented in these proceedings, we make use of the positivity bounds for DPDs derived in 5. At the starting scale Q_0 of evolution, we maximize each polarized DPD with respect to its unpolarized counterpart, giving polarized distributions equal to the unpolarized ones at the initial scale.
We will show a series of figures with curves for different scales. In each figure, the upper row shows the polarized DPDs and the lower row shows the ratio between polarized and unpolarized DPDs. The ratio indicates how important spin correlations are in the cross sections of DPS processes. We show the polarized distributions as functions of x_1 at x_1 = x_2 and as functions of ln(x_1/x_2) for x_1 x_2 = 10^{-4}.
Quark distributions
We start our examination of spin correlations with the DPDs for longitudinally or transversely polarized quarks and antiquarks.
The distribution for longitudinally polarized up quarks and antiquarks is shown in figure 4. The polarized distribution f_{Δu Δū} evolves very slowly, but the degree of polarization decreases with the evolution scale. This is due to the increase of the unpolarized DPDs. We find a degree of polarization around 50% at Q^2 = 16 GeV^2 and above 20% at Q^2 = 10^4 GeV^2 for x_1 x_2 = 10^{-4} and a wide range of ln(x_1/x_2).
Transverse quark and antiquark polarization leads to characteristic azimuthal correlations in the final state of DPS processes 23. These distributions do not mix with gluons under evolution, nor with quarks or antiquarks of different flavors. Figure 5 shows the DPD for transversely polarized up quarks and antiquarks. There is a small decrease of the DPD with Q^2 over the entire x_i range, but the suppression of the degree of polarization is mainly due to the increase of the unpolarized distributions. The evolution of the degree of polarization is similar to the case of longitudinal polarization, with a somewhat faster decrease. At intermediate and large x_i values, the degree of polarization decreases slowly. For x_1 x_2 = 10^{-4} it amounts to 40% at Q^2 = 16 GeV^2 and to 10% at Q^2 = 10^4 GeV^2 over a wide rapidity range.
The polarization for other combinations of light quarks and antiquarks is of similar size and shows a similar evolution behavior as for the case of a uū pair.
Gluon distributions
Gluons can be polarized longitudinally or linearly. The unpolarized (single or double) gluon density increases rapidly at small momentum fractions due to the 1/x behavior of the gluon splitting kernel. The absence of this low-x enhancement in the polarized gluon splitting kernels leads us to expect that the degree of gluon polarization will vanish rapidly in the small-x region.
As can be seen in figure 6 for longitudinally polarized gluons, this is indeed the case. The distribution f_{ΔgΔg} does increase with the evolution scale, but at a much lower rate than f_{gg}. Evolution quickly suppresses the degree of longitudinal gluon polarization in the small-x_i region. For x_1 x_2 = 10^{-4}, the degree of polarization amounts to 30% at Q^2 = 16 GeV^2 and almost 20% at Q^2 = 10^4 GeV^2, with a very weak dependence on ln(x_1/x_2). Our knowledge of the single gluon distribution at the low scale remains poor; using the MSTW set instead of the GJR distributions, the degree of polarization is reduced to around half this size or below.
Linearly polarized gluons give rise to azimuthal asymmetries in DPS cross sections 24. The effect of evolution on the distribution of two linearly polarized gluons is shown in figure 7. Here even the polarized distribution f_{δgδg} itself decreases with the scale. Together with the rapid increase of the unpolarized two-gluon DPD, this results in a rapid decrease of the degree of linear polarization, especially at small x_i. As in the case of longitudinal gluon polarization, using the MSTW distributions at the starting scale results in an even faster suppression; in that case the degree of polarization is tiny already at Q^2 = 16 GeV^2. We conclude that the correlation between two linearly polarized gluons is quickly washed out by evolution and can only be relevant at rather large x_i or rather low scales. | 2,947 | 2014-11-17T00:00:00.000 | [
"Physics"
] |
Microscopic Congestion Detection Protocol in VANETs
Introduction
The ever-increasing traffic density requires more effective traffic control techniques to avoid serious traffic jams [1]. Intelligent transportation systems (ITS) provide innovative transport and traffic management techniques. So far, a number of research efforts have been devoted to traffic congestion detection in both infrastructure and infrastructure-free modes, and these protocols aim to monitor road traffic and to estimate vehicle speed, density, and arrival time [2][3][4]. Basically, sensing devices such as induction loop detectors [5], infrared detectors [6], microwave radars [7], and video recording devices [8] can be utilized to monitor vehicles on the highway. However, covering long highways with these dedicated devices is too expensive because of the huge number of sensor installations required. Compared to the aforementioned infrastructure methods, the vehicular ad hoc network (VANET) provides a cost-effective infrastructure-free technique to support a variety of ITS applications, such as safety surveillance, road monitoring, traffic flow management, and vehicle density estimation [9,10]. In the VANET framework, each vehicle acts as a sensor node to collect information for transportation congestion estimation. However, there are several technical problems that need to be addressed.
Basically, a reasonable information exchange mechanism among moving vehicles is the basis for realizing traffic monitoring in the VANET framework. The direct adaptation of VANETs for traffic detection looks attractive, but it has to cope with challenges like bandwidth flooding and duplication [11], delay and inaccurate traffic evaluation [12], and reliability problems [13]. In [2], the contents oriented communication (COC) protocol was proposed for traffic congestion and accident detection. Whenever a vehicle receives a packet, it calculates the congestion and estimates speed by itself. By exchanging the calculated results, COC offers a feasible scheme to gather real-time positions for congestion detection and speed estimation. Nonetheless, the many content exchanges in the COC protocol consume high bandwidth. The efficient congestion detection (ECODE) protocol was proposed for VANETs in [14] to evaluate traffic characteristics as well; however, ECODE uses multihop communication for temporal congestion detection and speed estimation analysis. In addition, a selected communication capability assessment was conducted to show the achieved network performance of the MCDP. In this paper, we have conducted a comprehensive study that combines transportation and communications, and we aim to address these interdependent issues. In practice, vehicles with congestion detection and speed estimation capabilities can facilitate a safer and more comfortable driving experience. Our initial experimental analysis reveals that V2V communication among vehicles helps to improve traffic flow throughput, because it reduces the driver's perception-reaction time (PRT), which allows higher speed and safer movement of vehicles on the road. All the analysis in this paper confirms that the proposed MCDP provides an effective network layer technique to detect and manage traffic congestion, even in multiple-lane scenarios.
The remainder of this paper is organized as follows. In Section 2, previous works related to congestion detection are briefly reviewed to highlight the challenges. In Section 3, the MCDP and its working philosophy are presented. Analysis and simulation results are presented in Section 4, and the paper is concluded in Section 5.
The State of the Art
Traffic evaluation has always been a focal point for ITS, and many protocols for traffic evaluation are available in the literature. In this paper, we focus on VANET based congestion detection technology. Some VANET based congestion detection and traffic evaluation technologies are summarized in Table 1.
In addition to the ECODE [14] and COC [2] protocols, the voting protocol was proposed in [25] to estimate the congestion level from neighboring vehicle characteristics. Each vehicle within transmission range disseminates its own information to neighboring vehicles, and the neighbors estimate congestion by comparing the current moving speed with the maximum allowed speed in that specific zone. The receiving vehicle accumulates the neighbors' speeds and votes for or against the conclusion of congestion. The congestion decision depends on the majority of votes, because some vehicles on the highway may voluntarily move slowly. More specifically, a majority voting for slow movement implies that the highway is congested at that moment. If the other surrounding vehicles are moving at a relatively higher speed, we may conclude that a slow vehicle is voluntarily driving slowly. In contrast, if a vehicle is moving slowly over a particular road segment and the surrounding vehicles are traveling at approximately the same speed, one can conclude that congestion has occurred. Each region requires relatively high traffic density to confirm the congestion decision; otherwise, the congestion assessment will be less accurate.
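A minimal sketch of this majority-vote rule is given below. The fraction of the maximum allowed speed below which a vehicle counts as "slow" is an illustrative assumption introduced here, not a value prescribed by [25].

```python
def voting_congestion_decision(own_speed, neighbor_speeds, max_allowed_speed,
                               slow_fraction=0.5):
    """Majority-vote congestion check in the spirit of the voting protocol [25].

    Each vehicle "votes" for congestion if it is moving well below the maximum
    allowed speed of the zone; congestion is declared only when a majority of
    the surrounding vehicles are slow, so a single voluntarily slow vehicle does
    not trigger a false alarm.
    """
    speeds = list(neighbor_speeds) + [own_speed]
    slow_votes = sum(1 for v in speeds if v < slow_fraction * max_allowed_speed)
    return slow_votes > len(speeds) / 2
```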
Table 1. VANET based congestion detection and traffic evaluation technologies.
Protocol | Architecture | Transportation | Bandwidth | Accuracy
ECODE [14] | V2V | Directed density detection | High | Optimal
COC [2] | V2V | Density estimation | High | Good
Voting [25] | V2V | Speed detection | High | Low
Clustered Area [26] | V2V | Arrival time estimation | Medium | Low
SOTIS [27] | V2V | Vehicle speed | High | Good
IFTIS [28] | V2V | Density detection | Low | Good
StreetSmart [3] | V2I | Traffic speed | Low | Low

The clustered area approach was proposed in [26], wherein the geographic area on the highway is subdivided into small managerial zones. The passing time of each zone under normal traffic conditions is defined in advance. Every vehicle measures its passing time for every zone and compares it with the normal coverage time. Each vehicle on the highway exchanges its entering and leaving times with the others, which serves as a congestion prediction notice for oncoming vehicles. The frequent and large exchange of entering/exit messages with high transmission power is the primary issue of the clustered area approach. The self-organizing traffic information system (SOTIS) [27] is another specialized approach, wherein every vehicle is assumed to be provisioned with an internal database of highway geographic information and a digital map, and all ongoing vehicles are assumed to exchange their locations, speeds, and road conditions with each other periodically. In this way, a vehicle ahead is able to inform following vehicles about the road conditions. The main issue of the SOTIS approach is that every vehicle is assumed to have proactive information about the highway, which is later compared with the reactive information.
An infrastructure-free traffic information system (IFTIS) was proposed for vehicular networks in [28], which was the first approach in the literature to monitor road conditions in a segment-wise manner. The primary advantage of IFTIS is that it provides a protocol to assist the driver in selecting a non-congested lane. In IFTIS, the investigated road is segmented into overlapping location-based groups. Each group has a vehicle central to the cell, called the group leader, which has information about the whole group. The traffic density of each group is evaluated by the group leader and disseminated to the intersections. IFTIS is able to evaluate traffic density in both directions. Sometimes IFTIS experiences high density in one direction but low density in the other, which may cause long end-to-end delays and decrease the packet delivery ratio, especially if the vehicles on the road are moving in the direction opposite to the desired destination. Secondly, the overlapping clusters also reduce the accuracy of traffic evaluation across the road segment.
On the basis of clustering and an epidemic communication strategy, StreetSmart aims to find dynamic traffic patterns and report them to adjacent clusters [3]. The dynamic patterns are filtered to find unexpected conditions, which are forwarded to the rest of the vehicles. Each vehicle summarizes the gathered statistics locally and concludes the road situation. Unfortunately, the decentralized nature of StreetSmart neither prevents the flash crowding effect nor contributes to optimized efficiency at a global scale. After a sudden change in traffic conditions, it does not update the delay information in a timely manner. The low level of accuracy is the main limitation of StreetSmart, because a number of vehicles can be involved in multiple clusters due to the overlapping cluster areas. Other congestion detection schemes for VANETs include Virtual Sink [29], V2X [11], and Lattice [4].
A comparative study of two approaches for road traffic density estimation from traffic video scenes was presented in [30]. Both extracted microscopic parameters (i.e., individual vehicle motion parameters) and macroscopic parameters (i.e., global motion parameters) are fed into classifiers to enable classification of light, medium, and heavy road traffic. It was shown that very high accuracy can be achieved using traffic video classification. However, this kind of traffic monitoring depends on widely deployed traffic surveillance cameras, which may become too expensive if a very large number of cameras is considered; moreover, infrared cameras may be needed for surveillance at night. Therefore, an infrastructure-less solution, for instance VANETs with reasonable traffic monitoring and congestion detection capabilities, is highly desirable for ITS.
On the basis of VANETs, a strategy was proposed in [31] to reduce traffic congestion, wherein periodically emitted beacons of V2V communication enable traffic flow estimation and warn drivers about possible traffic breakdowns. The research efforts in [31] were dedicated to a VANET-assisted traffic jam reduction mechanism. In our work, we primarily focus on how to introduce a transportation control domain into the existing network protocol header, such that each vehicle can count its neighbors and estimate the time spacing among vehicles. Every vehicle with VANET capability is then able to estimate vehicle density, flow, and average velocity in a microscopic manner, and a driver behavior recommendation strategy similar to [31] can also be employed to eliminate forthcoming congestion.
Unlike all the aforementioned works, in this paper we propose a novel approach called the microscopic congestion detection protocol (MCDP), which detects highway lane congestion and disseminates the detection results in both an interlane and an intralane manner. As will be shown in the following discussion, MCDP provides an inexpensive but effective transport congestion detection technique in VANETs.
The MCDP Mechanism
By integrating microscopic properties of vehicles [33] and external parameters, such as highway type and the safety distance between vehicles, we propose a novel MCDP to support transportation congestion detection over existing MANETs. MCDP operates in a fully distributed, infrastructure-less mode that is independent of any additional information, such as traffic data from local authorities, and it is independent of the highway length. In [18], the authors considered microscopic properties for density estimation and road monitoring by assuming a fixed-length intervehicle distance, which is unrealistic because a fixed spacing between vehicles conflicts with safety regulations. Safety is an important feature of VANETs, and it relies on intervehicle spacing: a driver can make a reasonable decision to cope with an emergency only on the basis of an accurate estimate of the distance to the leading vehicle. In order to design a feasible transportation congestion detection scheme for VANETs, we should therefore take into account the relationship between vehicle speed and intervehicle spacing. More specifically, we consider a more realistic intervehicle spacing modeled as an exponential distribution. In addition, we consider macroscopic vehicle parameters that are collected microscopically for congestion detection. In MCDP, vehicles within communication range exchange macroscopic information locally, and each vehicle can thus estimate its surrounding vehicle density. The density estimation is enabled by counting distinct neighbors and measuring the time headway (spacing coverage time) to predict upcoming congestion, as will be elaborated in Section 3.1. The work flow of the MCDP scheme is depicted in Figure 1 and is assumed to operate at every individual vehicle.
Information Dissemination.
Information dissemination specifies the periodic exchange of beacon messages in ad hoc vehicular networks without any complicated negotiation among vehicles, which is necessary for protocol maintenance. In order to realize transportation surveillance, the aforementioned macroscopic information must be transmitted to neighboring vehicles within a very short time. Many existing works, for instance [2,14,23,[25][26][27][28][34][35][36], assume a control-beacon based information dissemination mechanism as well. The information dissemination in MCDP can be subdivided into three types, namely, basic parameter exchange, speed assistance messages, and interlane vehicle density.
(i) Basic Parameter Exchange: In this study, we assume that every vehicle has a GPS navigation system. On this basis, each vehicle is assumed to be able to collect its current position, driving direction, speed, session time, and the topology of the road network from GIS. In the proposed MCDP framework, vehicles on the highway periodically exchange beacon messages carrying this information. At the same time, the vehicle density on the road is assumed to follow a Poisson distribution; namely, the likelihood that n vehicles are found in a road segment of length s meters can be expressed as

$$P(n \text{ vehicles in } s) = \frac{(\rho s)^n}{n!}\, e^{-\rho s},$$

where ρ represents the vehicle density (unit: [veh/km]). The probability that there is no vehicle in a segment of length s is then $P(0) = e^{-\rho s}$, and the intervehicle distance is random with mean 1/ρ_s, where ρ_s is the vehicles' spatial density. The probability that there is at least one vehicle in a segment of length s is

$$P_r(\text{at least one vehicle in } s) = 1 - e^{-\rho s}. \qquad (4)$$

The vehicle arrival rate can be modeled as a Poisson process, and the speeds assigned to the vehicles typically follow a normal distribution. If the speed is normally distributed and the intervehicle distance is exponentially distributed, the time headway t_h (the time needed to cover the spacing at the current speed) is also approximately exponentially distributed, and the time headway between two vehicles is readily seen to be greater than or equal to zero. Congestion detection is based on the time headway among the vehicles, with the congestion probability depending on whether the time headway exceeds the safety transportation limit t_0 or lies in the interval (t_0, (1+ε)t_0). The relevant quantities in these expressions are the speed limit of the corresponding highway [32], the standard communication range [37], the neighbor count, and the vehicle length [38]; in the case of heterogeneous traffic flow, the mean vehicle length over all neighbors may be used, and the time headway is computed accordingly. If t_h is less than the safety transportation limit t_0, congestion is detected. According to travel guide instructions, two vehicles should maintain a time headway of at least two seconds, although in certain circumstances the safety time may be extended due to weather conditions [24]. In a word, the transportation congestion decision c can be summarized as

$$c = \begin{cases} 1, & t_h < t_0 \\ 0, & \text{otherwise.} \end{cases}$$

3.3. Driver Assistance. Another notable advantage of MCDP is its driver assistance capability. The vehicle analyzes the traffic condition based on the calculated number of distinct neighbors and the time headway t_h, and an appropriate speed can be estimated according to the underlying traffic status.
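A compact sketch of the congestion check and speed suggestion described in this section is given below. The conversion from neighbor count to density over twice the communication range, the default mean vehicle length, and the suggested-speed rule are assumptions introduced for illustration; only the two-second safety headway and the decision c = 1 when t_h < t_0 follow directly from the text.

```python
def mcdp_congestion_check(neighbor_count, comm_range_m, own_speed_mps,
                          mean_vehicle_len_m=5.0, safety_headway_s=2.0):
    """Microscopic congestion check in the spirit of MCDP.

    Local density is estimated from the number of distinct neighbours heard
    within the communication range; the mean free spacing per vehicle then gives
    the time headway at the current speed, which is compared with the safety
    limit t0. Returns (congested, time headway, suggested speed).
    """
    vehicles = neighbor_count + 1                        # include the ego vehicle
    density_veh_per_m = vehicles / (2.0 * comm_range_m)  # vehicles per metre of road
    mean_spacing_m = max(1.0 / density_veh_per_m - mean_vehicle_len_m, 0.0)
    headway_s = mean_spacing_m / max(own_speed_mps, 0.1)
    congested = headway_s < safety_headway_s             # c = 1 when t_h < t0
    # Driver assistance: suggest a speed at which the current spacing would
    # again satisfy the safety headway.
    suggested_speed_mps = mean_spacing_m / safety_headway_s
    return congested, headway_s, suggested_speed_mps
```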
Here V is the speed suggested to the driver. The suggested speed can be multicast intralane on the highway when c = 1. The complete pictorial representation of MCDP is given in Figure 3, which shows all the steps from information dissemination to congestion detection and lane alternation. Additionally, the proposed MCDP scheme can easily be extended to two-way road scenarios by separately counting the vehicles travelling in the two opposite directions on the basis of the proposed periodic beacon message exchange. Since the direction information is contained in the transportation control domain of the periodic beacon messages, the two directions can be handled by two independent procedures.
Experimental Study
Analytical Assessment.
Different Chinese road types are considered for the analytical assessment, and we calculate their congestion boundaries in different scenarios. The experiments are performed on a road segment of 1 km with one to three lanes. The traffic is generated by Monte Carlo simulation in Matlab R2015b [39]. To assess the performance, the proposed MCDP is compared with the Green-Shield car-following model [23,34], a well-known density estimation model. Table 3 summarizes all the assumed parameters for the analytical assessment, and the safety time headway is used as the performance metric. For the China city road and the China national highway, congestion begins at thresholds of 37 and 21 vehicles for MCDP, respectively, as shown in Figures 4(a) and 4(b), whereas the Green-Shield model's congestion thresholds begin only beyond 37 and 21 vehicles. Exceeding the congestion boundary causes the safety time to drop below the 2 s transportation limit, which in turn causes c = 1 (congestion on the road). As illustrated in Figures 4(a) and 4(b), the congestion threshold of the proposed MCDP on a single-lane highway is lower than that of the macroscopic Green-Shield model, which implies that MCDP is more sensitive to congestion and provides a safer traveling environment. Due to its macroscopic nature, the Green-Shield model requires additional parameters (the exact free-flow speed and jam density), which causes communication and computational overhead. Secondly, the Green-Shield model relies on expensive road monitoring devices, which can only cover specific road segments. The congestion analysis of the China express road and China expressway with multilane scenarios is shown in Figures 4(c) and 4(d). The congestion boundary of the proposed MCDP over the two-lane China express road is 30 vehicles, versus 18 vehicles for the Green-Shield model, as shown in Figure 4(c). On a two-lane highway, even at the high speed limit, the proposed MCDP model is able to accommodate a larger number of vehicles, which is desirable in order to fully utilize both lanes. Hence, on multilane highways, MCDP outperforms the reference Green-Shield model in terms of utilization of all available lanes. Similar observations can be made in Figure 4(d) for the three-lane China expressway, where the congestion threshold of MCDP is considerably higher than that of Green-Shield. In short, the numerical results indicate that the proposed MCDP protocol provides a promising approach to detect traffic congestion on a given highway and to assist the driver.
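The Matlab Monte Carlo code is not reproduced in the paper; the following Python sketch shows, under stated assumptions, how a congestion boundary (the smallest vehicle count per kilometre at which the mean headway drops below the 2 s safety limit) can be estimated. The road length, speed, and car length are illustrative values, not the actual Table 3 settings.

```python
import random

def mean_headway(n_vehicles, road_m=1000.0, speed_mps=16.7, car_len_m=5.0):
    """Place n vehicles at random on a road and return the mean time headway."""
    pos = sorted(random.uniform(0, road_m) for _ in range(n_vehicles))
    gaps = [b - a - car_len_m for a, b in zip(pos, pos[1:])]
    gaps = [max(g, 0.0) for g in gaps]      # overlapping placements count as zero gap
    return sum(g / speed_mps for g in gaps) / max(len(gaps), 1)

def congestion_boundary(t0=2.0, **kw):
    """Smallest vehicle count for which the average headway falls below t0."""
    for n in range(2, 200):
        avg = sum(mean_headway(n, **kw) for _ in range(200)) / 200
        if avg < t0:
            return n
    return None

print("estimated congestion boundary:", congestion_boundary())
```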
Simulation Analysis.
In this subsection, the efficiency of MCDP is analyzed under different scenarios. Two well-known simulators, NS2 (release 3.25) and SUMO (release 0.25.0), are used for the assessment. More specifically, NS2 is used for the performance evaluation of MCDP, while SUMO 0.25.0 is used to generate the VANET scenarios. Two different types of scenarios (busy congested road and freeway) with varying numbers of connections, transmission times, and simulation times are generated. To precisely simulate such a traffic monitoring system, a federated framework (on Ubuntu 14 LTS) is required, which combines the two simulators through generic traffic control interfaces.
Traffic Simulation Setup.
Here SUMO is used for road traffic generation. Firstly, we designed the road traffic scenario with SUMO. Roads of lengths 1 km and 7 km with two lanes are considered as input for the network simulation. The maximum speed was set to 40 km/h for the busy road and 50 km/h for the freeway scenario, respectively. During the simulation, the red lights are turned ON to monitor all the traffic inside the fixed length of the road. The main traffic simulation parameters are summarized in Table 4.
Network Simulation Setup.
After the generation of the mobility traces, we set up the network simulation in the NS2 framework; the network simulation parameters are given in Table 5. First, the busy road scenario was analyzed with MCDP by varying the transmission range and simulation time. Secondly, the freeway scenario was analyzed for the same purpose with the same parameters. We also examined the performance of MCDP, DSR, and AOMDV over IEEE 802.11p and IEEE 802.11ac in terms of throughput, packet delivery ratio (PDR), and end-to-end delay.
Let us briefly summarize the performance metrics utilized in the simulation analysis.
(i) Congestion Level: The congestion level is measured as the percentage of additional travel time compared to normal traffic (the free-flow situation). (ii) Estimated Speed: Congestion is a function of the reduction in speed and vice versa. Therefore, an estimated speed that is directly related to the congestion level can be used to assess traffic congestion.
(iii) Packet Delivery Ratio (PDR): PDR is the ratio of data packets reliably delivered to the destination, i.e., PDR = N_r / N_t, where N_r is the total number of received data packets and N_t stands for the total number of transmitted data packets (see the sketch after this list).
(iv) Throughput: Throughput assesses the performance of a network as the average rate of successful packet delivery to the destination. (v) End-to-End Delay: End-to-end delay is the average time needed for each packet to be received by the last node in the network.
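As a simple reference, these three metrics can be computed from a packet trace as follows; the trace format (send/receive timestamps and packet sizes in Python dictionaries) is a hypothetical one chosen for the example, not the NS2 trace format.

```python
def evaluate_trace(sent, received):
    """Compute PDR, throughput, and average end-to-end delay from packet records.

    sent:     {packet_id: send_time_s}
    received: {packet_id: (recv_time_s, size_bytes)}
    """
    pdr = len(received) / len(sent) if sent else 0.0
    delays = [t_rx - sent[pid] for pid, (t_rx, _) in received.items() if pid in sent]
    avg_delay = sum(delays) / len(delays) if delays else float("nan")
    if received:
        t_first = min(t for t, _ in received.values())
        t_last = max(t for t, _ in received.values())
        total_bits = 8 * sum(size for _, size in received.values())
        throughput_bps = total_bits / max(t_last - t_first, 1e-9)
    else:
        throughput_bps = 0.0
    return pdr, throughput_bps, avg_delay

sent = {1: 0.00, 2: 0.01, 3: 0.02}
received = {1: (0.05, 512), 3: (0.08, 512)}
print(evaluate_trace(sent, received))  # PDR = 2/3, throughput over the receive window
```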
Congestion Detection Analysis.
The proposed protocol assumes that each vehicle performs local traffic congestion detection by analyzing the received beacon messages with detailed information from other vehicles, while each vehicle shares information about its position, direction, speed, and so on. To see the overall performance of the MCDP protocol, the average temporal congestion level and the average estimated speed of all vehicles under two different traffic scenarios are shown in Figures 5 and 6, respectively. It can be observed that when the safety time drops below the safety time threshold t_0, the congestion level starts to increase and the estimated speed starts to decrease. The results show that when the red lights are turned ON, congestion is at its peak level and the estimated speed tends to be negligible. The average speed suggested to drivers at varying traffic densities is shown in Figures 5 and 6 as well. In all traffic and simulation setups, the congestion level grows as the safety time shrinks, while the estimated speed obeys S_est ∝ t_safe (16). In [40], the road is considered congested if the vehicle travel time exceeds the normal travel time at free flow; that scheme requires each section of the road to be under surveillance at all times, with all vehicles reporting their traversal time over each section to a centralized entity. In MCDP, congestion detection is realized without such an infrastructure. It should be stressed that the speed of a vehicle is directly affected by the level of traffic congestion in its surroundings, so it is reasonable to use it for traffic congestion detection and speed estimation. Additionally, MCDP quantifies the level of congestion locally from the information in the beacon messages, which is particularly important for delay-sensitive applications.
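A minimal sketch of the local density estimate implied by this beaconing scheme is given below; counting distinct sender IDs within the DSRC range and dividing by the covered road length is the obvious reading of the text, but the exact estimator used by MCDP is not spelled out, so treat this as an assumption.

```python
def local_density(beacons, own_pos_m, own_lane, dsrc_range_m=300.0):
    """Estimate same-lane vehicle density (veh/km) from received beacons.

    beacons: iterable of (vehicle_id, position_m, lane_id) tuples.
    """
    neighbors = {vid for vid, pos, lane in beacons
                 if lane == own_lane and abs(pos - own_pos_m) <= dsrc_range_m}
    covered_km = 2 * dsrc_range_m / 1000.0   # range on both sides of the vehicle
    return len(neighbors) / covered_km

beacons = [(7, 120.0, 1), (9, 410.0, 1), (9, 412.0, 1), (4, 150.0, 2)]
print(local_density(beacons, own_pos_m=200.0, own_lane=1))  # 2 distinct neighbors
```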
We illustrate the impact of the transmission range (TR) on MCDP in Figure 7. Three different TRs of 300 m, 600 m, and 1000 m are considered, and the congestion detection and speed estimation are confined to the vehicle's visibility based on its transmission (DSRC) range. It can be noted that the detected congestion level decreases as the transmission range increases. As illustrated in Figure 7(a), with a 1000 m transmission range, a traffic congestion event may be missed by the average congestion level assessment. This suggests that a reasonable TR setup is important for reliable congestion detection in the MCDP framework: a very small TR may make the system overly sensitive to traffic variations on the road, while a very large TR may mask real traffic congestion. A reasonable TR setup realizes the proper tradeoff.
Communication Performance Analysis.
MCDP is a full-fledged protocol that is easily implementable and executable in any environment, but its performance also depends on how well routing is performed in the network. Here we focus on the achieved throughput, end-to-end delay, and PDR to assess the routing capability and efficiency of MCDP. The communication performance of the MCDP, DSR, and AOMDV protocols is shown in Figure 8 for different numbers of UDP connections over IEEE 802.11p and IEEE 802.11ac. From Figure 8 it can be concluded that the throughput of all routing protocols increases with the number of connections, and the achieved throughput of MCDP and AOMDV is almost the same for all connection counts; due to their multipath nature, their throughput is better than that of the DSR protocol. The end-to-end delay is illustrated in Figure 9 for UDP connections over IEEE 802.11p and IEEE 802.11ac. IEEE 802.11p outperforms IEEE 802.11ac in all cases, which complies with the design objective of IEEE 802.11p to support communication links between vehicles that may exist only for a short time. At the same time, the proposed MCDP protocol achieves comparably good end-to-end delay and throughput performance while providing the congestion detection and speed estimation features.
Finally, the PDR performance of MCDP, DSR, and AOMDV is illustrated in Figure 10 for different numbers of connections. The proposed MCDP protocol realizes PDR performance comparable to the AOMDV and DSR protocols across the tested connection counts. It can also be noted that when there is sufficient connectivity, successful packet delivery is almost guaranteed for all three protocols.
Conclusions
VANET technology plays an important role in transportation safety. The microscopic congestion detection protocol (MCDP) is an interesting application of VANET technology for identifying road congestion. At the same time, MCDP provides a new approach to estimating vehicle density and assisting drivers. In a word, MCDP is an inexpensive approach that integrates basic microscopic vehicle properties with external road safety parameters to accurately monitor road status. Moreover, MCDP is a quite simple protocol that works both on single-lane and on multilane highways. An appropriate speed suggestion to the driver at every moment is also a potential application of MCDP, which is left for future investigation.
Figure 1 :
Figure 1: The work flow illustration of the MCDP.
Figure 2 :
Figure 2: An illustrative scenario where the vehicles are randomly distributed along the road.
Figure 7 :
Figure 7: Temporal congestion detection and speed estimation using different transmission ranges, number of lanes = 2, car size = 5m, safety time threshold = 2 sec, and simulation time = 500 seconds.
Figure 8 :
Figure 8: Throughput comparison of MCDP, DSR, and AOMDV protocols over 802.11p and 802.11ac with different numbers of connections.
Figure 9 :
Figure 9: End-to-end delay comparison of MCDP, DSR, and AOMDV protocols over 802.11p and 802.11ac with different numbers of connections.
Figure 10 :
Figure 10: PDR comparison of MCDP, DSR, and AOMDV protocols over 802.11p and 802.11ac with different numbers of connections.
Table 1 :
Highway traffic monitoring protocols.
Table 2 :
Transportation control domain in beacon message.
Beacon messages are used to disseminate vehicle information (vehicle ID, position, speed, lane ID, session time, and vehicle length) to all one-hop neighbors. In this way, every vehicle can count its neighbors (vehicle density) within its DSRC communication range [37]. The details of the newly introduced transportation control domain for periodic beacon messages are given in Table 2. Each vehicle calculates the distinct neighbors in its own lane (Lane ID) and the headway/time gap between each other. (ii) Speed Assistance Message: This message is multicast to all intralane vehicles to control the moving speed. The proposed MCDP quantifies the congestion level of each vehicle: as soon as the time headway drops below a predefined threshold t_0 (for instance, t_0 = 2 s), MCDP starts calculating the congestion level and estimating the moving speed. (iii) Interlane Vehicle Density: This message is exchanged among different highway lanes to share the density level. Interlane density exchange is important for better utilization of the highway lanes.
Transport Congestion Detection. Let us consider a road of length L, which can be subdivided into the following n small road segments: L = {s_1, s_2, s_3, . . . , s_n}.
Table 3 :
Parameters utilized for MCDP assessment. The impact of vehicle density on the safety time headway is shown in Figure 4: vehicle density and safety time have an inverse relation. The congestion boundaries of the proposed MCDP over the China city road and the China national highway are shown in Figures 4(a) and 4(b).
Table 4 :
Road traffic generation parameters.
"Computer Science"
] |
A discussion on the anomalous threshold enhancement of
: The attractive interaction between J/ψ and ψ(3770) has to be strong enough if X(6900) is of the molecule type. We argue that, since ψ(3770) decays predominantly into a DD̄ pair, the interactions between J/ψ and ψ(3770) may be significantly enhanced owing to the three-point loop diagram. The enhancement originates from the anomalous threshold located at t ≃ -1.28 GeV², whose effect propagates into the s-channel partial wave amplitude in the vicinity of 6.9 GeV. This effect may be helpful in the formation of the X(6900) peak.
The peak observed by the LHCb Collaboration in the di-J/ψ invariant mass spectrum [1, 2], and later in the J/ψψ(3686) invariant mass spectrum [3], has stimulated many theoretical discussions (see for example Ref. [4] for an incomplete list of references). Moreover, X(6900) is close to the thresholds of several charmonium pairs, and the higher structure is close to further thresholds. Inspired by this, Ref. [5] studied the properties of the two structures by assuming couplings to several of these channels. For the S-wave coupling, the pole counting rule (PCR) [6], which has been applied to the studies of "XYZ" physics in Refs. [7-10], was employed to analyze the nature of the two structures. It was found that the di-J/ψ data alone are not sufficient to judge the intrinsic properties of the two states. It was also pointed out that X(6900) is unlikely a molecule of J/ψψ(3686) [5], a conclusion drawn before the discovery of Ref. [3]. More recently, Refs. [4,11,12] investigated the issue using a combined analysis of di-J/ψ and J/ψψ(3686) data and concluded that X(6900) cannot be a J/ψψ(3686) molecule.
Nevertheless, as already stressed in Ref. [4], even though X(6900) is very unlikely a molecule of J/ψψ(3686), this does not mean that it has to be an "elementary state" (i.e., a compact tetraquark state). It was pointed out that X(6900) may be a molecular state composed of other particles, such as J/ψψ(3770), which form thresholds closer to X(6900), if the channel coupling is sufficiently large 1).
This note discusses a possible mechanism for the enhancement of the J/ψψ(3770) channel coupling. The DD̄ component inside ψ(3770) may play an important role, so far ignored in the literature, in explaining the resonant peak through the anomalous threshold emerging from the triangle diagram generated by the D(D̄) loop, as depicted in Fig. 1.
Noticing that ψ(3770) couples dominantly to DD̄, we start from the Feynman diagram depicted in Fig. 1 by assuming that it contributes to elastic J/ψψ(3770) scattering near threshold 2). Assuming an interaction Lagrangian 3) and performing the momentum integration, the amplitude depicted by Fig. 1 is obtained. On the right-hand side of Eq. (3), only the leading term will be considered, since the rest will be absorbed by the contact interactions to be introduced later. M is the mass of ψ(3770), and m is the mass of the D meson. The parameter g is the coupling strength of the three-point vertex, and the four-point vertex carries its own coupling strength. The parameter g can be determined from the decay process ψ(3770) → DD̄, in which the norm of the three-dimensional momentum of D or D̄ in the final state enters; the PDG value [16] then determines g 1). The four-point coupling is unknown and is left as a free parameter. The amplitude, Eq. (4), contains a rather complicated singularity structure, especially the well-known anomalous threshold, which was discovered by Mandelstam, who used it to explain the looseness of the deuteron wave function [18]. Considering the masses of the D and ψ(3770) mesons, one obtains the location of the anomalous threshold (for the second loop considered, -0.98 GeV²). Numerically, the loop function is plotted in Fig. 2(a), where one clearly sees the anomalous threshold beside the normal one. Note that if M is below a critical value, the anomalous branch point is located below the physical threshold, but on the second sheet. It touches the physical threshold and moves onto the physical sheet as M increases. With a further increase in M, the anomalous threshold moves towards the left on the real axis, passes the origin, and finally reaches the physical value, i.e., -1.28 GeV². The situation is depicted in Fig. 2(b). Note that the anomalous threshold here is negative, contrary to what occurs for the deuteron, because the latter is a bound state with a normalizable wave function, whereas ψ(3770) is an unstable resonance.
To proceed, one needs to make the partial wave projection of the amplitude, integrating the t-channel amplitude over the scattering angle; in the resulting expression the channel momentum squared, the ψ(3770) mass, and the corresponding helicity configurations enter. The key observation is that the integration interval in Eq. (7) covers the anomalous threshold once the energy exceeds a value close to 6.9 GeV. In other words, the partial wave amplitude will be enhanced in the vicinity of the X(6900) peak by the anomalous threshold enhancement in the t channel, as can be observed in Fig. 3.
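Numerically, a partial-wave projection of this kind is a one-dimensional angular integral. The sketch below uses Gauss-Legendre quadrature on a toy t-channel amplitude; the toy pole position and the kinematic mapping t(z) are placeholders, since the paper's explicit expressions for the momenta and helicity structure are not reproduced here.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import eval_legendre

def partial_wave(amp, s, J, q2, n=64):
    """f_J(s) = (1/2) * Integral_{-1}^{1} dz P_J(z) amp(s, t(z)),
    with t(z) = -2*q2*(1 - z) for equal-mass elastic kinematics (assumed)."""
    z, w = leggauss(n)
    t = -2.0 * q2 * (1.0 - z)
    return 0.5 * np.sum(w * eval_legendre(J, z) * amp(s, t))

# Toy t-channel amplitude with a left-hand singularity at t_A < 0,
# mimicking the anomalous-threshold enhancement discussed in the text.
t_A = -1.28  # GeV^2, the anomalous threshold quoted in the text
amp = lambda s, t: 1.0 / (t - t_A)

for sqrt_s in (6.8, 6.9, 7.0):
    s = sqrt_s**2
    q2 = 0.05 * (sqrt_s - 6.77)   # crude channel momentum squared (assumption)
    print(sqrt_s, partial_wave(amp, s, J=0, q2=q2))
```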
Based on the above observation, it is suggested that the X(6900) peak may at least partly be explained by the anomalous threshold generated by the triangle diagram depicted in Fig. 1. 1) In Ref. [17], g is estimated to be larger.
Furthermore, to obtain the L = 0 (s-wave) amplitudes, we need the relation between the s-wave and helicity amplitudes (see Refs. [11,12] for further discussions). In practice, it is found that the anomalous enhancement has a more prominent effect on the J = 0 amplitude than on the J = 2 amplitude. Furthermore, to estimate the triangle diagram contribution, a combined fit to the di-J/ψ and J/ψψ(3686) data is made. A coupled-channel K-matrix unitarization scheme is employed, including the J/ψJ/ψ, J/ψψ(3686), and J/ψψ(3770) channels. Tree-level amplitudes from a contact interaction Lagrangian [19] are also taken into account. After the same partial wave projection process of Eqs. (7)-(9), the coupled-channel partial wave amplitudes at tree level are determined. Taking into account K-matrix unitarization and final state interactions, the unitarized partial wave amplitude is obtained, where the K-matrix entries are in general real polynomial functions, set to constants here. In particular, for the J/ψψ(3770) channel the triangle diagram needs to be taken into consideration; its contribution comes from Fig. 1 1). For the other channels, only the contact terms enter. Further, to fit the experimental data, phase-space and normalization factors are introduced, in which the modulus of the three-momentum of the corresponding channel appears; according to the partial wave convention, the di-J/ψ and J/ψψ(3686) cases carry an overall symmetry factor [11,12]. The fit is overparameterized since there are many parameters. One solution is shown in Fig. 4, and the fit parameters are listed in Table 1 for illustration. In this fit, some couplings are set to zero simply because they are not directly related to the J/ψψ(3770) channel and the fit can be performed reasonably well without them. The error band in Fig. 4 is rather large; this is due to the two normalization factors, which carry rather large error bars.
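The unitarization step can be illustrated with a generic coupled-channel K-matrix formula, T = K [1 − i ρ(s) K]^{-1}, with a diagonal phase-space matrix ρ(s). The constants below are arbitrary, and the paper's precise parameterization (including how the triangle contribution enters) is not reproduced; this is only a structural sketch.

```python
import numpy as np

def t_matrix(K, rho):
    """Unitarized amplitude T = K (1 - i*rho*K)^(-1) for one energy point.

    K:   real symmetric n x n K-matrix.
    rho: length-n array of channel phase-space factors (0 below threshold).
    """
    n = K.shape[0]
    return K @ np.linalg.inv(np.eye(n) - 1j * np.diag(rho) @ K)

def phase_space(s, m1, m2):
    """Two-body phase space 2q/sqrt(s), zero below threshold."""
    q2 = (s - (m1 + m2) ** 2) * (s - (m1 - m2) ** 2) / (4.0 * s)
    return 2.0 * np.sqrt(max(q2, 0.0)) / np.sqrt(s)

channels = [(3.097, 3.097), (3.097, 3.686), (3.097, 3.773)]  # charmonium pairs (GeV)
K = np.array([[1.0, 0.4, 0.6],
              [0.4, 0.8, 0.5],
              [0.6, 0.5, 1.2]])     # illustrative constant K-matrix

for sqrt_s in (6.6, 6.9, 7.2):
    s = sqrt_s**2
    rho = np.array([phase_space(s, *ch) for ch in channels])
    T = t_matrix(K, rho)
    print(sqrt_s, abs(T[0, 0]))    # elastic di-J/psi amplitude modulus
```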
During the fit, many solutions exist. Nevertheless, it was found that the triangle diagram contributions are all small. The reason for this behavior is unclear; one possible explanation is that the peak position generated by the anomalous threshold contribution, as shown in Fig. 3, is approximately 60-80 MeV above the X(6900) peak, which makes the fit difficult 1). One possible way to resolve the problem is to adopt another parameterization in which the background contributions are more flexible, so that the interference between the background and the anomalous enhancement can shift the peak position by a few tens of MeV. Another possible mechanism for the suppression of the triangle diagram is that the ψ(3770)DD̄ vertex is of p-wave form and hence may provide an additional suppression factor from (non-relativistic) power counting [20]. We defer this investigation to future studies. 1) The K matrix here is no longer unitary once the triangle diagram is included. However, the violation of unitarity is not a serious issue here. First of all, the scattering itself is not exactly unitary; only when all intermediate light-hadron states and, for example, DD̄ intermediate states are neglected may it be approximately unitary. Second, the essence of the K-matrix approach is not only maintaining unitarity but, more importantly, summing up the geometric series of on-shell amplitudes with the most important (nearby) singularities (one way to understand this is the Dyson resummation of propagators). In this sense, the violation of "unitarity" is not really worrisome.
Fig. 2 . Fig. 3 .
Fig. 2. (color online) Left: triangle diagram contribution (the y axis label is arbitrary). Right: the trajectory of the anomalous threshold with respect to the variation of M.
Table 1 .
Fit parameters of Fig. 4. The parameters are defined in Eq. (10). The errors are statistical only.
"Physics"
] |
Thermal ground state and nonthermal probes
The Euclidean formulation of SU(2) Yang-Mills thermodynamics admits periodic, (anti)selfdual solutions to the fundamental, classical equation of motion which possess one unit of topological charge: (anti)calorons. A spatial coarse graining over the central region in a pair of such localised field configurations with trivial holonomy generates an inert adjoint scalar field $\phi$, effectively describing the pure quantum part of the thermal ground state in the induced quantum field theory. Here we show for the limit of zero holonomy how (anti)calorons associate a temperature independent electric permittivity and magnetic permeability to the thermal ground state of SU(2)$_{\tiny\mbox{CMB}}$, the Yang-Mills theory conjectured to underlie photon propagation.
Introduction
Quantum Mechanics is a highly efficient framework to describe the subatomic world [1,2,3], including coherence phenomena that extend to macroscopic length and time scales [4,5,6]. The key quantity describing deviations from classical behavior is Planck's quantum of action, ħ = h/(2π) = 6.58 × 10^{-16} eV s, which determines the fundamental interaction between charged matter and the electromagnetic field and thus also the shape of blackbody spectra, by relating frequency ω and wave vector k to particle-like energy E = ħω and momentum p = ħk and by appeal to Bose-Einstein statistics. In Quantum Mechanics, ħ sets the strength of multiplicative noncommutativity for a pair of canonically conjugate variables, such as position and momentum, implying the respective uncertainty relations.
Although ħ is generally accepted as a universal constant of nature, and in spite of the fact that we are able to efficiently compute quantum mechanical amplitudes and quantum statistical averages for a vast variety of processes in particle collisions, atoms and molecules, extended condensed-matter systems, and astrophysical objects to match experiment and observation very well, one should remain curious concerning the principal mechanism that causes the emergence of a universal quantum of action. In [7,8] it was argued that the irreconcilability of classical Euclidean and Minkowskian time evolution, as expressed by a time-periodic SU(2) (anti)selfdual gauge field configuration, a (anti)caloron, whose action is associated with one unit of winding about a central spacetime point, gives rise to indeterminism in the process it mediates. That each unit of action assigned to (anti)calorons of radius ρ = |φ|^{-1}, which dominate the emergence of the thermal ground state, equals ħ follows from the value of the coupling e in the induced, effective, thermal quantum field theory [10] of the deconfining phase in SU(2) Yang-Mills thermodynamics. The coupling e, in turn, obeys an evolution in temperature (flat almost everywhere) which represents the validity of Legendre transformations in the effective ensemble where the thermal ground state co-exists with massive (adjoint Higgs mechanism) and massless (intact U(1)) thermal fluctuations. The thermal ground state thus is a spatially homogeneous ensemble of quantum fluctuations carried by (anti)caloron centers. At the same time, as we shall see, this state provides electric and magnetic dipole densities supporting the propagation of electromagnetic waves in an SU(2) Yang-Mills theory of scale Λ ∼ 10^{-4} eV, SU(2)_CMB [9].
In the present work, we establish this link between quantised action, represented by φ, and classical wave propagation, enabled by the vacuum parameters ǫ_0 and µ_0, in terms of the central and peripheral structure of a trivial-holonomy (anti)caloron, respectively. That is, by allowing a fictitious temperature T to represent the energy density of an electromagnetic wave (a nonthermal, external probe) via the thermal ground state through which it propagates, we ask what this implies for ǫ_0 and µ_0. As a result, ǫ_0 and µ_0 neither depend on T nor, as we shall argue, on any singled-out inertial frame. But this means no more and no less than the revival of the luminiferous aether, albeit now in a Poincaré invariant way. This paper is organised as follows. In the next section we shortly discuss key features of the effective theory for the deconfining phase of SU(2) Yang-Mills thermodynamics. Sec. 3 contains a reminder of the principles for interpreting a Euclidean field configuration in terms of Minkowskian observables. In a next step, general facts are reviewed on Euclidean, periodic, (anti)selfdual field configurations of charge modulus unity, concerning the central locus of action, their holonomy, and their behaviour under semiclassical deformation. Finally, we review the anatomy of the zero-holonomy Harrington-Shepard (HS) caloron in detail, pointing out its staticity for spatial distances from the center that exceed the inverse temperature, and discuss which static charge configuration it resembles in two distinct distance regimes. In Sec. 4 we briefly review the postulate that an SU(2) Yang-Mills theory of scale Λ ∼ 10^{-4} eV, SU(2)_CMB, describes photon propagation [9]. Subsequently, the large-distance regime in a HS (anti)caloron is considered in order to deduce an expression for ǫ_0, based on knowledge of the electric dipole moment provided by a (anti)caloron of radius ρ = |φ|^{-1}, the size of the spatial coarse-graining volume V_cg, and the fact that the energy density of the probe must match that of the thermal ground state. As a result, ǫ_0 and µ_0 turn out to be T independent, the former representing an electric charge, large on the scale of the electron charge, of the fictitious constituent monopoles giving rise to the associated dipole density. Zooming in to smaller spatial distances from the center, the HS (anti)caloron exhibits isolated (anti)selfdual monopoles. For them to turn into dipoles, shaking by the probe fields is required. We then show that the definitions of ǫ_0 and µ_0, which were successfully applied to the large-distance regime, become meaningless. Finally, our results are discussed. Sec. 5 summarises the paper and discusses the universality of ǫ_0 and µ_0 for the entire electromagnetic spectrum.
Sketch of deconfining SU(2) Yang-Mills thermodynamics
For deconfining SU(2) Yang-Mills thermodynamics, a spatial coarse graining over the (anti)selfdual, that is, the nonpropagating [11], topological sector with charge modulus |Q| = 1 can be performed, see [9] and references therein, to yield an inert adjoint scalar field φ. Its modulus |φ| sets the maximal possible resolution in the effective theory, whose ground-state energy density essentially is given as tr Λ⁶ φ^{-2} = 4πΛ³T (Λ a constant of integration of dimension mass), and whose propagating sector is, in a totally fixed, physical gauge (unitary-Coulomb), characterised by a massless mode (γ, the unbroken U(1) subgroup of SU(2)) and two thermal quasiparticle modes of equal mass m = 2e|φ| (V±, mass induced by the adjoint Higgs mechanism) which propagate thermally, that is, on-shell only. Interactions within this propagating sector are mediated by isolated (anti)calorons whose action is argued to be ħ [7,8]. Judged in terms of inclusive quantities, such as radiative corrections to the one-loop pressure or the energy density of blackbody radiation, these interactions are feeble [9], and their expansion into 1-PI irreducible bubble diagrams is conjectured to terminate at a finite number of loops [12]. However, spectrally seen, the effects of V± interacting with γ lead to severe consequences at low frequencies and at temperatures comparable to the critical temperature T_c, where screened (anti)monopoles, released by (anti)caloron dissociation upon large-holonomy deformations [13], rapidly become massless and thus start to condense.
Caloron structure
Euclidean field theory and interpretable quantities
Nontrivial solutions to an elliptic differential equation, such as the Euclidean Yang-Mills equation D_µ F_µν = 0, no longer are solutions of the corresponding hyperbolic equation upon analytic continuation x_4 ≡ τ → ix_0 (Wick rotation). To endow quantities computed on classical field configurations on 4D Euclidean spacetime in SU(2) Yang-Mills thermodynamics with meaning in terms of observables in Minkowskian spacetime, we thus must insist that these quantities are not affected by the Wick rotation. That is, to assign a real-world interpretation to a Euclidean quantity, it needs to (i) either be stationary (not depend on τ) or (ii) be associated with an instant in Euclidean spacetime because, by exploiting the time translational invariance of the Yang-Mills action, this instant can be picked as (τ = 0, x) in Euclidean spacetime.
Review of general facts
If not stated otherwise we work in supernatural units, ħ = c = k_B = 1, where ħ is the reduced quantum of action, c the speed of light in vacuum, and k_B Boltzmann's constant. A trivial-holonomy caloron of topological charge unity on the cylinder S¹ × R³, where S¹ is the circle of circumference β ≡ 1/T (T temperature) describing the compactified Euclidean time dimension (0 ≤ τ ≤ β), is constructed by an appropriate superposition of charge-one singular-gauge instanton prepotentials [14], with the temporal coordinates of their instanton centers stacked equidistantly along the infinitely extended Euclidean time dimension [15] to enforce temporal periodicity. For gauge group SU(2) this Harrington-Shepard (HS) caloron is given (with antihermitian generators t_a (a = 1, 2, 3), tr t_a t_b = −(1/2) δ_ab) by A_µ = η̄^a_µν t_a ∂_ν ln Π(τ, r), (1) where r ≡ |x|, η̄^a_µν denotes the antiselfdual 't Hooft symbol, η̄^a_µν = ǫ_aµν − δ_aµ δ_ν4 + δ_aν δ_µ4, and Π(τ, r) = 1 + (πρ²/(βr)) sinh(2πr/β)/(cosh(2πr/β) − cos(2πτ/β)). (2) Here ρ is the scale parameter of the singular-gauge instanton seeding the "mirror sum" within S¹ × R³, leading to Eq. (2). The associated antiselfdual field configuration is obtained by replacing η̄^a_µν with η^a_µν (the selfdual 't Hooft symbol) in Eq. (1). The configuration (1) is singular at τ = r = 0. This point is the locus of the configuration's topological charge Q = 1 in the sense that the integral of the topological-charge density over a ball of radius δ, centered there, yields unity independently of δ ≥ 0. Selfduality implies that the action of the HS caloron is given as S_C = 8π²/g², (3) where g is the coupling constant of the Euclidean (classical) theory. Eq. (3) holds in the limit δ → 0, meaning that S_C can be attributed to the singularity of the HS solution at τ = r = 0 and thus has a Minkowskian interpretation, see Sec. 3.1.
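The staticity of the HS solution away from its center can be checked directly from the function Π. The snippet below evaluates Π(τ, r) on the assumption that Eq. (2) has the standard Harrington-Shepard form quoted above; for r a few times larger than β the τ-dependence visibly drops out.

```python
import math

def Pi(tau, r, beta=1.0, rho=0.5):
    """Harrington-Shepard caloron prefunction (standard form, assumed here):
    Pi = 1 + (pi*rho^2/(beta*r)) * sinh(2*pi*r/beta)
             / (cosh(2*pi*r/beta) - cos(2*pi*tau/beta))."""
    a = 2.0 * math.pi * r / beta
    return 1.0 + (math.pi * rho**2 / (beta * r)) * math.sinh(a) / (
        math.cosh(a) - math.cos(2.0 * math.pi * tau / beta))

# tau-dependence is strong near the center and dies off for r >> beta:
for r in (0.1, 0.5, 2.0):
    values = [Pi(tau, r) for tau in (0.0, 0.25, 0.5)]
    spread = max(values) - min(values)
    print(f"r = {r}: Pi spread over tau = {spread:.3e}")
```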
Based on [10] and on the fact that the thermal ground state emerges from |Q| = 1 calorons/anticalorons, whose scale parameter ρ essentially coincides with the inverse of the maximal resolution, |φ|^{-1}, in the effective theory for deconfining SU(2) Yang-Mills thermodynamics, it was argued in [7], see also [8], that S_C (as well as the action S_A of a HS anticaloron with ρ ∼ |φ|^{-1}) equals ħ if the effective theory is to be interpreted as a local quantum field theory. The HS caloron is the trivial-holonomy limit of the selfdual Lee-Lu-Kraan-van-Baal (LLKvB) configuration with Q = 1 and total magnetic charge zero [16,17], which is constructed via the Nahm transformation of selfdual fields on the Euclidean four-torus [18]. For nontrivial holonomy (A_4(r → ∞) = iut_3 with 0 < u < 2π/β) the LLKvB solution exhibits a pair of a magnetic monopole (m) and its antimonopole (a) w.r.t. the Abelian subgroup U(1) ⊂ SU(2) left unbroken by A_4(r → ∞) ≠ 0. Their masses are m_m = 4πu and m_a = 4π(2π/β − u), such that in the trivial-holonomy limits u → 0, 2π/β one of these magnetic constituents becomes massless and thus completely spatially delocalised. For nontrivial holonomy, where both monopole and antimonopole are of finite mass, localised, and separated by a finite spatial distance, they can be considered static thanks to an exact cancellation of the attraction, mediated by their U(1) magnetic fields, and the repulsion due to the field A_4. As was shown in [13] by investigating the effective action of a LLKvB caloron (integrating out Gaussian fluctuations), this balance is distorted, leading to monopole-antimonopole attraction for small holonomy and to repulsion in the complementary range of (large) holonomy. Because there is no localised counterpart to a monopole or antimonopole in the trivial-holonomy limit, HS calorons must be considered stable under Gaussian fluctuations, in contrast to the case of nontrivial holonomy, which is unstable. The latter statement is also mirrored by the fact that a nontrivial, static holonomy leads to zero quantum weight in the infinite-volume limit (which is realistic at high temperatures [9], where the radius of the spatial coarse-graining volume for a single caloron diverges as |φ|^{-1}, Λ being the Yang-Mills scale). As a consequence, nontrivial holonomy can only occur transiently in configurations which do not saturate the (anti)selfduality bounds on the Yang-Mills action. Again, this is equivalent to stating the instability of the LLKvB solution. It can be shown [9] that the small-holonomy case of monopole-antimonopole attraction by far dominates the situation of monopole-antimonopole repulsion when a caloron dissociates into its constituents.
The spatial coarse graining over (anti)selfdual calorons of charge modulus |Q| = 1, which do not propagate (due to (anti)selfduality their energy-momentum tensor vanishes identically [11]), yields a highly accurate a priori estimate of the deconfining thermal ground state in terms of an inert, adjoint scalar field φ and a pure-gauge configuration a^gs_µ; it is performed over isolated and stable HS solutions [9]. The coarse-grained field a^gs_µ represents, a posteriori, the effects of small holonomy changes due to (anti)caloron overlap and interaction.
Anatomy of a relevant Harrington-Shepard caloron
Let us now review [19] how the field strength of a HS caloron depends on the distance from its center at τ = r = 0. For |x| ≪ β (|x| ≡ √(τ² + r²)), the function Π reduces to a form governed by the parameter s defined in Eq. (4). From Eqs. (6) and (1) one then obtains, for |x| ≪ β, an instanton-like expression for the field strength, where I_αµ ≡ δ_αµ − 2 x_α x_µ/x². At small four-dimensional distances from the caloron center the field strength thus behaves like that of a singular-gauge instanton with a renormalised scale parameter ρ′² = ρ²/(1 + (π/3)(s/β)). Therefore, the field strength of the HS solution exhibits a dependence on τ and as such has no Minkowskian interpretation, see Sec. 3.1. What can be inferred for a Minkowskian spacetime, though, is that the action of the configuration is attributable to winding of the caloron around the group manifold S³ as induced by a spacetime point, the instanton center. This is because, in the sense of Eq. (3), an instant has no analytic continuation or Wick rotation. (The 4D action or topological-charge density of the caloron is regular at τ = r = 0, does depend on Euclidean spacetime in the vicinity of this point, and thus has no Minkowskian interpretation.) For r ≫ β the selfdual electric and magnetic fields E^a_i and B^a_i are static, Eq. (8), and describe a static non-Abelian monopole of unit electric and magnetic charges (a dyon), Eq. (9). For r ≫ s ≫ β, Eq. (8) reduces to the field strength of a static, selfdual non-Abelian dipole, Eq. (10), with dipole moment p^a_i = s δ^a_i. Interestingly, the same distance s, which sets the separation between the charge centers of an Abelian magnetic monopole and its antimonopole in a nontrivial-holonomy caloron, prescribes here, for the case of trivial holonomy, how small r needs to be in order to reduce the non-Abelian dipole of Eq. (10) to the non-Abelian monopole constituent of Eq. (9). For a HS anticaloron one simply replaces E^a_i = B^a_i by E^a_i = −B^a_i in Eqs. (8), (9), and (10). Finally, let us remark that the condition s ≫ β, which is required for Eqs. (9) and (10) to be valid, is always satisfied for the caloron scale ρ ∼ |φ|^{-1} relevant for the build-up of the thermal ground state in the deconfining phase of SU(2) Yang-Mills thermodynamics [9]. Namely, one has s/β = λ³/(4π), where λ ≡ 2πT/Λ ≥ λ_c = 13.87; the resulting scale hierarchy is verified numerically in the sketch below.
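A quick numerical check of this hierarchy is given below. The sketch uses |φ|^{-1} = λ^{3/2}β/(2π) and s = π|φ|^{-2}/β as written in the text (so that s/β = λ³/(4π)); the SI conversion uses ħc to express β in metres, and all constants are standard.

```python
import math

LAMBDA_EV = 1.0638e-4      # Yang-Mills scale of SU(2)_CMB (eV), from the text
HBARC_EV_M = 1.9733e-7     # hbar*c in eV*m
K_B_EV_PER_K = 8.6173e-5   # Boltzmann constant in eV/K

def scales(T_kelvin):
    T_ev = K_B_EV_PER_K * T_kelvin
    lam = 2.0 * math.pi * T_ev / LAMBDA_EV       # lambda = 2*pi*T/Lambda
    beta_m = HBARC_EV_M / T_ev                   # inverse temperature as a length
    phi_inv_m = lam**1.5 * beta_m / (2.0 * math.pi)
    s_m = lam**3 * beta_m / (4.0 * math.pi)      # s = pi*|phi|^-2 / beta
    return lam, beta_m, phi_inv_m, s_m

lam, beta_m, phi_inv_m, s_m = scales(2.725)      # CMB baseline temperature
print(f"lambda = {lam:.2f} (critical value 13.87)")
print(f"beta = {beta_m:.3e} m, |phi|^-1 = {phi_inv_m:.3e} m, s = {s_m:.3f} m")
print(f"s / beta = {s_m / beta_m:.0f}  (>> 1, confirming the hierarchy)")
```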
Thermal ground state as induced by a probe
The postulate that photon propagation should be described by an SU(2) rather than a U(1) gauge principle was put forward in [20] and has undergone various levels of investigation ever since, see [9,21,22]. As a result, the associated Yang-Mills scale Λ ∼ 1.0638 × 10^{-4} eV is fixed by low-frequency observations of the Cosmic Microwave Background (CMB) [23] such that the critical temperature for the deconfining-preconfining phase transition corresponds to the CMB's present baseline temperature T_0 = 2.725 K [24]. This prompted the name SU(2)_CMB. In the following, we investigate in what sense the vacuum parameters of classical electrodynamics, namely the electric permittivity ǫ_0 and the magnetic permeability µ_0, can be reduced to the physics of the static, non-Abelian, (anti)selfdual monopole and dipole configurations represented by HS (anti)calorons in the regimes β ≪ r ≪ s and r ≫ s ≫ β, respectively, see Sec. 3.3. To do this, the concept of the thermal ground state, together with information on how it is obtained [9], as well as the results of Sec. 3.3 [19], are invoked.
Preexisting dipole densities
Let us discuss the case r ≫ s. In order not to affect spatial homogeneity on scales comparable to or smaller than s, the electromagnetic field, which propagates through the deconfining thermal ground state in the absence of any explicit electric charges, is considered a plane wave of wavelength l much larger than s. Such a field effectively sees a density of selfdual dipoles, see Eq. (10). Because these are given by p^a_i = s δ^a_i, their dipole moments align along the direction of the exciting electric or magnetic field both in space and in the SU(2) algebra su(2). Note that at this stage the definition of what is to be viewed as an Abelian direction in su(2) is a global gauge convention, such that all spatial directions of the dipole moment p^a_i are a priori thinkable. That is, dynamical Abelian projection of the non-Abelian situation of Eq. (10) is owed to the Abelian and dipole-aligning nature of the exciting, massless field [9]. Modulo global gauge rotations, this field exists because of the adjoint Higgs mechanism invoked by the inert field φ.
Per spatial coarse-graining volume V_cg of radius |φ|^{-1} = ρ, the center of one selfdual HS caloron and the center of one antiselfdual HS anticaloron reside [9]. Note the large hierarchy between s (the minimal spatial distance to the center of a (anti)caloron which allows one to identify the static, (anti)selfdual dipole) and the radius |φ|^{-1} of the sphere defining V_cg. If the exciting field is electric, it sees twice the electric dipole p^a_i (the magnetic dipoles of caloron and anticaloron cancel); if it is magnetic, it sees twice the magnetic dipole p^a_i (the electric dipoles cancel, E = −B ⇔ −E = B). To be definite, let us discuss the electric case in detail, characterised by an exciting Abelian field E_e. The modulus of the corresponding dipole density D_e || E_e is given by twice the dipole moment per coarse-graining volume, Eq. (15). In classical electromagnetism the relation between the fields E_e and D_e is D_e = ǫ_0 E_e, (16) where ǫ_0 = 8.854 × 10^{-12} A s V^{-1} m^{-1} (17) is the electric permittivity of the vacuum, and Q = 1.602 × 10^{-19} A s denotes the electron charge (unit of elementary charge), now both in SI units. According to electromagnetism, the energy density ρ_EM carried by an external electromagnetic wave with |E_e| = |B_e| is ρ_EM = ǫ_0 |E_e|². (18) In natural units we have ǫ_0 µ_0 = 1/c² = 1, and therefore µ_0 = 1/ǫ_0. The E_e-field dependence of ρ_EM is converted into a fictitious temperature dependence by demanding that the temperature of the thermal ground state of SU(2)_CMB adjusts itself so as to accommodate ρ_EM, that is, ρ_EM = 4πΛ³T. (20) Eq. (20) generalises the thermal situation of the ground-state energy density of Sec. 3.2, where ground-state thermalisation is induced by a thermal ensemble of excitations, to the case where the thermal ensemble is missing but the probe field induces a fictitious temperature and energy density in the ground state. Combining Eqs. (15), (16), and (20), and introducing the ratio ξ between the non-Abelian monopole charge Q′ in the dipole and the (Abelian) electron charge Q, we obtain an expression for ǫ_0, Eq. (21). Notice that ǫ_0 does not exhibit any temperature dependence and thus no dependence on the field strength E_e. It is a universal constant. In particular, ǫ_0 does not relate to the state of fictitious ground-state thermalisation, which would associate it with the rest frame of a local heat bath.
To produce the measured value of ǫ_0 as in Eq. (17), the ratio ξ in Eq. (21) is required to take a large value, Eq. (22). Thus, compared to the electron charge, the charge unit associated with a (anti)selfdual non-Abelian dipole residing in the thermal ground state is gigantic. In discussing µ_0 we could have proceeded in complete analogy to the case of ǫ_0 (it would be µ_0^{-1} defining the ratio between the modulus of the magnetic dipole density and the magnetic flux density |B|). Here, however, the comparison between the non-Abelian magnetic charge and an elementary, magnetic, and Abelian charge is not facilitated, since the latter does not exist in electrodynamics.
Finally, let us see what the condition that the wavelength l of the electromagnetic disturbance considered in this section be much larger than s means in units of meters when invoking SU(2)_CMB, Eq. (23). Setting T = T_c = 2.725 K in Eq. (23), we obtain a lower bound on the wavelength of l_min = 1.1254 m.
Explicitly induced dipole densities
Let us now discuss the case β ≪ |φ|^{-1} ≤ r ≪ s. To rely on the presence of the inert adjoint scalar field φ of the effective theory, r needs to be larger than the spatial coarse-graining scale |φ|^{-1} = (1/2π) λ_c^{3/2} (λ/λ_c)^{3/2} β ≥ 8.22 β. Within the corresponding regime |φ|^{-1} ≤ r ≪ s of spatial distances from the caloron center at (τ = 0, x = 0), an electromagnetic wave of wavelength l sees the selfdual field of a static, non-Abelian monopole of electric and magnetic charge as in Eq. (9), centered at x = 0. A selfdual Abelian field strength E_i = B_i of this monopole is obtained [25] with the field φ gauged from unitary gauge φ_a = 2|φ|δ_a3 into "hedgehog" gauge φ_a = 2|φ|x̂_a. The corresponding gauge transformation is given in terms of the group element Ω ≡ cos(ψ/2) − i k̂·σ sin(ψ/2), where σ_i (i = 1, 2, 3) are the Pauli matrices, k̂ ≡ (ê_3 × x̂)/sin θ, ê_3 is the third vector of an orthonormal basis of space, θ ≡ ∠(ê_3, x̂), and ψ = θ for 0 ≤ θ ≤ π − ǫ, smoothly dropping to zero at θ = π, where the limit ǫ → 0 is understood [25]. The monopole field E_i is normalized to charge −2Q′, Eq. (25). The electric or magnetic poles of Eq. (25) should react independently, by harmonic and linear acceleration, to the presence of an external electric or magnetic field E_e or B_e, respectively, forming a monochromatic electromagnetic wave of frequency ω = 2π/l. At x = 0 one has E_e = E_0 sin(ωt), and one readily derives (as in Thomson scattering) the induced dipole moment p for, say, the electric case. Interestingly, by virtue of Eq. (25) the squared charge of the pole, (2Q′)², cancels out in p, because its mass m carries an identical factor (only the electric (magnetic) monopole is linearly and harmonically accelerated by the external electric (magnetic) field E_e (B_e), and hence m carries electric (magnetic) field energy only). Again, the volume V_cg, which underlies the dipole moment p by containing a caloron and an anticaloron center, is given by Eq. (13), leading to Eqs. (29) and (30). In Eq. (30) also the vacuum permittivity ǫ_0 cancels out, and we are left with the condition of Eq. (31), where the temperature T (or λ), again, is set by the local field strengths of the electromagnetic probe according to Eqs. (18) and (20). Let us see whether the second of Eqs. (31) is consistent with |φ|^{-1} ≤ r = l ≪ s. The former inequality is self-evident, and the latter follows from Eq. (32). By setting λ = λ_c we obtain from Eqs. (31) a minimal wavelength l_min = (2/3)πΛ^{-1}λ_c^{1/2} = 0.112 m, Eq. (33). This wavelength is about a factor of ten smaller than the lowest possible value as expressed by Eq. (23).
Discussion
In Secs. 4.1 and 4.2 an analysis was performed to clarify to what extent the thermal ground state of SU(2)_CMB can be regarded as the luminiferous aether, supporting the propagation of an external electromagnetic wave (probe) of field strengths |E_e| = |B_e| and wavelength l which, by itself, is not thermal. Sec. 4.1 focussed on wavelengths that are large compared to the distance s = π|φ|^{-2}/β, very large compared to the resolution limit |φ|^{-1} of the effective theory for deconfining SU(2)_CMB, and even more so on the scale of the inverse temperature β, see Eq. (12), where (anti)calorons of SU(2)_CMB manifest themselves as static (anti)selfdual dipoles whose dipole moment is set by a fictitious temperature representing the intensity of the probe via Eq. (20). And indeed, in this case the vacuum permittivity ǫ_0 and permeability µ_0 turn out to be universal constants, see Eq. (21). When confronted with their experimental values, the charges of the "constituent" non-Abelian monopoles in a dipole follow in units of the electron charge, see Eq. (22).
Eqs. (23) and (20) indicate an uncertainty-like relation between the field strength |E_e| and the wavelength l, Eq. (34): the larger the probe intensity, the longer its wavelength is required to be in order to be supported by thermal ground-state physics. In any case, in SU(2)_CMB wavelengths need to be larger than the meter scale, see Eq. (23). Things are different for wavelengths that are large on the scale |φ|^{-1} but short on the scale s = π|φ|^{-2}/β. This case is investigated in Sec. 4.2. Then a (anti)caloron can no longer be viewed as a static, (anti)selfdual dipole but rather is represented by a static, (anti)selfdual monopole. However, an attempt to consider dipole moments as induced dynamically by monopole shaking through the probe fields renders the definitions of the vacuum parameters ǫ_0 and µ_0 meaningless, see Eq. (30). It does yield a fixation of the probe's wavelength l in terms of |φ|^{-1}, though, see Eq. (31). While the former situation is not surprising, because single magnetic charges violate the Bianchi identities for the electromagnetic field strength tensor F_µν, it is nontrivial that l turns out to selfconsistently satisfy the constraint s ≫ l > |φ|^{-1}. Note that the minimal wavelengths l_min = 1.1254 m and l_min = (2/3)πΛ^{-1}λ_c^{1/2} = 0.112 m, as obtained in Secs. 4.1 and 4.2, respectively, differ by a factor of ten only.
Summary and Conclusions
We have addressed the question of how the concept of the thermal ground state of SU(2)_CMB, which in a fully thermalised situation coexists with a spectrum of partially massive (adjoint Higgs mechanism) thermal excitations of the same temperature, can be employed to understand the propagation of a nonthermal probe (a monochromatic electromagnetic wave) in vacuum, characterised by the electric permittivity ǫ_0 and the magnetic permeability µ_0. To do this, we have appealed to the fact that the thermal ground state emerges from a spatial coarse graining over (anti)selfdual fundamental Yang-Mills fields of topological charge modulus unity at finite temperature: Harrington-Shepard (anti)calorons of trivial holonomy. Knowing how large the coarse-graining volume is, which contains one caloron and one anticaloron center, where the unit of action ħ is localised (Sec. 3.2), and by exploiting the structure of these field configurations spatially far away from their centers (Sec. 3.3), we were able to deduce densities of electric and magnetic dipoles in Sec. 4.1. Dividing these dipole densities by the respective field strengths of the probe, selfconsistently adjusted to the energy density of the thermal ground state (small, transient (anti)caloron holonomies), yields definitions of ǫ_0 and µ_0. In the electric case a match with the experimental value predicts the charge of one of the monopoles constituting the dipole in terms of the electron charge; the former charge turns out to be substantially larger than the latter.
As shown in Sec. 4.2, this way of reasoning, which is valid only for large wavelengths (l ≫ s), cannot be extended to smaller wavelengths l. Namely, in the region of spatial distances to the (anti)caloron center where the configuration resembles (anti)selfdual, static monopoles, the definition of ǫ_0 and µ_0 in terms of dipole densities that are explicitly induced by the probe's oscillating field strengths becomes meaningless. This is expected, since the existence of resolved magnetic monopoles would violate the Bianchi identities for the field strength tensor F_µν of electromagnetism.
We conclude that the thermal ground state of SU(2)_CMB supports the propagation of a nonthermal probe purely in terms of Harrington-Shepard (anti)calorons (trivial holonomy) if the probe's wavelength l is sufficiently large (the regime l ≫ s = π|φ|^{-2}/β, i.e., l ≥ 1.1254 m) and that there is an uncertainty-like relation between l and the square of the probe's intensity, see Eq. (34).
Let us now address the question of how the propagation of shorter wavelengths and/or larger intensities of nonthermal probes can be understood in terms of nontrivial ground-state structure. One could argue as follows. Since the result of Eq. (21) does not exhibit a T dependence, it "forgets" about the assumptions l ≫ s = s(T) and that field strength and temperature are related as in Eq. (20); it should therefore be considered valid for any probe and thus an invariant under the full Poincaré group. Conversely seen, the assumption of Poincaré covariance allows one to transform any probe into a frame where the uncertainty-like relation (34) is obeyed (redshift by the relativistic Doppler effect). In such a frame, the selfconsistency of Poincaré covariance is demonstrated by the invariance of the vacuum parameters ǫ_0 and µ_0 under further redshifting.
"Physics"
] |
Influence of Particle Velocity When Propelled Using N2 or N2-He Mixed Gas on the Properties of Cold-Sprayed Ti6Al4V Coatings
Cold-spraying is a relatively new low-temperature coating technology which produces coatings by the deposition of metallic micro-particles at supersonic speed onto target substrate surfaces. This technology has the potential to enhance or restore damaged parts made of light metal alloys, such as Ti6Al4V (Ti64). Particle deposition velocity is one of the most crucial parameters for achieving high-quality coatings because it is the main driving force for particle bonding and coating formation. In this work, studies were conducted on the evolution of the properties of cold-sprayed Ti64 coatings deposited on Ti64 substrates with particle velocities ranging from 730 to 855 m/s using pure N2 and N2-He mixture as the propellant gases. It was observed that the increase in particle velocity significantly reduced the porosity level from about 11 to 1.6% due to greater densification. The coatings’ hardness was also improved with increased particle velocity due to the intensified grain refinement within the particles. Interestingly, despite the significant differences in the coating porosities, all the coatings deposited within the velocity range (below and above critical velocity) achieved a high adhesion strength exceeding 60 MPa. The fractography also showed changes in the degree of dimple fractures on the particles across the deposition velocities. Finite element modelling was carried out to understand the deformation behaviour of the impacting particles and the evolutions of strain and temperature in the formed coatings during the spraying process. This work also showed that the N2-He gas mixture was a cost-effective propellant gas (up to 3-times cheaper than pure He) to deliver the high-quality Ti64 coatings.
Introduction
Titanium (Ti) alloys, such as Ti6Al4V (Ti64), possess superb properties like low density, high specific strength, and good corrosion resistance, and are ideal for use in aerospace, chemical, and biomedical applications [1]. As Ti64 components suffer from wear and tear over their service period, it is more cost-effective to repair them and restore their functionality instead of scrapping or refabricating them. Conventional repair methods, such as welding and direct laser deposition, may not be most suitable for this repair work as they involve high processing temperatures. These techniques often lead to heat-affected zones and high thermal stresses, which cause distortion and undesired phase changes or transformations that may create mechanically weak points for failure [2-4]. Cold spraying (CS) is a low-temperature additive manufacturing process which could be an alternative technique to repair these components.
CS is a process whereby particles (1 to 100 µm) are accelerated to speeds of up to 1000 m/s or more by a supersonic gas flow and then impact the target substrate surface to form a dense coating. The particles remain in the solid state throughout the deposition process [5]. The detailed working principle of the CS process has been widely reported in the literature [6-14]. The particle deposition velocity (or particle velocity) has the most significant impact on the bonding of particles [15-17]. At the minimum deposition velocity, or critical velocity, the particles have just enough kinetic energy to activate adiabatic shear instabilities on the impacted surfaces, i.e., the particles and the substrate, to form the bond. The adiabatic shear instabilities allow the particle contact interfaces to thermally soften, severely deform, and create material jetting, as well as to form refined grains for metallurgical bonding and mechanical interlocking [12,13,18-20]. Hence, the impact velocity affects coating qualities such as adhesion, cohesive strength, deposition efficiency, hardness, etc. [21]. Other factors that influence the coating quality are the substrate surface condition (temperature, roughness, hardness [21-24]), particle type and size [25], impact angle [26], etc. The optimum particle velocity differs between materials due to their different yield strengths and melting points [27,28]. To date, there have been many studies of the influence of particle velocity for different pure metals, such as aluminium, copper, and titanium, as well as steels [29-38].
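As background to the role of particle velocity, a commonly used first-order drag model (not the model used in this work) integrates dv/dx = 3 C_D ρ_g (v_g − v)² / (4 ρ_p d v) for a spherical particle in a uniform gas stream. The gas velocity, gas density, and drag coefficient in the sketch below are illustrative assumptions only, so the printed numbers indicate trends, not the measured 730-855 m/s values.

```python
def particle_velocity(d_um, x_m=0.15, v_gas=1000.0, rho_gas=0.8,
                      rho_p=4430.0, c_d=1.0, n_steps=20000):
    """Integrate dv/dx = 3*Cd*rho_g*(v_g - v)^2 / (4*rho_p*d*v) (1D drag model).

    d_um: particle diameter in micrometres; rho_p: Ti64 density (kg/m^3).
    Returns the particle velocity (m/s) after travelling x_m metres.
    """
    d = d_um * 1e-6
    v, dx = 1.0, x_m / n_steps          # small seed velocity to avoid 1/0
    for _ in range(n_steps):
        dvdx = 3.0 * c_d * rho_gas * (v_gas - v) ** 2 / (4.0 * rho_p * d * v)
        v += dvdx * dx
    return v

for d in (15, 33, 45):                  # D10, D50, D90 of the feedstock powder
    print(f"{d} um -> {particle_velocity(d):.0f} m/s")
```

Note how smaller particles reach higher exit velocities, which is why the powder size distribution matters for the fraction of particles exceeding the critical velocity.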
Several studies have reported on the influence of particle velocity on the properties of cold-sprayed Ti64 coatings, as there is a need for the repair or enhancement of Ti64 components. The particle velocity of Ti64 can be controlled by the type of carrier gas (e.g., air, nitrogen (N2) and helium (He)), gas pressure (20 to 50 bar), gas temperature (500 to 1000 °C), etc. A lighter gas, He or a mixture of N2 and He, at high gas pressure and preheated temperature generates a faster gas stream and provides a higher drag force on each particle (for acceleration), which results in more significant particle deformation upon impact and improves coating quality [39][40][41][42][43]. Goldbaum et al. [44] studied the effect of particle velocity on deposited splats (single particle impacts) over a range of velocities. The flattening of Ti64 particles increased by 50% when the particles were accelerated from around 600 to 800 m/s. However, the flattening of the splats appeared to reach a plateau when deposited at 800 to 1000 m/s. Even when the particles were deposited at 800 m/s and above on the substrate (25 °C), the splat-substrate interface contained microcracks and was not well bonded, which resulted in a low splat adhesion strength of about 100 MPa; the splat adhesion strength could be improved to about 250 MPa when the coatings were deposited on preheated substrate surfaces (400 °C). Vidaller et al. [45] showed that Ti64 splats had better adhesion (on Ti64 grade 2 substrates) and more deformation when deposited using pure N2 gas at higher pressure and temperature (e.g., 50 bar, 1000 °C).
Table 1 shows the previous studies on the CS deposition of full Ti64 coatings. The coating qualities (such as porosity level and hardness) can be improved by using higher gas pressures and temperatures and He gas. However, as He gas is much more expensive than N2 gas, it is not economical for industrial use. In addition, the gas preheating threshold, at around 1100 °C, limits the highest attainable particle velocity. If a more powerful gas heater were used (assuming a preheating temperature of 1200 to 1600 °C), there would be a possibility of powder degradation (phase changes) in flight.
There are fewer studies on the cold-sprayed deposition of Ti64 coatings on Ti64 substrates across a range of velocities using an N2-He (N2-based) gas mixture as the propellant gas, compared to other materials [39][40][41]. The effects of particle velocity on the coating properties were studied in this work, which demonstrated that the use of the N2-He gas mixture as the propellant gas could improve the overall coating quality while keeping the other process parameters constant. The porosity level, microstructure, mechanical properties, and fracture behaviour of the coatings were systematically investigated. Finite element modelling was also used to understand the particle impact phenomena at different particle velocities.
Materials
Ti64 (Grade 5) discs (Titan Engineering, Singapore) with a 25 mm diameter and 5 mm thickness were used as substrates. The substrates were polished to a mirror-like surface (with P1200 grit paper, followed by fine polishing with Struers (Cleveland, OH, USA) DiaPro (9 µm diamond paste) and OP-S (0.04 µm colloidal silica) suspensions) and degreased sequentially before cold-spray deposition. As shown in Figure 1a, plasma-atomized spherical Ti64 ELI (Grade 23) powder with a size ranging from 15 to 45 µm was used as the feedstock powder. The backscattered electron image (BEI) of an unetched powder cross-section, shown in Figure 1b, consists of martensitic α'-Ti laths resulting from the quenching during atomization [48]. The particle size distribution measured by laser diffraction (ASTM B822-10) [52] gave D10, D50 and D90 values of 19, 33 and 45 µm, respectively.
Cold-Spray Process
The Ti64 coatings were deposited using an Impact Spray System 5/11 (Impact Innovations, Rattenkirchen, Germany) with the setup shown in Figure 2a [53]. A SiC spray nozzle with a 6 mm exit diameter, an expansion ratio of 5.6, a throat diameter of 2.54 mm and a divergent section length of 160 mm was used in the CS deposition. The stand-off distance between the nozzle and substrate was 30 mm. The sample stage was moved horizontally from left to right at a constant velocity of 500 mm/s (Figure 2b), followed by a 1 mm vertical raster step after each traverse movement to form a coated layer, until the deposited coating thickness was around 1.5 to 2 mm for each sample (Figure 2c). The nozzle was water-cooled. The deposition parameters are shown in Table 2. The particle velocity was measured using a Cold Spray Meter (Tecnar, Saint-Bruno-de-Montarville, QC, Canada). The numerical calculations of particle velocity and temperature were conducted using the Kinetic Spray Solutions (KSS) software package (Kinetic Spray Solutions, Buchholz, Germany) [54]. Usage of the KSS software has also been reported elsewhere [30,45,55]. More details of the calculations for the N2-He gas mixture can be found in [39].
Microstructural and Mechanical Characterisation
For the cross-section analysis, each cold-sprayed sample was cut into halves with coating dimensions of 25 mm (length) × 6.5-7 mm (thickness). The cut samples were mounted in Polyfast, ground with SiC #320 paper, and then chemo-mechanically polished (CMP) with a DiaPro solution containing 9 µm diamond particles, followed by an OP-S suspension containing 0.04 µm colloidal silica particles (Struers, Ballerup, Denmark). The polished samples were etched for microstructural evaluation by immersion in Kroll's reagent for 10 to 15 s.
The microstructures and porosities of the samples were observed under an optical microscope (OM, Axioskop 2 MAT, Carl Zeiss, Oberkochen, Germany) and/or scanning electron microscopes (SEM JSM-5600LV and FESEM 7600f, JEOL, Peabody, MA, USA) operated at 15 to 30 kV. For the porosity measurement, at least 10 continuous cross-section images (optical, ×100 magnification) were taken from the coating top, middle and near-interface regions. These images were stitched (per location) and processed using the open-source software ImageJ (NIH, Bethesda, MD, USA) [48].
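As a rough companion to the ImageJ workflow described above, the following Python sketch shows how an area-fraction porosity value can be computed from a stitched cross-section image by global thresholding; the Otsu threshold and file names are illustrative assumptions, not the paper's exact ImageJ settings.

```python
import numpy as np
from skimage import io, filters

def porosity_percent(image_path):
    """Porosity as the area fraction of dark (pore) pixels in a stitched,
    unetched cross-section micrograph."""
    gray = io.imread(image_path, as_gray=True)
    thresh = filters.threshold_otsu(gray)   # global threshold; assumed choice
    pores = gray < thresh                   # pores appear darker than metal
    return 100.0 * pores.sum() / pores.size

# Average over the top, middle and near-interface stitched images (hypothetical files):
# level = np.mean([porosity_percent(p) for p in ("top.png", "mid.png", "near.png")])
```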
The microhardness of the coated Ti64 cross-sections was evaluated using a Vickers microindenter (FM-300e, Future-Tech, Kanagawa, Japan) with a 300 g load and 15 s dwell time. A total of 10 indentations were randomly placed on the cross-section of each sample, and an average microhardness value was calculated.
Adhesion strength testing was conducted on each coated sample following the ASTM C633 standard [56]. The detailed assembly steps for the test samples were reported in [53]. Each assembled sample was tested using a tensile tester (Instron 5569, High Wycombe, UK) with a 50 kN load cell in tensile mode at an extension rate of 0.8 mm/min until the sample failed.
Finite Element Modelling
ABAQUS/Explicit finite element analysis software was used for the 3D modelling of the Ti64 particle-Ti64 substrate impact process. Figure 3 shows an isometric view to better illustrate the meshes and the exact positions of the particle and substrate. The particle impact velocities selected were the two extreme ends of the study, i.e., 730 and 855 m/s, with the corresponding particle temperatures set to 754 and 865 K, respectively, as estimated by the KSS software [54]. The substrate temperature was set to 573 K as a result of preheating [53]. The particle size was fixed at 30 µm for the simulations, and the substrate had a diameter of 120 µm (4-times larger than the particle size) and a height of 60 µm. The mesh size of the substrate ranged from 0.3 µm at the impact centre to 1 µm at the edge wall, while the particle mesh size was set to 0.6 µm (1/50 of the particle diameter d_p) and gradually decreased to 0.3 µm (1/100 of d_p) towards the impacted region. The monitored elements are A, B and C, as illustrated in Figure 3b. The Johnson-Cook plasticity model was used to capture the effects of strain hardening, strain-rate hardening and thermal softening on the resistance to plastic deformation. This model has been widely used to simulate the jetting phenomenon of particle impact during cold spraying [12,14,18,27,34,[57][58][59][60][61][62][63][64][65][66][67][68], despite its limitations at very high strain rates [57,69,70]. The equivalent plastic stress of the material is given as follows:

$$\sigma = \left(A + B\,\varepsilon_p^{\,n}\right)\left[1 + C\,\ln\!\left(\frac{\dot{\varepsilon}_p}{\dot{\varepsilon}_{p,0}}\right)\right]\left[1 - \left(\frac{T - T_{ref}}{T_m - T_{ref}}\right)^{m}\right], \tag{1}$$

where σ is the equivalent plastic stress or flow stress (MPa), ε_p is the equivalent plastic strain (dimensionless), ε̇_p is the equivalent plastic strain rate (s⁻¹), ε̇_p,0 is the reference equivalent plastic strain rate (s⁻¹), T_m is the melting temperature of the material (K), T_ref is the reference temperature, normally taken as room temperature (K), and A, B, C, m and n are material constants determined by mechanical tests.
The Johnson-Cook dynamic failure model was also used to simulate the progressive damage and failure of the materials, expressed as follows:

$$\varepsilon_p^{f} = \left[D_1 + D_2\,\exp\!\left(D_3\,\frac{p}{q}\right)\right]\left[1 + D_4\,\ln\!\left(\frac{\dot{\varepsilon}_p}{\dot{\varepsilon}_{p,0}}\right)\right]\left[1 + D_5\,\frac{T - T_{ref}}{T_m - T_{ref}}\right], \tag{2}$$

where ε_p^f is the equivalent fracture strain, p is the pressure stress, q is the Mises stress, and D_1 to D_5 are the failure parameters determined by mechanical tests.
All the material properties and temperature-dependent data are taken from the literature [71] and summarised in Table 3. It is to be noted that, since the complete deformation process occurs within tens of nanoseconds, the thermal diffusion distance is much shorter than the characteristic dimension of the elements in the particle and substrate; hence, the particle-substrate impact is assumed to be an adiabatic process in which thermal conduction is neglected during the deformation [12,18,60]. * Temperature dependencies were reported elsewhere [60].
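To make the role of each Johnson-Cook term concrete, the following Python sketch evaluates the flow stress of Equation (1); the Ti64 constants below are typical literature values standing in for Table 3, which is not reproduced here, and the example conditions are illustrative.

```python
import numpy as np

# Assumed (literature-typical) Johnson-Cook constants for Ti64, in place of Table 3
A, B, n, C, m = 1098e6, 1092e6, 0.93, 0.014, 1.1   # A and B in Pa
EPS_RATE_0 = 1.0                                   # reference plastic strain rate, 1/s
T_REF, T_MELT = 293.0, 1878.0                      # K

def jc_flow_stress(eps_p, eps_rate, T):
    """Equation (1): strain hardening x strain-rate hardening x thermal softening."""
    hardening = A + B * eps_p**n
    rate = 1.0 + C * np.log(eps_rate / EPS_RATE_0)
    T_hom = np.clip((T - T_REF) / (T_MELT - T_REF), 0.0, 1.0)
    return hardening * rate * (1.0 - T_hom**m)

# Flow stress at 300% strain, 1e7 1/s and 1300 K, i.e., conditions of the order
# reported for the particle periphery in the simulations:
print(f"{jc_flow_stress(3.0, 1e7, 1300.0) / 1e6:.0f} MPa")
```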
Particle Velocity Analysis
The particle velocity of the feedstock powder impacting onto the substrate or prior deposits provides the key driving force for bonding formation, and can be estimated using the following equation [72,73]:

$$v_p = \frac{v}{1 + 0.85\sqrt{\frac{d_p}{x}}\sqrt{\frac{\rho_s\,d_p\,v^2}{x\,p_0}}}, \qquad v = M\sqrt{\frac{\gamma R T}{M_w}}, \tag{3}$$

where v_p is the particle velocity, v is the local gas velocity, M is the local Mach number, M_w is the molar mass (28 g·mol⁻¹ for N2 and 4 g·mol⁻¹ for He), γ is the specific heat ratio or isentropic expansion factor (1.67 for He and 1.4 for N2), R is the universal gas constant (8.314 J·mol⁻¹·K⁻¹), T is the gas temperature, d_p is the particle diameter, x is the axial position, ρ_s is the particle density, and p_0 is the gas supply pressure measured at the entrance of the nozzle. Equation (3) is used here as a discussion tool, while the numerical calculations were performed using the KSS software [54]. From the equation, it can be seen that the particle velocity is governed mainly by the molar mass (gas type), temperature and pressure of the propellant gas. By varying the gas preheated temperature and introducing a gas with a lower molar mass, different particle velocities can be achieved. Figure 4a shows the calculated and measured particle velocities, as well as the calculated particle temperatures, as a function of gas temperature. Both the particle velocity and temperature increase with increasing gas preheated temperature. The measured particle velocity is in good agreement with the numerical model from the KSS software [30], with less than 4% mismatch. When the gas preheated temperature increases from 600 to 1000 °C, the measured particle velocity increases from 697 to 800 m/s, and the particle temperature (from the KSS numerical model) rises from 339 to 625 °C. The increases in particle velocity and temperature allow the particles to obtain high impact energy and to be thermally softened so as to undergo the adiabatic shear instability needed for bonding.
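A minimal numerical sketch of Equation (3) is given below. It is a hand estimate only (the paper's values come from the KSS software), and the Mach number and axial position used in the example are illustrative assumptions, not measured quantities.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def particle_velocity(M, gamma, Mw, T, p0, dp, rho_s, x):
    """Hand estimate of the particle velocity from Equation (3)."""
    v_gas = M * np.sqrt(gamma * R * T / Mw)   # local gas velocity
    drag = 0.85 * np.sqrt(dp / x) * np.sqrt(rho_s * dp * v_gas**2 / (x * p0))
    return v_gas / (1.0 + drag)

# Pure N2 at 45 bar and 1000 C acting on a 33 um Ti64 particle; M = 1 (near the
# throat) and x = 0.16 m (divergent section length) are assumed for illustration.
vp = particle_velocity(M=1.0, gamma=1.4, Mw=0.028, T=1273.0,
                       p0=45e5, dp=33e-6, rho_s=4430.0, x=0.16)
print(f"estimated particle velocity: {vp:.0f} m/s")
```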
The particle velocity can be further increased by adding He gas to the N2 gas to form a gas mixture, as shown in Figure 4b. As He gas has a molar mass of 4 g/mol while N2 gas has 28 g/mol, mixing these gases gives the resultant N2-He mixture a lower molar mass, which can accelerate the metal particles to a higher speed, since the gas velocity is inversely proportional to the square root of the molar mass. Every addition of 10 vol.% of He increases the overall gas velocity by approximately 20-30 m/s. This allows a further particle velocity increment within the capability of the cold-spray heater system. In addition, it is more cost-efficient to use the N2-He gas mixture as the propellant gas. Relative to the cost of pure N2 gas per m³, the N2-He gas mixture (for the case of N2 with 20 vol.% He) costs only about 2-times more, while pure He gas is about 6-times more expensive [39,74,75]. However, for the N2-He gas mixture, there is a slight drop in particle temperature of around 15 °C with every 10% addition of He, because He gas is more thermally conductive (0.138 W/(m·K)) and has less thermal storage (840 kJ/m³) than N2 gas (0.0234 W/(m·K), 1181.3 kJ/m³), which in turn slightly cools the powder stream by dissipating heat during the gas expansion.
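The effect of the He fraction on the mixture properties can be sketched with mole-fraction-weighted ideal-gas values, as below; this is a simplified stand-in for the full mixture treatment referenced in [39].

```python
R = 8.314  # J/(mol K)

def n2_he_mixture(x_he):
    """Molar mass and isentropic expansion factor of an ideal N2-He mixture."""
    Mw = x_he * 0.004 + (1.0 - x_he) * 0.028       # kg/mol (He: 4, N2: 28 g/mol)
    cp = x_he * 2.5 * R + (1.0 - x_he) * 3.5 * R   # monatomic vs diatomic molar cp
    return Mw, cp / (cp - R)

for frac in (0.0, 0.1, 0.2):
    Mw, gamma = n2_he_mixture(frac)
    print(f"{frac:4.0%} He: Mw = {Mw * 1e3:4.1f} g/mol, gamma = {gamma:.3f}")
```

With 20 vol.% He, the effective molar mass drops from 28 to about 23.2 g/mol, which is consistent with the faster gas stream discussed above.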
The lower particle temperatures are also related to the degree of gas cooling in the expanding supersonic region of the nozzle. A mixed gas containing a higher fraction of He expands more (due to its higher isentropic expansion factor) and drops to a much lower temperature than pure N2 gas. This creates a bigger temperature difference between the gas and the particles, in addition to the difference in the thermal properties of the gas and the particles.
Figure 4c shows the resultant particle velocities with respect to the pressure and temperature parameters (Table 2) when positioned in the window of deposition, with the critical velocity as the reference. The calculations are based on Equation (4) [27,28] and performed using the KSS software [54]. The critical velocity is expressed as

$$v_{crit} = \sqrt{F_1\,\frac{4\,\sigma_{ultimate}}{\rho}\left(1 - \frac{T_i - T_R}{T_m - T_R}\right) + F_2\,c_p\,(T_m - T_i)}, \tag{4}$$

where σ_ultimate is the ultimate tensile strength, ρ is the density, c_p is the heat capacity, T_m is the melting temperature, T_i is the mean temperature of the particles upon impact, T_R is the reference temperature (293 K), and F_1 and F_2 are fitting constants. Equation (4) is normally interpreted as the minimum particle velocity required for coating formation and bonding [6,18]. However, in the following sections, it will be shown that good coating adhesion can also be obtained from particles impacting at velocities well below the critical velocity.
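A short sketch of Equation (4) follows; the F1 and F2 values (about 1.2 and 0.3 are often quoted in the cold-spray literature) and the Ti64 property values are assumed, literature-typical numbers rather than the exact constants used in the paper.

```python
import numpy as np

def critical_velocity(sigma_u, rho, cp, Tm, Ti, TR=293.0, F1=1.2, F2=0.3):
    """Critical velocity of Equation (4): mechanical term plus thermal term."""
    mech = F1 * 4.0 * sigma_u / rho * (1.0 - (Ti - TR) / (Tm - TR))
    thermal = F2 * cp * (Tm - Ti)
    return np.sqrt(mech + thermal)

# Illustrative Ti64 values: sigma_u ~ 1 GPa, rho ~ 4430 kg/m3, cp ~ 560 J/(kg K),
# Tm ~ 1878 K, and a mean particle impact temperature Ti ~ 900 K.
print(f"{critical_velocity(1.0e9, 4430.0, 560.0, 1878.0, 900.0):.0f} m/s")
```

The thermal term shows why hotter particles (higher T_i) bond at lower velocities, which is the trade-off behind the He-induced cooling discussed above.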
Cross-Section Analysis
Figure 5a-e shows the optical micrographs of the unetched cross-sections of the Ti64 coatings deposited at different particle velocities. The porosity level of the coatings drops substantially from 11 to 1.6% (an 85% reduction) when the particle velocity is increased from 730 to 855 m/s, as shown in Figure 5f. In addition, the current work shows that the coating porosity level can be reduced with a small addition of He gas to the N2 gas. The Ti64 coating sprayed with a 20 vol.% addition of He gas to the N2 gas achieves a lower coating porosity than other reported works [22,42,[47][48][49]51,[76][77][78]. There are several reasons for the densification of the coatings: (1) the increase in particle velocity provides sufficient impact energy for the particles to deform and seal the pores, and (2) the increase in preheated temperature allows the particles to undergo more thermal softening. The porosity does not improve further after reaching 1.6%, which could be attributed to the reduction in particle temperature (the particles are less thermally softened and thus more resistant to deformation) as a result of the He addition, as was also observed by Goldbaum et al. [44]. Some flow-control parameters could be adjusted to raise the particle impact temperatures while keeping the particle impact velocities constant, for example, (1) extending the chamber and nozzle convergent length to increase the interaction time of the particles with the preheated gas before they enter the nozzle throat [79], and (2) reducing the nozzle cooling. Figure 6 shows the cross-sections of the Ti64 coatings deposited at increasing particle velocities. The left column of Figure 6 shows the optical micrographs of the etched cross-sections, revealing that all the coatings and substrates are intimately bonded without obvious coating delamination. It is also shown that the coated particles are more deformed at the higher particle velocities. Some of the particles in the coating deposited at 730 m/s appear to retain the spherical shape of the feedstock powder, while those impacted at 855 m/s show greater particle flattening. The denser coating and higher flattening ratio observed in the coatings deposited at higher particle velocities result from the higher impact energy and the stronger tamping effect from the subsequent particles. On the other hand, the higher particle temperature accompanying the higher particle velocity also enhances the thermal softening of the particles, which contributes to the particle deformation and flattening. Goldbaum et al.
also reported similar observations for single splats, where deformation increased with impact velocity [44]. The middle and right columns of Figure 6 show the BEIs of the unetched coating cross-sections. The deposited Ti64 particles exhibit heterogeneous deformation, comprising highly and lightly deformed regions that correspond to the peripheral and interior regions of the particles, respectively [80]. The BEIs show weak electron channelling contrast, which allows differentiation between grain orientations. Two main contrasts are observed within the particles: darker (termed the "textured" region) and brighter (termed the "smooth" region). The right column in Figure 6 shows the BEIs of some typical particles deposited at different particle velocities. The area of the "textured" region decreases with increasing particle velocity. The ratio of the "smooth" region is thus an indirect indication of the extent of grain refinement the particles have undergone. The "textured" regions make up more than 50% of the area of the particles deposited at 730 m/s (Figure 6a) and are reduced to approximately 50% of the particle area at 760 m/s (Figure 6b). The "textured" region continues to shrink, and the transition between the "textured" and "smooth" regions eventually becomes unclear, as seen in the particles deposited at 855 m/s (Figure 6e). The "textured" region is believed to consist of broken martensitic laths with varying degrees of fragmentation, as well as the remnant martensitic microstructure of the parent powder (Figure 1b), as indicated by the contrast differences within the region [19,48]. The "smooth" region appears rather featureless and generally contains grains more refined than the martensitic laths, reflecting grain refinement of the parent microstructure caused by the adiabatic shear instabilities upon impact [48,81].
The hardness of the coatings increases with particle velocity from 330 to 394 HV, as shown in Figure 7. A higher particle impact velocity results in larger deformation of the particles, and the accompanying adiabatic shear instability forms refined polycrystalline nano-grain zones [48,81]. These refined nano-grains increase the hardness of the coating through the grain-boundary strengthening effect, decreasing dislocation mobility across grain boundaries, as described by the Hall-Petch equation [82]. The hardness readings of the coatings deposited at 730 and 760 m/s have larger deviations because of the higher porosities of those coatings. In comparison, in the coatings deposited at 800 to 855 m/s, the hardness is more uniform due to the much lower porosity and the more uniform deformation of the coating splats, as shown in Figure 5f. At 827 m/s (10 vol.% He in the N2-He mixed gas) and 855 m/s (20 vol.% He in the N2-He mixed gas), the hardness values reach a plateau because the velocity increment is accompanied by a drop in temperature, such that the thermal softening of the particles is insufficient to induce further deformation and overcome the flow stresses for further strain hardening (or cold working).
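For reference, the Hall-Petch relation invoked here takes the form below, with H the hardness, d the mean grain size, and H_0 and k_H material constants; finer grains in the "smooth" regions therefore raise the coating hardness.

```latex
% Hall-Petch relation referenced above [82]
H = H_0 + k_H\, d^{-1/2}
```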
Adhesion Strength
Figure 8a shows the adhesion strengths of all the coatings deposited across a large range of particle velocities, tested via tensile tests (Figure 8b). All the coatings achieve an adhesion strength of 60 to 65 MPa, with failure occurring at the glue section (Figure 8c). The results show that the bonding at the interfaces is relatively strong (with respect to thermal spray coatings [83]), resulting mainly from metallurgical bonding and mechanical interlocking. Interestingly, the coatings deposited at 730 and 760 m/s, below the theoretical critical velocity, adhere reasonably well to the substrates despite having a relatively high porosity level of around 10%. Such a high adhesion strength for porous Ti64 coatings was also reported by Perton et al. [22] (Table 1). This observation is intriguing because coatings deposited below the critical velocity generally contain cracks and defects at the interfaces that lead to a poorer interfacial bond strength [44]. These results suggest that the coating porosity is not a limiting factor in achieving a cold-sprayed coating with a high adhesion strength. The adhesion strength is instead governed by the bonding quality, especially at the coating-substrate interface. It can be observed in Figure 5 that delamination between the coating and substrate is absent in all the coatings deposited at the various particle velocities. The high adhesion strength of the porous Ti64 coatings deposited at 730 and 760 m/s could be attributed to grain refinement at the impact zone, even though these particles are not as severely deformed as those impacted at 800 to 855 m/s. The similarity of the grain-refinement locations of particles deposited at 730 and 800 m/s is shown in Figure 8d,e; these refined grains may have interlocked with the substrate surface, whose grains were also refined by the bombardment of the Ti64 particles [84], forming a bond strength higher than 60 MPa. Another possible reason for this high bonding strength is the polished substrate surface condition, which allows particles with a lower impact velocity to bond with the substrate without surface barriers [22,24].
The particles are able to efficiently convert the impact (kinetic) energy into plastic strain and thermal energy. The impact energy allows the particles to form the classic adiabatic shear instability feature, where the high interfacial temperature (near the melting point) induces a reduction in flow stress and allows the material to flow at high strain (jetting). A polished surface does not contain features that prevent the formation of material jetting. In the case of a rough surface, the particles would have used part of the impact energy to conform to or deform the surface features, leaving less strain energy to be redistributed as thermal energy for bonding [22]. The evolutions of stress, strain and temperature are discussed further in Section 3.5.
Fractography
To understand the particle-substrate and particle-particle bonding, the coatings were forcibly fractured by shear and bending at the coating-substrate interfaces and cross-sections, respectively. The SEM images in Figure 9 give an overview (left column) of the substrate surfaces after the coatings were removed, and of the individual impact craters on the substrates (right column). An impact crater is typically a cup-like feature bounded by a rim of dimple fracture. Three significant regions can be identified in each crater: (i) the core of the crater, which generally refers to the impact centre (the "south pole" [18]) where the impacting particle bounces off the substrate; (ii) the rim of dimple fracture, which corresponds to the periphery of the impacted particle; and (iii) the outermost region, or the material-jetting portion [84]. Both the core and the outermost region of the craters are generally featureless, indicating the absence of metallurgical bonding and the occurrence of brittle failure. In contrast, the dimple fracture is representative of ductile failure, which is believed to occur at the metallurgically bonded and/or mechanically interlocked periphery of a particle and its contact surfaces. Some particles are also retained on the substrates, a result of the particle-substrate interfacial bonding being greater than the interparticle bonding. The fractured sections are likely the refined-grain regions, as these might be less ductile due to grain-boundary strengthening and thus more susceptible to cracking under load. For the coating deposited at 730 m/s, as shown in Figure 9a, very few particles remain on the substrate surface, resulting in a nearly clean cleavage of the coating from the substrate. The impact craters are also shallow due to the lower impact energy. However, the rim of each crater shows a dimple fracture, which is believed to account for the reasonably high adhesion strength (glue failure). This suggests that a high bond strength can still be attained even at a lower particle velocity. In comparison, at the higher particle velocity of 855 m/s (Figure 9b), an increasing number of particles are retained by the substrate, and the craters are deeper due to the higher particle impact energy. The rims of dimple fracture also become wider and thicker with increasing particle velocity, indicating a larger bonded region between the particle and the substrate.
Figure 10a,b shows overviews of the fractured interface (coating side) of the coatings deposited at particle velocities of 730 and 855 m/s after removal from the substrate, together with the individual protrusions found on the coatings. The rims of dimple fracture on the particle protrusions at the bonded regions correspond to the rims of the craters on the substrate side. The outer boundary of the dimple fractures is the jetted region of the particle. This indicates that the bonding resulting from the adiabatic shear instability occurs mainly in the periphery region of the particle, as reported by Vidaller et al. [45]. The particle protrusion height indicates the extent of particle penetration into the substrate. Accordingly, the dimple fracture region becomes wider and the protrusion height more substantial at the higher particle velocity of 855 m/s, as shown in Figure 10b. The coatings and the particle protrusions from the coatings deposited at the other particle velocities are available for comparison in Figure S2 (Supplementary Materials). As shown in Figure 11, the fractured cross-sections of the Ti64 coatings were also investigated to understand the interparticle bonding in the coatings. Figure 11a-c shows overviews of the fractured coatings deposited at 730, 800 and 855 m/s, respectively. The particles coated at 730 m/s appear to partially retain their spherical shape, while the particles coated at 800 and 855 m/s are significantly flattened in the impact direction. The severe plastic deformation allows the particles to seal the interparticle gaps more effectively, a result of the stronger tamping effect at higher particle impact, and eventually densifies the coatings. The cleaved surfaces of the particles sprayed at 730 m/s (Figure 11a) show large smooth, clean delaminated areas (from particles) and some dimple fracture. At the higher particle velocities of 800 m/s (Figure 11b) and 855 m/s (Figure 11c), the amount of dimple fracture increases substantially. For comparison, the SEM images of the fractured coatings deposited at the other particle velocities are shown in Figure S3 (Supplementary Materials).
Finite Element Model
Finite element modelling (FEM) was carried out to understand the particle impact phenomena at different particle velocities. An overview of the impact is shown in Figure 12, with the evolutions of elements A, B and C in terms of temperature, stress and strain at 30 ns after impact. At 30 ns, these regions undergo a clear jump (termed the "secondary" jump) in their temperature profiles, where the adiabatic shear instability takes place and aids interfacial bonding [12], as reported in a previous work [61]. For the case of 730 m/s, the top section of the particle is relatively cold (750 to 900 K) compared to the interface (900 to 1400 K). The temperature at the interface increases from the middle of the particle (element A, 887 K) towards the periphery (element C, 1333 K), as shown in Figure 13a. The temperature rise at the interface periphery (element C) to as high as 0.7 T_m of Ti64 (refer to Table 3) softens the material and reduces the flow stress from 800 to 480 MPa, in contrast to element A (almost no stress reduction) and element B (800 to 750 MPa), as shown in Figure 13b. With a lower flow stress, the particle periphery (element C) deforms to a strain as high as 400%, compared to the central regions (elements A and B) shown in Figure 13c. The particle impacting at 855 m/s shows a substantial increase in temperature, flow-stress reduction and strain compared to the particle impacting at 730 m/s. A larger portion of the particle experiences a higher temperature. The temperature at the interface periphery (element C) reaches 1412 K (as high as 0.75 T_m) (Figure 13d), further reducing the flow stress from 718 to 406 MPa (Figure 13e). Both the initial and subsequent stresses are lower than those of the particle impacting at 730 m/s due to thermal softening. As a result, the particle deformation is more severe, achieving a strain of 440% at the periphery (element C), while elements B and A record strains of 295% and 74%, respectively, as shown in Figure 13f.
Both impact events at 730 and 855 m/s show the occurrence of adiabatic shear instability, as there is a high jump in temperature (to 0.7-0.75 T_m) and a significant drop in stress (around 50%) in the material [85], as predicted by the modelling results (Figure 13). However, from the experimental observations and the simulated particle shape upon impact (Figure 12), a much lower extent of material jetting occurs at a particle velocity of 730 m/s, which might limit the particle-substrate adhesion. At low particle velocities, particle adhesion could be promoted by using optimised process parameters such as smooth and preheated surfaces, an optimum traverse scan speed, raster steps, etc. For comparison, the FEM of particle impact at 800 m/s is shown in Figure S4 (Supplementary Materials). The increases in temperature and strain and the reduction in flow stress are slightly higher than those of the particle impacted at 855 m/s, due to the higher initial particle temperature before impact. The overall adhesion of the coatings deposited with particles sprayed below the critical velocity can primarily be attributed to the velocity distribution of the particles propelled by the gas stream: material jetting occurs in the relatively small fraction of particles whose velocities exceed the average (which, in the case of 730 m/s, is lower than the predicted critical velocity), and these particles facilitate the particle-substrate bonding. For a mean particle velocity of 855 m/s, a much larger fraction of particles experiences material jetting, resulting in better bonding and lower porosity in the coating in general. The FEM results for 730 and 855 m/s can be correlated with the microstructure and mechanical properties of the coatings. The decrease in porosity with increased velocity is due to greater particle deformation, with strains of up to 440% aided by thermal softening. However, the porosity of the cold-sprayed Ti64 coatings is not reduced further beyond the particle velocities of 827 and 855 m/s because a higher fraction of He gas in the N2-He mixture has a cooling effect on the particles. To reduce the coating porosity further, for example to around 1 to 2% (Table 1), the particles would have to be deposited at 900 m/s or above, which is only achievable using pure He gas and may not be economical due to its high cost.
Moreover, a higher particle impact velocity results in more grain refinement via the fragmentation of the large grains of the textured region into the more refined grains of the smooth region. From the simulation, it is evident that the particle impacting at 855 m/s undergoes more grain refinement than that impacting at 730 m/s because of the higher deformation and temperatures at the particle-substrate interface in the former case. The grain refinement increases the surface area of the grains available to bond with neighbouring grains from other particles, forming a strong bond [86]. This can be observed in the increasing quantity, width and thickness of the dimple fractures remaining on the adhered particles and substrate surfaces of the fractured samples (Section 3.4). The particle deposited at 730 m/s shows that the particle periphery experiences a temperature rise to 0.7 T_m and a strain of 400%, providing sufficient metallurgical bonding to achieve an adhesion strength of at least 60 MPa (Section 3.3).
Conclusions
The deposition of cold-sprayed Ti64 coatings on Ti64 substrates at different particle impact velocities was investigated experimentally and simulated with finite element modelling (FEM).The following conclusions were drawn based on the results obtained from the study:
• The addition of He gas to N2 gas efficiently increased the particle velocities without a significant reduction in particle temperature, which contributed to the thermal softening and plastic deformation of the sprayed particles;
• The porosity content of the Ti64 coatings dropped from about 11 to 1.6% as the particle velocity increased from 730 to 855 m/s;
• The coating/substrate interfaces of all the coatings were intimate, without macroscopic cracks. The percentage of smooth regions (consisting of refined nano-grains) in the coatings increased with higher particle velocity at the expense of the textured regions (consisting of martensite laths), due to the severe particle deformation that promoted grain refinement;
• The microhardness of the coatings increased with higher particle velocity due to a higher fraction of refined grains (grain-boundary strengthening) within the splats;
• The adhesion strengths of all the coatings deposited across the velocity range exceeded 60 MPa, as the tests failed at the glue regions. This shows that an effective coating with an appreciable adhesion strength, albeit with a higher porosity level, can be formed even at a particle velocity lower than the calculated critical velocity. This can be attributed to the velocity distribution of the particles, where a fraction of particles have velocities higher than the critical velocity and form a strong bond with the substrate, coupled with the optimised deposition parameters;
• Fractographic analyses revealed that dimple fractures were more prominent in the coatings deposited at higher particle impact velocities due to the more severe cohesive failure within particles;
• The FEM indicated more plastic deformation and higher temperatures at the peripheries of the particle at a higher impact velocity (e.g., 855 m/s), which correlated well with the experimentally observed mechanical response of the coatings;
• The use of an N2-He gas mixture as the propellant gas was more cost-effective for producing high-quality coatings.
Supplementary Materials:
The following are available online at http://www.mdpi.com/2079-6412/8/9/327/s1: Figure S1: SEM micrographs of fractured interfaces on the substrate side for the coatings deposited at particle velocities of (a) 760, (b) 800 and (c) 827 m/s, observed at different magnifications at a tilt angle of 45°; Figure S2: SEM micrographs of fractured interfaces on the coating side for the coatings deposited at particle velocities of (a) 760, (b) 800 and (c) 827 m/s, observed at different magnifications at a tilt angle of 45°; Figure S3: SEM micrographs of fractured cross-sections of the coatings deposited at particle velocities of (a) 760 and (b) 827 m/s at different magnifications; Figure S4: (a-c) Simulated deformation and temperature profiles of a Ti64 particle impacting a Ti64 substrate at a particle velocity of 800 m/s at 30 ns, for different views, and (d-f) temperature, stress and strain evolutions of elements A, B and C at the interfaces of the Ti64 particle impacted at 800 m/s, over a duration of 30 ns.
Figure 1.
Figure 1. SEM images of (a) Ti64 powder (Grade 23) and (b) the cross-section of a Ti64 particle under backscattered mode.
Figure 3.
Figure 3. (a) Finite element mesh of a full 3D model for a single particle's normal impact onto the substrate and (b) a zoomed-in view of the particle-substrate interface with the respective locations of elements A, B and C.
Figure 4.
Figure 4. (a,b) The particle exit velocity as a function of (a) gas preheated temperature at a constant pressure of 45 bar and (b) the fraction of He gas in the N2-He mixture (vol.%) at 45 bar and 1000 °C; and (c) windows of deposition based on particle velocity and temperature. The numerical calculations by the Kinetic Spray Solutions (KSS) software were based on a particle size of 33 µm. Note that the velocity measurements for 45 bar at 600 and 700 °C were used for comparison only; coatings were not actually deposited at these settings.
Figure 5.
Figure 5. (a-e) Optical micrographs of polished cross-sections of the coatings deposited at particle velocities of (a) 730; (b) 760; (c) 800; (d) 827 and (e) 855 m/s; and (f) porosity level as a function of particle velocity. The arrows in (a-e) indicate the interfaces between the coatings and substrates.
Figure 6.
Figure 6. The etched (observed under OM; left column) and unetched cross-sections (observed under backscattered conditions; middle and right columns, at different magnifications) of the coatings deposited at particle velocities of (a) 730; (b) 760; (c) 800; (d) 827 and (e) 855 m/s. The textured and smooth regions are labelled "T" and "S" in the right column, respectively, where the arrows indicate the interparticle boundaries.
Figure 7.
Figure 7. The hardness of the coatings' cross-sections as a function of particle velocity.
Figure 8.
Figure 8. (a) Coating adhesion strength as a function of particle exit velocity; (b) photographs of a coated sample before and after the tensile test; (c) photograph of a coated sample showing glue failure; and (d,e) backscattered SEM micrographs of particles impacted at velocities of (d) 730 and (e) 800 m/s.
Figure 9.
Figure 9. SEM micrographs of fractured interfaces on the substrate side for the coatings deposited at particle velocities of (a) 730 and (b) 855 m/s, observed at different magnifications at a tilt angle of 45°. The fractured interfaces for the other velocities are shown in Figure S1 (Supplementary Materials).
Figure 10.
Figure 10. SEM micrographs of the fractured interfaces on the coating side for the coatings deposited at particle velocities of (a) 730 and (b) 855 m/s, observed at different magnifications at a tilt angle of 45°. The fractured interfaces of the coatings deposited at the other impact velocities are shown in Figure S2 (Supplementary Materials).
Figure 11.
Figure 11. SEM micrographs at different magnifications showing the fractured cross-sections of the coatings deposited at particle velocities of (a) 730; (b) 800 and (c) 855 m/s. The fractured cross-sections of the coatings at the other particle velocities are shown in Figure S3 (Supplementary Materials).
Figure 12.
Figure 12. Simulated deformation and temperature profiles of a sprayed Ti64 particle impacting a Ti64 substrate at velocities of (a-c) 730 m/s and (d-f) 855 m/s at 30 ns, with (a,d) front view, (b,e) bottom view, and (c,f) crater view.
Table 1.
Review of CS deposited Ti64 coatings on Ti64 substrates.
Table 3.
Material properties of the Ti64 alloy used for modelling.
"Materials Science"
] |
Dual Wavelength Photoplethysmography Framework for Heart Rate Calculation
The quality of heart rate (HR) measurements extracted from human photoplethysmography (PPG) signals is known to deteriorate under appreciable human motion. Auxiliary signals, such as accelerometer readings, are usually employed to detect and suppress motion artifacts. A 2019 study by Yifan Zhang and coinvestigators used the noise components extracted from an infrared PPG signal to denoise a green PPG signal from which HR was extracted. Until now, this approach had only been tested on "micro-motion" such as finger tapping. In this study, we extend this technique to allow accurate calculation of HR under high-intensity full-body repetitive "macro-motion". Our Dual Wavelength (DWL) framework was tested on PPG data collected from 14 human participants while running on a treadmill. The DWL method showed the following attributes: (1) it performed well under high-intensity full-body repetitive "macro-motion", exhibiting high accuracy in the presence of motion artifacts (compared to the leading accelerometer-dependent HR calculation techniques TROIKA and JOSS); (2) it used only PPG signals; auxiliary signals such as accelerometer signals were not needed; and (3) it was computationally efficient, and hence implementable in wearable devices. DWL yielded a Mean Absolute Error (MAE) of 1.22 ± 0.57 BPM, a Mean Absolute Error Percentage (MAEP) of 0.95 ± 0.38%, and a performance index (PI) (the frequency, in percent, of obtaining an HR estimate within ±5 BPM of the HR ground truth) of 95.88 ± 4.9%. Moreover, DWL required a short computation time of 3.0 ± 0.3 s to process a 360-second-long run.
Introduction
Multi-diagnostic wearable devices are of ongoing interest due to their ability to store and transmit information about the wearer inexpensively and efficiently. Many wearable sensors employ photoplethysmography (PPG), a low-cost optical technique used to detect blood-volume changes in the microvascular bed of tissue [1]. This technique enables noninvasive detection of the cardiovascular pulse wave generated by the elastic peripheral vascular arteries excited by the quasi-periodic contractions of the heart [2][3][4]. PPG signals are used in pulse oximeters, devices that measure the light absorbed by functional hemoglobin (oxygenated and deoxygenated hemoglobin) and produce vital signs such as heart rate (HR) or peripheral capillary oxygen saturation (SpO2), an estimate of the arterial oxygen saturation (SaO2) [5]. To obtain a PPG signal, light is typically shone through the skin and its reflection is captured by a photo-detector. In this study, we serially collect the reflection of light at two different wavelengths, namely green and infrared (IR).
In the presence of substantial human motion, the quality of the measured PPG signal deteriorates [6]. Much effort has been exerted to suppress motion artifacts in order to extract high-quality vital signs from noise-contaminated PPG signals [3,[7][8][9][10]. This study contributes to this effort.
There are two main sources of motion artifacts that could contaminate a PPG signal collected from a human in motion [9]. The first source of noise is the sensor displacement relative to its original point of contact with the skin. This displacement could alter the path of light, and hence modify the signal collected by the photo-detector [11]. The second source of noise is skin and tissue deformations caused by the sensor's movement.
Zhang et al. [9] proposed an HR calculation method that uses a dual-wavelength sensor providing an IR and a green PPG signal. The IR PPG signal was employed to construct a noise reference that was used to denoise the green PPG signal from which an HR level was extracted.
The HR calculation algorithm presented in [9] was tested on "micro-motion artifacts" such as "finger tapping" and "fist opening and closing". In the current study, we examined the applicability of a related approach to more substantial movements and dynamic scenarios. Motivated by the sensor architecture proposed in [9], we expanded the HR calculation technique to high-intensity full-body repetitive "macro-motion" exercise data. The resulting Dual Wavelength (DWL) method collects green and IR PPG data from a dual-wavelength wrist unit and processes them to estimate the participant's heart rate. The performance of DWL was documented in an extensive motion experiment involving fourteen (14) human participants. There were three separate experiments. In the first (the SNR experiment), we used all fourteen (14) participants. In the second experiment (wrist-based heart rate calculation), we used eleven (11) participants due to sensor failure on three of the participants. In the third experiment (palm-based heart rate calculation), we used twelve (12) participants due to sensor failure on two of the participants. Figure 1 shows the essentials of the DWL method. It consists of five (5) stages: (1) preprocessing; (2) motion-artifact detection; (3) motion-artifact frequency components identification; (4) denoising; and (5) heart rate estimation. The inputs to the DWL system are the green and IR PPG channels measured from a wrist unit constructed for this study (see Section 2.1). The output is an HR level. First, the green and IR PPG signals are normalized by dividing each signal's AC component by its DC component. We then check whether significant motion noise is present in the PPG signals (Section 2.3.2). If the signals appear noise-free, the normalized green PPG signal is used directly to calculate an HR value. If the signals appear noise-contaminated, we extract the noise components from the IR PPG signal and remove them from the noisy green PPG signal. We employ a Cascading Adaptive Noise Cancellation (C-ANC) architecture that uses a QR-decomposition-based least-squares lattice (QRD-LSL) algorithm [12] to denoise the green PPG signal before it is used for HR calculation. A separate decision mechanism validates the HR estimate and corrects it when noise levels are too high to produce a meaningful estimate. The rest of this paper is organized as follows. In Section 2, we present the materials and methods employed in this study. Section 2.1 describes the experimental settings along with the sensor suite. In Section 2.2, we use experimental data to present the rationale for choosing the IR PPG signal as the noise reference signal. Section 2.3 introduces the DWL framework: a method for (1) denoising the green PPG signal using the noise components extracted from an IR PPG signal, and (2) computing HR levels. Lastly, in Section 2.4, we review alternative HR calculation methods that use auxiliary sensors, namely accelerometers, as a noise source. These methods are TROIKA [7] and JOSS [8]. Section 3 presents the results of the DWL framework. In Section 3.1, we define the performance metrics used to compare the performance of the DWL method to that of our implementations of TROIKA and JOSS. In Section 3.2, we compare the actual performance of the DWL method to that of our implementations of TROIKA and JOSS. The comparison is made with respect to (1) the heart rate ground truth computed from an electrocardiography (ECG) signal, and (2) the heart rate levels obtained using TROIKA and JOSS.
Section 3.3 validates the DWL framework by testing its performance on experimental data collected from the palms (instead of the wrists) of the same participants during a second (validation) run. In Section 4, we conclude that the DWL method provides several desirable features, including the following: (1) the DWL framework uses only PPG signals, with no need for auxiliary signals (such as the accelerometer signals used by TROIKA and JOSS), and (2) the DWL framework appears to exhibit high accuracy and a lower computational burden in the presence of motion artifacts compared to TROIKA and JOSS.
Materials and Methods
In this section, the materials and methods employed in this work are presented. In Section 2.1, we present the sensors used for data collection and describe the exercise protocol followed during data collection. In our framework, noise components are extracted from an IR PPG signal; in Section 2.2, we show, using experimental data, the rationale behind the choice of the IR PPG signal as the reference noise source. In Section 2.3, the DWL framework is introduced and described in detail, along with its five (5) stages, namely, pre-processing, motion-artifact detection, motion-artifact frequency components identification, denoising, and heart rate estimation. We compare the performance of the DWL method to that of alternative HR calculation methods that use auxiliary sensors, namely accelerometers, as a noise source. These methods are TROIKA [7] and JOSS [8]; in Section 2.4, we present the frameworks of these two alternative HR calculation methods.
Experimental Protocol and Sensors Suite
We conducted a high-intensity full-body exercise experiment in which we collected PPG, electrocardiography (ECG), and tri-axial accelerometer data. The accelerometers measured accelerations in three orthogonal directions, X, Y, and Z, simultaneously [13]. Readings were obtained from fourteen (14) human participants while they were standing or running on a split-belt instrumented treadmill (Bertec Corp., Columbus, OH, USA) [14]. First, a multi-wavelength wrist oximeter unit was strapped around the participant's wrist. The wrist unit encloses two green LEDs (wavelength λ_G = 520 nm), two IR LEDs (wavelength λ_IR = 940 nm), and a photo-detector. Additionally, a tri-axial accelerometer sensor was placed on the participant's arm (right above the PPG wrist unit) and secured in place using athletic tape. Lastly, an ECG sensor was mounted onto the participant's chest using adhesive electrodes. Athletic tape was wrapped around each participant's chest to ensure the sensor's stability and good skin contact. Table 1 lists all the instruments and sensors used in the experiment. Both the ECG and accelerometer data were recorded using the Delsys EMGworks software. The multi-wavelength PPG wrist-unit data were recorded using an Arduino UNO. All signals were sampled at 100 Hz. Raw data were processed using MATLAB 2022b (MathWorks, Natick, MA, USA) [15]. All raw data are available through the GitHub repository in [16].
The ECG signal was used to calculate the HR "ground truth" values. We manually labeled the R peaks of all ECG signals. The HR ground truth at time step l, HR_GT(l), is obtained using the relationship

$$HR_{GT}(l) = \frac{60}{\delta_{R-R}(l)}, \tag{1}$$

where δ_{R-R}(l) is the average time difference (in seconds) between each two consecutive R peaks within the 8-second-long window at time step l. The experimental protocol followed during data collection was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of the New Jersey Institute of Technology (protocol code 2108010504; approved on 14 September 2021). All participants were physically fit, healthy, and athletic volunteers. Each participant was asked to run on a treadmill following an exercise profile comprising six (6) stages. At each stage, the treadmill speed, and hence the exercise intensity, was varied.
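A minimal Python sketch of Equation (1) is given below; the function and variable names are illustrative, not from the study's code.

```python
import numpy as np

def hr_ground_truth(r_peak_times, t, window=8.0):
    """Equation (1): HR ground truth (BPM) from the average R-R interval
    within the 8-s window ending at time t (R-peak times in seconds)."""
    in_window = r_peak_times[(r_peak_times > t - window) & (r_peak_times <= t)]
    delta_rr = np.mean(np.diff(in_window))   # average R-R interval, s
    return 60.0 / delta_rr
```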
Infrared PPG Signal as Noise Reference Signal
According to [9], IR PPG signals are more affected by motion artifacts than green PPG signals. To verify this behavior in our experiment, we calculated the signal-to-noise ratios (SNRs) of both the green and IR PPG signals. The SNR is defined as

$$\mathrm{SNR}\ (\mathrm{in\ dB}) = 10\log_{10}\frac{P_{desired\ signal}}{P_{noise}},$$

where P_desired signal and P_noise are the power of the participant's heart rate component and of the motion artifact components, respectively. In order to calculate an SNR value, the desired and noise signal components must be identified and separated. At this stage, we used the participant's HR ground truth (obtained from an ECG signal collected simultaneously with the PPG signals) using Equation (1) to determine the desired signal component.
The desired signal and noise components were obtained for each of the green and IR PPG signals as follows. First, the green and IR signals were normalized by dividing their AC components by their DC components. The desired signal component (the component that contains heart rate information) of the normalized PPG signal was obtained by applying two bandpass filters centered at the participant's HR frequency (the fundamental frequency) and its second harmonic [9]. In this step, the participant's HR was obtained from the ECG signal. The noise component was obtained by subtracting the desired signal component from the normalized signal.
We calculated the SNR values of the green and IR PPG signals for all fourteen (14) participants in the following manner. Every 2 s, the preceding 8-second-long PPG segment was used to obtain an SNR value. In total, each participant had between 175 and 177 SNR values for each PPG signal (green and IR). The first and last minutes of the collected PPG data were omitted, since these data segments were noise-free. The SNR values of all participants were grouped together, and their distribution is presented as boxplots in Figure 2 [22].
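The SNR computation of this section can be sketched as follows; the Butterworth filter type and the ±0.15 Hz half-bandwidth are assumptions made for illustration, since the paper specifies only that two bandpass filters are centered at the HR fundamental and its second harmonic.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 100.0  # Hz, sampling rate used in the experiment

def snr_db(ppg_norm, hr_hz, half_bw=0.15):
    """SNR of a normalized 8-s PPG segment given the ECG-derived HR (Hz)."""
    desired = np.zeros_like(ppg_norm)
    for f0 in (hr_hz, 2.0 * hr_hz):          # fundamental and second harmonic
        sos = butter(4, [f0 - half_bw, f0 + half_bw], btype="bandpass",
                     fs=FS, output="sos")
        desired += sosfiltfilt(sos, ppg_norm)
    noise = ppg_norm - desired               # everything outside the HR bands
    return 10.0 * np.log10(np.mean(desired**2) / np.mean(noise**2))
```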
In our experiment, the mean SNR value of the IR PPG signal, μ_SNR^IR = −8.5 dB (black dot in Figure 2), was lower than the mean SNR value of the green PPG signal, μ_SNR^G = −4.8 dB (green dot in Figure 2). These results are statistically significant at a significance level of α = 0.01. This difference supports, with experimental data, the choice of the IR PPG as the noise reference signal.
Figure 2. SNR values of the IR and green PPG signals, respectively, calculated from all fourteen (14) participants. The dots represent the mean SNR values. The red bars represent the median SNR values. The red '+' signs represent outliers.
DWL Framework
The proposed DWL framework consists of the following stages (Figure 1): A. Pre-processing; B. Motion-artifact detection; C. Motion-artifact frequency components identification; D. Denoising; and E. Heart rate estimation. The inputs to the system are the raw green and IR PPG signals measured using the dual-wavelength PPG wrist-unit sensor (described in Section 2.1). The output is an HR estimate, ĤR(l), at time step l (the initial time step is l = 1). We refer to the average of the latest Z estimates of the heart rate as ĤR^(Z)(l). Figure 3 is a block diagram of the DWL method. The system produces a new estimate of HR at every time step (ĤR(l) at time step l). The time between two subsequent windows in our study was 2 s. In addition, the system produces three search ranges: the "narrow search range", ∆_n(l + 1); the "medium search range", ∆_m(l + 1); and the "wide search range", ∆_w(l + 1). The ranges ∆_m(l + 1) and ∆_n(l + 1), which are used in the motion-artifact frequency components identification process of Section 2.3.3, are centered at ĤR(l). The range ∆_w(l + 1), which is used in the heart rate estimation process of Section 2.3.5, is centered at ĤR^(6)(l), the average of the 6 previous heart rate estimates. The ranges satisfy ∆_n(l + 1) < ∆_m(l + 1) < ∆_w(l + 1). Moreover, ∆_n(l + 1) = ∆_m(l + 1)/2 (for details on how we calculated ∆_w(l + 1) and ∆_m(l + 1), see Equations (A2) and (A4) in Appendix A, respectively). Lastly, we calculate a short-term 3-point-average heart rate, ĤR^(3)(l), which we provide to users and employ in Section 3.2 to assess the performance of DWL. Figure 4 illustrates a typical IR PPG spectrum. The magenta dashed line in Figure 4a is the heart rate estimated at time step l, ĤR(l). The black dotted line in Figure 4b is the average of the 6 previous heart rate estimates at time step l, ĤR^(6)(l). In this example, ĤR(l) is 1.5 Hz and ĤR^(6)(l) is 1.45 Hz. Additionally, Figure 4 shows the "wide search range", ∆_w(l + 1), as a green dashed rectangle; the "medium search range", ∆_m(l + 1), as a red dashed rectangle; and the "narrow search range", ∆_n(l + 1), as a blue dashed rectangle.
Pre-Processing
First, both the green and IR PPG signals are normalized (block A of Figure 3). Normalization is done by dividing the signal's AC component by its DC component [23]. The AC component is obtained by passing the raw PPG signal through a Chebyshev Type II bandpass filter of order 5 with a passband from 0.5 to 10 Hz. The DC component is obtained by passing the raw signal through a Chebyshev Type II low-pass filter of order 5 with a passband edge of 0.5 Hz.
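A sketch of this normalization step using SciPy is shown below; the 40 dB stopband attenuation is an assumed design value, since the paper specifies only the filter type, order and band edges.

```python
from scipy.signal import cheby2, sosfiltfilt

FS = 100.0  # Hz, sampling rate used in the experiment

# Chebyshev Type II filters of order 5, as in Section 2.3.1; the 40 dB
# stopband attenuation is an assumption.
SOS_AC = cheby2(5, 40, [0.5, 10.0], btype="bandpass", fs=FS, output="sos")
SOS_DC = cheby2(5, 40, 0.5, btype="lowpass", fs=FS, output="sos")

def normalize_ppg(raw):
    """Return the AC/DC-normalized PPG signal."""
    ac = sosfiltfilt(SOS_AC, raw)   # pulsatile (AC) component
    dc = sosfiltfilt(SOS_DC, raw)   # slowly varying (DC) baseline
    return ac / dc
```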
Motion-Artifact Detection
Motion-artifact detection is used to determine whether the PPG signals are contaminated by motion noise (if they are not, we can bypass unnecessary noise-suppression operations). The PPG signals go through the following three (3) local detectors to determine whether appreciable levels of motion noise are present (block B of Figure 3):

Local Detector 1 (D_1)-Number of Peaks: The number of dominant peaks (whose magnitude exceeds 30% of the maximum peak) in the frequency spectrum of the green PPG signal, denoted N_p, is calculated. If N_p exceeds two (2), D_1 indicates that the signal is contaminated with motion noise. If N_p is 1 or 2, then we conclude that no appreciable motion noise is present, since the frequency of the heart rate, and sometimes its second harmonic, are typically the only components observed in the spectrum of a clean PPG signal.
Local Detector 2 (D_2)-Power of Green Signal: The power of the green PPG signal calculated at the beginning of the experiment (when the participant is at rest) is taken as the reference power, denoted P_ref. At each time step l, the power of the green PPG, P_G(l), is calculated and compared to the reference power P_ref. If P_G(l) is more than (1 + κ)·P_ref, D_2 indicates that the green PPG signal is contaminated with motion noise. The amplitude of the PPG signal might change over time [24]. Therefore, the reference power P_ref is updated whenever no motion is detected in the system for five (5) consecutive time steps (the global detector D_0 returns '0'). In this case, the updated value of P_ref is set to the power of the green PPG signal calculated at the current time step, l. In this study, we used κ = 0.2.
Local Detector 3 (D_3)-Pearson Correlation between Green and IR PPG Signals: The correlation between the green and IR PPG signals is also used to assess noise contamination in the green signal. If the correlation between the green and IR PPG signals, ρ_green,IR, is below a certain threshold (we used 0.8), then D_3 decides that the green PPG signal is contaminated with motion noise.
Global Detector-Noise Detector: The decisions of the three local detectors are fed into a global detector that decides whether the signal is noise contaminated. The global detector is given by

D_0 = D_1 ∨ D_2 ∨ D_3,

where "∨" represents the OR logic operator.
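The three local detectors and their OR combination can be sketched as follows. The function names, the assumed 0.5-4 Hz search band for D_1, and the FFT-based peak counting are our own choices; the 30% peak threshold, κ = 0.2, and the 0.8 correlation threshold come from the text.

```python
import numpy as np
from scipy.signal import find_peaks

def global_detector(green, ir, p_ref, fs, kappa=0.2, rho_min=0.8):
    """D0 = D1 OR D2 OR D3 on one normalized 8-s window (a sketch)."""
    spec = np.abs(np.fft.rfft(green))
    freqs = np.fft.rfftfreq(len(green), 1 / fs)
    band = (freqs >= 0.5) & (freqs <= 4.0)          # assumed HR band
    mags = spec[band]
    # D1: more than two dominant peaks (> 30% of the maximum peak)
    peaks, _ = find_peaks(mags, height=0.3 * mags.max())
    d1 = len(peaks) > 2
    # D2: window power exceeds (1 + kappa) times the rest reference power
    d2 = np.mean(np.asarray(green) ** 2) > (1 + kappa) * p_ref
    # D3: green/IR Pearson correlation below the 0.8 threshold
    d3 = np.corrcoef(green, ir)[0, 1] < rho_min
    return d1 or d2 or d3
```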
Motion-Artifact Frequency Components Identification
If motion artifacts are detected in the normalized green PPG signal, we use the normalized IR signal to build the motion noise component set N_noise (block C of Figure 3). N_noise can be written as N_noise = { f_n,i | 1 ≤ i ≤ N_n }, where f_n,i is the i-th discrete noise frequency component and N_n is the number of elements in the set N_noise. The set N_noise, which contains all the noise frequency components that we aim to remove from the normalized green PPG signal, is obtained using the following five (5) steps in sequence. The first three steps capture noise with relatively high intensity, usually harmonically related frequency pairs that contaminate the PPG signals. The last two steps compare the IR and green signal spectra to discover additional noise components that appear with reduced intensity in the IR spectrum.
Step 1-Identification of dominant frequency components. First, we capture the dominant frequency components in the spectrum of the normalized IR PPG signal. These are the frequencies (between 0.5 and 4 Hz) whose magnitude exceeds 50% of the highest peak in the IR PPG spectrum. Figure 5, created for illustration purposes, depicts how we capture dominant peaks from a typical IR signal. In this scenario, the highest peak (which actually corresponds to the participant's HR) is F_1. Two other dominant peaks are shown as red circles (F_2 and F_3). Typically, the peaks captured in step 1 include the frequency of the participant's HR as well as the frequencies of dominant noise components. We add all of them (F_1, F_2, and F_3 in our example) to N_noise with the understanding that one of them may correspond to the participant's HR and may therefore need to be removed from N_noise later.

Step 2-Identification of harmonic frequency components. Noise components created by repetitive motion (e.g., when the participant is walking or running) typically occur in harmonically related pairs [25]. It is possible, however, that the PPG signal contains pairs of harmonically related noise components whose magnitude is smaller than the 50% threshold used in step 1 to identify dominant frequencies.
Step 2 captures pairs of fundamental frequencies and their second harmonics present in the spectrum of the normalized IR PPG signal. Here, we look at all peaks whose magnitudes are above 30% of the highest peak in the IR PPG spectrum. For each such peak, we search for a harmonic at double its frequency. If a pair of harmonically related frequencies is thus discovered, its component(s) that were not flagged in step 1 are added to the noise frequency set N_noise. Again, N_noise may still contain at this stage a component that corresponds to the participant's true HR. Figure 6 uses the same spectrum shown in Figure 5 to illustrate how a pair of harmonically related components (F_A, F_B = F_3) is discovered. Of this pair, F_B was known to us already from step 1 (it is the same as F_3 in Figure 5), and F_A, discovered by step 2, is added to N_noise. So now, N_noise = {F_1, F_2, F_3, F_A}.

Step 3-Removal of the heart rate from the noise set. As described in Section 2.3, our system creates a new estimate of the heart rate, ĤR(l), at every time step l. A new time step starts every 2 s, when l is incremented by 1. Moreover, at step l+1 we calculate ∆_w(l+1) (the "wide search range"), which is where we search for ĤR(l+1).
Next, frequency components in N_noise that were captured during steps 1 and 2 and are close to the heart rate estimated at time step l (ĤR(l)) are removed from N_noise, as we suspect they do not represent noise but rather the participant's HR. To be precise, at time step l+1, we remove from N_noise all the noise components in the "medium search range" ∆_m(l+1). Figure 7 continues the examples of Figures 5 and 6 to illustrate step 3. In Figure 7a,b, we show the estimate of the participant's HR at time step l, denoted ĤR(l). We also show the "medium search range", [ĤR(l) − ∆_m(l+1)/2, ĤR(l) + ∆_m(l+1)/2], from which we remove dominant frequencies deposited earlier into N_noise. The red squares in Figure 7a represent the frequency components that we obtained from steps 1 and 2, all of which are currently in N_noise = {F_1, F_2, F_3, F_A}. We now discard the frequency around 1.2 Hz (labeled F_1) since it falls in ∆_m(l+1), the "medium search range" (region represented by a red dashed rectangle in Figure 7a). Figure 7b shows (in red squares) the noise frequency components that are left in the noise set, N_noise = {F_2, F_3, F_A}. N_noise no longer contains the participant's HR.
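A compact sketch of steps 1-3 operating on a precomputed magnitude spectrum is shown below. The 50% and 30% thresholds, the 0.5-4 Hz band, and the removal of components inside ∆_m(l+1) follow the text; the harmonic-matching tolerance `tol` is our own assumption.

```python
import numpy as np
from scipy.signal import find_peaks

def build_noise_set(freqs, ir_mag, hr_prev, delta_m, tol=0.05):
    """Steps 1-3: collect candidate noise frequencies from the IR spectrum,
    then drop components inside the medium search range around HR(l)."""
    band = (freqs >= 0.5) & (freqs <= 4.0)
    f, m = freqs[band], ir_mag[band]
    # Step 1: dominant peaks whose magnitude exceeds 50% of the highest peak
    idx, _ = find_peaks(m, height=0.5 * m.max())
    noise = set(float(x) for x in f[idx])
    # Step 2: harmonically related pairs among peaks above 30% of the max
    idx30, _ = find_peaks(m, height=0.3 * m.max())
    cand = f[idx30]
    for fa in cand:
        match = np.abs(cand - 2 * fa) < tol     # second harmonic present?
        if match.any():
            noise.add(float(fa))
            noise.add(float(cand[match][0]))
    # Step 3: remove components that fall in [HR(l) - dm/2, HR(l) + dm/2]
    return sorted(fn for fn in noise if abs(fn - hr_prev) > delta_m / 2)
```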
The next two steps seek additional noise components, often attributed to repetitive movements by the participant, through comparison of the IR and green spectra.
Step 4: Step 4 focuses on instances where the noise set N_noise, after step 3, has only one noise component, f_n,1. In this case, we look at the green spectrum. If we find a component at half f_n,1 (f_n,1/2) or twice f_n,1 (2 × f_n,1) in the green spectrum, we add this component to N_noise. The only exception is if the component we seek to add falls into the narrow search range ∆_n(l+1) around ĤR(l), [ĤR(l) − ∆_n(l+1)/2, ĤR(l) + ∆_n(l+1)/2]; in this case, we refrain from adding it to the set N_noise.
Step 5: This step addresses spectra that are dominated by vigorous limb swinging by the participant, which may cause displacement of the sensor. In this scenario, the green PPG signal is typically dominated by two high-intensity, harmonically related noise frequencies which may dwarf the component at the heart rate frequency. If these frequency components were not already placed in N_noise after steps 1-3, they are added to N_noise at this step. This step is automatically triggered when all the following conditions are met: (a) the IR spectrum contains only one significant frequency component that dominates the spectrum; (b) the green spectrum contains only one pair of significant harmonically related frequencies; and (c) the dominant frequency component present in the IR spectrum matches one of the harmonically related frequencies discovered in the green spectrum.

Figure 7. Frequency spectrum of a typical IR PPG signal. ĤR(l) is the heart-rate estimate at time step l. ∆_m(l+1) is the "medium search range", represented by a red dashed rectangle. The frequency components we obtained from steps 1 and 2, namely, F_1, F_A, F_2, and F_3 = F_B, are represented by red squares. In (a), frequency F_1 falls within ∆_m(l+1). In (b), we discard the frequency F_1 since it falls within ∆_m(l+1) and leave the rest in N_noise (F_A, F_2, and F_3 = F_B).

Figure 8 is a real-life example that illustrates this scenario (signals were collected from participant 10 in our experiment, around time 136 s). We show the spectrum of participant 10's IR signal in Figure 8a and green signal in Figure 8b. We show in magenta the heart rate estimate at time step l, ĤR(l). The green spectrum captures the high-intensity, harmonically related frequency pair F_1 and F_2 in Figure 8b. The IR spectrum (Figure 8a) is dominated by the frequency F_A, which is equal to frequency F_2 from the green spectrum, but does not capture a noise component at F_1. Here, frequencies F_1 and F_A = F_2 are put into N_noise.
At the end of this stage, the set N_noise contains N_n elements that correspond to the noise frequencies we wish to remove from the normalized green PPG signal.
Denoising
Adaptive Noise Cancellation (ANC) filters are often employed to eliminate in-band motion artifacts [26,27]. In-band noise in our case occurs when the spectra of motion artifacts overlap significantly with that of the PPG signal [28]. An ANC filter for our environment would use as inputs (1) a noise contaminated signal, and (2) a noise reference signal. The ANC filter seeks to eliminate the noise components (measured by the reference signal) from the input noise contaminated signal and provide a noise-free version of the input signal.
Motivated by the architecture in [29], we employ a Cascading Adaptive Noise Cancellation (C-ANC) architecture to remove all the elements of the set N_noise = { f_n,i | 1 ≤ i ≤ N_n } (developed in Section 2.3.3) from the green PPG signal, one element at a time. The block diagram of the proposed C-ANC is shown in Figure 9. We show the frequency spectrum of the input signal in Figure 9 (spectrum A). This is the green signal collected from participant 3 around time 66 s. The spectrum contains three noise frequency components that we wish to eliminate from the signal. The signal collected at the output of the C-ANC (spectrum D in Figure 9) does not contain any of the noise components; only the HR frequency component remains in the spectrum. A total of N_n ANC stages are used to remove the noise components of N_noise from the green PPG signal. At the i-th stage (1 ≤ i ≤ N_n), the noise reference signal is a pure sinusoid of frequency f_n,i. For instance, the first ANC filter block shown in Figure 9 removes the first noise frequency component, f_n,1, from the normalized green PPG signal (see spectrum B of Figure 9). The output of the first block is denoted G_PPG,1. G_PPG,1 is fed to the next block, where the second noise frequency component, f_n,2, is removed (see spectrum C of Figure 9). The process is repeated until all noise components are removed from the normalized green PPG signal. The final output, G_PPG,N_n, is a noise-free version of the green PPG signal. In the proposed method, the QR-decomposition-based least-squares lattice (QRD-LSL) adaptive filter algorithm is used to remove the noise components from the green PPG signal [30]. The method incorporates the desirable features of recursive least-squares estimation (fast convergence rate), QR-decomposition (numerical stability), and the lattice structure (computational efficiency) [12]. The implementation of the QRD-LSL filter in our study used the built-in MATLAB function "AdaptiveLatticeFilter" [31] with 10 filter taps and a forgetting factor of 0.99.

Figure 9. Cascading Adaptive Noise Canceler (C-ANC) block diagram. Spectrum A is of the noise-contaminated green PPG signal, which is fed to the C-ANC. Spectrum B represents the green PPG signal's frequency spectrum after removal of the first noise frequency component, f_n,1. Spectrum C is obtained after removing a second noise frequency component, f_n,2. At this stage, f_n,1 and f_n,2 have been removed from the input signal. Spectrum D is of the clean green PPG signal. It is obtained at the output of the C-ANC after all noise frequency components have been eliminated.
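The cascade can be emulated, under simplifying assumptions, with the following Python sketch. The cascading structure and the pure-sinusoid references are as described in the text; the normalized-LMS update substituted for the paper's QRD-LSL lattice filter, and the step size `mu`, are our own choices.

```python
import numpy as np

def anc_stage(x, f_noise, fs, taps=10, mu=0.5, eps=1e-8):
    """One ANC stage: cancel one noise frequency from x.

    The reference is a pure sinusoid at f_noise; a tapped delay line lets the
    adaptive filter synthesize the amplitude and phase of that component.
    NLMS stands in here for the paper's QRD-LSL algorithm (a simplification)."""
    n = np.arange(len(x))
    ref = np.sin(2 * np.pi * f_noise * n / fs)
    w = np.zeros(taps)
    out = x.copy()
    for k in range(taps, len(x)):
        u = ref[k - taps:k][::-1]
        e = x[k] - w @ u                  # error = primary minus noise estimate
        w += mu * e * u / (u @ u + eps)   # normalized-LMS weight update
        out[k] = e
    return out

def cascade_anc(green, noise_freqs, fs):
    """C-ANC: strip each element of N_noise from the green PPG, one stage
    at a time; the final output is the denoised green signal."""
    y = np.asarray(green, dtype=float)
    for f in noise_freqs:
        y = anc_stage(y, f, fs)
    return y
```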
Heart Rate Estimation
In this stage (see block E of Figure 3), the green PPG signal is used to compute an HR value. If no noise was detected in the green PPG (D_0 = 0), then the normalized green PPG is used for the heart rate calculation. When noise was detected in the green PPG signal, an HR value is obtained from the denoised green signal (obtained at the output of block D in Figure 3, also shown in Figure 9). The "Heart Rate Estimation" stage comprises two steps, namely, "Initialization" and "Heart Rate Calculation".
Initialization (block E1 of Figure 3). This is a process of capturing a baseline HR at rest. In our experiment, it was a one-minute phase during which participants were asked to remain steady in order to capture noise-free green and IR PPG signals. To calculate the initial HR estimate, ĤR(1) at time step l = 1, we used the frequency spectrum of the normalized green PPG signal. ĤR(1) corresponds to the highest peak within the initial search range of 0.5 to 3 Hz (which corresponds to 30 to 180 BPM).
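In code, the initialization reduces to picking the strongest spectral peak in the initial search range. A minimal sketch follows; the FFT-based spectrum is our choice of estimator.

```python
import numpy as np

def initial_hr(green_norm, fs):
    """HR(1): frequency (Hz) of the highest peak between 0.5 and 3 Hz in the
    spectrum of the normalized green PPG from the one-minute rest phase."""
    spec = np.abs(np.fft.rfft(green_norm))
    freqs = np.fft.rfftfreq(len(green_norm), 1 / fs)
    band = (freqs >= 0.5) & (freqs <= 3.0)
    return freqs[band][np.argmax(spec[band])]   # multiply by 60 for BPM
```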
Heart Rate Calculation (block E2 of Figure 3). At time step l+1, the heart rate calculation method we propose employs the following variables in order to generate an HR estimate, ĤR(l+1):
1. The heart rate estimated at the previous time step l, ĤR(l).
2. A heart rate candidate, HR_cand(l+1), which is obtained from the spectrum of the green PPG signal.
3. A heart rate prediction, HR_pred(l+1), which is obtained from the long-term (LT) trend of the past six (6) HR estimates. The LT trend is obtained using STL, the Seasonal-Trend decomposition using LOESS (locally estimated scatterplot smoothing) [32]. In this study, we used the MATLAB implementation, trenddecomp.
To obtain HR_cand(l+1), we follow the procedure recommended in [7] and consider at most three dominant peaks in the green spectrum whose magnitudes exceed 50% of the maximum peak; HR_cand(l+1) is obtained by averaging all the peaks considered. The estimated heart rate, ĤR(l+1), is then calculated as

ĤR(l+1) = β · HR_cand(l+1) + (1 − β) · HR_pred(l+1),

where β is a constant we set to 0.9.
The heart rate calculation process we used requires the availability of the previous six HR estimates in order to generate an HR prediction, HR_pred(l+1), at time step l+1. Therefore, from time steps l = 2 to l = 6, each HR estimate ĤR(2) through ĤR(6) corresponds to the highest peak in the green spectrum within the wide search range ∆_w(l+1) (ĤR(l+1) ∈ [ĤR^(6)(l) ± ∆_w(l+1)/2]). If no such peak is detected, we increment ∆_w(l+1) by 0.02 Hz (or 1.2 BPM) and search again for a peak. This process repeats until a peak is found. During these initial steps, ĤR^(6)(l) is the average of all the previously calculated HR estimates (see Equation (3)).
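The widening peak search described above might be sketched as follows; treating the largest in-range spectral bin as the peak, and the safeguard that stops the widening once the whole band is covered, are our simplifications.

```python
import numpy as np

def peak_in_wide_range(freqs, green_mag, hr6_avg, delta_w, step=0.02):
    """Find the highest green-spectrum peak inside the wide search range
    centered at the 6-point HR average; widen by 0.02 Hz until one is found."""
    full_span = freqs[-1] - freqs[0]
    while True:
        sel = (freqs >= hr6_avg - delta_w / 2) & (freqs <= hr6_avg + delta_w / 2)
        if sel.any() and green_mag[sel].max() > 0:
            return freqs[sel][np.argmax(green_mag[sel])], delta_w
        if delta_w > full_span:      # safeguard added for this sketch
            return hr6_avg, delta_w
        delta_w += step              # widen by 0.02 Hz (1.2 BPM) and retry
```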
Alternative HR Calculation Methods
In most studies involving PPG signals collected from humans in motion, suitable reference signals, representing motion artifacts, were obtained through additional hardware [28]. For example, when the PPG sensor is mounted on the wrist of a running participant, accelerometer sensors mounted on the participant's wrist are often used as noise reference signals [33][34][35].
TROIKA is an HR calculation framework proposed by Zhang et al. [7]. TROIKA is based on Singular Spectrum Analysis (SSA) [36] followed by Sparse Signal Reconstruction (SSR) [37] to eliminate the noise-dominant components present in PPG signals. The inputs to TROIKA are a green PPG signal and X, Y, and Z accelerometer data. The output is an HR estimate. In our implementation of TROIKA, the noise components were obtained from a tri-axial accelerometer. In [7], TROIKA was tested on data collected from a wrist-worn sensor (enclosing a green PPG channel and X, Y, and Z accelerometer channels) from twelve (12) participants during fast running at a peak speed of 15 km/h. The heart rate average absolute error of TROIKA in this test was 2.34 beats per minute (BPM).
A related method is based on Zhang's Joint Sparse Spectrum Reconstruction (JOSS). It was shown in [8] to exhibit a heart rate average absolute error as small as 1.28 BPM when tested on the same twelve (12) participants used in Zhang's TROIKA study [7]. In JOSS, the input signals are a green PPG signal and X, Y, and Z accelerometer data. The accelerometer data are treated as the noise signals. The output is an HR estimate. Compared to TROIKA, where the PPG and accelerometer signals were sampled at 125 Hz, JOSS's low sampling rate of 25 Hz is an attractive feature that gives JOSS the potential to be implemented in Very Large-Scale Integration (VLSI) circuits or Field-Programmable Gate Arrays (FPGAs) in wearable devices [8].
The HR calculation mechanism of the DWL method was inspired by those of TROIKA and JOSS. We compare the quality of the HR calculated by the DWL method, which does not require accelerometers, to that of our implementations of the accelerometer-dependent TROIKA and JOSS. The TROIKA and JOSS experimental results were obtained from the same participants that we employed in the analysis of the DWL method.
Results
In this section, the HR values calculated using the DWL framework of Section 2.3 are computed and analyzed. First, we define (in Section 3.1) the performance metrics used to compare the performance of the DWL method to that of our implementations of TROIKA and JOSS. In Section 3.2, we assess the performance of the DWL method (using the performance metrics of Section 3.1) on data collected from the participants' wrists. This comparison is made with respect to (1) the HR ground truth computed from an ECG signal, and (2) the HR levels obtained using TROIKA and JOSS. Lastly, in Section 3.3, we validate the DWL framework by comparing its performance on experimental data collected from the palms (instead of the wrists) of the same participants during a second run (validation run).
Performance Metrics
To assess, evaluate, and compare the HR estimation performance of the DWL method to TROIKA and JOSS, we used four metrics, namely: Mean Absolute Error (MAE) (Equation (9)); Mean Absolute Error Percentage (MAEP) (Equation (10)); a performance index (PI) [38] (Equation (11)), which is the frequency, in percent, of obtaining an HR estimate that is within ±5 BPM of the HR ground truth; and computation time (CT). We defined CT to be the total time (in seconds) that an algorithm takes to generate heart rate levels from the entire 360-second-long off-line data record collected during the experimental run. We compare the HR values calculated by the three tested methods to ground truth values obtained from an ECG signal that is simultaneously recorded, hence synced, with the green and IR PPG waveforms and the X, Y, and Z accelerometer data. All R peaks in the ECG signal were manually labeled. The ground truth HR was obtained using Equation (1). The relevant definitions are

MAE = (1/L) Σ_{l=1}^{L} ∆(l),    (9)

MAEP = (100%/L) Σ_{l=1}^{L} ∆(l)/BPM_GT(l),    (10)

PI = (L_±5 / L) × 100%,    (11)

where L is the total number of HR estimates and L_±5 is the number of time steps at which ∆(l) ≤ 5 BPM. In Equations (9)-(11), ∆(l) is defined as

∆(l) = |BPM_HRmethod(l) − BPM_GT(l)|,

where |·| is the absolute value. BPM_HRmethod(l) is the HR in beats per minute (BPM) calculated using each of the tested methods (DWL, TROIKA, and JOSS) at time step l. BPM_GT(l) is the HR ground truth value in BPM, obtained as BPM_GT(l) = HR_GT(l) × 60, where HR_GT(l) is calculated using Equation (1).
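Equations (9)-(11) translate directly into a few lines of Python; the function and argument names below are our own.

```python
import numpy as np

def performance_metrics(hr_est, hr_gt):
    """MAE (BPM), MAEP (%), and PI (%) per Equations (9)-(11); inputs in BPM."""
    est = np.asarray(hr_est, dtype=float)
    gt = np.asarray(hr_gt, dtype=float)
    delta = np.abs(est - gt)                # Delta(l): per-step absolute error
    mae = delta.mean()                      # Equation (9)
    maep = 100.0 * (delta / gt).mean()      # Equation (10)
    pi = 100.0 * (delta <= 5.0).mean()      # Equation (11): within ±5 BPM
    return mae, maep, pi
```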
DWL Performance on Wrist Data
Data were collected from fourteen (14) participants while standing, walking, and running on the treadmill, following the experimental protocol described in Section 2.1. In this section, we analyze data collected from participants 1 to 11. Data from participants 12, 13, and 14 are not included in our analysis since, for these participants, the system suffered from a physical malfunction (intermittent readings due to loss of sensor contact). However, we still provide the data for these participants in the repository in [16]. Every 2 s, the preceding 8-second-long green and IR PPG data were used to generate a short-term 3-point-average HR estimate, ĤR^(3)(l), using the DWL method. HR levels obtained using DWL are compared to those of TROIKA and JOSS.
For the TROIKA implementation, we used a sampling rate of 100 Hz. We recreated the TROIKA code using MATLAB. Our code was tested on the same dataset of the TROIKA paper and compared to the results presented in [7]. The results using our code are very close to the results presented in the TROIKA paper.
For the JOSS implementation, we used a sampling rate of 25 Hz, as suggested in the JOSS paper [8]. We recreated the JOSS code using MATLAB. Our code was tested on the dataset used in JOSS paper and compared to the results presented in [8]. The results using our code are very close to the results presented in the JOSS paper.
As examples, we show in Figure 10 the HR calculated for the whole experimental run for two participants, participant 3 (Figure 10a) and participant 10 (Figure 10b). We use red circles, green squares, and blue triangles to represent the HR values calculated using DWL, TROIKA, and JOSS, respectively. The ground truth HR is the solid black line. In Figure 10a, all three methods generate accurate HR estimates (the magnitude of the noise present in the signals of participant 3 was small). For participant 10 (see Figure 10b), however, TROIKA lost track of the correct heart rate from 120 to 175 s and from 250 to 325 s. This phenomenon (losing track of the correct HR) is referred to as Lock Loss. Similarly, JOSS suffered from a Lock Loss from 225 s until the end of the experimental run. During these intervals, the DWL method was still able to estimate the participant's HR accurately (see the red circles of Figure 10b).
We calculate MAE, MAEP, PI, and CT for all eleven (11) experimental participants and present them in Tables 2-5, respectively. In Table 2, we show the MAE for DWL, TROIKA, and JOSS, and report the MAE mean and standard deviation for each method in its second-to-last row. Table 3 summarizes the MAEP for DWL, TROIKA, and JOSS; the MAEP mean and standard deviation for each method are reported in the last row of Table 3. Moreover, we calculate the PI for DWL, TROIKA, and JOSS and report it in Table 4; the last row of Table 4 gives the PI mean and standard deviation of all participants.

As shown in Table 2, the average MAE for all eleven participants using the DWL method is 1.22|0.57 BPM ("mean|standard deviation"), which is smaller than the average MAE of TROIKA (3.24|2.82 BPM) and of JOSS (11.98|25.79 BPM). When we exclude participants who suffered from Lock Loss (shown in the last row of Table 2), DWL (with the same average MAE = 1.22|0.57 BPM) still yields a smaller average MAE than TROIKA (average MAE = 2.05|1.03 BPM) and JOSS (average MAE = 2.11|1.24 BPM). Note that the MAE calculated using the DWL method did not exceed 5 BPM for any of the participants; this was not the case for TROIKA and JOSS. Participant 10 presents an example where the MAE of TROIKA (9.34 BPM) and of JOSS (21.8 BPM) exceeds 5 BPM, whereas the MAE of the DWL method is 0.85 BPM.

Table 3. MAEP in % for all eleven (11) experimental participants, using DWL, TROIKA, and JOSS (ideal MAEP is 0%). The last row shows the MAEP average of all eleven (11) participants, shown as "mean|standard deviation".

In addition to the MAE, we calculate the average MAEP of all three methods for all eleven participants. The average MAEP of the DWL method, 0.95|0.38%, is smaller than the average MAEP of TROIKA (2.58|2.19%) and JOSS (8.68|17.6%) (see Table 3). Table 4 summarizes the PI values for all eleven (11) participants. The PI of the DWL method is larger than the PI of TROIKA and JOSS. For instance, on average, the PI of the DWL method is 95.88|4.9%, which is greater than that of TROIKA (83.87|12.75%) and JOSS (78.62|26.16%).
The CT is an indication of the algorithm's computational complexity. In order to be implemented in wearable devices, an algorithm should be able to run in real time and be energy efficient; a desirable algorithm should therefore have a small CT. Table 5 shows the CT of DWL, TROIKA, and JOSS for participants 1 to 11. The average CT of DWL is smaller than that of TROIKA and JOSS. For instance, the average CT of DWL, 3.0|0.3 s, is smaller than the average CT of TROIKA (247.7|43.8 s) and of JOSS (8.5|0.24 s).

Table 5. CT in seconds for all eleven (11) experimental participants, using DWL, TROIKA, and JOSS. The last row shows the CT average of all eleven (11) participants, shown as "mean|standard deviation".

Additionally, we show the Bland-Altman plot (Figure 11a) of the HR values computed using the DWL method for participants one (1) through eleven (11). The Bland-Altman plot describes the agreement between two quantitative measurements (A and B) by constructing the Limits of Agreement (LOA). These statistical limits are calculated using the mean and the standard deviation of the differences between the two measurements. The resulting graph is a scatter plot in which the y-axis shows the difference between the two paired measurements (A − B) and the x-axis represents the average of these measurements ((A + B)/2) [39]. The LOA we use is [µ − 1.96 × σ, µ + 1.96 × σ] (1.96 × σ corresponds to a 95% confidence level), where µ and σ are the mean and standard deviation of the differences between each HR estimate and the associated ground-truth HR [7]. The LOA in Figure 11a is [−4.9, 4.8] BPM. Moreover, we construct the scatter plot of the HR estimated using the DWL method versus the associated ground truth HR for participants one (1) through eleven (11). The scatter plot is shown in Figure 11b. We construct a linear regression for the data points of Figure 11b. The fitted line is y = x − 0.2 (R² = 0.99), where x is the ground truth HR and y is the HR estimated using the DWL method. The Pearson correlation between the HR estimated using the DWL method and the ground truth HR was also calculated and found to be 0.99. The high R² value and Pearson correlation indicate that the DWL method computes accurate HR levels.

Figure 11. (a) Bland-Altman plot of HR estimated using the DWL method and the ground truth HR for participants one (1) to eleven (11). The LOA = [−4.9, 4.8] BPM. (b) Scatter plot of HR estimated using the DWL method (y-axis) vs. the ground truth HR (x-axis) for participants one (1) to eleven (11). The linear regression line that fits the data is shown in black. The line is y = x − 0.2 (R² = 0.99). The Pearson correlation is found to be 0.99.
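The Limits of Agreement used for Figure 11a reduce to two lines of arithmetic. A sketch follows; the function and argument names are ours.

```python
import numpy as np

def bland_altman_loa(hr_est, hr_gt):
    """LOA = [mu - 1.96*sigma, mu + 1.96*sigma], where mu and sigma are the
    mean and standard deviation of estimate-minus-ground-truth differences."""
    diff = np.asarray(hr_est, dtype=float) - np.asarray(hr_gt, dtype=float)
    mu, sigma = diff.mean(), diff.std(ddof=1)
    return mu - 1.96 * sigma, mu + 1.96 * sigma
```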
Validation of the DWL Method on Palm Data
In order to validate the performance of the DWL framework, we ran a second experiment (validation run). During the second experiment, we asked the same volunteers who participated in our previous experiment to run on the treadmill again, following the experimental protocol described in Section 2.1. We reused the same ECG, accelerometer, and PPG sensors. The only difference was that we mounted the dual-wavelength sensor on the participant's palm (instead of the wrist). Data for all participants are provided in the repository in [16].
Both wrist and palm experiments took place on the same day. There was a break of approximately 15 min between the first and the second run during which the dual-wavelength PPG sensor was relocated from the wrist to the palm of the participant. Participants 5 and 13 deviated from the data collection protocol by interfering with the sensor during collection. Their measurements were excluded from the analysis we provide (but are available in the repository in [16]).
MAE, MAEP, and PI were calculated from the twelve (12) participants of the "palm run" for DWL, TROIKA, and JOSS. We show in Table 6 a summary of the performance metrics (MAE, MAEP, and PI) obtained for the first run (the "wrist run") and the second run (the "palm run"). The results are presented as "mean|standard deviation". Table 6 shows that the DWL method performs as well when the measurements were taken from the wrist as when they were taken from the palm.

Table 6. Summary of performance metrics for run 1 (wrist run) and run 2 (validation palm run). For run 1, we show the average performance of eleven (11) participants. For run 2, we show the average performance of twelve (12) participants. Results are represented as "mean|standard deviation".
Discussion
We presented a framework for heart rate (HR) calculation under motion using a dual-wavelength (green and IR) PPG sensor. We used PPG data collected from 14 individuals engaged in high-intensity full-body exercise. Analysis of the green and IR PPG signals indicates that the IR PPG signal is a good noise reference signal. We employed this observation to develop a motion-resistant HR calculation method, derived from [9], that measures noise components from the IR PPG signal. Afterwards, a green PPG signal is denoised and used for HR calculation. The proposed method, Dual Wavelength (DWL), was tested on experimental data collected from participants' wrists while the participants were standing, walking, and running on a treadmill. The performance of the method, using several measures of accuracy and computational effort, was then compared to popular methods in the literature that use data from a tri-axial accelerometer for denoising, namely TROIKA and JOSS. Using the experimental wrist data we collected, we showed that the DWL method exhibits good performance in the face of motion artifacts. For instance, DWL yielded a Mean Absolute Error (MAE) of 1.22|0.57 BPM, a Mean Absolute Error Percentage (MAEP) of 0.95|0.38%, and a performance index (PI) (the frequency, in percent, of obtaining an HR estimate that is within ±5 BPM of the HR ground truth) of 95.88|4.9%. Moreover, DWL required a short computation period of 3.0|0.3 s to process a 360-second-long run. We validated the performance of the DWL method by testing it on data collected from the participants' palms, obtaining similar behavior. The DWL method is desirable since (1) it performed well under high-intensity full-body repetitive "macromotion", exhibiting high accuracy in the presence of motion artifacts (as compared to the leading accelerometer-dependent HR calculation techniques TROIKA and JOSS); (2) it used only PPG signals; auxiliary signals such as accelerometer signals were not needed; and (3) it is computationally efficient, hence implementable in wearable devices.
"Medicine",
"Engineering"
] |
Decrease in Pitting Corrosion Resistance of Extra-High-Purity Type 316 Stainless-Steel by Cu 2+ in NaCl
The effect of Cu 2+ in bulk solution on the pitting corrosion resistance of extra-high-purity type 316 stainless-steel was investigated. Pitting occurred in 0.1 M NaCl-1 mM CuCl 2, whereas pitting was not initiated in 0.1 M NaCl. Although deposition of Cu 2+ on the surface occurred regardless of the potential region in 0.1 M NaCl-1 mM CuCl 2, Cu 2+ in the bulk solution had no influence on passive film formation. The decrease in pitting corrosion resistance in 0.1 M NaCl-1 mM CuCl 2 resulted from the deposited Cu or Cu compound and the continuous supply of Cu 2+ to the surface.
Introduction
The effects of alloyed Cu on the corrosion resistance of stainless-steels are complex and depend on both the corrosion environment and the Cu content. In H 2 SO 4 environments, alloyed Cu has been found to enrich near the surface [1][2][3] and to suppress the active dissolution of stainless-steels [4][5][6][7]. In chloride environments, alloyed Cu is both beneficial and harmful to the pitting corrosion resistance of stainless-steels. If a stainless-steel contains a sufficient amount of alloyed Cu, pitting initiation caused by the dissolution of MnS inclusions is inhibited by a protective Cu compound layer. This layer is produced by Cu and S species resulting from the dissolution of MnS [8,9]. Garfias-Mesias found that alloyed Cu increased the pitting potential of 25% Cr duplex stainless-steel in 1 M HCl and in 3.5% NaCl solutions [10]. However, alloyed Cu can also reduce the pitting corrosion resistance in chloride solutions. It has been reported that the interface between an inclusion and the matrix is a preferred site for pitting initiation on Cu-containing duplex stainless-steel [11]. The passivation process of austenitic stainless-steel was influenced by alloyed Cu in acidic chloride solution, which decreased the pitting potential of the stainless-steel [12]. After pitting initiation, alloyed Cu inhibits the growth of pitting corrosion. Pitting initiation on the surface of Cu-containing stainless-steels is followed by Cu 2+ dissolution from the stainless-steel matrix. The dissolved Cu 2+ acts as an inhibitor in the acidic chloride environments formed in pits. Sourisseau et al. reported that the active dissolution rate inside the pits was suppressed by dissolved Cu 2+, and that insoluble copper sulfides were formed by the deposited Cu with dissolved S species [13]. The active dissolution of Cu-containing stainless-steels is known to be suppressed by Cu enrichment at the surface [12,14,15] or by the formation of a deposited Cu-containing layer [16][17][18] in acidic environments.
On the other hand, only a few reported studies have considered the effects of Cu 2+ in the bulk solution on the corrosion resistance of stainless-steels [13,19,20]. A continuous supply of Cu 2+ to the surface is likely when Cu 2+ is present in the bulk solution. In contrast, the supply of Cu 2+ from the stainless-steel matrix may cease after a protective Cu-containing layer is formed on the surface. Therefore, the effect of a Cu 2+ supply from the bulk solution on the corrosion morphology of stainless-steel may differ from that of a supply from the matrix. To understand the effects of Cu 2+ in the bulk solution on the corrosion resistance of stainless-steel, specimens of Cu-free stainless-steel were investigated.
In the present study, the effects of Cu 2+ in bulk solution on the pitting corrosion resistance of stainless-steel and on the formation of a passive film were analyzed. To avoid the effects of alloyed Cu on the corrosion resistance of the stainless-steel or Cu dissolution from the stainless-steel matrix, an extra-high-purity (EHP) type 316 stainless-steel (316EHP stainless-steel) was used as the sample. EHP stainless-steels have been developed to exhibit enhanced corrosion resistance by preventing the segregation of impurities [21]. The total concentration of impurities was limited to be less than 100 ppm (0.01 mass %). The amount of alloyed Cu in the 316EHP stainless-steel was expected to be sufficiently low to prevent the formation of a Cu-enriched layer on the surface and Cu dissolution into the solution.
Specimens and Electrolytes
An extra-high-purity type 316 stainless-steel (316EHP stainless-steel) was used as the sample material. EHP stainless-steels are manufactured by controlling the alloying elements so that there are very few impurities. The stainless-steel was produced in a vacuum induction melting furnace (KOBELCO, Kobe, Japan). A 50 kg ingot of steel was hot-forged and hot-rolled into a 12 mm-thick plate, which was then cut into specimens of approximately 15 × 25 mm, parallel to the rolling direction. The specimens were heat-treated at 1373 K for 3.6 ks and quenched in water. The electrode surface of each specimen was mechanically ground with SiC papers and then polished to 1 µm with a diamond paste. Afterward, the specimens were ultrasonically cleaned with ethanol. The chemical composition of the stainless-steel was analyzed by Glow Discharge Mass Spectrometry (GD-MS) (CAMECA) and is listed in Table 1. The total concentration of alloying elements other than Ni, Cr, and Mo was less than 100 ppm (0.01 mass %), which is substantially lower than the concentrations in commercial type 316 stainless-steels. The Cu content of the 316EHP stainless-steel was only 6.8 × 10 −4 mass %. This content was expected to be sufficiently low to ensure that any Cu detected on the surface originated from the solution rather than from the metal. Electrochemical measurements were performed in 0.1 M NaCl and 0.1 M NaCl-1 mM CuCl 2. The pH value of both 0.1 M NaCl and 0.1 M NaCl-1 mM CuCl 2 was approximately 5.5. All solutions were prepared from deionized water and analytical grade chemicals.
Electrochemical Measurements
Potentiodynamic anodic polarization measurements were performed using a pocketSTAT potentiostat (Ivium, Eindhoven, The Netherlands) in deaerated 0.1 M NaCl and 0.1 M NaCl-1 mM CuCl 2 at 298 K. The specimen surfaces, except for the electrode area (approximately 10 mm × 10 mm), were insulated with epoxy resin (AR-R30, Nichiban, Tokyo, Japan) and subsequently with paraffin. The measurements were performed in a conventional three-electrode cell; the counter electrode was a platinum plate, and the reference electrode was an Ag/AgCl (saturated (sat.) KCl) electrode (0.197 V vs. the standard hydrogen electrode at 298 K). All the potentials reported in this work refer to the Ag/AgCl (sat. KCl) electrode. The potential scan rate was 3.33 × 10 −4 V s −1 (20 mV min −1). The polarization measurements were started immediately after immersion of the working electrode in the electrochemical cell to avoid Cu deposition on the surface before the start of polarization. Each potentiodynamic polarization measurement was performed at least three times to confirm the reproducibility.
Observation and Analysis
Before and after electrochemical measurements, the electrode surfaces were observed via JSM-7000 field emission scanning electron microscopy (FE-SEM) (JEOL, Tokyo, Japan) coupled with energy-dispersive X-ray spectroscopy (EDS) (JEOL, Tokyo, Japan). Secondary electron images and EDS maps were obtained at an accelerating voltage of 15 kV. Elemental analyses on the surfaces were performed by means of Theta-probe X-ray photoelectron spectroscopy (XPS) (Thermo Fisher Scientific, Waltham, MA, USA). The XPS system consisted of an X-ray source with a monochromator, and the instrument used an aluminum anode generating Al K α radiation (E = 1486.6 eV), a hemispherical electron energy analyzer for photoelectrons, and an Ar ion gun for depth profiling. The depth of the etching crater was measured by a VK-X3000 Laser microscope (KEYENCE, Osaka, Japan) after performing the depth profiling. In this study, an X-ray spot of 400 µm was employed. The Fe 2p, Cr 2p, Ni 2p, Mo 3d, and Cu 2p XPS spectra of the surfaces were collected. The spectra were collected at an energy step and pass energy of 0.1 and 200 eV, respectively.
Inclusion Characterization
Inclusions are known to play a vital role in the pitting corrosion resistance of stainless-steels [22,23]. The surface of the specimen was observed immediately after polishing. Inclusions on the specimen were characterized by means of FE-SEM/EDS. There were at least 40 inclusions in an electrode area of 1 mm 2. Figure 1 shows an SEM image and corresponding EDS maps of a typical inclusion. The inclusions were approximately round, with diameters smaller than 5 µm. The EDS maps in Figure 1 show that these inclusions were Cr- and O-enriched. The results of the quantitative analysis corresponding to point 1 in Figure 1 are shown in Table 2. The Cr/O atomic ratio was approximately 3:2. This result suggested that the inclusions in the specimen were Cr oxide. These inclusions were thought to be introduced during the dissolution process because electrolytic ferrochromium is used as a raw material.
Pitting Corrosion Resistance of the Specimen
Potentiodynamic anodic polarization measurements were conducted in 0.1 M NaCl and 0.1 M NaCl-1 mM CuCl 2 to investigate the effects of Cu 2+ in the bulk solution on the pitting corrosion resistance of the specimen. The results are shown in Figure 2. As indicated by curves (a) and (b) in Figure 2, the potential was scanned from the cathodic polarization region in 0.1 M NaCl (Figure 2a) and 0.1 M NaCl-1 mM CuCl 2 (Figure 2b). In both cases, cathodic currents were observed at first, and then anodic currents appeared. In 0.1 M NaCl, anodic currents were measured from −0.14 V. In this case, current spikes that would indicate the initiation of metastable and stable pits were not recorded until ca. 1.4 V, and no pits were observed on the surface. The large increase in current from ca. 1.1 V was considered to be oxygen generation [24,25]. However, for the specimen in 0.1 M NaCl-1 mM CuCl 2, cathodic currents were measured from 0.18 to 0.23 V, and anodic currents arose from ca. 0.23 V. These results indicate that the corrosion potential in 0.1 M NaCl-1 mM CuCl 2 was higher than that measured in 0.1 M NaCl. This shift of the corrosion potential may have resulted from the reduction reaction of Cu 2+ at the cathode in 0.1 M NaCl-1 mM CuCl 2. Previous studies have proposed the following reaction as the reduction reaction of Cu 2+ [16,17]:

Cu²⁺ + 2e⁻ → Cu (1)

This reaction is known to occur in two steps, as follows:

Cu²⁺ + e⁻ → Cu⁺ (2)

Cu⁺ + e⁻ → Cu (3)

In addition, Cu⁺ can chemically combine with Cl⁻, as below [26]:

Cu⁺ + Cl⁻ → CuCl (4)

Therefore, cathodic Cu plating and formation of CuCl were expected to occur in the cathodic polarization region in 0.1 M NaCl-1 mM CuCl 2 (Figure 2b). The deposited Cu on the surface is known to act as a "weak point" for pitting. Xi et al. reported that deposited Cu caused discontinuity of the passive film and reduced the resistance of the passive film to pitting corrosion [27]. In this case, pitting occurred at 0.67 V. To confirm whether the pitting on 316EHP was caused by the Cu deposited in the cathodic polarization region, an anodic polarization measurement starting in the anodic polarization region was conducted in 0.1 M NaCl-1 mM CuCl 2.

As shown in Figure 2c, the anodic polarization measurement was started from 0.25 V in 0.1 M NaCl-1 mM CuCl 2 to avoid Cu plating in the cathodic polarization region (below 0.23 V). An anodic current was observed throughout the measurement under this condition, but no cathodic current was observed. This result suggests that Cu deposition attributable to the cathodic current did not occur on the surface. Despite the lack of Cu deposition in the cathodic polarization region, pitting occurred at ca. 0.82 V. Therefore, the decrease in pitting corrosion resistance was not caused by Cu deposition in the cathodic region.
After the anodic polarization measurements, the electrode surfaces were observed via SEM/EDS. Figure 3 shows the SEM images of the Cr oxide inclusion after the anodic polarization measurements in 0.1 M NaCl (Figure 2a) and 0.1 M NaCl-1 mM CuCl 2 (Figure 2b). In both solutions, the inclusion remained undissolved. Table 3 shows the composition of the inclusions after the anodic polarization measurements. The compositions of the inclusions at points 2 and 3 were similar to that before polarization (shown in Table 2). Szummer et al. reported that the dissolution of Cr 2 O 3 inclusions led to structural defects, which acted as initiation sites for pitting of 16Cr-Fe single crystals [22]. In this study, structural defects that could act as pitting initiation sites were absent from the surface of the specimen in 0.1 M NaCl and 0.1 M NaCl-1 mM CuCl 2. Therefore, the pitting that occurred in 0.1 M NaCl-1 mM CuCl 2 was not caused by structural defects arising from the dissolution of inclusions. The SEM image and corresponding EDS maps of Cu around the pit on the electrode surface after the anodic polarization measurement in 0.1 M NaCl-1 mM CuCl 2 (Figure 2b) are shown in Figure 4. A pit with a lace-like cover, which commonly occurs on type 316 stainless-steels in chloride environments [28][29][30], was observed. In general, deposited Cu has been detected via EDS when the corrosion morphology was changed by Cu deposition [16,31,32]. In this case, no EDS confirmation of Cu deposition inside the pit or on the surface around the pit was obtained. This probably resulted from the Cu deposited on the surface being too thin to be detected by EDS.
Effect of Cu 2+ in the Bulk Solution on the Surface of 316EHP
Cu deposition was not detected via EDS around the pits that occurred in 0.1 M NaCl-1 mM CuCl 2. To elucidate whether Cu was deposited in the pitting initiation process during anodic polarization, XPS depth profiling was performed on the specimen surfaces after the anodic polarization. First, the effect of Cu 2+ in the bulk solution on passive film formation was investigated. To compare the elemental composition of the passive films formed in the two solutions, the specimens were polarized to 0.4 V from the cathodic polarization region in 0.1 M NaCl and 0.1 M NaCl-1 mM CuCl 2. The anodic polarization curves are shown in Figure 5a. These conditions simulated the formation of a passive film before the initiation of the pitting shown in Figure 2a,b. The composition depth profiles of Fe, Cr, Ni, Mo, and Cu on the specimen surface are shown in Figure 5b,c. Figure 5b shows the depth profiles of the surface polarized from −0.37 to 0.40 V in 0.1 M NaCl. As shown in Figure 5b, an Fe-rich layer was observed at the outermost surface, and a Cr-enriched layer formed slightly inside the outermost surface. This multilayer structure was considered to be the passive film. The interface between the passive film and the matrix was defined as the position at the half-maximum of the Cr profile. The interface is indicated by the vertical dashed line in Figure 5b. The thickness of the passive film was ca. 5.0 nm. The enrichment of Ni near the film-metal interface and of Mo at the outermost surface was also confirmed. The elemental profile trends of Cr, Ni, and Mo are the same as those of stainless-steels reported in previous studies [33,34]. Cu XPS spectra were not detected on the specimen polarized in 0.1 M NaCl. The elemental profiles of the surface of the specimen polarized from 0.20 to 0.40 V in 0.1 M NaCl-1 mM CuCl 2 are shown in Figure 5c. In this case, the film thickness, defined by the half-maximum of the Cr profile, was ca. 5.0 nm. The thicknesses of the passive films were thus approximately the same in the two solutions. As shown in Figure 5b,c, the elemental profiles of Cr, Fe, Ni, and Mo in the passive film formed in 0.1 M NaCl-1 mM CuCl 2 are quite similar to those of the passive film formed in 0.1 M NaCl. Therefore, both the thickness and the composition of the passive film formed by the anodic polarization to 0.4 V were approximately the same in the two solutions. This indicates that Cu 2+ in the bulk solution had no effect on the thickness or composition of the passive film.
Jiangnan et al. reported that Cu 2+ dissolved from the stainless-steel had no effect on the formation of the passive film [12]. Therefore, Cu 2+ in the bulk solution had an effect similar to that of Cu 2+ dissolved from the stainless-steel matrix on the formation of the passive film. A magnified view of the Cu profile shown in Figure 5c (see Figure 5d) reveals that a Cu signal was detected on the outermost surface. The thickness of the deposited metal Cu or Cu compound (which was expected to be CuCl, see Equation (4)) was defined as ca. 1.9 nm on the basis of the Cu profile. The metal Cu or Cu compound seems mainly attributable to the reduction reaction of Cu 2+ in the cathodic polarization region. In the polarization from 0.20 to 0.40 V, no pitting occurred on the surface bearing the 1.9 nm-thick metal Cu or Cu compound in 0.1 M NaCl-1 mM CuCl 2. This indicates that this amount of metal Cu or Cu compound on the passive film did not cause pitting in 0.1 M NaCl-1 mM CuCl 2 in the polarization from 0.20 to 0.40 V.

It has been reported that the corrosion behavior of stainless-steel is affected by the amount of Cu on the surface [16,17]. If the amount of deposited metal Cu or Cu compound were the key factor in the decrease in pitting corrosion resistance of 316EHP, metal Cu or Cu compound thicker than 1.9 nm would be detected on a surface that pitted without cathodic Cu plating in 0.1 M NaCl-1 mM CuCl 2. Thereby, the surface after pit initiation in 0.1 M NaCl-1 mM CuCl 2 without cathodic Cu deposition was investigated. For comparison, a specimen polarized over the same potential region in 0.1 M NaCl was also surveyed. The results are shown in Figure 6. Figure 6a shows the anodic polarization curves in 0.1 M NaCl and 0.1 M NaCl-1 mM CuCl 2. Both specimens were polarized from 0.25 V to avoid Cu deposition in the cathodic polarization region in 0.1 M NaCl-1 mM CuCl 2. In 0.1 M NaCl-1 mM CuCl 2, pitting occurred at about 0.86 V. Figure 6b,c shows the composition depth profiles of Fe, Cr, Ni, Mo, and Cu on the specimen surfaces. The depth profiles of the surface polarized in 0.1 M NaCl are shown in Figure 6b. The depth profiles of Fe, Cr, Ni, and Mo had the same trend as those of the film formed by anodic polarization from −0.37 to 0.40 V (see Figure 5b). The thickness of the passive film was defined as the position at the half-maximum of the Cr profile. The interface is indicated by the vertical dashed line in Figure 6b. The thickness of the passive film was ca. 7.8 nm. A thicker passive film was formed on 316EHP by the anodic polarization from 0.25 to 0.86 V than by that from −0.37 to 0.40 V in 0.1 M NaCl. Figure 6c shows the elemental profiles of the surface after pit initiation in 0.1 M NaCl-1 mM CuCl 2.
As with the case of Figure 5b,c, the thickness and composition of the passive film were almost the same as those of the film formed in 0.1 M NaCl, and were not affected by the Cu 2+ in the bulk solution. A magnified view of the Cu profile shown in Figure 6c is shown in Figure 6d. Similar to the case of Figure 5d, the Cu signal was detected on the outermost surface in Figure 6d. Thereby, it was found that metal Cu or Cu compound deposited on the surface even in the anodic polarization region. Zhou et al. have reported that Fe on the surface of stainless-steel was substituted by Cu 2+ in the solution, as follows [20]: In addition, Hermas et al. reported that metal Cu deposited onto a stainless-steel surface was replaced by CuCl in a Cl − solution [17]. Therefore, deposition of metal Cu or Cu compound (expected to be CuCl) occurred on the surface regardless of the polarization potential region in 0.1 M NaCl-1 mM CuCl 2 . In this case, the thickness of the metal Cu or Cu compound was ca. 1.0 nm. This is thinner than that formed on the surface polarized from 0.20 to 0.40 V, including the cathodic polarization region (ca. 1.9 nm, see Figure 5d). This indicates that the amount of deposited Cu or Cu compound formed in the cathodic polarization region was larger than that formed only in the anodic polarization region (not including the cathodic polarization region). In the case of the polarization from 0.20 to 0.40 V (including the cathodic polarization region), the cathodic reduction of Cu 2+ occurred in conjunction with Cu substitution in the anodic polarization region. This created a thicker deposition of metal Cu or Cu compound on the surface. However, pitting occurred on the surface even though the surface contained a lower amount of deposited metal Cu or Cu compound when polarized from 0.25 to 0.86 V than when polarized from 0.20 to 0.40 V (including the cathodic polarization region). This suggests that the amount of deposited metal Cu or Cu compound was not the critical factor for pitting initiation in 0.1 M NaCl-1 mM CuCl 2 . Pitting was not caused by an increase in the amount of deposited Cu or Cu compound on the surface. The chemical state of Cu on the surface is also important for the corrosion resistance of stainless-steels [14][15][16][17][18]. The chemical state of Cu was analyzed in detail by comparing the Cu 2p 3/2 XPS spectrum of the specimen polarized from 0.20 to 0.40 V with that of the specimen polarized from 0.25 to 0.86 V. Figure 7 shows the Cu 2p 3/2 XPS spectra corresponding to the outermost surface of the specimen polarized in 0.1 M NaCl-1 mM CuCl 2 under the two aforementioned polarization conditions. Dotted lines indicate the fitted curves of both spectra. As shown in Figure 7, the peaks of the spectra were quite similar. This indicates that the chemical states of the deposited Cu or Cu compound were not changed by these two polarization conditions and did not appear to influence the pitting corrosion resistance. Both binding-energy peaks were at 932.3 eV. Therefore, pitting initiation in the polarization from 0.25 to 0.86 V was not caused by a change in the chemical state of the deposited metal Cu or Cu compound. The peaks of metal Cu and of Cu + attributable to CuCl were expected to appear at binding energies ranging from 932.2 to 933.1 eV [35][36][37][38] and from 932.1 to 932.6 eV [39][40][41], respectively. The peak of metal Cu tended to occur at a slightly higher binding energy than that of Cu + attributed to CuCl.
However, the binding energies of metal Cu and Cu + are very similar, and distinguishing the chemical state of Cu from the spectra in Figure 7 is difficult. The assumption is that the surfaces of both specimens contained metal Cu, which had been plated during the reduction reaction of Cu 2+ in the cathodic polarization region or substituted in the anodic polarization region. As mentioned above, CuCl was also formed on the surface in both cathodic and anodic polarization regions. Therefore, the spectra corresponding to the surface polarized from 0.20 to 0.40 V and from 0.25 to 0.86 V were considered as a mixture of metal Cu and CuCl. These results indicate that the decrease in pitting corrosion resistance was not caused by the change in chemical state of deposited metal Cu or Cu compound on the surface.
Effect of Cu 2+ in the Bulk Solution on the Pitting Corrosion Resistance of the Specimen
The aforementioned results revealed that metal Cu or Cu compound was deposited onto the surface in 0.1 M NaCl-1 mM CuCl 2 . However, whether the decrease in pitting corrosion resistance of the specimen resulted from this deposition or from the Cu 2+ in the bulk solution is unclear. The effect of deposited metal Cu and CuCl on the pitting corrosion resistance of the specimen was investigated by conducting anodic polarizations under two conditions, (a) and (b). Under condition (a), the specimen was polarized to 0.4 V in 0.1 M NaCl-1 mM CuCl 2 to deposit metal Cu and CuCl onto the surface. Afterward, the electrode surface was rinsed with deionized water and the solution was replaced with 0.1 M NaCl. Anodic polarization was subsequently performed from 0.4 V in 0.1 M NaCl. The polarization curves are denoted as a1 and a2 in Figure 8. Under condition (b), the specimen was first polarized to 0.4 V in 0.1 M NaCl-1 mM CuCl 2 . The electrode surface was cleaned with deionized water when the potential reached 0.4 V. Subsequently, anodic polarization was conducted from 0.4 V in 0.1 M NaCl-1 mM CuCl 2 (the corresponding curves are denoted as b1 and b2 in Figure 8). Based on Figure 5, deposition of metal Cu and CuCl onto the surface after polarization to 0.40 V in 0.1 M NaCl-1 mM CuCl 2 was expected under both the a1 and b1 conditions (see Figure 8). Pitting occurred in 0.1 M NaCl-1 mM CuCl 2 , but no pitting occurred in 0.1 M NaCl, as indicated by a2 and b2 in Figure 8. These results suggest that both the deposited Cu or Cu compound and the Cu 2+ in the bulk solution were necessary for pitting to occur in 0.1 M NaCl-1 mM CuCl 2 . That is, the deposited metal Cu and CuCl on the surface did not function as the "weak point" in 0.1 M NaCl. The present study indicated that the continuous presence of Cu 2+ in the bulk solution was essential for pitting after the deposition of metal Cu or Cu compound. Therefore, the decrease in pitting corrosion resistance of the 316EHP stainless-steel in 0.1 M NaCl-1 mM CuCl 2 mainly resulted from the deposited Cu or Cu compound together with the continuous supply of Cu 2+ to the surface. Further research is needed to clarify the precise mechanism governing the effects of Cu 2+ in the bulk solution on the pitting initiation of the stainless-steel.
Conclusions
1. No pitting occurred on 316EHP stainless-steel in 0.1 M NaCl. The Cr oxide inclusions remained undissolved after the anodic polarization to ca. 1.4 V. Therefore, the structural defects that lead to the pitting initiation were not introduced by the dissolution of the Cr oxide inclusions on 316EHP stainless-steel in 0.1 M NaCl. 2. Pitting occurred on 316EHP stainless-steel in 0.1 M NaCl-1 mM CuCl2. The potential was scanned from the cathodic polarization region, where Cu plating by the reduction reaction of Cu 2+ and the formation of CuCl occurred. The deposited Cu was considered to function as a "weak point" where pitting was initiated. However, pitting also occurred during the anodic polarization from the anodic polarization region (not including the cathodic polarization region). Thus, pitting occurred regardless of whether the potential was scanned from the cathodic polarization region or from the anodic polarization region in 0.1 M NaCl-1 mM CuCl2. This indicates that the pitting of 316EHP stainless-steel in 0.1 M NaCl-1 mM CuCl2 was not caused by the Cu deposition in the cathodic polarization region. 3. The thickness and composition of the passive film on the surface of 316EHP were unaffected by the Cu 2+ in the bulk solution during the polarization from 0.20 to 0.40 V (including the cathodic polarization region) and from 0.25 to 0.86 V (not including the cathodic polarization region). 4. The thicker deposition of metal Cu and Cu compound was formed on the surface of | 8,601.6 | 2021-03-19T00:00:00.000 | [
"Materials Science"
] |
Multi-Label Diagnosis of Arrhythmias Based on a Modified Two-Category Cross-Entropy Loss Function
: The 12-lead resting electrocardiogram (ECG) is commonly used in hospitals to assess heart health. The ECG can reflect a variety of cardiac abnormalities, requiring multi-label classification. However, the diagnosis results in previous studies have been imprecise. For example, in some previous studies, some cardiac abnormalities that cannot coexist often appeared in the diagnostic results. In this work, we explore how to realize the effective multi-label diagnosis of ECG signals and prevent the prediction of cardiac arrhythmias that cannot coexist. In this work, a multi-label classification method based on a convolutional neural network (CNN), long short-term memory (LSTM), and an attention mechanism is presented for the multi-label diagnosis of cardiac arrhythmia using resting ECGs. In addition, this work proposes a modified two-category cross-entropy loss function by introducing a regularization term to avoid the existence of arrhythmias that cannot coexist. The effectiveness of the modified cross-entropy loss function is validated using a 12-lead resting ECG database collected by our team. Using traditional and modified cross-entropy loss functions, three deep learning methods are employed to classify six types of ECG signals. Experimental results show the modified cross-entropy loss function greatly reduces the number of non-coexisting label pairs while maintaining prediction accuracy. Deep learning methods are effective in the multi-label diagnosis of ECG signals, and diagnostic efficiency can be improved by using the modified cross-entropy loss function. In addition, the modified cross-entropy loss function helps prevent diagnostic models from outputting two arrhythmias that cannot coexist, further reducing the false positive rate of non-coexisting arrhythmic diseases, thereby demonstrating the potential value of the modified loss function in clinical applications.
Introduction
Cardiovascular disease (CVD) is one of the leading causes of death, accounting for over 31% of deaths worldwide [1]. There are many types of cardiovascular diseases, and their impact on human health also varies. Determining the type of CVD plays an important role in follow-up treatment. In the clinic, one of the most commonly used methods to diagnose CVD is the resting electrocardiogram (ECG). Medical personnel place electrodes at fixed positions on the resting patient to acquire and select a high-quality 10 s ECG and make a diagnosis based on the ECG waveform. According to incomplete statistics, there are more than 100 kinds of cardiovascular diseases, and the detection of ECGs depends on the diagnostic experience of medical professionals. Therefore, it is very important to develop ECG-based diagnostic tools.
Most early ECG diagnostic tools were realized by imitating the logical conclusions of the physician. Geddes et al. [1] proposed classifying various premature ventricular contractions (PVC) using rule-based reasoning. First, the parameters for detection were selected according to the ECG characteristics of PVC, such as the R-R interval and the duration and shape of the QRS complex. Then, certain medical rules were used as criteria for assessing the occurrence of PVC. Kezdi et al. [2] proposed an algorithm for detecting ectopic beats and arrhythmia based on clinical experience. The R-wave was determined by calculating the slope of the QRS complex. Supraventricular tachycardia and ventricular ectopy were detected by calculating the changes in the R-R interval and the width, polarity, and height of the QRS complex. The parameters selected for these methods are clinically interpretable. However, other feature extraction methods (except for R-wave) are not accurate enough because of the strong personalization and nonlinearity of ECG signals, especially in different types of arrhythmias. Since different types of ECG signals have different time-frequency features, large errors can easily occur in the calculation of feature parameters, leading to the failure of this type of method.
Another type of method is pattern recognition. First, certain statistical features are extracted, and then a classifier is created using machine learning (ML) to classify different types of arrhythmias. In many studies, time/morphological statistics [3][4][5][6][7], spectral features [8,9], and higher-order statistical parameters [10][11][12][13] have been used to diagnose ventricular arrhythmias in malignant arrhythmias. These mathematical features, in combination with classifiers such as artificial neural networks (ANNs) or support vector machines (SVMs) [14][15][16], can efficiently filter out rhythms such as ventricular fibrillation and ventricular tachycardia. The two steps (i.e., feature extraction and classification) in pattern recognition help in the diagnosis of cardiac arrhythmias. The accuracy and efficiency of detection are better than simulating the physician's logical conclusions. The disadvantage is that the signal features are artificially determined, or more precisely, the quality of the signal features often depends on artificial experience. Therefore, it is difficult to find effective statistical features because there are too many types of arrhythmias.
In recent years, with the development of deep learning, researchers have begun to use deep learning instead of artificial feature extraction methods [17] to evaluate ECG signals. "Artificial feature extraction methods" refer to the methods used to calculate the features of electrocardiogram signals from different perspectives (such as the time domain, frequency domain, and time-frequency domain) for the classification of arrhythmias. The selection of these features is based on personal subjective experience. Feng et al. [18] employed dynamic time warping (DTW), C-means clustering, and the BP algorithm to optimize the parameters of the probabilistic process neural network (PPNN). The method achieved an F1 score of 0.7615 and an accuracy of 74.16% on the Chinese Cardiovascular Disease Database (CCDD). While PPNN offers advantages such as few-shot learning and low computational complexity, the limited size of its parameters hampers its classification performance. Yıldırım et al. [19] proposed a new one-dimensional convolutional neural network model (1D CNN) to classify 17 types of cardiac arrhythmias. Its accuracy and F1 score on the MIT-BIH arrhythmia database were 91.33% and 0.8538, respectively. The model demonstrated efficient and rapid diagnostic capabilities. Luo et al. [20] conducted a study using the same database and proposed a hybrid convolutional recurrent neural network (HCRNet), achieving an accuracy of 99.01%. However, the MIT-BIH data were derived from internal patients, and the ECG signals exhibited highly personalized characteristics. Thus, a model with high accuracy might not necessarily possess a high degree of generalizability across different patients. Yao et al. [21] proposed the ATI-CNN model to address the low performance of a CNN in the detection of variable-length ECG signals. This model integrated a CNN, recurrent cells, and an attention module. On the China Physiological Signal Challenge (CPSC) dataset, ATI-CNN achieved an F1 score of 0.812 and a precision of 0.826. By combining the spatiotemporal features of ECG signals, ATI-CNN improved accuracy while reducing the number of model parameters, thereby lowering training costs. However, this model did not consider the one-to-many relationship between patients and arrhythmia labels. Objectively, deep learning methods learn features from a large number of data to classify ECG signals, which will be the development direction of intelligent ECG diagnosis in the future.
In ECG signals, some arrhythmias can occur simultaneously, whereas others do not. For example, in ECG signals showing a period of sustained atrial fibrillation, PVCs, but not premature atrial contractions, can occur simultaneously. The relationship between the various labels is complex, making multi-label classification of ECG signals challenging [22][23][24]. Yoo et al. [25] optimized the algorithm from the perspective of multi-label classification of arrhythmia and proposed xECGNet. By incorporating the L2 norm of attention maps of different disease categories into the loss function, xECGNet achieved a multi-label subset accuracy of 84.6% in the classification tasks of eight types of arrhythmias on the CPSC dataset. Yang et al. [26] proposed using a stacking approach to combine the classification results of ResNet and random forest and obtain the final results through voting. Despite the method's accuracy improving to 95%, integrating multiple models increased deployment costs, making it challenging to apply to general medical embedded devices. Nowadays, current methods emphasize learning the relationships between labels from research data (the labels themselves). However, due to the complex relationships between the labels of ECG signals, it is difficult to learn these relationships from research data alone. This causes the diagnostic models to output some arrhythmias that cannot coexist, leading to an increased misdiagnosis rate of the multi-label ECG diagnostic algorithm [27].
In this work, we propose a multi-label diagnostic method based on a modified two-category cross-entropy loss function. This method first incorporates LSTM and attention mechanisms to enhance the classification accuracy of the CNN model. Building upon this, to address the issue of certain conclusions being unable to coexist in arrhythmia diagnosis, we add a regularization term to the traditional binary cross-entropy loss function, which disallows the coexistence of certain arrhythmia disease label pairs. The regularization term helps constrain the network's learning direction, enabling it to consider the mutually exclusive relationships between various disease labels. It improves the applicability of the ECG diagnostic algorithm in real-life diagnosis scenarios.
The main innovative points of this article are: (A) A new multi-label training loss function is proposed by adding a regularization term that does not allow the coexistence of some arrhythmias; (B) A CNN + LSTM + ATTENTION architecture is presented to improve ECG classification performance; (C) More than 10,000 ECG recordings of the six most common cardiac arrhythmias are used to test the loss function and classification method, and the performance is compared between patients. Our method improves the accuracy of classifying four types of arrhythmias (normal, sinus tachycardia, atrial flutter, and atrial tachycardia) and reduces the incidence of misdiagnosing atrial flutter and atrial tachycardia as false positives.
This paper is organized as follows. In Section 2, explanations of the CNN + LSTM + ATTENTION architecture and the modified cross-entropy loss function are presented. The new ECG database is described in Section 3. An analysis of the modified cross-entropy loss function and its comparison with other methods are described in Section 4. Further details of the presented method and future research topics are given in Section 5. Section 6 presents the conclusions of this paper.
Deep Learning Model
In this work, a deep learning model consisting of a convolutional neural network (CNN) [28], long short-term memory (LSTM) [29], and an attention mechanism [30] is used to classify ECG signals.
Feature Extraction
A CNN is used for feature extraction, as shown in Figure 1. For the convolution operation in the CNN, it is assumed that z_j^l represents the j-th channel output of the l-th convolutional layer and o_j^l is the corresponding input. The input o_j^l and the output z_j^l of the l-th layer can be expressed by Equation (1) and Equation (2), respectively.
where f(·) is the activation function, M_j is the subset of the feature maps of the (l − 1)-th layer, k_ij^l is the convolution kernel matrix, b_j^l is the bias, and '*' is the convolution symbol. For the pooling operation in the CNN, α stands for the sampling coefficient and (·) represents the maximum pooling function. The input o_j^(l+1) and the output z_j^(l+1) of the (l + 1)-th layer can be expressed by Equation (3) and Equation (4), respectively.
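As a rough illustration of the stacked convolution/pooling structure described by Equations (1)-(4), the sketch below builds a small 1D-CNN feature extractor with tf.keras. The number of blocks, filter counts, and kernel size are placeholders chosen for readability, not the exact configuration of the 1D VGG16/ResNet backbones used later; only the input shape (5000 × 12) is taken from the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_feature_extractor(input_length=5000, n_leads=12):
    """Each block applies a convolution (Equations (1)-(2)) followed by
    max pooling (Equations (3)-(4), sampling coefficient alpha = 2)."""
    inputs = layers.Input(shape=(input_length, n_leads))
    x = inputs
    for filters in (64, 128, 256):  # illustrative channel counts
        x = layers.Conv1D(filters, kernel_size=7, padding="same", activation="relu")(x)
        x = layers.MaxPooling1D(pool_size=2)(x)
    return models.Model(inputs, x, name="cnn_feature_extractor")

extractor = build_cnn_feature_extractor()
extractor.summary()  # output shape (None, 625, 256) with these placeholder settings
```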
LSTM
The features Z ∈ R^(T×D) obtained by the CNN are input to the following LSTM, where T is the length of the input features and D is the number of input features. The workflow is shown in Figure 2.
The internal state C_t ∈ R^S between the units in the LSTM layer is used to determine the relationship between the ECG features extracted by the CNN. S represents the length of the vector output from the LSTM layer. z_t represents the t-th slice in the group of input features (1 ≤ t ≤ T). h_t ∈ R^S represents the hidden state of the LSTM layer corresponding to z_t. The final output h_t can be calculated as follows:
Attention Mechanism
The attention mechanism is used to compute the attention distribution over the hidden states h_t (1 ≤ t ≤ T) at each time point. The final output features are then formed by the weighted average under the attention distribution. The computational process is illustrated below: where Z ∈ R^S represents the result after the weighted average, β_i represents the weighting factor for the hidden state h_i (1 ≤ i ≤ T), b_s and W_s are both trainable weights, u_s represents the query vector, and u_i (1 ≤ i ≤ T) represents the intermediate weighting factor in the calculation.
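The exact attention equations are not reproduced above, so the following layer is a hedged reconstruction using the common additive-attention form that matches the variables named in the text (trainable W_s and b_s, a query vector u_s, weights β_i, and a weighted average Z over the hidden states).

```python
import tensorflow as tf
from tensorflow.keras import layers

class AdditiveAttention(layers.Layer):
    """Attention pooling over LSTM hidden states h_1..h_T:
    u_i = tanh(W_s h_i + b_s), beta_i = softmax(u_i . u_s), Z = sum_i beta_i * h_i."""
    def __init__(self, units=60, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        d = int(input_shape[-1])
        self.W_s = self.add_weight(name="W_s", shape=(d, self.units), initializer="glorot_uniform")
        self.b_s = self.add_weight(name="b_s", shape=(self.units,), initializer="zeros")
        self.u_s = self.add_weight(name="u_s", shape=(self.units, 1), initializer="glorot_uniform")

    def call(self, h):                                        # h: (batch, T, d)
        u = tf.tanh(tf.matmul(h, self.W_s) + self.b_s)        # (batch, T, units)
        beta = tf.nn.softmax(tf.matmul(u, self.u_s), axis=1)  # attention distribution
        return tf.reduce_sum(beta * h, axis=1)                # weighted average Z: (batch, d)
```

A call such as `Z = AdditiveAttention(units=60)(lstm_outputs)` then yields the pooled feature vector that is passed to the fully connected layers.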
Fully Connected Layer
Finally, the features Z obtained by the attention mechanism are input to the fully connected layers to perform the final classification. The final prediction vector z is obtained as follows: where W_1 and W_2 each represent a weight matrix in the fully connected layers, b_1 and b_2 represent the bias matrices, z_1 represents the output of the first fully connected layer, and Sigmoid represents the activation function.
The Modified Cross-Entropy Loss Function
In multi-label classification, a two-category cross-entropy loss function is usually used to calculate the loss between the labels and the predicted outcomes. In this work, two types of cross-entropy loss functions are studied, given by Equations (16) and (17).
where N is the number of arrhythmia disease types, y_k is the k-th element in the real ECG label vector, and a_k is the k-th element in the predicted ECG label vector. M is the number of combinations of arrhythmia diseases with strong negative correlations that cannot coexist, a_l is the l-th combination of arrhythmia diseases that cannot coexist, and a_i and a_j are the i-th and j-th elements, respectively, of the predicted ECG label vector belonging to a_l. According to Equation (16), loss_1 is the traditional cross-entropy loss function used for multi-label classification and is widely used in deep learning. However, the traditional cross-entropy loss function does not consider the correlations between different labels [31][32][33]. This results in cardiac arrhythmias that almost never occur simultaneously appearing together in the predicted results.
According to Equation (17), loss_2 is the modified cross-entropy loss function, obtained by introducing a regularization term. The regularization term increases the penalty for the co-occurrence of cardiac arrhythmias that cannot coexist, which is expected to improve the prediction performance of deep learning models.
Specifically, when the model predicts non-coexisting arrhythmia disease label pairs in the results, the derivative property of the logarithmic function causes the value of the regularization term to increase rapidly, driving further training of the model. Conversely, when no such pair is predicted, the regularization term tends toward 0 and the loss function degenerates into the binary cross-entropy loss function, which does not affect the prediction of the other, coexisting disease labels.
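The text gives the definitions for Equations (16) and (17), but the formulas themselves are not reproduced here, so the sketch below is only a plausible tf.keras implementation of the modified loss. The binary cross-entropy part follows the standard form; for the regularization term, a -log(1 - a_i·a_j) penalty over the non-coexisting pairs is assumed, since it is near zero when at most one probability is high and grows rapidly when both approach one, matching the behaviour described. The pair indices are placeholders.

```python
from tensorflow.keras import backend as K

# indices of label pairs that cannot coexist (placeholder: atrial flutter / atrial tachycardia)
NON_COEXISTING_PAIRS = [(3, 4)]

def modified_bce_loss(y_true, y_pred, pairs=NON_COEXISTING_PAIRS, eps=1e-7):
    y_pred = K.clip(y_pred, eps, 1.0 - eps)
    # traditional two-category (binary) cross-entropy summed over the N disease labels
    bce = -K.sum(y_true * K.log(y_pred) + (1.0 - y_true) * K.log(1.0 - y_pred), axis=-1)
    # regularization term penalizing the joint prediction of mutually exclusive labels
    penalty = 0.0
    for i, j in pairs:
        penalty = penalty - K.log(1.0 - y_pred[:, i] * y_pred[:, j] + eps)
    return bce + penalty
```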
ECG Database
This work is based on the 12-lead ECG data collected by SID MEDICAL TECHNOLOGY CO., LTD from many hospitals in Shanghai. The device used to acquire the ECG signals was the Inno-12 ECG acquisition workstation, as shown in Figure 3. The ECG signals collected were 10 seconds long, and the sampling frequency was 500 Hz. The ECG signals were first magnified 400 times using electrode tabs and then discretized, ensuring the accuracy of acquisition. Considering the power frequency interference, a trap filter was developed in the hardware circuit. Each ECG sample was processed with a Butterworth bandpass filter (0.5~100 Hz) to remove high- and low-frequency noise.
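A minimal sketch of the stated 0.5-100 Hz Butterworth band-pass step is given below; the filter order and the use of zero-phase filtering are assumptions, since the text only specifies the pass band (the power-line trap filter was implemented in hardware).

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500  # sampling frequency of the recordings, Hz

def bandpass_ecg(ecg, fs=FS, lowcut=0.5, highcut=100.0, order=4):
    """Zero-phase Butterworth band-pass filter applied lead by lead."""
    nyq = fs / 2.0
    b, a = butter(order, [lowcut / nyq, highcut / nyq], btype="band")
    return filtfilt(b, a, ecg, axis=0)

# example: one 10 s, 12-lead recording of shape (5000, 12)
dummy = np.random.randn(10 * FS, 12)
clean = bandpass_ecg(dummy)
```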
A total of 39,069 data were collected, including six types of ECGs: normal ECG, sinus tachycardia, sinus bradycardia, atrial flutter, atrial tachycardia, and premature ventricular contraction (PVC), as shown in Figure 4. In a 10-second ECG signal, atrial flutter and atrial tachycardia cannot coexist simultaneously. In this work, they are considered non-coexisting arrhythmia disease label pairs. All ECG data were labeled by two professional cardiologists. If the two cardiologists disagreed, the label was determined by a third chief cardiologist. Then, these ECG data were divided into a training dataset (23,322), a validation dataset (2591), and a test dataset (13,156). The distribution of arrhythmias in the different datasets is shown in Table 1.
Experimental Setup
In terms of hardware, all experiments were carried out on a Dell T5820 workstation with an Intel Core i9-10900X CPU, 64 GB of RAM, and two graphics cards (NVIDIA RTX 3060 12 GB) sourced from Dell in Shanghai, China. In terms of software, all deep learning models were constructed using Numpy 1.19.5, TensorFlow 1.13.1, and Keras 2.2.4, which were installed on Ubuntu 20.04.
Parameter Setting 4.2.1. The Deep Learning Model
Three CNN models (i.e., 1D VGG16 [34], 1D ResNet34 [35], and 1D ResNet50 [35]) were used to compare whether our proposed method leads to performance improvements in the CNN models, as shown in Figure 5. In our method, only one CNN model is used for feature extraction. The character '/2' in each sub-image means that the stride size in the corresponding network layer is 2. The VGG16 model comprises 16 convolutional layers and adopts the traditional stacked convolution layer approach. Its model structure is relatively deep but simple. The ResNet34 model has 34 convolutional layers and adds residual structures, in contrast to VGG16. It resolves the issue of gradient vanishing during model training by incorporating skip connections that directly add the input to the output. The ResNet50 model, on the other hand, has 50 convolutional layers and utilizes bottleneck structures to reduce computational complexity and improve model efficiency.
Three deep learning models (i.e., VGG16 + LSTM + ATTENTION, ResNet34 + LSTM + ATTENTION, and ResNet50 + LSTM + ATTENTION) were used to verify whether our proposed method leads to performance improvements in the CNN models. The corresponding parameter settings and network structures are shown in Table 2. The input size of all three deep learning models was 5000 × 12. The output sizes of the three deep learning models were 44 × 512, 22 × 512, and 22 × 2048, respectively. In the LSTM layer, an intermediate output with a size of 1 × 60 was generated at each iteration. The activation function 'sigmoid' was used for the forget gate, the input gate, and the output gate. The activation function 'tanh' was used for updating the state C_t. The initialization method used for the matrix weights was 'glorot uniform'.
In the attention layer, the sizes of the weight matrix W_s, bias b_s, and query vector u_s were 60 × 60, 60 × 1, and 60 × 1, respectively. The initialization method used was 'glorot uniform'. The first dense layer consisted of 64 neurons and used the activation function 'ReLU'. The second dense layer consisted of six neurons (corresponding to the different diseases) and used the activation function 'Sigmoid'. The initialization method used in the two fully connected layers was 'glorot uniform'.
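Putting the pieces together, the sketch below assembles one of the three models (the VGG16 variant) with the sizes quoted above: 5000 × 12 input, an LSTM with a 60-dimensional hidden state and the default sigmoid/tanh gate activations, the attention layer sketched earlier, Dense(64, ReLU), and Dense(6, Sigmoid). It reuses build_cnn_feature_extractor and AdditiveAttention from the previous sketches as stand-ins for the exact backbone.

```python
from tensorflow.keras import layers, models, initializers

def build_cnn_lstm_attention(n_classes=6):
    glorot = initializers.GlorotUniform()
    inputs = layers.Input(shape=(5000, 12))
    x = build_cnn_feature_extractor(5000, 12)(inputs)   # stand-in for the 1D VGG16 backbone
    x = layers.LSTM(60, return_sequences=True,
                    kernel_initializer=glorot)(x)        # hidden states h_t, S = 60
    x = AdditiveAttention(units=60)(x)                   # weighted average Z
    x = layers.Dense(64, activation="relu", kernel_initializer=glorot)(x)
    outputs = layers.Dense(n_classes, activation="sigmoid", kernel_initializer=glorot)(x)
    return models.Model(inputs, outputs, name="cnn_lstm_attention")
```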
The Modified Cross-Entropy Loss Function
In the database created in this work, atrial flutter and atrial tachycardia have a strong negative correlation. It was found that the correlation (Pearson correlation coefficient) between atrial flutter and atrial tachycardia was −0.98 according to the correlation analysis of arrhythmia diseases based on the 200,000 ECG conclusions obtained from Shanghai Zhongshan Hospital. Thus, the loss function used in this work can be expressed using Equation (18).
where a_4 and a_5 represent the predicted probabilities of atrial flutter and atrial tachycardia, respectively, obtained from the predicted ECG label vector. The influence of the presence of both atrial flutter and atrial tachycardia in the predicted outcomes on the regularization term is shown in Figure 6a. The regularization term tended toward 0 when either only one or neither of the two (i.e., atrial flutter and atrial tachycardia) appeared in the predicted outcomes. The regularization term increased rapidly when the probabilities of atrial tachycardia and atrial flutter simultaneously exceeded 0.5. Figure 6b shows the influence of the regularization term on the partial derivative values of the loss function with respect to the model's weight matrix. When two labels with a negative correlation were present simultaneously, the corresponding ∂Loss/∂W value increased, thereby enhancing the speed of weight updates in the backward propagation process of the model. This enabled the model to promptly recognize negative correlations between the labels and adjust the weights accordingly. Conversely, when the model experienced a decrease in the speed of the weight updates, it tended to achieve stability.
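To make the behaviour plotted in Figure 6a concrete, the short numerical check below evaluates the assumed -log(1 - a4·a5) form of the regularization term (the same assumption as in the loss sketch above) for a few probability pairs: it stays close to zero when at most one of the two probabilities is high and rises steeply once both exceed about 0.5.

```python
import numpy as np

def reg_term(a4, a5, eps=1e-7):
    # assumed form of the Equation (18) penalty for atrial flutter (a4) and atrial tachycardia (a5)
    return -np.log(1.0 - a4 * a5 + eps)

for a4, a5 in [(0.05, 0.90), (0.50, 0.50), (0.70, 0.70), (0.90, 0.90), (0.99, 0.99)]:
    print(f"a4={a4:.2f}, a5={a5:.2f} -> regularization term {reg_term(a4, a5):.3f}")
```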
Evaluation Indicators
In this section, six evaluation indicators are examined to assess the performance of the presented models. The six evaluation indicators are (1) Error Num, (2) Hamming Loss, (3) 94.74%, 93.64%, and 93.52%, respectively. It is proved that a CNN with a suitable structure is effective in multi-label ECG classification. After adding LSTM + ATTENTION, the F1 scores of the three methods were 95.21%, 93.98%, and 94.16%, respectively. This shows that the prediction performance of a CNN can be improved or ensured by adding LSTM + ATTENTION. In this section, the traditional cross-entropy loss function (see Equation (16)) is used to train the presented deep learning methods, as shown in Table 2.
VGG16 [34], ResNet34 [35], and ResNet50 [35] are the most commonly used CNN models for ECG classification. In this work, VGG16, ResNet34, ResNet50, and their combinations with LSTM + ATTENTION were used to verify the performance of the improved loss function. 'Adam' was chosen as the optimizer, the initial learning rate was set to 0.001, and the number of training epochs was set to 100. Regarding the hyperparameter settings for each CNN model, we established them based on parameters published in the literature [36,37] and determined the optimal model training configuration using the GridSearchCV algorithm [38]. To evaluate the performance of the multi-label model on the validation dataset, an early stopping mechanism was introduced into the training process to prevent overfitting. The training of the model was stopped if the loss of the multi-label model on the validation dataset did not decrease in 10 consecutive training epochs.
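The training configuration described above (Adam with an initial learning rate of 0.001, at most 100 epochs, and early stopping after 10 epochs without a drop in validation loss) can be expressed in tf.keras as follows. The batch size and restore_best_weights flag are assumptions, and X_train/y_train, X_val/y_val are placeholders for the datasets of Section 3; the model builder and loss function come from the earlier sketches.

```python
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.optimizers import Adam

model = build_cnn_lstm_attention(n_classes=6)
model.compile(optimizer=Adam(learning_rate=0.001),
              loss=modified_bce_loss,          # or the traditional binary cross-entropy
              metrics=["binary_accuracy"])

early_stop = EarlyStopping(monitor="val_loss", patience=10, restore_best_weights=True)
model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          epochs=100, batch_size=32,           # batch size is an assumption
          callbacks=[early_stop])
```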
The training dataset (23,322) and the validation dataset (2591), as described in Section 3, were both used to train the three deep learning models. The test dataset (13,156) was used to test the effectiveness of the trained models.
The training process using the traditional cross-entropy loss function is shown in Figure 7a. The corresponding experimental results are shown in Table 4. The training process using the modified cross-entropy loss function is shown in Figure 7b. The corresponding experimental results are given in Table 4.
The early stopping mechanism stopped the training of the model when it entered a stable phase. In Figure 7, it can be seen that (1) the model loss did not decrease after 68 training epochs when using the traditional loss function, and (2) the model loss did not decrease after 52 training epochs when using the modified loss function. It can be concluded that fewer training epochs were needed when using the modified loss function.
In addition, it can be seen in Table 4 that (1) for both loss functions, VGG16 outperformed the ResNet34 and ResNet50 models across all evaluation metrics; (2) 'Error Num' was significantly reduced when using the modified loss function; (3) 'Precision' slightly increased when using the modified loss function; and (4) 'Subset Accuracy', 'Jaccard Index', 'Recall', and 'F1 score' decreased slightly when using the modified loss function. It can be concluded that the modified loss function can significantly reduce the number of coexisting strongly negatively correlated labels while guaranteeing model performance. Therefore, it can be concluded that the modified loss function can effectively prevent the occurrence of strongly negatively correlated arrhythmias in the multi-label diagnosis of arrhythmias.
Table 5 compares the accuracy of classifying different arrhythmias using the two different loss functions. In the table, it can be seen that there was a slight improvement in accuracy when diagnosing normal ECG, sinus tachycardia, atrial flutter, and atrial tachycardia with the improved loss function. However, it should be noted that the model's accuracy in classifying PVCs decreased by more than 1%. Furthermore, with regard to the overall improvement in precision evident in Table 4 and Figure 8, we conclude that using the modified two-category cross-entropy loss function significantly reduces the number of misdiagnoses of atrial tachycardia.
Discussion
This article proposes a multi-label diagnosis method for cardiac arrhythmias based on a modified two-category cross-entropy loss function. In order to validate the performance of LSTM + ATTENTION, the classic neural networks VGG16, ResNet34, and ResNet50 are used for evaluation. The results show that the prediction performance of the CNN can be improved or ensured by adding LSTM + ATTENTION.
Many types of diseases can be identified from ECG signals, and some of these diseases cannot exist simultaneously. We compare the traditional loss function to our improved loss function across different CNN models. The results indicate that using the traditional loss function still produces non-coexisting labels. However, when using the proposed modified loss function in this paper with the addition of a regularization term, the model's weight update rate between negatively correlated labels is strengthened, forcing the CNN model to learn the connections between non-coexisting labels and preventing the appearance of non-coexisting label pairs in the diagnostic results. In addition, the improved loss function shortens the required training period of the model, demonstrating the effectiveness of our approach in reducing model training costs and enhancing the feasibility of clinical applications.
To validate the classification performance of the modified loss function in diagnosing cardiac arrhythmias using neural network models, we compare the accuracy of the two different loss functions on six types of ECG arrhythmias. The results indicate that our method can improve the precision of the model for the negatively correlated atrial tachycardia and atrial flutter labels. This means that it can reduce the risk of false positives in medical diagnosis, demonstrating the potential value of the improved loss function in clinical applications. However, our method shows decreased accuracy in the identification of PVCs and sinus bradycardia. Currently, our research focuses on six common types of cardiac arrhythmias. In the future, we will expand our scope to include a broader range of cardiac arrhythmia datasets.
Conclusions
This work applies a CNN + LSTM + ATTENTION model to multi-label ECG classification. To prevent the occurrence of label pairs that cannot exist simultaneously, a modified cross-entropy loss function is proposed in the presented method. The modified loss function introduces a regularization term to increase the penalty for the coexistence of arrhythmias exhibiting a strong negative correlation. Experimental results show that the modified loss function helps prevent the occurrence of strongly negatively correlated arrhythmias, sacrificing prediction accuracy by only a small margin. This work provides theoretical evidence for multi-label ECG classification in clinical diagnosis.
Figure 1 .
Figure 1. Structure of the deep learning model for multi-label diagnosis of cardiac arrhythmias.
Figure 6 .
Figure 6. Changes in the regularization term: (a) The impact of negatively correlated labels on the regularization term. (b) The impact of the regularization term on the partial derivative values of the loss function with respect to the model's weight matrix.
Figure 7 .
Figure 7. Comparison of the traditional and modified loss functions: (a) training process using the traditional loss function; (b) training process using the modified loss function.
where f_t, i_t, and O_t represent the update results of the forget gate, input gate, and output gate, respectively. W_f, W_i, W_o, and W_c represent the weights of the forget gate, input gate, output gate, and LSTM state unit, respectively.
Figure 2. Structure of the LSTM and attention mechanism.
Table 1 .
Distribution of cardiac arrhythmias in different datasets.
Normal Sinus Tachycardia Sinus Bradycardia Atrial Flutter Atrial Tachycardia PVC
Note: atrial flutter and atrial tachycardia are non-coexisting arrhythmia disease label pairs.
Table 2 .
Parameter settings and network structures of the three deep learning models.
Table 3 .
Comparison of the performance of the three CNN models after adding LSTM + ATTENTION.
Table 4 .
Comparison between the modified and traditional loss functions. Effectiveness of the Modified Cross-Entropy Loss Function: In this section, the modified cross-entropy loss function (see Equation (18)) is used for training the presented deep learning methods, as shown in Table 2. The other parameter settings are the same as those used in Section 4.2.
Table 5 .
Comparison of the accuracy of 6 types of cardiac arrhythmias between the modified and the traditional loss functions. Comparison of the precision of 6 types of cardiac arrhythmias between the modified and traditional loss functions.
"Medicine",
"Computer Science"
] |
Sometimes We Do Not Hear What People Say, Instead We Hear What We Expect Them to Say
Sometimes people make mistakes when they are speaking. For example, someone might say, “Can you hand me the hammer?” when they meant to ask for a screwdriver. Because this mistake is related to the meaning, it is called a semantic error. Sometimes listeners (or readers) notice these errors and sometimes they do not. Language scientists are interested in how people’s brains respond when sentences have semantic errors. To study this, scientists have done experiments using a technique called EEG. These experiments have shown that people’s brains respond differently to different kinds of semantic errors. In particular, there is a certain brain response based on how well the incorrect word fits in with the other words in the sentence. These experiments have shown that our brains often use knowledge about what kinds of words are expected in a sentence to construct meaning from that sentence.
she could do something to help. "Oh, since you are standing there, you could peel the peas," I said. I noticed my mistake, but when she heard "peel" she had already started reaching for the carrots, and she did not even notice that I had said "peas" instead of "carrots"! Compare that with the time when I was in third grade and I went up to my teacher and asked, "Mom, can I go to the bathroom?" Much to my embarrassment, I think the whole class heard my mistake and everyone started laughing! Language scientists are often interested in these kinds of mistakes and whether or not people notice them. In both cases, the mistake I made was a semantic error. Semantics refers to the meaning of language.
SEMANTICS
The meaning of words and sentences. In linguistics, semantics often refers to the study of the meaning of language.
Consider this famous sentence written by the linguist Noam Chomsky: "Colorless green ideas sleep furiously." Would you agree that the words are all in the right place in the sentence? And yet, the sentence does not make any sense. That is because some of the words are combined in a way that does not quite work. For example, it is not possible for something to be both green and colorless. Noam Chomsky came up with this sentence to make exactly this point: sometimes words can be in the right place in a sentence, but the sentence still does not make sense, because the words are combined in a way that is not meaningful.
Similar to the examples provided above, language scientists have discovered that some semantic errors (also called semantic anomalies) are easier to notice than others. Language scientists study how people react to different kinds of language mistakes, because it can tell us something about how the brain constructs meaning from the words it is reading or hearing. Scientists want to understand what determines whether or not someone notices a semantic anomaly.
SEMANTIC ANOMALY
A word in a sentence that is unexpected given the other words in the sentence. For example, I take my coffee with cream and dogs. Why do people notice that survivors should not be buried when the story is about a bike crash but not a plane crash? One important difference between the two stories is how well the word "survivor" fits into the scenario. When a plane crashes, it is usually an extremely devastating event, so people are very likely to talk about whether or not there are any survivors. However, when there is a bicycle crash, it is unlikely to be as devastating. The people on the bikes might be hurt, but they are unlikely to be killed. So, in the case of a bicycle crash, people are very unlikely to talk about survivors. One idea about what is happening when people do not notice semantic anomalies is that, when a word fits well into the story, the brain might not fully interpret what the word means. For example, the word survivor means "a person who is still alive." In stories where we expect to hear about survivors, the brain might activate only the idea of "person" and not "who is still alive." This is just one of the ways that the brain can be a bit lazy when interpreting language [ ].
WHAT HAPPENS IN THE BRAIN DURING SEMANTIC ANOMALIES?
After decades of research, language scientists have found that people's brains respond differently to different kinds of errors in a sentence. One way to study the brain's response to semantic errors is to use an electroencephalogram (EEG). EEG measures the electrical activity
ELECTROENCEPHALOGRAM (EEG)
Measurement of the electrical activity of many neurons in the brain, using electrodes placed on the scalp.
that is always happening in every part of the brain. To measure this activity, scientists ask people to wear special caps that are covered with sensors called electrodes. The electrodes sit on the scalp and measure the electrical activity coming from the neurons (brain cells) that are right underneath the electrodes. Scientists can then study how the electrical activity changes based on what volunteers are doing.
Scientists have recorded EEGs while volunteers read sentences with semantic anomalies. In their experiments, scientists asked volunteers to read many sentences that contain semantic mistakes. The scientists then take the average of the brain's activity as the volunteers read the sentences. The averaged brain activity is called an event-related potential (ERP) waveform, which is like a
EVENT RELATED POTENTIAL (ERP)
Peaks or troughs in the averaged EEG signal that reflect the brain's responses to events we see or hear.
wave that contains several high and low points. Those high and low
Figure: The type of semantic anomaly can affect the brain's N400 response.
points represent the brain's response to the sentence over time. After decades of research, scientists have learned that there are predictable patterns of high and low points in the ERP, and those high and low points were given names. For example, a low point that occurs around 400 milliseconds (ms; 400/1,000 of a second) after an unexpected word appears is known as an N400 ERP component (this is just one of the many types of stimuli that cause
N400 ERP COMPONENT
A part of the ERP that typically has a low point around 400 ms (therefore "400") after a person sees or hears a stimulus. In language studies, the N400 is larger when a word is unexpected.
an N400 [ ])
. The size of the ERP components (measured in voltage) reflects how strong the brain's response is, while the timing of these ERP components (measured in milliseconds after the stimulus) reflects the timing of the response.
Language scientists have found that the size of the N400 ERP component depends on the kind of semantic anomaly that is in the sentence. Some semantic anomalies are very easy to notice. For example, everyone notices the mistake in the sentence: "I take my coffee with cream and dogs." Scientists have found that, when volunteers read these kinds of anomalies, their brains show a large N400 component after reading the incorrect word. However, when volunteers read sentences with hard-to-detect anomalies (like the plane crash/survivors example above) their brains do NOT show a large N400 component when they read the incorrect word (Figure ) [ ].
When an anomaly is expected because the word fits into the scenario ("the eggs ate the toast"), or if there is no anomaly at all, the brain does not show a large N400 effect. This can be seen by the red line. However, when the anomaly is unexpected ("coffee with cream and dogs"), the brain shows a large N400 effect. This can be seen by the blue line.
This experiment shows that we can expect an N400 when a semantic anomaly is easy to notice, but not when it is difficult to notice. However, if that were the only kind of difference the N400 could detect, it would not be useful to scientists. Instead of doing an expensive brain study, scientists could just ask volunteers whether or not they noticed the anomaly and find the same results! Importantly, language scientists have also found that people's brains do not always show a large N400 component when they read easy-to-detect semantic anomalies. For example, when people read sentences like, "At breakfast, the eggs ate the toast," their brains do not show a large N400 response on the word "toast," even though they notice the anomaly in the sentence. If we compare these two easy-to-detect anomalies, we can get an idea about what is going on in the brain. In the "cream and dogs" example, "dogs" does not fit into the scenario being described in the sentence. However, in the "eggs ate the toast" example, the scenario is about breakfast, and both eggs and toast are things people might say when talking about breakfast. What scientists have decided is that the N400 is larger when a word does not fit well into the scenario being described in the sentence. If a word fits well into the scenario, even if the word does not really make sense, people's brains do not show a large N400 and, if you ask them, they might not have even noticed the mistake.
WHY IS IT IMPORTANT TO STUDY SEMANTIC ANOMALIES?
The study of how people's brains react to semantic anomalies helps language scientists understand how the brain builds meaning from words and sentences. These studies suggest that the brain often uses a "top-down" approach to understand language rather than a "bottom-up" approach. A bottom-up approach would mean that the brain builds the meaning of a sentence by fully understanding the meaning of each word as we read or hear it. If the brain used a bottom-up approach, more people should notice semantic anomalies. That is, they should notice that you should not bury survivors, because they would activate the full meaning of the word "survivors," which includes the fact that the people are still alive.
Using a top-down approach, the brain uses background knowledge to process the meaning of a sentence, based on how expected a word is in that scenario. When a word is expected because it fits well, the brain might be a bit lazy when determining the meaning of that word. For example, the word "survivor" means "a person who is still alive." In stories where we expect to hear about survivors, the brain might just activate "person" and not activate the rest of the meaning. Understanding how people use both top-down and bottom-up approaches to make meaning from language is useful for doctors who treat patients who have language disorders.
Although we have clearly seen the importance of the N400 in understanding how the brain responds to different kinds of semantic anomalies, scientists are still trying to figure out exactly what the N400 tells us the brain is doing [ ]. One idea is that the brain makes predictions about what words it will see, and the N400 is large when a word is unexpected. A different idea is that the brain is simply checking words as they come in to make sure they fit into the sentence, and a large N400 reflects that a word does not fit.
CONFLICT OF INTEREST:
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
COPYRIGHT © Patson. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. | 3,038 | 2020-03-27T00:00:00.000 | [
"Linguistics",
"Psychology"
] |
TET2 Inhibits PD-L1 Gene Expression in Breast Cancer Cells through Histone Deacetylation
Simple Summary Programmed cell death ligand 1 (PD-L1) is an essential immune checkpoint molecule that helps tumor cells to escape immune surveillance. The aim of the current study was to investigate the epigenetic mechanisms underlying the aberrant expression of PD-L1 in breast cancer cells. Here, we identified TET2 as a negative regulator of PD-L1 gene transcription in breast cancer cells. Mechanistically, TET2 recruits HDAC1/2 to the PD-L1 promoter and facilitates the deacetylation of H3K27ac, resulting in the suppression of PD-L1 gene transcription. Our work reveals an unanticipated role of the TET2-HDAC1/2 complex in the regulation of PD-L1 gene expression, providing new insights into the epigenetic mechanisms that drive immune evasion during breast cancer pathogenesis. Abstract Activation of the PD-1/PD-L1 checkpoint is a critical step for the immune evasion of malignant tumors including breast cancer. However, the epigenetic mechanism underlying the aberrant expression of PD-L1 in breast cancer cells remains poorly understood. To investigate the role of TET2 in the regulation of PD-L1 gene expression, quantitative reverse transcription PCR (RT-qPCR), Western blotting, chromatin immunoprecipitation (ChIP) assay and MeDIP/hMeDIP-qPCR were performed on MCF7 and MDA-MB-231 human breast cancer cells. Here, we reported that TET2 depletion upregulated PD-L1 gene expression in MCF7 cells. Conversely, ectopic expression of TET2 inhibited PD-L1 gene expression in MDA-MB-231 cells. Mechanistically, TET2 protein recruits histone deacetylases (HDACs) to the PD-L1 gene promoter and orchestrates a repressive chromatin structure to suppress PD-L1 gene transcription, which is likely independent of DNA demethylation. Consistently, treatment with HDAC inhibitors upregulated PD-L1 gene expression in wild-type (WT) but not TET2 KO MCF7 cells. Furthermore, analysis of the CCLE and TCGA data showed a negative correlation between TET2 and PD-L1 expression in breast cancer. Taken together, our results identify a new epigenetic regulatory mechanism of PD-L1 gene transcription, linking the catalytic activity-independent role of TET2 to the anti-tumor immunity in breast cancer.
Introduction
For the past few years, immunotherapy has emerged as a frontline treatment for multiple malignancies, and now joins the ranks of surgery, radiation, chemotherapy, and targeted therapy for cancer therapy [1]. Different from traditional therapies, cancer immunotherapy utilizes the body's own immune system to fight against tumor cells [2]. Among various immunotherapeutic strategies, the immune checkpoint blockade (ICB) has the broadest impact and prospects, with several antibodies targeting CTLA4 (cytotoxic T lymphocyte antigen 4), PD1 (programmed cell death 1), and PD-L1 (PD-1 ligand 1) approved by the FDA for the treatment of a number of different cancers [3][4][5]. PD-L1, encoded by the CD274 gene, is an essential immune checkpoint molecule that is mainly expressed on the surface of tumor cells and macrophages [6]. The expression of PD-L1 is commonly elevated in cancer cells [7,8]. Cancer cells may exhibit immune escape upon recognition of PD-L1 by PD-1, which mediates T cell exhaustion [9]. Therefore, understanding the regulatory mechanisms of PD-L1 gene expression in cancer cells is of great importance for improving responsiveness to anti-PD-L1 immunotherapy and suppressing immune evasion.
Many studies have revealed the transcriptional regulatory mechanisms of the PD-L1 gene, including the inflammatory cytokines, specific transcription factors, and so on [10][11][12][13]. In addition, the epigenetic modifiers also play an important role in regulating PD-L1 gene transcription, altering the chromatin accessibility for the transcription factors through DNA methylation or histone modifications [14][15][16][17]. For instance, DNA methylation at the promoter region is commonly considered to be an epigenetic mechanism of transcriptional silencing [18]. In contrast, TET (Ten-eleven translocation) family members, including TET1, TET2, and TET3, have been regarded as DNA demethylases for gene activation [19,20]. TET proteins initiate active or passive DNA demethylation by promoting 5mC (5-methylcytosine) oxidization [21]. Recent studies have shown that TET2 could act as a critical player in the regulation of immune homeostasis and anti-tumor immunity [22][23][24][25][26][27]. Although a recent work reported that TET2 augments the IFN-gamma-induced PD-L1 expression in melanoma, colon cancer, and acute monocytic leukemia cells [15], whether TET2 is involved in the epigenetic regulation of PD-L1 gene expression in breast cancer remains largely unknown.
In this study, we set out to investigate the relationship between TET2 and PD-L1 in breast cancer. Surprisingly, we found that TET2 is a suppressor of PD-L1 gene transcription in breast cancer cells. Mechanistically, TET2 inhibits PD-L1 gene expression by recruiting HDAC1/2 to the PD-L1 gene promoter and facilitating histone deacetylation, which is likely independent of DNA methylation and hydroxymethylation.
Cell Cultures
MCF7 and MDA-MB-231 human breast cancer cells were cultured in DMEM high glucose medium (Hyclone, Marlborough, MA, USA) supplemented with 10% fetal bovine serum (FBS) (BI) and 1% penicillin/streptomycin (100 U/mL, Hyclone). All cells were cultured at 37 °C in a humidified incubator containing 5% CO2. The identity of the two cell lines was confirmed by STR genotyping analysis.
Stable Cell Lines Construction
TET2 knockout MCF7 cells were generated by the CRISPR method as described previously [28]. Mono-cell colonies were picked and expanded for identification and future experiments. The TET2 knockout efficiency of these colonies was examined by Western blot analysis of TET2 protein expression and PCR analysis of genomic DNA for indels around the sgRNA targeting region. The following target sequences were used for sgRNA design:
TET2KO sg#1: AGGACTCACACGACTATTC
TET2KO sg#2: GGAGAAAGACGTAACTTCG
Mock (empty vector) and TET2-overexpressing (O/E) MDA-MB-231 cells were generated using the piggybac system as described previously [29]. In brief, MDA-MB-231 cells were co-transfected with pCMV-PBase and piggybac plasmids (pPB-CAG-ires-Pac empty vector or pPB-CAG-Flag-HA-TET2-ires-Pac). Puromycin (2 µg/mL) was added to the culture medium at 48 h post-transfection. The puromycin-resistant stable cell colonies were picked and expanded for identification and future experiments.
Reagents and Antibodies
The following cytokines and chemical inhibitors were used for the treatment of cells in indicated experiments: IFN-gamma (Peprotech, Cranbury, NJ, USA, #300-02), 5-Aza-CdR
Knocking Down by shRNAs
The oligos for TET2 shRNAs were subcloned into the pLKO.1-TRC vector (Addgene, Watertown, MA, USA). The shRNA lentiviral particles were packaged in 293T cells according to the manufacturer's guidelines. MCF7 cells were infected with each lentivirus supernatant in the presence of 8 µg/mL polybrene. At 48 h post-infection, the infected cells were cultured with 2 µg/mL puromycin. After continuous puromycin selection for 5 days, the surviving MCF7 cells were collected for RNA isolation and RNA-seq.
RT-qPCR
RNA was extracted from the cells using TRIzol reagent (Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's protocol. The protocol of RT-qPCR was described in our previous study [13]. Gene expression levels were normalized to GAPDH. The sequences of the primers are shown in Supplementary Table S4.
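As a concrete illustration of the GAPDH normalization step, the sketch below implements the standard 2^-ΔΔCt calculation in Python; the Ct values and replicate counts are hypothetical stand-ins for actual qPCR readouts, since the paper's own protocol is given in [13].

```python
import numpy as np

def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the standard 2^-ddCt method.

    Each argument is an array of replicate Ct values. The reference
    gene (e.g., GAPDH) corrects for loading differences.
    """
    dct_treated = np.mean(ct_target_treated) - np.mean(ct_ref_treated)
    dct_control = np.mean(ct_target_control) - np.mean(ct_ref_control)
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)

# Hypothetical Ct values: PD-L1 vs. GAPDH in TET2 KO vs. WT MCF7 cells
fold = ddct_fold_change([27.1, 27.3, 27.0], [16.2, 16.1, 16.3],
                        [29.5, 29.4, 29.6], [16.0, 16.2, 16.1])
print(f"PD-L1 fold change (KO vs WT): {fold:.2f}")
```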
ChIP-qPCR
The chromatin immunoprecipitation (ChIP) assay was performed as previously described [29]. The enrichment levels of TET2, H3K4me3, H3K27me3, H3K27ac, and HDAC1/2 at the PD-L1 promoter were quantified by qPCR analysis of the ChIP products and the relevant inputs. Primer sequences used for ChIP-qPCR are listed in Supplementary Table S4.
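For readers unfamiliar with how ChIP enrichment is quantified against input, the following Python sketch shows one common percent-of-input calculation; the Ct values and the 1% input fraction are assumptions for illustration, not the authors' exact settings.

```python
import numpy as np

def percent_input(ct_ip, ct_input, input_fraction=0.01):
    """ChIP enrichment expressed as percent of input chromatin.

    ct_ip / ct_input: qPCR Ct values for the IP and input samples.
    input_fraction: fraction of chromatin saved as input (here 1%).
    """
    # Adjust the input Ct for the dilution factor before comparing
    ct_input_adj = np.asarray(ct_input) - np.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adj - np.asarray(ct_ip))

# Hypothetical Ct values for an H3K27ac ChIP at the PD-L1 promoter
print(percent_input(ct_ip=[26.5], ct_input=[24.0]))
```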
Co-Immunoprecipitation
The co-immunoprecipitation assay was conducted as previously described [31].
RNA-seq Analysis
RNA samples from MCF7 cells (WT, TET2 KO #1, TET2 KO #2, scramble, shTET2 #1, and shTET2 #2) were subjected to RNA-seq using the Illumina platform. First, raw reads were trimmed to remove adapters and low-quality bases using the trim_galore program (version 0.6.6) with the parameters "--paired --fastqc". Second, the trimmed fastq files were aligned to the human reference genome (hg19.fa from UCSC) with the tophat program (v2.1.1) with default parameters. We used the FPKM_count.py script (RSeQC) to calculate FPKM (fragments per kilobase of transcript per million mapped reads) values to represent the abundance of gene expression. The bedgraph files were uploaded to the UCSC genome browser for visualization. The RNA-seq data have been deposited in the NCBI Gene Expression Omnibus (GEO) under the accession number GSE164032 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE164032, submitted on 30 December 2020, released on 1 June 2021).
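The pipeline described above can be scripted end to end; the Python sketch below mirrors the stated steps (Trim Galore, TopHat2 against hg19, and FPKM_count.py from RSeQC) under the assumption of paired-end gzipped fastq input, with all file and index names hypothetical.

```python
import subprocess

# Sample names follow the methods section; file and index names are hypothetical.
samples = ["WT", "TET2_KO_1", "TET2_KO_2", "scramble", "shTET2_1", "shTET2_2"]

for s in samples:
    r1, r2 = f"{s}_R1.fastq.gz", f"{s}_R2.fastq.gz"
    # 1) adapter and quality trimming with Trim Galore in paired-end mode
    subprocess.run(["trim_galore", "--paired", "--fastqc", r1, r2], check=True)
    # 2) alignment to hg19 with TopHat2 using default parameters
    subprocess.run(["tophat", "-o", f"{s}_tophat", "hg19_index",
                    f"{s}_R1_val_1.fq.gz", f"{s}_R2_val_2.fq.gz"], check=True)
    # 3) per-gene FPKM quantification with RSeQC's FPKM_count.py
    subprocess.run(["FPKM_count.py", "-i", f"{s}_tophat/accepted_hits.bam",
                    "-r", "hg19_RefSeq.bed", "-o", f"{s}_fpkm"], check=True)
```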
Analysis of CCLE and TCGA Data
The gene expression data of cell lines from 23 cancer types were downloaded from the CCLE (Cancer Cell Line Encyclopedia, http://www.broadinstitute.org/ccle/home; accessed on 2 May 2018). The expression levels of TET2 and PD-L1 in breast invasive carcinoma (Agilent microarray data) were downloaded from TCGA (The Cancer Genome Atlas, http://www.cbioportal.org; accessed on 19 March 2020). The TET2 and PD-L1 mRNA levels of the cancer cell lines and breast cancer tissues were used for correlation analysis and linear regression analysis.
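The correlation and linear regression analysis described here can be reproduced with standard tools; the sketch below uses SciPy on synthetic expression values (a hypothetical stand-in for the downloaded CCLE/TCGA matrices) to show the computation of the Pearson coefficient, its p-value, and the regression line.

```python
import numpy as np
from scipy import stats

# Hypothetical stand-in for the downloaded matrix: one row per cell line,
# log2 mRNA levels of TET2 and PD-L1 (CD274).
rng = np.random.default_rng(0)
tet2 = rng.normal(5.0, 1.0, 57)
pdl1 = -0.4 * tet2 + rng.normal(3.0, 1.0, 57)   # built-in negative trend

r, p = stats.pearsonr(tet2, pdl1)
slope, intercept, *_ = stats.linregress(tet2, pdl1)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")
print(f"fit: PD-L1 = {slope:.2f} * TET2 + {intercept:.2f}")
```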
Statistical Analysis
GraphPad Prism software was used for quantitative data visualization and statistical analysis. All graphs present the average of at least three independent experiments. The standard deviation (S.D.) was calculated and presented as error bars in the graphs. Comparisons between two groups were analyzed by the paired Student's t-test. Multiple comparisons were analyzed by one-way ANOVA with Tukey's post hoc test. The significance level was set at p < 0.05 and is indicated by asterisks in all graphs (*, p < 0.05; **, p < 0.01; ***, p < 0.001; ****, p < 0.0001; ns, not significant).
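For transparency, the following Python sketch reproduces the described testing scheme (paired t-test for two groups, one-way ANOVA with Tukey's post hoc test for multiple groups) on hypothetical normalized expression values; it illustrates the procedure rather than the authors' actual data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical normalized PD-L1 expression from three independent experiments
wt  = np.array([1.00, 1.08, 0.95])
ko1 = np.array([2.10, 2.35, 2.02])
ko2 = np.array([1.85, 2.05, 1.97])

# Two-group comparison, paired across independent experiments
t, p = stats.ttest_rel(wt, ko1)
print(f"paired t-test WT vs KO#1: p = {p:.4f}")

# Multiple groups: one-way ANOVA followed by Tukey's post hoc test
f, p_anova = stats.f_oneway(wt, ko1, ko2)
groups = np.concatenate([wt, ko1, ko2])
labels = ["WT"] * 3 + ["KO1"] * 3 + ["KO2"] * 3
print(f"ANOVA: p = {p_anova:.4f}")
print(pairwise_tukeyhsd(groups, labels))
```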
TET2 Is a Negative Regulator of PD-L1 Gene Transcription in Breast Cancer Cells
In an RNA-seq analysis of the downstream genes of TET2 in MCF7 cells, we identified that the PD-L1 mRNA expression level was upregulated in TET2 KO MCF7 cells compared to wild type (WT) MCF7 cells (Figure 1A). RT-qPCR analysis also confirmed the substantial increase in PD-L1 mRNA levels in TET2 KO MCF7 cells (Figure 1B). Given that the basal expression level of PD-L1 is relatively low in MCF7 cells, we also treated cells with IFN-gamma and measured the impact of TET2 depletion on PD-L1 expression. RT-qPCR and Western blotting analyses showed that IFN-gamma treatment dramatically upregulated PD-L1 expression in both WT and TET2 KO MCF7 cells (Figure 1C,D). Importantly, regardless of the absence or presence of IFN-gamma stimulation, the mRNA and protein levels of PD-L1 in TET2 KO MCF7 cells were correspondingly higher than those in WT MCF7 cells (Figure 1C,D). Furthermore, the PD-L1 intron mRNA level displayed a similar increase in TET2 KO MCF7 cells (Figure S1A), indicating a direct effect of TET2 depletion on nascent RNA synthesis rather than mRNA stability. Knocking down TET2 by shRNA also increased the PD-L1 mRNA level in MCF7 cells (Figure S1B). Moreover, MDA-MB-231 is a commonly used triple-negative breast cancer cell line in laboratory research. Compared to MCF7 cells, MDA-MB-231 cells express relatively low levels of TET2 but high levels of PD-L1, showing the features of an ideal model for a TET2 "gain-of-function" study. To verify the suppressive role of TET2 on PD-L1 expression, we overexpressed TET2 in MDA-MB-231 cells and observed the downregulation of PD-L1 gene expression upon TET2 overexpression (Figure S2A,B). Overall, our data demonstrate that TET2 functions as a negative regulator of PD-L1 gene transcription in breast cancer cells.
TET2 KO Does Not Alter the DNA Methylation and Hydroxymethylation Level at the Promoter Region of PD-L1 Gene
Next, we investigated the molecular mechanism through which TET2 inhibits PD-L1 gene transcription in breast cancer cells. By analyzing the published TET2 ChIP-seq data (GSE120756) [32], we discovered that TET2 directly bound to the promoter region of the PD-L1 gene in MCF7 cells (Figure 2A). Our TET2 ChIP-qPCR data also confirmed the occupancy of TET2 at the PD-L1 promoter, indicating a direct action of TET2 on PD-L1 gene transcription (Figure 2B). Since TET2 has the ability to catalyze 5mC oxidation and initiate DNA demethylation, we examined the 5mC and 5hmC enrichment at the promoter region of the PD-L1 gene by MeDIP- and hMeDIP-qPCR. Unexpectedly, TET2 depletion had no significant effect on the 5mC and 5hmC enrichment at the PD-L1 promoter region (Figure 2C,D). Given the critical role of TET2 in DNA demethylation, TET2 KO may increase DNA methylation levels at other genomic regulatory regions beyond the PD-L1 promoter, which might contribute to the transcriptional activation of the PD-L1 gene. To exclude this possibility, we treated WT and TET2 KO MCF7 cells with 5-Aza-CdR (a DNMT inhibitor) and examined the mRNA expression of the PD-L1 gene. As shown in Figure 2E, 5-Aza-CdR treatment was not sufficient to change PD-L1 expression in either WT or TET2 KO MCF7 cells. Thus, our data suggest that TET2-mediated inhibition of PD-L1 gene transcription is not dependent on its 5mC dioxygenase activity.
TET2 Recruits HDAC1/2 to Deacetylate H3K27ac at PD-L1 Promoter
In addition to its DNA-demethylating activity, TET2 can also function as a transcriptional co-factor to regulate gene expression [33]. We profiled the well-studied histone modifications (H3K4me3, H3K27me3, and H3K27ac) at the PD-L1 promoter region in WT and TET2 KO MCF7 cells. ChIP assays showed that the levels of H3K4me3 enrichment at the PD-L1 promoter were comparable between WT and TET2 KO MCF7 cells (Figure 3A), while the repressive H3K27me3 mark was rare at the PD-L1 promoter in both WT and TET2 KO MCF7 cells (Figure 3B). Consistent with the changes in PD-L1 gene transcription, the H3K27ac levels at the PD-L1 promoter region were increased in MCF7 cells upon TET2 depletion (Figure 3C). Additionally, TET2 overexpression reduced H3K27ac enrichment at the PD-L1 promoter region in MDA-MB-231 cells (Figure S3A). Our data suggest that TET2 is able to modulate the H3K27ac level at the PD-L1 promoter in breast cancer cells.
TET2 has been shown to recruit histone deacetylases (HDAC1/2) to specific gene loci, thereby mediating transcriptional repression in immune cells [34,35]. Therefore, we performed ChIP-qPCR analysis of HDAC1 and HDAC2 in WT and TET2 KO MCF7 cells. As expected, the binding of HDAC1 and HDAC2 to the PD-L1 promoter in TET2 KO MCF7 cells was significantly lower than that in WT cells (Figure 3D,E). Moreover, we conducted a co-IP assay for the interaction between TET2 and HDAC1/2 using HDAC-specific antibodies. As expected, we detected a TET2 band in the anti-HDAC1 immunoprecipitates from MCF7 cells and TET2-O/E MDA-MB-231 cells by Western blotting (Figure 3F and Figure S3C). Next, we chose two HDAC inhibitors (TSA and SAHA) to treat WT and TET2 KO MCF7 cells. As shown in Figure 3G, both inhibitors increased the global H3K27ac level in WT and TET2 KO MCF7 cells. Importantly, the PD-L1 mRNA expression level was elevated by treatment with HDAC inhibitors only in WT cells but not in TET2 KO MCF7 cells (Figure 3H). We also observed that HDAC1 binding at the PD-L1 gene promoter was increased in TET2 O/E MDA-MB-231 cells (Figure S3B). Taken together, our data suggest a working model in which TET2 recruits HDAC1/2 to deacetylate H3K27ac and establishes a repressive chromatin structure at the PD-L1 promoter (Figure 3I).
Negative Correlation between TET2 and PD-L1 Expression Levels in Breast Cancer
To investigate the clinical significance of the TET2/PD-L1 axis, we analyzed the relationship between TET2 and PD-L1 expression levels in online breast cancer data. CCLE data analysis showed that TET2 mRNA expression levels are negatively associated with PD-L1 mRNA expression levels in 57 breast cancer cell lines (p = 0.0031) (Figure 4A). Interestingly, by analyzing other types of cancer cell lines from CCLE, we found that at least two other cancer types (lung and soft tissue) had a significant negative correlation between TET2 and PD-L1 mRNA expression levels (Table S1). We also analyzed the correlation between TET2 and PD-L1 by separating the CCLE breast cancer cell lines into three tumor subtypes and found that TET2 expression is negatively correlated with PD-L1 expression in the luminal subtype (p = 0.0282) (Table S2). In a similar manner, TCGA data analysis also showed that TET2 expression levels had a negative correlation with PD-L1 expression levels in breast cancer (n = 1904, p = 0.0065) (Figure 4B). Through an analysis of the four subtypes of TCGA breast cancer data, we found that TET2 and PD-L1 showed a negative correlation only in the luminal B and Her2-enriched subtypes (Table S3).
Discussion
In this study, we demonstrate that TET2 inhibits PD-L1 gene expression in breast cancer cells both under baseline conditions and upon IFN-gamma stimulation. Conversely, a recent work from Xu et al. [15] showed that IFN-gamma-induced PD-L1 gene expression was impaired by TET2 depletion in murine melanoma (B16-OVA), colon tumor cells (MC38), and human monocytic cells (THP-1). Mechanistically, they revealed that TET2 could be recruited by STAT1 to hydroxymethylate the PD-L1 gene promoter and enhance its transcription in murine melanoma and colon tumor cells upon IFN-gamma stimulation. However, our data showed that the PD-L1 promoter of breast cancer cells is in a DNA hypo-methylated status and that TET2 depletion does not alter the DNA methylation or hydroxymethylation levels at the promoter region of the PD-L1 gene. Moreover, we found that treatment with 5-Aza-CdR, a DNMT inhibitor, could not enhance PD-L1 gene expression in either WT or TET2 KO MCF7 cells, suggesting that TET2 may repress PD-L1 gene expression in a catalytic-activity-independent manner. These opposite results indicate that TET2-mediated regulation of PD-L1 gene expression may be largely dependent on the cancer/tissue type.
In addition to the well-known DNA demethylation activity, TET proteins have been shown to recruit histone deacetylases (HDACs) and mediate transcriptional repression in immune cells [34][35][36]. Consistently, our study demonstrates that TET2 recruits HDAC1/2 to deacetylate H3K27ac at the PD-L1 promoter, resulting in the transcriptional suppression of the PD-L1 gene. Although our work identifies TET2 as a scaffold protein for the negative regulation of PD-L1 gene transcription in breast cancer cells, it is still unclear how TET2 itself is recruited to the PD-L1 promoter. Since dozens of transcription factors have been identified to recruit TET2 to specific gene loci for epigenetic regulation in different types of tissues [37][38][39], we speculate that a specific transcription factor may be responsible for this task in breast cancer.
Previous reports have shown that the downregulation of TET2 expression and 5hmC levels is an epigenetic hallmark of multiple types of cancers [21,[40][41][42]]. Dysregulation of the TET2/5hmC pathway promotes epithelial-mesenchymal transition (EMT), chemotherapy resistance, proliferation, invasion, and metastasis during breast cancer pathogenesis [43][44][45]. Based on the aforementioned regulatory mechanism, we speculate that TET2 loss may facilitate immune evasion by breast cancer cells through upregulating PD-L1 gene expression. Consistently, we observed an inverse correlation between PD-L1 and TET2 expression levels, whether in breast cancer cell lines cultured in vitro (CCLE data) or in cancer samples from patients (TCGA data). Therefore, our work expands the current understanding of the pleiotropic role of TET2 loss in breast cancer pathogenesis, especially as it relates to immune evasion.
Interestingly, by analyzing multiple cancer cell lines from CCLE, we noticed that the correlation between TET2 and PD-L1 in lung cancer cell lines is also negative and more significant than that in breast cancer cell lines. This finding suggests that the TET2/PD-L1 negative regulatory axis may exist not only in breast cancer but also in other types of cancers. Given that anti-PD-1/L1 therapy has been successfully applied in the first-line therapy of lung cancer, it is of great interest to validate this observation in clinical patient samples and explore the prognostic value of TET2 expression in predicting the responsiveness of lung cancer patients to anti-PD-1/L1 therapy. If true, it may provide valuable clues and new strategies to improve the immunotherapy treatment of lung cancer.
Multiple signaling pathways are aberrantly activated in the complicated tumor microenvironment of breast cancers. Among them, several signaling pathways (such as hypoxia, AMPK, IFN, and oxidative stress) have been reported to regulate the TET2 expression level. The findings of our study suggest that the tumor microenvironment may modulate the expression of the PD-L1 gene in breast cancer through targeting the TET2/HDAC complex. Given that anti-PD-1/L1 immunotherapy has been widely used in clinics, we speculate that combining HDAC inhibitors or TET2 targeting with anti-PD-L1 immunotherapy may be a new strategy for breast cancer patients who have low responsiveness to anti-PD-1/L1 immunotherapy. Currently, four HDAC inhibitors, Vorinostat, Romidepsin, Belinostat, and Panobinostat, have been approved by the FDA for cancer treatment [46]. Several studies have reported that HDAC inhibition leads to increased PD-L1 expression in melanoma [47], ARID1A-inactivated ovarian cancer [48], and anaplastic thyroid cancer [49]. These findings, together with ours, reinforce the rationale for applying HDAC inhibitors or targeting TET2 to augment the immunotherapy of breast cancer.
Conclusions
In conclusion, our study provides clear evidence that PD-L1 gene transcription is negatively regulated by the TET2-HDAC1/2 complex in breast cancer cells. Although more work remains to be done with regard to the regulatory mechanism and functional role of the TET2/PD-L1 axis in anti-tumor immunity, our findings suggest that targeting TET2 or HDAC1/2 might be a potential combination strategy for the anti-PD-1/PD-L1 immunotherapy treatment of breast cancer.
An Electronic Registration for Undergraduate Students with Department Selection Based on Artificial Neural Network
The objective of the present research is to facilitate the administrative procedures associated with the student registration process and to ensure equal opportunities for all college applicants. It aims to assist students in identifying the appropriate alternatives available among departments electronically, anytime and anywhere, saving the student time and effort. For this purpose, an intelligent e-government system named "An Electronic Intelligent Registration with Department Selection" (E-IRDS) is designed as one of the intelligent e-services in Iraqi e-governance, using several tools and programming languages (PHP, MySQL, HTML, CSS, XML, Notepad++, C#). Artificial neural network (ANN) technology is applied, notably Kohonen's self-organizing map (SOM), one of the important unsupervised classification algorithms of machine learning, to classify and distribute students automatically into the college's academic departments based on their preferences, their total degrees, and the scientific plan of each department, in addition to specific personal student information. The applied results, based on international standards, demonstrated the accuracy of Kohonen's SOM algorithm in classification and distribution in minimal time and with a feasible learning rate. The system test and assessment results confirmed that the system is characterized by very high security, reliability, and accuracy. It is also distinguished by very high efficiency and transparency, as well as flexibility and high performance speed. The results also emphasized the ease of use and availability of the system to all students, besides the possibility to troubleshoot and correct errors easily.
Introduction
In recent decades, the use of Information and Communication Technologies (ICT) has attracted increasing attention in the public sector in the form of e-governance services, especially in recent years as developing countries have improved access to government-provided services in order to reduce costs and increase organizational effectiveness [1].
Currently, the higher education sector in Iraq is playing an important role in societal improvement and development, and the implementation of e-services has become a critical goal for universities worldwide [1]. E-governance refers to the public sector's use of ICT by government agencies to transmit information and deliver services to citizens anytime and anywhere [2,3]. E-government provides opportunities for citizens to participate in the democratic decision-making process, which increases transparency and efficiency and makes government more accountable [2,4]. For more than 15 years, the European Union has built e-government infrastructure and supported research into how to deliver e-government services efficiently and effectively [4].
The Iraqi government began to use e-government technology in 2004, when the Iraqi Ministry of Science and Technology awarded the contract for building up an Iraqi e-government project to an Italian company [5]. Iraqi e-government systems have been defined as the use of websites, e-mail, or any interaction method via the internet for the delivery of government services to citizens, the business sector, and civil society organizations [5]. Thus, most Iraqi universities (public or private) have strived to exploit modern technologies (such as Web 2.0 and mobile applications) to improve student-university interaction [1]. E-government systems can also benefit from modern machine learning technologies to predict new knowledge from huge amounts of distributed data, relying on hidden relationships and patterns of events that cannot be observed directly. Learning methods are classified into supervised, unsupervised, and reinforcement learning.
Unsupervised learning methods find the correlations and relations among input features to discover the subtle structure of unlabeled data; the SOM model is one such unsupervised learning algorithm [7]. Among the various existing neural network architectures and learning algorithms, Kohonen's SOM is one of the most common neural network models; it is a spontaneous classification method. SOM is an unsupervised learning algorithm with a simple structure and computational form, which projects high-dimensional data onto a low-dimensional grid called a topological map [8,9].
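To make the algorithm concrete, the sketch below implements a minimal Kohonen SOM in Python: a grid of weight vectors is trained by repeatedly finding the best-matching unit (BMU) for each input and pulling its neighborhood toward that input, with decaying learning rate and neighborhood radius. The grid size, decay schedules, and the three student features are illustrative assumptions, not the actual configuration of the E-IRDS system.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=200, lr0=0.5, sigma0=2.0, seed=1):
    """Minimal Kohonen SOM: projects data onto a 2-D grid of weight vectors."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.random((rows, cols, data.shape[1]))          # weight map
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)                   # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)             # shrinking neighborhood
        for x in rng.permutation(data):
            d = np.linalg.norm(w - x, axis=2)            # distance to each node
            bmu = np.unravel_index(d.argmin(), d.shape)  # best-matching unit
            h = np.exp(-np.sum((coords - bmu) ** 2, axis=2) / (2 * sigma**2))
            w += lr * h[..., None] * (x - w)             # pull neighborhood toward x
    return w

# Hypothetical student features: normalized total degree and desire ranks
students = np.random.default_rng(0).random((50, 3))
som = train_som(students)

# Assign each student to the grid node (cluster) of its best-matching unit
labels = [np.unravel_index(np.linalg.norm(som - s, axis=2).argmin(), (4, 4))
          for s in students]
print(labels[:5])
```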
In this paper, due to the difficulty of the administrative procedures associated with the undergraduate student registration process, which requires manual recording and often leads to a loss of time and effort in the colleges of Kirkuk University, Kohonen's SOM is applied as one of the important ANN algorithms for supplying an e-government service.
The purpose of this study is to assist students in identifying the appropriate alternatives available among academic departments electronically, anytime and anywhere, saving student time and effort. It aims to supply automatic classification by applying Kohonen's algorithm to distribute students to their desired departments, relying on the scientific plan of each department and the students' total degrees, as well as recording the students' choices.
System Analysis
The analytical study of the registry mechanism in the colleges of Kirkuk University has revealed several disadvantages and obstacles that restrict the registry process, including:
1- Occupying a large number of staff for a long time with routine student registration and distribution to the college's academic departments.
2- Students and their families waiting in long queues for the completion of the registration process.
3- Unreliable registration and distribution processes.
4- Frequent questions from students, as well as the overwork of the registration staff.
The system assessment indicated that some users may still prefer the traditional (face-to-face) service over the e-service, as not everybody prefers to utilize e-services and most are already familiar with the traditional service of dealing with people rather than ICT. It has also been emphasized that the system is highly efficient (92.4%) by decreasing time and effort. It is also flexible enough (91.1%) to control the DB design and modify its fields, in addition to the ability to update the system structure with its interfaces and webpages, such as adding or deleting further academic department options according to the college.
The results have also proved the availability (87.6%) of the system, achieved by activating its website during the registry period and by the possibility of using its pages anytime and anywhere. Eventually, the results proved that the system is moderately maintainable (76.7%), owing to the capability of troubleshooting by professionals. It also improves data access across the system website and encourages students to participate in the decision-making process of e-registration, which leads to increased transparency (80.1%).
Conclusions
In this paper, it is concluded that the E-IRDS is an integrated electronic system. It can be connected and merged with other electronic systems in Iraqi e-governance to reduce the boring routine of a series of reviews in scientific and civilian institutions. The system is flexible enough to be used in other universities and institutes. The registration process using the designed system as an e-government application proceeds smoothly, making it easier for the competent committees to register students and to sort and distribute them to the college departments. The system includes a good mechanism for user accounts to control the adding and deleting of users, which increases the system's control capability and efficiency. The accuracy of Kohonen's SOM algorithm in classification and distribution into clusters, with a minimal learning rate and without errors, was confirmed. The system was found to be highly secure and reliable, and is distinguished by high transparency and high performance speed, in addition to the satisfaction of both students and employees with the system.
Distributed Control Strategy for DC Microgrids of Photovoltaic Energy Storage Systems in Off-Grid Operation
DC microgrid systems that integrate energy distribution, energy storage, and load units can be viewed as examples of reliable and efficient power systems. However, the isolated operation of DC microgrids, in the case of a power-grid failure, is a key factor limiting their development. In this paper, we analyze the six typical operation modes of an off-grid DC microgrid based on a photovoltaic energy storage system (PV-ESS), as well as the operational characteristics of the different units that comprise the microgrid, from the perspective of power balance. We also analyze the key distributed control techniques for mode transformation, based on the demands of the different modes of operation. Possible reasons for the failure of PV systems under the control of a voltage stabilizer are also explored, according to the characteristics of the PV output. Based on this information, we propose a novel control scheme for the seamless transition of the PV generation units between the maximum PV power tracking and steady voltage control processes, to avoid power and voltage oscillations. Adaptive drooping and stabilization control of the state of charge of the energy storage units are also considered, for the protection of the ESS and for reducing the possibilities of overcharging and/or over-discharging. Finally, various operation conditions are simulated using MATLAB/Simulink, to validate the performance of the proposed control strategy.
Introduction
Microgrid structures consisting of multiple intelligently coordinated heterogeneous networks have greatly improved the operation of power grids [1]. These structures have been widely studied as basic units that can be integrated into a larger overall network [2][3][4][5]. DC microgrids, as an alternative option, have been attracting increasing interest in recent years, owing to their advantages of high system power quality and easy control, with neither reactive power nor AC harmonic concerns.
However, the power quality of microgrids is influenced by the fluctuation and intermittence of renewable distributed micro-power. A potential solution is utilizing energy storage systems (ESSs) in order to alleviate the problem in microgrids. Hence, research on control and management technologies relevant to DC microgrids has become increasingly prevalent.
Most of the DC microgrid research conducted so far has focused on the control, operation, and power sharing of DGs in a DC microgrid during and subsequent to islanding. For example, a control strategy was developed for autonomous DC microgrids, applicable to low-voltage applications such as remote telecommunication power systems, together with experimental tests on changing the mode conditions. The remainder of this paper is organized as follows. In Section 2, the structure, operating modes, and modal transitions of a PV energy storage DC microgrid are discussed. The key control problems of the PV-ESS in DC microgrids are studied in Section 3, based on the operational mode analysis. This includes an analysis of the output characteristics of a PV system, the "failure mechanism" of PV voltage stabilization, and the feasibility of multi-unit energy storage and voltage stabilization in a controller for the seamless transfer between MPPT and voltage regulation. We also consider a droop stabilizing controller for stabilizing the state of charge (SOC) of the energy storage units. Section 4 details the simulations performed to verify the effectiveness of the proposed method under various working conditions. The conclusions of this paper are summarized in Section 5.
Composition of a DC Microgrid
DC microgrids typically consist of DGs, ESSs, and loads, connected by DC buses. Two operating modes, grid-connected mode and off-grid mode, can be defined for these systems, according to whether they are connected to large power grids or not. The structure of a typical PV-ESS DC microgrid in off-grid operation is shown in Figure 2. From Figure 2, it can be seen that the power generators, which are the PV modules in this case, are connected to the DC bus through a one-way DC/DC converter (the image details a buck-boost conversion scheme) and DC circuit breaker. As PV power generation is intermittent and volatile, the fluctuation of the PV output should be stabilized, to reduce the impact of this uncertainty on the grid. Thus, it is necessary to configure energy storage devices in a PV-ESS DC microgrid, to maintain a steady voltage and ensure that the power between the source and load is balanced.
The intermittency of PV power, the residual energy states of the ESSs, the load units, and the disturbances from faults affect the power balance and energy flow in a PV-ESS DC microgrid, and thus determine the control mode of the power electronic converter in a DC microgrid. Hence, it is particularly important to research the operation modes, modal boundaries, and modal transformations of DC microgrids in the off-grid operation.
Operation Modes of a DC Microgrid
The operating conditions of PV-ESS DC microgrids can be categorized into six modes, according to the structure illustrated in Figure 2. The operation modes and the conditions for transition between these are summarized in Figure 3.
The maximum PV power, denoted as $P_{pv}^{max}$ in Figure 3, depends on the power required for storing energy at a particular SOC, the PV output, and the load. The PV-ESS DC microgrid operates in one of six modes, according to the control strategy, ESS state, and load conditions. That is: the control strategy for the PV generation unit can be MPPT or voltage regulation; the ESSs can be operated in the voltage regulation or charge/discharge states, or in the shutdown state; and the load can be increased, decreased, or in special cases, removed. The six different modes are described below:
Mode I: In this condition, $P_{pv}^{max}$ is less than the load power, and the SOC of the ESS is lower than the preset minimum. The storage system is in the SOC protection state, and the bidirectional DC/DC converter turns off the ESS, to prevent over-discharge and maintain the life of the ESS. To maintain the power balance of the DC microgrid and ensure voltage stability, a partial load is removed.
Mode II: The microgrid system transitions to this condition from Mode I. Here, $P_{pv}^{max}$ is greater than the load power and the PV unit is controlled in the MPPT mode. In the ESS, the DC bus voltage is stabilized by the bidirectional DC/DC converter. The PV module supplies power to the load, and the residual power is used to charge the ESS, the SOC of which is still below the preset minimum.
Mode III: During the transition from Mode II to Mode III, the ESS is charged continuously, and the SOC rises. The working conditions are as in the case of Mode II: the PV output power is larger than the load, the PV unit is controlled in the MPPT mode, and the ESS maintains the DC bus voltage through the two-way DC/DC converter. However, the SOC of the ESS will be in the normal range, between $SOC_{max}$ and $SOC_{min}$, unlike in Mode II.
Mode IV: The operational conditions of Mode IV are similar to those of Mode III. However, in this case, the load is increased (or the irradiance on the PV module is decreased). Hence, $P_{pv}^{max}$ is less than the load power. Although the generation unit is still controlled using an MPPT strategy, the energy storage converter controls the DC voltage and the ESS will be in the discharge state. The SOC of the ESS declines, as a result.
Mode V: In this state, as in Mode III, $P_{pv}^{max}$ is larger than the load power. The energy storage battery is charged beyond the upper limit, $SOC_{max}$. In this situation, to protect the battery life and avoid overcharging the batteries, the energy storage converter is stopped from operating. The PV module supplies power to the load independently, and the PV control strategy is changed from MPPT to voltage regulation.
Mode VI: If the load is increased or the irradiance is decreased when the microgrid system is in Mode V, the magnitude of $P_{pv}^{max}$ will be insufficient to meet the load demand. The control strategy for the PV module thus switches from voltage regulation to MPPT, to ensure that maximum power is generated. The remaining power required by the load is supplied by the energy storage unit, to make up for the shortfall; and the energy storage converter switches from the standby state to an operational state, to maintain the stability of the DC bus voltage.
From the above analyses, it can be concluded that the modal boundary definitions are based on the maximum possible output power of the PV module, the charge state of the energy storage unit, and the power required by the load. The operational modes are defined in this manner for two primary reasons: to stabilize the DC bus voltage so that the grid operates reliably while supplying the load power, and to extend the life of the energy storage units while maximizing the potential generation of PV energy. The key control techniques for maintaining internal mode evolution or transitioning between modes are the maximum PV power tracking algorithm, the stabilization of the DC bus voltage by the PV unit, and the control of the DC bus voltage by the ESS, in consideration of the protection of the energy storage units.
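The modal boundaries summarized above can be expressed as a small decision routine; the Python sketch below is one possible reading of that logic, with the SOC thresholds and mode labels chosen for illustration rather than taken from the paper's implementation.

```python
# Hypothetical SOC protection thresholds (normalized 0..1)
SOC_MIN, SOC_MAX = 0.1, 0.9

def select_mode(p_pv_max, p_load, soc):
    """Pick the operating mode from PV capability, load power, and ESS SOC."""
    if soc <= SOC_MIN:
        # ESS shut down to avoid over-discharge
        return ("Mode I: shed load" if p_pv_max < p_load
                else "Mode II: MPPT, ESS charges")
    if soc >= SOC_MAX:
        # ESS shut down to avoid overcharge
        return ("Mode V: PV voltage regulation" if p_pv_max > p_load
                else "Mode VI: MPPT, ESS discharges")
    # Normal SOC range: the ESS converter regulates the bus voltage
    return ("Mode III: MPPT, ESS charges" if p_pv_max > p_load
            else "Mode IV: MPPT, ESS discharges")

print(select_mode(p_pv_max=4.0, p_load=3.0, soc=0.95))  # -> Mode V
```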
Characteristics of PV Modules and Energy Storage Units Used for Controller Design
The analysis presented in Section 2 demonstrates that switching between the different modes of operation corresponds to modifying the electronic power converter modes in the PV modules and ESS units. Hence, studying control strategies for switching between the different modes of operation of the PV converter and the energy storage unit converter is very important.
Analysis of Output Characteristics and Failure Mechanism of PV Modules
There have been numerous studies on maximum power tracking algorithms for PV systems, the findings of which have been widely used in the design of microgrid systems. In this study, we consider a traditional P&O algorithm. Based on the PV nonlinear output characteristics, we discuss the feasibility of stabilizing the DC bus voltage of the PV modules in a PV-ESS grid in Mode V of off-grid operation, detailed in Figure 3, using the analysis process illustrated in Figure 4. At the maximum power point C, the power is given by $P_{max}$, while the output voltage is $U_m$. Also depicted are four different working points, A, B, D, and E, where the output voltages are $U_A$, $U_B$, $U_D$, and $U_E$, respectively. For analyzing the failure of PV voltage regulation, the characteristic output curve in Figure 4 is segmented by three load lines. To illustrate the causes for the failure of the constant voltage control mechanism in the PV power generation unit, we discuss two load conversion scenarios.
(1) Scenario 1: the load power is increased from $P_1$ to $P_2$. There are two points on the U-P curve illustrated in Figure 4 that correspond to an output power of $P_1$: A and E. Regulation at these points is discussed below.
(1.1) In this sub-scenario, we assume that the power generation system is at point A. The DC bus voltage falls temporarily as the power demand increases from $P_1$ to $P_2$. Owing to the steady voltage control algorithm, an increase in the load power results in an increase in the output PV current. The positive error signal, obtained by subtracting the output voltage of the DC bus from the required voltage, is sent to the double loop controller, and the duty cycle of the converter is subsequently increased. A steady-state condition is reached at point B, and DC bus voltage stability is achieved.
(1.2) In this sub-scenario, the power generation system is at point E. As before, the DC bus voltage falls temporarily as the load is increased, and the processing of the positive error signal by the double loop controller causes the duty ratio to increase, to keep the DC bus voltage constant. Increasing the duty cycle and output PV current shifts the operating point of the generation system to the left. However, as point E is located to the left of the maximum power point, reducing the voltage of the PV system will reduce the output PV power, leading to an increase in the positive error. This, in turn, aggravates the left shift of the working point, forming a direct connection in the switching transistor. The working point eventually slides to point 0, and the PV system will fall into a short-circuit state, causing the voltage stabilization technique to fail.
(2) Scenario 2: the load power is increased from $P_1$ to $P_3$. In this scenario, the generation system is at point A. As before, increasing the power demand (from $P_1$ to $P_3$) leads to a transient drop in the DC bus voltage. The positive error signal resulting from this change in voltage is sent to the double loop controller, which causes the duty cycle to increase. Because of regulation, the power generation system will eventually reach point C, where the maximum power is generated. However, as $P_3$ is greater than the maximum power available at point C, the DC bus voltage will still be lower than the required value and the double loop controller will continue working. Therefore, the working point will pass point C, and the increase in the duty cycle will reduce the output power. Further regulation will move the working point to the left, leading to the direct connection of the switching transistor, and the PV module falls into a short-circuit state, causing a regulation failure. Similarly, if the initial working point of the generation system is E, the voltage regulation system will cause the final working point to move to point 0, forming a PV short circuit, leading to the failure of the voltage stabilization control.
From the above analyses, we observe that there are two feasible points of operation, on the left and right sides of the maximum power point on the U-P curve, when $P_{load} < P_{max}$. When the load is increased, the working point of the generation system inevitably slides to the left. If the initial working point is on the left side of the maximum power point, this slide to the left will eventually result in a regulation failure. Therefore, under voltage stabilization control, the initial working point of the voltage regulator should be located on the right side of the maximum power point of the U-P curve, so that the PV module can support the load to complete the stabilization process. If the load power is increased to a value that is greater than the maximum power output of the PV module, a part of the load must be shed for the stabilization process to complete. Hence, if the PV power is guaranteed to be larger than the load power, and the PV working range is limited to the right side of the maximum power point, the strategy of independent PV voltage stabilization will be feasible.
Based on the monotonic variation of the output PV current with respect to voltage, as depicted in the U-I curve in Figure 5, and the idea of dual loop control, we propose an anti-dead zone control method to limit the amplitude of the nominal value of the inner current loop. This method restricts the voltage and output power of the PV to the right side of the maximum power point, to avoid regulation failure, as illustrated in Figure 5.
Since the output characteristics of the PV unit depend primarily on the irradiance intensity and temperature, we use the maximum power point current $i_m$, at the test conditions provided by the manufacturers of the PV panel, to define the amplitude limit for the inner current loop, to ensure that stabilization can be achieved at different irradiance and temperature conditions. The method limits the reference value of the inner current loop to prevent the PV operating point from entering the left side of the MPP curve during regulation. At the same time, the inner loop reference value is changed adaptively in consideration of the changes in irradiance and temperature. In accordance with the PV array characteristics (Figure 5b), the MPP locus, i.e., the MPPs at different radiation levels, can be approximated by a cubic function. It is worth noting that, in order to keep the working point on the right side of the maximum power point as much as possible, the limiting amplitude should be set to a value slightly less than $i_m$.
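A minimal sketch of this limiter is given below, assuming a cubic fit of the MPP current versus irradiance (the coefficients are hypothetical) and a safety margin slightly below $i_m$; it clamps the outer-loop output so the operating point stays to the right of the MPP.

```python
import numpy as np

# Hypothetical cubic fit of the MPP locus: MPP current (A) vs irradiance (W/m^2)
MPP_COEFFS = np.array([1.2e-9, -2.5e-6, 9.0e-3, 0.05])

def i_mpp(irradiance_w_m2):
    """Approximate the MPP current from irradiance via the fitted cubic locus."""
    return np.polyval(MPP_COEFFS, irradiance_w_m2)

def limited_current_ref(i_ref_outer, irradiance_w_m2, margin=0.97):
    """Clamp the outer-loop output so the PV operating point stays right of the MPP."""
    i_limit = margin * i_mpp(irradiance_w_m2)   # slightly below i_m
    return min(i_ref_outer, i_limit)

# At 1000 W/m^2 the limiter caps an 8 A demand to ~7.5 A in this sketch
print(limited_current_ref(i_ref_outer=8.0, irradiance_w_m2=1000.0))
```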
Seamless Transfer Controller
Based on analysis of the feasibility of independent DC bus voltage stabilization and the operation mode of PV-ESS DC microgrids, it is found that there are two different sets of requirements for PV output control during the transitions from Mode III to V and Mode V to VI. A block diagram of the control system is shown in Figure 6.
Figure 6. Controller for seamless transfer between voltage stabilization and MPPT operation of a PV generation unit.
A comparison of the voltage regulator and the MPPT controller depicted in Figure 6 reveals that they both use a dual loop control structure. The difference lies in the output of the outer ring-although the nominal value of the inner current ring is different, a common inner current loop can simplify the design of the controller and is convenient for limiting the dead zone. However, direct transfer between two modes of operation can cause a jump in the required output of the inner current loop, resulting in violent fluctuations of the DC bus voltage or PV output that adversely affect the load and DC microgrid. Thus, during the mode transfer process, the fluctuations should be reduced. This can be realized by providing additional compensation using a compensator with transfer function F(s) ( Figure 6). As this compensation aims at reducing the effect of voltage and power fluctuations, we propose a seamless transfer controller, the operational principle of which is as follows.
In Mode V, the PV generation system is in a voltage stabilization working condition, and the switch is placed in position 1. At this point, the MPPT control loop is idle and the nominal value of the inner current loop is given by the regulator, i.e., ( If there is no seamless switching controller, the output values of the two control loops are unlikely to be equal, i.e., By transposing the above equation, we obtain: A comparison of the voltage regulator and the MPPT controller depicted in Figure 6 reveals that they both use a dual loop control structure. The difference lies in the output of the outer ring-although the nominal value of the inner current ring is different, a common inner current loop can simplify the design of the controller and is convenient for limiting the dead zone. However, direct transfer between two modes of operation can cause a jump in the required output of the inner current loop, resulting in violent fluctuations of the DC bus voltage or PV output that adversely affect the load and DC microgrid. Thus, during the mode transfer process, the fluctuations should be reduced. This can be realized by providing additional compensation using a compensator with transfer function F(s) ( Figure 6). As this compensation aims at reducing the effect of voltage and power fluctuations, we propose a seamless transfer controller, the operational principle of which is as follows.
In Mode V, the PV generation system is in a voltage stabilization working condition, and the switch is placed in position 1. At this point, the MPPT control loop is idle and the nominal value of the inner current loop is given by the regulator, i.e., If there is no seamless switching controller, the output values of the two control loops are unlikely to be equal, i.e., i re f ,1 = i re f ,2 . This inequality causes a jump in the nominal value i ref when the position of the switch is modified from 1 to 2, which leads to a jump in the output duty cycle d and causes transient output voltage instability.
During PV voltage stabilization control, the nominal value of i ref,2 in the inner loop of the idle control loop is, i re f ,2 (s) = K i1 s · e 2 (s) − i re f ,2 (s) − i re f (s) · F(s) + K p · e 2 (s).
By transposing the above equation, we obtain:

$$\Big(1 + \frac{K_{i1}}{s}F(s)\Big)\, i_{ref,2}(s) = \Big(\frac{K_{i1}}{s} + K_p\Big) e_2(s) + \frac{K_{i1}}{s}F(s)\, i_{ref}(s). \tag{3}$$

Dividing both sides of (3) by $1 + \frac{K_{i1}}{s}F(s)$ gives

$$i_{ref,2}(s) = \frac{\big(\frac{K_{i1}}{s} + K_p\big)\, e_2(s) + \frac{K_{i1}}{s}F(s)\, i_{ref}(s)}{1 + \frac{K_{i1}}{s}F(s)}. \tag{4}$$

According to classical control theory (the final value theorem), the steady state $t \to \infty$ corresponds to $s \to 0$. Substituting $s \to 0$ into (4) results in the following:

$$i_{ref,2}(0) = \frac{e_2(0)}{F(0)} + i_{ref}(0). \tag{5}$$

Assuming $F(s) = \frac{1}{K_a}$ [25], when $K_a$ is small enough, $K_a \cdot e_2(0) \approx 0$; then $i_{ref,1}(0) = i_{ref,2}(0)$, and a seamless transition can be realized between PV voltage regulation and MPPT control. It is worth noting that, in practical applications, $K_a$ only needs to be set small enough to meet the required control precision. To remove the jump, the controller output $u$ should be as close as possible to $u_m$, the manually designated controller output.
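To make the tracking mechanism concrete, the following is a minimal discrete-time sketch of this bumpless-transfer idea, assuming a simple PI form for both outer loops; the class name, gains, and test errors are illustrative placeholders, not values from the paper.

```python
# Minimal sketch of bumpless transfer between two outer-loop PI
# regulators sharing one inner current loop. While a loop is idle,
# its integrator is steered toward the active reference through the
# compensator F(s) = 1/Ka, so its output equals i_ref at the moment
# of switching (cf. Eq. (5)).

class TrackingPI:
    def __init__(self, kp, ki, ka, dt):
        self.kp, self.ki, self.ka, self.dt = kp, ki, ka, dt
        self.integ = 0.0
        self.out = 0.0

    def step(self, error, i_ref_active):
        # Tracking correction vanishes for the active loop, since
        # its own output is the active reference.
        track = (self.out - i_ref_active) / self.ka
        self.integ += self.ki * (error - track) * self.dt
        self.out = self.kp * error + self.integ
        return self.out

if __name__ == "__main__":
    dt = 1e-4
    volt_loop = TrackingPI(kp=0.5, ki=40.0, ka=1e-2, dt=dt)
    mppt_loop = TrackingPI(kp=0.5, ki=40.0, ka=1e-2, dt=dt)
    for _ in range(5000):                     # Mode V: voltage loop active
        i_ref = volt_loop.step(error=2.0, i_ref_active=volt_loop.out)
        mppt_loop.step(error=-1.5, i_ref_active=i_ref)
    # At the switching instant the idle loop's output already matches
    # i_ref, so handing it the inner loop causes no reference jump.
    print(f"i_ref = {i_ref:.3f}, idle MPPT output = {mppt_loop.out:.3f}")
```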
ESS Control and Output
ESSs are important parts of a PV-ESS microgrid system, as they make up the power shortfall when the DGs are unable to meet the load demand. In off-grid operation, the bus voltage loses the support of a larger grid, and the energy storage converter usually needs to use droop control to maintain the voltage and power balance of the DC microgrid. The reference voltage of the storage converter, $V_{ESS\_ref}$, is expressed as

$$V_{ESS\_ref} = V_{ref} - k \cdot i_o,$$

where $V_{ref}$ is the nominal voltage of the microgrid, $i_o$ is the output current, and $k$ is the droop coefficient. A block diagram of a traditional droop controller is shown in Figure 7. The difference between $V_{ESS\_ref}$ and the output voltage is sent to the closed voltage-current double control loop to generate the duty cycle $d_{ESS}$ for the control of the bidirectional DC/DC converter.
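As a small illustration of the droop law above, the snippet below computes the reference voltage handed to the voltage-current double loop; the numerical values are placeholders, not parameters from the paper.

```python
# Droop law V_ESS_ref = V_ref - k * i_o for a storage converter.
# Nominal voltage and droop coefficient are illustrative only.

V_REF = 1000.0  # nominal DC bus voltage (V)

def droop_reference(i_out: float, k: float, v_ref: float = V_REF) -> float:
    """Reference voltage for the voltage-current double control loop."""
    return v_ref - k * i_out

# A more heavily loaded converter is commanded a lower voltage, so
# parallel units naturally share current in inverse proportion to k.
for i_out in (0.0, 10.0, 20.0):
    print(f"i_o = {i_out:5.1f} A -> V_ESS_ref = {droop_reference(i_out, k=0.5):.1f} V")
```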
The capacity of the ESS is closely related to the SOC. Using the charging characteristics of lithium-ion batteries as an example (Figure 8a), we note that, to maximize the life of the ESS, overcharging and over-discharging should be avoided. Therefore, upper and lower SOC limits are defined for the ESS, termed $SOC_{max}$ and $SOC_{min}$, respectively, in this paper. Based on Figure 8a, charging should be stopped when $SOC_{max}$ is reached. Similarly, discharging should be stopped at $SOC_{min}$. The effect of operating beyond these limits is depicted in Figure 8b, where region A is the over-discharging range (ODR), region B is the overcharging range (OCR), and the intermediate region is the normal operation range (NOR). In regions A and B, there are nonlinear variations in the voltage of the energy storage battery with decrease or increase in the SOC. In contrast, in the NOR region, the relationship between the SOC and the change in the voltage of the energy storage battery is linear [26,27].
In this paper, we adopt an energy storage control scheme that considers the SOC. $V_{max}$, $V_{ref}$, and $V_{min}$, highlighted in Figure 8c, are the maximum, nominal, and minimum values, respectively, of the voltage of the microgrid in the permitted operation state. When the SOC is relatively high, the reference value for the output voltage is raised correspondingly; when the SOC is relatively low, it is reduced.
During normal grid voltage and SOC operation, the value of the output voltage is increased when the SOC is high. If the ESS is in a charging state, under the control of the proposed scheme, the charging voltage will be smaller than the nominal voltage; conversely, if the ESS is in a discharging state, the discharge voltage will be larger than the nominal voltage. Thus, by considering $SOC_{ref}$, the ESS will be in a "more discharge and less charge" mode. In contrast, when the SOC is low, the droop controller will decrease its output voltage. If the ESS is in a charging state, the charging voltage will be larger than the nominal voltage, while in the discharging state, the discharge voltage is smaller than the nominal voltage. Hence, by considering $SOC_{ref}$, the ESS will be in a "more charge and less discharge" operation mode.
Unlike the traditional droop control method, the SOC-based droop control strategy maintains the different energy storage units in a "both chargeable and dischargeable" state, as much as possible, by ensuring that the SOC approaches $SOC_{ref}$. Hence, this strategy avoids the microgrid power imbalance phenomenon caused by the overcharging or over-discharging of the ESS. The $SOC_{ref}$ and the droop control coefficient $m_{ESS}$ of the ESS are defined in (9) and (10). A block diagram of the ESS control strategy is shown in Figure 9, where $V_o$ is the output voltage of the bidirectional DC/DC converter and $i_L$ is the current of the inductor in the converter.
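The exact expressions for $SOC_{ref}$ and $m_{ESS}$ (Eqs. (9) and (10)) did not survive extraction, so the sketch below encodes only the qualitative behavior described above, with a simple linear voltage shift of our own as a stand-in.

```python
# Qualitative sketch of SOC-based droop: shift the voltage reference
# up when SOC is above SOC_ref ("more discharge, less charge") and
# down when below ("more charge, less discharge"). The linear shift
# is an illustrative stand-in for the paper's Eqs. (9)-(10).

V_REF, K_DROOP = 1000.0, 0.5
SOC_REF, K_SOC = 0.5, 40.0   # placeholder gain (volts per unit SOC error)

def soc_droop_reference(i_out: float, soc: float) -> float:
    shift = K_SOC * (soc - SOC_REF)   # > 0 at high SOC, < 0 at low SOC
    return V_REF + shift - K_DROOP * i_out

for soc in (0.8, 0.5, 0.35):
    print(f"SOC = {soc:.2f} -> V_ESS_ref = {soc_droop_reference(10.0, soc):.1f} V")
```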
Simulation Validation
In this section, we consider simulations of the proposed control strategies, using models built in the MATLAB/Simulink environment (R2016b, MathWorks, Natick, MA, USA). We divide the simulation scenarios into two parts. Section 4.1 discusses the effects of transferring the PV unit controller between MPPT and voltage stabilization using the seamless transfer control strategy while avoiding the dead zone. Section 4.2 presents the construction of a PV-ESS DC microgrid model, which includes the transfer strategy discussed in Section 4.1, to verify the effect of the proposed control technology on the transitions between the different modes of PV-ESS DC microgrid operation. Detailed simulation parameters are listed in Table 1, while details of the simulated working conditions are listed in Tables 2-4.

Table 1. Parameters of the PV modules and ESS used in simulation.
Simulation of PV Controller
To verify the seamless transfer and anti-dead-zone control strategy proposed in this paper, we built a model of a PV power generation unit with an independent load and without an ESS for simulation. The simulation conditions were set as in Table 2. The reference for the voltage stabilizer was 1000 V, and the irradiance intensity was maintained at 1000 W/m². A P&O algorithm was used for MPPT control, and the simulation time was 1.2 s.
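For reference, a minimal perturb-and-observe (P&O) step is sketched below; the step size and interface are illustrative choices, not the paper's implementation.

```python
# One iteration of the perturb-and-observe (P&O) hill-climbing MPPT
# scheme: keep perturbing the voltage reference in the direction that
# increased PV power; reverse otherwise. Step size is a placeholder.

def perturb_and_observe(v_prev, p_prev, v_now, p_now, step=1.0):
    """Return the next PV voltage reference (called once per control period)."""
    direction = 1.0 if v_now >= v_prev else -1.0
    if p_now < p_prev:
        direction = -direction   # last move reduced power: turn around
    return v_now + direction * step
```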
The results of these simulations are summarized in Figure 10. Figure 10a shows the output power, current, and voltage of the PV panels, while Figure 10b shows the load voltage, current, and power. To verify the proposed control strategy, we compare its performance with that of a conventional controller over four stages:

1. From 0-0.3 s: The PV grid is in Mode III and the controller is maintained in an MPPT state. The performance of the proposed transfer strategy is thus identical to that of the conventional method. At this point, the output power is 20 kW.

2. From 0.3-0.5 s: The PV grid is in Mode V. The performance of both strategies is again identical, as under the MPPT control in the previous stage. The bus voltage is higher than the reference value (1000 V). Meanwhile, the forward conduction diode in series with the output side of the boost circuit prevents energy reflux. Hence, the switching transistor of the boost circuit is in the off state, and the DC side energy is absorbed by the load until the voltage is restored to 1000 V. The voltage regulator plays a role in maintaining the voltage and power balance of the system.

3. From 0.5-1.0 s: The simulation considers the transition of the PV modules from voltage regulation to MPPT operation. As no seamless transfer control is included in the conventional method, the bus voltage and power fluctuate, which affects the load and the stability of the DC microgrid. In contrast, with the proposed control strategy, there is a seamless transition of the output power from the rated value in the voltage stabilization state to 20 kW, the value for the MPPT state, with no obvious buffeting during the transition. The increased simulation time is to ensure that the transition from Mode VI to Mode IV, and then to Mode III, can be modelled. In order to illustrate the effect of the seamless transfer algorithm, the simulation adopts an independent operation mode, without energy storage control. With a constant load, the maximum power point is greater than the regulated power; therefore, under MPPT control, the bus voltage increases.

4. From 1.0-1.2 s: The PV control is switched from MPPT to voltage stabilization, similar to the process that occurs between 0.3 and 0.5 s. However, in this stage, the load is decreased to 35 Ω, and the load power at the reference voltage exceeds the maximum allowable PV power under voltage stabilization control. The PV units are thus unable to maintain the load capacity at 1000 V. Using a voltage stabilization algorithm without anti-dead-zone control in this condition would inevitably lead to PV failure, as mentioned in Section 2: the working point would slip to the left of the PV U-I curve, the switching transistor of the boost circuit would be in an open state, the PV unit would be in a short-circuit state with zero output power and zero load current, and the system would become unstable. If the proposed anti-dead-zone control algorithm is adopted, the system will not slide to the left of the U-P curve, as the initial working point of the PV system is on the right side of the maximum power point. The output power will be clamped, and the output voltage can be reduced without being equal to the reference voltage. The stability of the microgrid can thus be maintained to a certain extent. This state is consistent with Mode V.
Simulation of PV-ESS Microgrid
In this section, we verify the operation of the proposed PV control and energy storage control algorithms using simulations. To compare the different ESS control strategies, we built a PV DC microgrid model with two energy storage batteries. The control strategy proposed in this paper was adopted in the first battery (ESS 1), while the conventional control strategy was adopted in ESS 2. The operation of the DC microgrid under different simulation conditions is summarized below.
Simulation condition 1: In this condition, a P&O algorithm was adopted for MPPT control. For the voltage stabilization control strategy, the reference voltage was set to 1000 V. The SOCs of ESS 1 and ESS 2 were set to 0.8, which corresponded to $SOC_{max}$; at this point, only discharge of the ESSs was permitted. The duration of the simulation was 0.8 s. The different operational modes and working conditions considered in this simulation are summarized in Table 3. The results of this simulation are shown in Figure 11.
The PV microgrid operation in Mode V was simulated from 0-0.2 s. From Figure 11, it can be observed that, when the ESS was fully charged and the load power was less than the maximum power of the PV system, the PV module was able to supply the load independently. At this stage, the PV system was able to stabilize the bus voltage at 1000 V, and the ESS was inactive. The output power was maintained at 8.33 kW, below the maximum power point, which validated the anti-dead-zone control method.
Mode VI of PV microgrid operation was simulated from 0.2-0.5 s. Here, the load decreased sharply to 35 Ω while the irradiance was kept constant. As a result, the PV system was not able to maintain the power balance of the microgrid independently, and the ESS had to be involved in the voltage regulation. The control strategy for the PV system was switched from voltage stabilization to MPPT, to achieve the maximum output power of 20 kW. The effects of adopting a droop control strategy focused on SOC protection in ESS 1 can be seen in Figure 11c. We note that, at a high SOC, the discharge rate of ESS 1 is higher than that of ESS 2, which lacks SOC protection.
From 0.5-0.8 s, the load was kept constant, while the irradiance was suddenly reduced from 1000 W/m² to 700 W/m². The maximum power point of the PV system decreased accordingly. To maintain the power balance, the ESSs increased their output powers. At this point, the output of ESS 1 increased faster than that of ESS 2, so that overcharging was avoided in the case of a high SOC.
The results of this simulation show that, when the proposed control systems are included in PV-ESS DC microgrids, the PV modules can stabilize the DC bus voltage, and the transfer between the voltage stabilization and MPPT modes of operation is smooth. Moreover, the droop control strategy, which considers the SOC of the energy storage units, can increase the output power of the ESS in a manner wherein overcharging is avoided when the SOC is high, in contrast to the conventional control, where the SOC is not considered. Hence, adopting this SOC-based droop control strategy can be effective in protecting the storage batteries.
Simulation condition 2: In this condition, a P&O algorithm was adopted for MPPT control, and the reference for the voltage stabilization control strategy was 1000 V. The SOCs of ESS 1 and ESS 2 were set to 0.35, and the duration of the simulation was 0.8 s. The different operational modes and working conditions considered in this simulation are summarized in Table 4, and the results of the simulation are shown in Figure 12.
The PV microgrid operation in Mode III was simulated from 0-0.3 s. At this stage, while the SOC of the ESSs was below $SOC_{max}$, the PV modules supplied the load and were operated in the MPPT mode. The residual PV output was used to charge the ESSs, and the SOC of both ESS 1 and ESS 2 increased. The output power at this point was 20 kW. Since an SOC-based droop control strategy was adopted in ESS 1, it had a faster charging rate at a low SOC than ESS 2, which used a conventional control strategy.
Mode IV was simulated from 0.3-0.5 s. At this stage, the irradiance was constant, while the load was reduced to 35 Ω. The PV modules were unable to maintain the load power, and the ESSs regulated the DC voltage by discharging. As ESS 1 adopted an SOC-based droop control strategy, at low SOCs its discharge rate was slower than that of ESS 2, which did not adopt the proposed strategy, so as to avoid entering an over-discharging state. This slower discharge can be seen by comparing the decrease in SOC in Figure 12c.

The microgrid operation in Mode IV was again simulated from 0.5-0.8 s. However, in this case, the load was kept constant, while the irradiance was reduced from 1000 W/m² to 700 W/m². The maximum power of the PV modules consequently decreased. To maintain the power balance of the microgrid, the output powers of the ESSs were increased. Because of this mandated increase in output power, the SOC of ESS 2 dropped even faster, as seen in Figure 12c. In contrast, at low SOCs, ESS 1, which adopted the SOC-based droop control strategy, maintained a slower discharge rate, to avoid prematurely reaching the lower SOC limit and over-discharging.
The results of this simulation show that an ESS adopting the SOC-based droop control strategy maintains a higher charging rate and a lower discharge rate, as appropriate, at low SOCs than one controlled using a conventional strategy. This SOC-based control is effective in preventing the ESS from reaching the lower SOC limit prematurely and entering the over-discharge condition, thus protecting the PV-ESS system.
Conclusions
In this paper, we analyzed the independent operation of PV energy storage DC microgrids. Six typical operation modes including boundary conditions were defined, from the perspective of power balance of a PV-ESS DC microgrid in off-grid operation. The boundary conditions defined in this paper were functions of the maximum power of the PV modules, the load power at the current DC bus voltage, and the state of the ESS. By combining the characteristics of the PV modules and energy storage units, we analyzed the key control technologies for the transition between the different modes of operation, for designing improved control strategies.
Our analysis revealed that the voltage regulation mode of a PV system in off-grid operation is prone to failure because of the closed-loop double controller traditionally used in this mode. By adaptively limiting the amplitude of the inner-loop current, we established a voltage stabilization process in which the PV modules operate only in the region to the right of the maximum power point, as defined by the U-P curve of the generation units, preventing the system from entering the dead zone. We also designed a control strategy for the seamless transfer of the PV generation units between MPPT operation and DC bus voltage stabilization. Through simulations, the method was shown to be effective in reducing the output voltage and power fluctuations caused by PV mode switching.
Finally, we compared the performance of a traditional droop control process with that of an SOC-based droop voltage stabilization strategy. Our experiments indicated that the second strategy was able to prevent lifetime damage of the energy storage units, caused by excessive charging and discharging. As this method is simple to implement, it is suitable for use in PV-ESS DC power generation systems.
Constrained by the computational capabilities of the laboratory simulation platform, the paper unfortunately could not include simulations over a larger time horizon. Moreover, because the simulations include many switching devices (boost and bidirectional DC-DC converters), a very small simulation step size (5 × 10⁻⁷ s) was used to improve the simulation accuracy as much as possible, which required a large amount of computational memory. Hence, a simulation time of more than several seconds could not be examined with this type of simulation.
Notwithstanding these limitations, the study suggests that the method is effective in reducing the output voltage and power fluctuations caused by PV mode switching. Further research can be conducted to determine the effectiveness of the proposed method under other types of simulation studies, such as real-time simulations, which can be used to perform longer simulations. | 13,265 | 2018-10-02T00:00:00.000 | [
"Engineering"
] |
Probabilistic Performance-based Optimum Seismic Design Framework: Illustration and Validation
In the field of earthquake engineering, the advent of the performance-based design philosophy, together with the highly uncertain nature of earthquake ground excitations to structures, has brought probabilistic performance-based design to the forefront of seismic design. In order to design structures that explicitly satisfy probabilistic performance criteria, a probabilistic performance-based optimum seismic design (PPBOSD) framework is proposed in this paper by extending the state-of-the-art performance-based earthquake engineering (PBEE) methodology. PBEE is traditionally used for risk evaluation of existing or newly designed structural systems, thus referred to herein as forward PBEE analysis. In contrast, its use for design purposes is limited because design is essentially a more challenging inverse problem. To address this challenge, a decision-making layer is wrapped around the forward PBEE analysis procedure for computer-aided optimum structural design/retrofit accounting for various sources of uncertainty. In this paper, the framework is illustrated and validated using a proof-of-concept problem, namely tuning a simplified nonlinear inelastic single-degree-of-freedom (SDOF) model of a bridge to achieve a target probabilistic loss hazard curve. For this purpose, first the forward PBEE analysis is presented in conjunction with the multilayer Monte Carlo simulation method to estimate the total loss hazard curve efficiently, followed by a sensitivity study to investigate the effects of system (design) parameters on the probabilistic seismic performance of the bridge. The proposed PPBOSD framework is validated by successfully tuning the system parameters of the structure rated for a target probabilistic seismic loss hazard curve. The PPBOSD framework provides a tool that is essential to develop, calibrate and validate simplified probabilistic performance-based design procedures.
probabilistic performance-based assessment methodology, herein referred to as the inverse PBEE analysis. In order to conquer the inverse problem of explicitly satisfying probabilistic performance criteria confronted in the design process, computer-aided structural design using mathematical optimization becomes essential because of the increased complexity of the probabilistic design process [Austin, Pister and Mahin (1987); Haukaas (2008)]. It is worth noting that significant research has been performed on a closely related topic, i.e., reliability-based seismic design optimization [Jensen, Valdebenito, Schuëller et al. (2009); Taflanidis and Beck (2009); Barbato and Tubaldi (2013); Tubaldi, Barbato and Dall'Asta (2016)], which also addresses the inverse problem in the presence of uncertainties. In these studies, the seismic design problem is treated as an inverse problem considering uncertainties associated with the earthquake loading (intensity and time history) and, in some cases, the structural model parameters. The inverse problem was cast either as a zero-finding problem [Barbato and Tubaldi (2013)] to achieve a target reliability, or as an optimization problem, in which reliability metrics (i.e., the probability of failure of the system) are used to define the objective/constraint functions. However, these studies focused on the system reliability (or probability of failure) based on a pre-defined critical threshold value of a response quantity, instead of the full probabilistic description of the structural system performance at a continuum of levels of response (demand) and loss and at a discrete set of damage states. It is also worth mentioning that the above studies represent the earthquake ground motions analytically as a random process (e.g., a nonstationary filtered white noise process) linked to a ground motion intensity measure such as the peak ground acceleration (PGA). In contrast, the study reported in this paper uses ensembles of scaled historic earthquake ground motion records to represent the record-to-record variability in the forward PBEE analysis. These earthquake records are selected based on the magnitude-distance deaggregation of the site seismic hazard, the geological and seismological conditions, and the local site conditions. This earthquake ground motion characterization is currently predominantly used in performance-based earthquake engineering, both at the level of research and in engineering practice. The aforementioned needs call for an innovative optimum seismic design framework in the presence of uncertainty using the versatile and modular probabilistic PBEE methodology. Aiming at promoting the practical application of probabilistic methods for design purposes, this paper proposes a probabilistic performance-based optimum seismic design (PPBOSD) framework. This framework is an extension of the PBEE methodology obtained by wrapping a decision-making layer in the design process around the forward PBEE analysis using mathematical optimization. The PPBOSD framework is illustrated and validated using a simplified nonlinear inelastic single-degree-of-freedom (SDOF) model of a bridge structure as a proof-of-concept study, before applying it to more complex and realistic engineering problems in the future. In the validation example, a well-posed optimization problem of tuning the system (design) parameters of the structure to achieve a target probabilistic loss hazard curve is defined and solved using the proposed PPBOSD framework. This paper is structured as follows.
First, the motivation behind the proposed PPBOSD framework is articulated, and an illustrative example of a SDOF bridge model, which is used to demonstrate conceptually the application of the PPBOSD framework, is presented. Second, the steps of the forward PBEE analysis, which is an indispensable component of PPBOSD, are described in the context of quantitatively assessing the seismic performance of the illustrative structure in probabilistic terms. Note that a multilayer Monte Carlo simulation procedure is implemented to estimate efficiently the total seismic loss hazard of the structure, which is needed in the PPBOSD framework. Third, a parametric probabilistic PBEE analysis is conducted to investigate the effects of the system (design) parameters on the probabilistic seismic performance of the structure. Finally, for illustration and validation purposes, the inelastic SDOF bridge model parameters are optimized (i.e., tuned), using the PPBOSD framework, to achieve a target seismic loss hazard curve of the bridge. The underlying assumptions and limitations of the presented research are critically discussed in the conclusions.
PPBOSD framework and illustrative application
The well-established PEER PBEE methodology is used primarily to sequentially quantify and analyze the uncertainties in the seismic intensity and earthquake records, structural response (demand), structural capacity, seismic damage (i.e., limit-state exceedances), and eventually the seismic loss (e.g., repair cost, downtime) for a structure, at a given site, due to future earthquakes. The PBEE methodology (i.e., forward PBEE analysis) consists of four analytical steps: probabilistic seismic hazard analysis, probabilistic demand hazard analysis, probabilistic damage hazard analysis, and probabilistic loss hazard analysis (Fig. 1). Each step determines the probabilistic characteristics of intermediate (or interface) variables, respectively referred to as the earthquake ground motion Intensity Measure (IM), Engineering Demand Parameter (EDP), Damage Measure (DM), and Decision Variable (DV), such as monetary loss. For a newly designed or an existing structure, forward PBEE analysis can be used as a reliable tool to assess its probabilistic seismic performance, which depends on the system parameter vector x consisting of geometric, material, and mechanical properties of the various structural components and seismic mitigation devices of the structure. However, the probabilistic performance of the structure may be unacceptable or not optimal according to target seismic design objectives, which are typically defined, based on the public's expectations, by stakeholders, decision-makers, and design code committees. This underlies the motivation behind an inverse PBEE analysis. For example, through the evaluation process using PBEE, an initial structural design is characterized by its seismic demand or loss hazard curve (i.e., probability of exceedance of any specified value of EDP or DV in 100 years), such as hazard curve #1 in Fig. 2, expressed in terms of the probability of exceedance in 100 years. In contrast, the target performance can be characterized by hazard curve #2, #3, or #4, which would require tuning the design parameter vector x for this target design specification. Ideally, it is desirable to reduce the seismic risk (i.e., probability of exceedance) across the entire range of EDP or DV values, e.g., from hazard curve #1 to hazard curve #4. However, if hazard curve #4 is not feasible due to practical design constraints such as the initial construction cost, the decision-makers (e.g., engineers, stakeholders, or owners) can aim at improving the design by targeting hazard curve #2 or hazard curve #3 as an alternative to reducing the seismic risk across all EDP or DV values. Namely, the decisions are made to place more emphasis on the seismic performance either at the low hazard level (or short return period or high probability of exceedance) or at the high hazard level (or long return period or low probability of exceedance), respectively. When aiming at improving the seismic performance at low hazard (or short return period or high probability of exceedance) levels, the performance of the initial structural design can be improved from hazard curve #1 to hazard curve #2. As seen from Fig. 2, this can be achieved by either minimizing the probability, $P_{V_1}(\mathbf{x})$, of exceeding a low threshold value $v_1$ of the EDP or DV, or by minimizing the 86th percentile of the EDP ($edp_{0.86}$) or DV ($dv_{0.86}$), both of which may lower the structural performance at high hazard levels (or long return period or low probability of exceedance). Conversely, improving the seismic performance at high hazard levels may reduce the performance at low hazard levels.
In such a case, for example, the initial structural design could be altered so that its performance, characterized by hazard curve #1, is improved to the performance characterized by hazard curve #3. Similarly, this can be achieved by either minimizing the probability, $P_{V_2}(\mathbf{x})$, of exceeding a high threshold value $v_2$ of the EDP or DV, or by minimizing the 10th percentile of the EDP ($edp_{0.10}$) or DV ($dv_{0.10}$) (see Fig. 2). Note that 10% (a high hazard level) and 86% (a low hazard level) probability of exceedance in 100 years correspond to return periods of 945 years and 50 years, respectively, based on the assumption of the Poisson random occurrence model. Alternatively, and more generally, the complete target loss hazard curve (defined by many discrete points at different hazard levels) can be used to express the probabilistic design objectives, as shown later in the illustrative example. The above inverse PBEE problem, which is confronted for design improvement or design optimization in the face of uncertainty, can be solved by the innovative optimum structural design framework (i.e., PPBOSD) proposed in this paper. PPBOSD extends the PBEE evaluation methodology, which can be viewed as an open loop, by wrapping a decision-making layer using optimization around the forward PBEE analysis in order to close the loop, as shown in Fig. 3. This decision-making layer allows the use of various computational optimization tools, e.g., OpenSees-SNOPT [Gu, Barbato and Conte (2012)], to update the initial structural design to achieve the performance objectives. The probabilistic seismic design objectives can be defined in terms of demand hazard, damage hazard, and/or loss hazard characteristics (e.g., hazard curves of EDP or DV, probability of limit-state exceedances, or statistics of EDP, DM, and/or DV in a specified exposure time). These design objectives can be cast into either objective or constraint functions in the optimization problem formulation. Thus, the proposed PPBOSD framework provides a tool to search for either a feasible design that satisfies all constraint functions or an optimum design that minimizes the objective function while satisfying all constraint functions. In PPBOSD, the current design is first assessed using the forward PBEE analysis for its probabilistic performance, which is compared with the design objectives expressed in terms of target hazard levels or statistics. If the design objectives are not satisfied, the current design is updated in the decision-making layer through optimization by tuning the structural design parameters x. This paper focuses on the illustration and validation of the proposed PPBOSD framework, rather than a practical application to a complex large-scale bridge system, which is considered as the next stage of this research. Accordingly, a simple nonlinear bridge structural model is selected herein for simplicity but without loss of generality. This structural model consists of an inelastic SDOF system, which is commonly used to represent macroscopically a bridge behavior in its longitudinal or transverse direction. A nonlinear FE model of the Humboldt Bay Middle Channel Bridge (HBMC, see Fig. 4(a)) previously developed in OpenSees [Conte and Zhang (2007)] is used to calibrate the nonlinear SDOF system parameters. The initial stiffness of the SDOF model obtained from the static pushover analysis of the bridge in the longitudinal direction is $k_0$ = 137,200 kN/m, and the effective lumped mass accounted for is $m = 6.15 \times 10^6$ kg, thus leading to an initial fundamental period of vibration $T_1$ = 1.33 s. The nonlinear model parameters associated with this inelastic SDOF model (Fig. 4(b)) are the yield strength $F_y$ = 10,290 kN (corresponding to a yield displacement $U_y$ = 0.075 m) and the post-yield stiffness ratio (the ratio of the post-yield stiffness to the initial stiffness) $b$ = 0.10. The Menegotto-Pinto hysteretic material model is used to approximately represent the cyclic force-displacement response behavior and energy dissipation capabilities of an inelastic structural system such as a bridge. Furthermore, linear viscous damping with a damping ratio of 2% is incorporated in the SDOF bridge model to account for sources of energy dissipation beyond the hysteretic energy dissipation due to inelastic action of the materials during an earthquake. Note that the nonlinear SDOF bridge model is used in this study only for the purpose of illustrating and validating the proposed PPBOSD framework, these being the main objectives of this paper, without the intention to assess comprehensively the probabilistic seismic performance of the actual bridge. In engineering practice, various structural response quantities or parameters, referred to as engineering demand parameters (EDPs), strongly correlated with different types of structural or non-structural damage are of interest. This study considers three EDPs, namely the relative displacement ductility, $\mu$, the peak absolute acceleration, and the normalized hysteretic energy dissipated, as defined in Eqs. (1)-(3).
In the equations above, $t_d$ = earthquake duration (i.e., the total duration of the ground motion record downloaded from the PEER NGA database), $u(t)$ = displacement response relative to the ground, $\ddot{u}(t)$ = relative acceleration response, $\ddot{u}_g(t)$ = earthquake ground acceleration, $g$ = acceleration due to gravity, $R(t)$ = internal resisting force, and $E_s(t_d)$ = elastic strain energy stored in the system at time $t = t_d$. The three response parameters defined in Eqs. (1)-(3) are selected as EDPs associated with the following damage/failure or limit-states: first-excursion failure, dynamic stability of vehicles traversing the bridge during the earthquake, and cumulative damage (e.g., low-cycle fatigue damage), respectively.
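Equations (1)-(3) themselves did not survive extraction, so the sketch below computes commonly used forms of the three EDPs that are consistent with the symbol list above; the normalizations (by $U_y$, $g$, and $F_y U_y$) are assumptions, and all variable names are ours.

```python
# Common forms of the three EDPs named above, computed from a response
# history. The normalizations are assumptions standing in for the
# paper's Eqs. (1)-(3), which were lost in extraction.

import numpy as np

def compute_edps(u, abs_acc, R, u_y, F_y, k0, g=9.81):
    """u: relative displacement history (m); abs_acc: absolute
    acceleration = relative + ground acceleration (m/s^2);
    R: internal resisting force history (N)."""
    mu = np.max(np.abs(u)) / u_y                        # displacement ductility
    a_max = np.max(np.abs(abs_acc)) / g                 # peak absolute accel. (g)
    work = np.sum(0.5 * (R[:-1] + R[1:]) * np.diff(u))  # trapezoidal int. of R du
    E_s = R[-1] ** 2 / (2.0 * k0)                       # elastic strain energy at t_d
    nhe = (work - E_s) / (F_y * u_y)                    # normalized hysteretic energy
    return mu, a_max, nhe
```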
Forward PBEE analysis
The PEER PBEE methodology breaks down the seismic risk assessment procedure into
four successive steps. These probabilistic steps sequentially quantify the uncertainty in the earthquake ground motion intensity measure (IM), the engineering demand parameter (EDP), the damage measure (DM), and the decision variable (DV), as implied by the underlying mathematical model expressed in Eq. (4):

$$\nu_{DV}(dv) = \iiint G(dv \mid dm)\,\big|dG(dm \mid edp)\big|\,\big|dG(edp \mid im)\big|\,\big|d\nu_{IM}(im)\big|. \tag{4}$$

Here, $\nu_X(x)$ denotes the mean annual rate (MAR) of occurrence of the random event $\{X > x\}$, namely the MAR of random variable X exceeding a given value x, and $G(x \mid y) = P[X > x \mid Y = y]$ represents the conditional complementary cumulative distribution function (CCDF) of random variable X given random variable Y = y. In the probabilistic conditioning and deconditioning process, "one-step" forward dependence is assumed, i.e., each interface variable is taken to depend probabilistically only on the variable immediately preceding it in the chain IM → EDP → DM → DV. This process aims at propagating the uncertainty related to the seismic input and structural capacity all the way to the EDPs, DMs, and DVs using the total probability theorem. The four steps of the PEER PBEE methodology are described below with select results to illustrate the process of forward PBEE analysis, as well as deaggregation results to increase the transparency of the hazard analysis. A multilayer Monte Carlo simulation method [Zhang (2006); Yang, Moehle, Stojadinovic et al. (2009)] is implemented to estimate efficiently the total loss hazard of the structure.
Probabilistic seismic hazard analysis
Pioneered by the theoretical framework developed by Cornell [Cornell (1968)], probabilistic seismic hazard analysis (PSHA), Step (1) of the PBEE methodology, has become the most accepted approach for assessing the site-specific seismic hazard in a probabilistic manner [Shome, Cornell, Bazzurro et al. (1998); Luco and Cornell (2007); Petersen, Frankel, Harmsen et al. (2008)]. The probabilistic seismic hazard, which consists of the uncertainty quantification of the earthquake ground motion IM, is characterized by the MAR of the earthquake ground motion IM exceeding a specified threshold value $im$, $\nu_{IM}(im)$. Based on the Poisson process assumption for the random occurrence of earthquakes in time, the MAR of exceedance can be converted to the probability of exceedance (PE) in a specified exposure time (e.g., annual PE, or PE in 50 years, abbreviated as PE50). The IM is selected as the 5% damped linear elastic pseudo-spectral acceleration at the fundamental period $T_1$ of the structural system, $Sa(T_1)$, which has been shown to be a statistically efficient and sufficient predictor among a family of earthquake ground motion intensity measures [Shome, Cornell, Bazzurro et al. (1998); Luco and Cornell (2007)]. The PSHA for a specific site location and soil condition can be performed using the 2008 Interactive Deaggregation tool provided by the United States Geological Survey (USGS). The site location in this study is assumed to be in the City of Oakland, California, at latitude = 37.803° N and longitude = 122.287° W. The soil condition is characterized by the average shear wave velocity in the top 30 meters of soil at the site location (Vs30 = 360 m/s). The seismic hazard curve obtained from the 2008 Interactive Deaggregation tool will be needed in Step (2) of the forward PBEE analysis. The seismic hazard can be deaggregated with respect to the seismological variables, i.e., magnitude (M) and source-to-site distance (R), to gain additional insight into the contributing earthquakes. This insight benefits the earthquake ground motion selection for the ensemble time history analyses. Fig. 5 shows the M-R deaggregation of the seismic hazard corresponding to PE50 = 2%, in which two modes are observed in the M-R plane. The higher mode is mainly contributed by the Hayward Fault to the east of Oakland, and the lower mode is mainly contributed by the San Andreas Fault to the west of Oakland. This deaggregation information (i.e., 5.9 < M < 7.3 corresponding to the primary mode, 0 < R < 40 km) guided the earthquake ground motion selection process, together with the geological and seismological conditions (i.e., strike-slip fault mechanism) and local site conditions. Accordingly, a large number (i.e., 146) of horizontal earthquake ground motion records were selected from the PEER NGA database of historical records and used as seismic inputs for the ensemble nonlinear time history analyses required for the probabilistic characterization of the seismic response (or demand) in the second step of the PBEE analysis.
Probabilistic demand hazard analysis
Probabilistic demand hazard analysis (PDeHA) aims at predicting probabilistically the structural response (i.e., EDP) to future earthquakes. The probabilistic characterization of an EDP is obtained through the corresponding seismic demand hazard curve, which is defined as the MAR of the EDP exceeding a threshold value $edp$, $\nu_{EDP}(edp)$. Thus, a crucial step of probabilistic demand hazard analysis is to find the probability distribution of the EDP of interest given a value $im$ of IM, $P[EDP > edp \mid IM = im]$, which is referred to as the probabilistic demand conditional on the seismic hazard level. The conditional probabilistic demand analysis can be performed through the commonly used cloud method [Baker (2005)]. In this method, an ensemble of nonlinear dynamic analyses of the structure of interest is performed for the selected suite of earthquake ground motion records, which have various IM values. The corresponding seismic response dataset for the selected earthquakes, $(im_i, edp_i)$, $i = 1, \ldots, 146$, is used to fit the conditional median demand model $\widehat{edp} = \hat{a} \cdot (im)^{\hat{b}}$, where $\hat{a}$ and $\hat{b}$ are obtained through regression analysis. Accordingly, the conditional random variable $\{EDP \mid IM = im\}$ is fully characterized by the conditional probability density function (see Fig. 6), with

$$P[EDP > edp \mid IM = im] = 1 - \Phi\!\left(\frac{\ln(edp) - \ln\big(\hat{a} \cdot im^{\hat{b}}\big)}{\sigma_{\ln EDP \mid IM}}\right),$$

where $\Phi$ is the standard normal CDF and $\ln$ denotes the natural logarithmic function.

Figure 6: Conditional seismic demand hazard analysis result for EDP = relative displacement ductility
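A numerical sketch of the cloud-method fit is given below; the data are synthetic placeholders standing in for the 146 analysis results, and the numpy/scipy usage is ours, not the paper's implementation.

```python
# Sketch of the cloud method: fit ln(edp) = ln(a) + b*ln(im) to the
# (im_i, edp_i) pairs from the ensemble of nonlinear time-history
# analyses, then evaluate P[EDP > edp | IM = im] under the lognormal
# assumption. Data below are synthetic placeholders.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
im = rng.lognormal(mean=-1.0, sigma=0.6, size=146)        # fake Sa(T1) values
edp = 1.8 * im**0.9 * rng.lognormal(0.0, 0.35, size=146)  # fake ductilities

b, ln_a = np.polyfit(np.log(im), np.log(edp), deg=1)      # log-log regression
resid = np.log(edp) - (ln_a + b * np.log(im))
sigma_ln = resid.std(ddof=2)                              # record-to-record dispersion

def ccdf_edp_given_im(edp_val, im_val):
    """P[EDP > edp | IM = im] for the fitted lognormal model."""
    median = np.exp(ln_a + b * np.log(im_val))
    return 1.0 - stats.norm.cdf(np.log(edp_val / median) / sigma_ln)

print(ccdf_edp_given_im(1.0, 0.5))
```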
The conditional probabilistic demand reflects the record-to-record variability when the earthquake IM is fixed. To account for the uncertainty in the earthquake IM, the conditional CCDF of the EDP is convolved with the seismic hazard curve obtained through PSHA according to Eq. (6):

$$\nu_{EDP}(edp) = \int_0^{\infty} P[EDP > edp \mid IM = im]\,\big|d\nu_{IM}(im)\big|. \tag{6}$$

This leads to the (unconditional) probabilistic demand hazard curve, $\nu_{EDP}(edp)$. For the five demand levels marked by solid circles in Fig. 7(a), the MAR of exceedance is 1.84×10⁻², 0.77×10⁻², 0.16×10⁻², 0.04×10⁻², and 0.01×10⁻², respectively. Note that the MAR of exceedance of 1.84×10⁻² corresponds to a mean return period of 55 years (= 1/1.84×10⁻²), i.e., the relative displacement ductility will exceed 1.0 (i.e., the structure will yield) at least once every 55 years on average. The seismic demand hazard at a given MAR of exceedance arises from a continuous range of seismic hazard levels (or IM values), as expressed by Eq. (6), and the contribution of each seismic hazard level to the demand hazard varies with the demand hazard level.
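The convolution in Eq. (6) can be evaluated numerically as sketched below; the IM hazard curve is a toy power-law placeholder, and the conditional CCDF is passed in (e.g., the ccdf_edp_given_im function sketched earlier).

```python
# Numerical evaluation of Eq. (6): the demand hazard nu_EDP(edp) is
# the conditional CCDF of the EDP integrated against |d nu_IM(im)|.
# The IM hazard curve below is a toy power-law placeholder.

import numpy as np

im_grid = np.linspace(0.01, 3.0, 300)       # Sa(T1) grid (g)
nu_im = 4e-4 * im_grid ** (-2.3)            # fake MAR of exceeding each im

def demand_hazard(edp_val, ccdf_edp_given_im):
    d_nu = -np.diff(nu_im)                  # |d nu_IM| over each IM bin
    im_mid = 0.5 * (im_grid[:-1] + im_grid[1:])
    return float(np.sum(ccdf_edp_given_im(edp_val, im_mid) * d_nu))
```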
In order to investigate the relative contribution of an IM bin to the demand hazard, the contribution of that bin to the integral in Eq. (6) is normalized by the total demand hazard $\nu_{EDP}(edp)$. The resulting term is referred to as the deaggregation of the demand hazard (at EDP = edp) with respect to the intensity measure IM, indicating the contribution of each IM bin to the demand hazard at the points shown on the probabilistic seismic demand hazard curve in Fig. 7(a). The deaggregation curves shift towards higher IM values (i.e., to the right) as the EDP values increase, which reflects the fact that earthquake ground motions of higher intensity levels contribute more to higher values of the EDP. Similarly, the other two EDPs defined in Eqs. (2) and (3) are quantified probabilistically, but the results are not presented here due to space limitations [Li (2014)].
Figure 7: (a) Probabilistic seismic demand hazard curve for the relative displacement ductility (solid circles denote the points to be deaggregated), and (b) deaggregation of the demand hazard points shown in Fig. 7(a) with respect to the intensity measure IM
Probabilistic damage hazard analysis
The third step of the PBEE methodology, probabilistic damage hazard analysis (PDaHA), is to predict probabilistically the seismic damage to the structure of interest due to future earthquakes. Practically, seismic damage is associated with a damage or failure mode (or mechanism). This study considers three damage or failure modes for the illustrative bridge structure, associated with the three selected EDPs, respectively. For each damage/failure mode/mechanism, a set of discrete limit-states is considered, corresponding to discrete values of the damage measure, DM = k. In this study, it is assumed that there are three limit-states (k = I, II, III), and the conditional probability $P[DM \geq k \mid EDP = edp]$ is referred to in the literature as the probabilistic capacity curve (or function); it characterizes the uncertainty in predicting the structural capacity against the k-th limit-state of the damage/failure mechanism of interest. Probabilistic capacity curves are typically obtained by comparing analytical or empirical capacity models with corresponding experimental data [Gardoni, Mosalam, Der Kiureghian (2002)]. For the purpose of this study, the probabilistic capacity curves for each of the three limit-states associated with each of the three damage/failure modes considered (first-excursion failure, dynamic stability of vehicles traversing the bridge, cumulative damage) are postulated (as normal CDFs) and defined in Table 1; they are also depicted graphically in Fig. 8(a). The conditional probability of exceeding a damage (or limit-) state can then be convolved with the seismic demand hazard curve to yield the seismic damage hazard, as in Eq. (10):

$$\nu_{DM}(k) = \int_0^{\infty} P[DM \geq k \mid EDP = edp]\,\big|d\nu_{EDP}(edp)\big|. \tag{10}$$

Fig. 8(b) reports the probability of exceeding damage or limit-states I, II, and III in 50 years for each of the three damage or failure modes considered, as well as the mean return periods (RPs) of damage/limit-state exceedances, which are commonly used to measure their occurrence frequency in engineering practice. The seismic damage hazards calculated above contain contributions from a continuous range of EDP bins, as well as a continuous range of IM bins of the earthquake input ground motions. Similar to the demand hazard deaggregation, the damage hazard can be deaggregated with respect to the associated EDP and the IM, respectively, as expressed in Eqs. (11) and (12).

Figure 8: (a) Illustration of the probabilistic capacity curves and definition of the conditional probability of damage states given EDP, and (b) seismic damage hazard results for the damage (or limit-) states associated with the EDPs considered for the illustrative example

The damage hazard deaggregation with respect to EDP and IM, shown in Fig. 9, reveals the relative contributions of different EDP or IM bins to the damage hazard. It shows that exceedances of increasingly severe damage/limit-states are predominantly contributed by increasingly higher EDP or IM values or bins.
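A numerical sketch of the Eq. (10) convolution follows; the capacity medians, coefficients of variation, and the toy demand hazard curve are placeholders, not the values in Table 1.

```python
# Sketch of Eq. (10): convolve a normal-CDF capacity (fragility) curve
# P[DM >= k | EDP = edp] with the demand hazard curve to obtain the
# MAR of exceeding limit-state k. Parameters are placeholders.

import numpy as np
from scipy import stats

edp_grid = np.linspace(0.05, 12.0, 400)
nu_edp = 2e-2 * np.exp(-0.8 * edp_grid)   # toy demand hazard curve

def damage_hazard(median_cap, cov_cap):
    frag = stats.norm.cdf(edp_grid, loc=median_cap, scale=cov_cap * median_cap)
    d_nu = -np.diff(nu_edp)               # |d nu_EDP| over each EDP bin
    frag_mid = 0.5 * (frag[:-1] + frag[1:])
    return float(np.sum(frag_mid * d_nu))

for k, (m, c) in {"I": (2.0, 0.3), "II": (4.0, 0.3), "III": (6.0, 0.3)}.items():
    print(k, damage_hazard(m, c))
```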
Figure 9: Damage hazard deaggregation with respect to (a) EDP and (b) IM for different damage/limit-state exceedances associated with EDP = relative displacement ductility (values of the MAR of exceedance are indicated for each limit-state)
Probabilistic loss hazard analysis
The objective of the final step of the PBEE methodology, probabilistic loss hazard analysis (PLHA), is to quantify the decision variable (DV) probabilistically. The DV can be the direct economic loss (i.e., total repair or replacement cost, $L_T$) due to seismic damage, or the loss factor, defined as the total loss normalized by the system replacement cost. The total loss hazard can be expressed in the form of a loss hazard curve, which provides the MAR or annual probability of the DV exceeding a threshold value. The total loss $L_T$ is defined as the summation of all the component-wise repair costs ($L_j$, $j = 1, 2, 3$ here) associated with the three damage/failure modes considered. In a real-world bridge application, which is more involved/detailed than the illustrative example considered here, the damage/failure modes would consist of failure of bridge piers, failure of shear keys, failure of abutments, deck unseating, etc. In the present illustrative example, $L_j$ is assumed to lump all the component-wise repair costs associated with the j-th damage/failure mechanism of the bridge. For each component, the loss hazard curve, $\nu_{L_j}(l)$, is obtained according to Eq. (13), in which the integration reduces to a summation over the discrete damage states (as favored in practice) considered for the j-th damage/failure mode. Due to a lack of statistical data on repair and replacement costs, they are assumed to be normally distributed with the means and coefficients of variation (c.o.v.) presented in Table 2 to facilitate the illustration of the methodology proposed in this paper. With the probabilistic characteristics of the component losses determined in terms of component loss hazard, the total loss hazard can be computed through a multi-fold integration of the joint PDF of the component losses. However, it is computationally prohibitive, if not impossible, to derive that joint PDF and carry out the multi-fold integration, especially when a large number of components and damage/failure modes exist in real-world applications. To address this challenge, a multilayer Monte Carlo simulation (MMCS) method is implemented and used as a simple yet powerful technique to estimate the total loss hazard. This method can efficiently incorporate and propagate the uncertainties arising at all stages of the PBEE analysis (e.g., random time occurrences of earthquakes governed by a Poisson process, IM, EDP, and DM) all the way to the final random variable DV = $L_T$. Such a treatment of uncertainty propagation in the forward PBEE analysis empowers the proposed PPBOSD framework, which involves a large number of forward PBEE analyses during the optimization process. The flowchart of the MMCS method developed for this study is shown in Fig. 10 and presented in detail below. First, the number of earthquakes in the year being simulated is randomly generated according to the Poisson random occurrence model, and the IM for each earthquake ground motion is simulated according to its probabilistic characteristics derived from PSHA. Second, for a given IM level, a set of EDPs is stochastically simulated according to the joint PDF of the EDPs estimated from the results of an ensemble of FE seismic response analyses of the structure of interest.
Note that the conditional joint PDF of the EDPs given IM can be approximated by a Nataf model [Liu and Der Kiureghian (1986)] defined by the marginal PDFs and correlation coefficients of the EDPs estimated from the results of the ensemble of nonlinear time-history analyses performed in the PSDeH analysis. This relaxes the more restrictive assumption, used in FEMA P-58 and by Yang et al. [Yang, Moehle, Stojadinovic et al. (2009)], that the EDPs are jointly lognormal. Third, the damage measure for each component (or lump of components in the illustrative example presented here) is randomly generated from the probabilistic capacity curves, and the component loss is simulated according to the PDF of the corresponding repair cost. For each year simulated, the total loss for that year is obtained by summing the repair costs over all the damaged components and all the earthquakes that occurred during that year. By simulating the seismic activity and resulting structural damage and economic loss for a large number of years (e.g., 100,000), an empirical CDF and CCDF of the total loss can be obtained. The CCDF of the total loss is referred to as the seismic loss hazard curve; the curve shown in Fig. 11 for the bridge structure considered in this study was obtained using the MMCS method developed. The total loss hazard curve indicates the annual probability of the repair or replacement costs exceeding a threshold value. For example, from Fig. 11, there is a 0.3% probability that, in a given year, the seismic repair cost for this bridge will exceed 20% of the total bridge replacement cost (i.e., a loss factor of 0.2); alternatively, this level of loss for the bridge has a mean return period of exceedance of 330 years (= 1/0.003).
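The MMCS loop can be sketched compactly as follows; every distribution and parameter below is an illustrative placeholder (the real pipeline samples the IM from the PSHA results, the EDPs from the fitted Nataf model, and the costs from Table 2).

```python
# Sketch of the multilayer Monte Carlo simulation (MMCS) for the total
# loss hazard: simulate many years of Poisson earthquake occurrences,
# then IM -> EDP -> DM -> component losses per event, and build the
# empirical exceedance probability of the annual total loss.

import numpy as np

rng = np.random.default_rng(1)
N_YEARS, RATE = 100_000, 0.2              # simulated years, mean quakes/year

def simulate_annual_loss():
    loss = 0.0
    for _ in range(rng.poisson(RATE)):                    # quakes this year
        im = rng.lognormal(-1.0, 0.8)                     # sample IM
        edp = 1.8 * im**0.9 * rng.lognormal(0.0, 0.35)    # conditional demand
        # (capacity median, mean repair cost as loss factor) per mode:
        for median_cap, mean_cost in ((2.0, 0.1), (4.0, 0.3), (6.0, 1.0)):
            if edp > rng.normal(median_cap, 0.3 * median_cap):   # damaged?
                loss += max(rng.normal(mean_cost, 0.25 * mean_cost), 0.0)
    return loss

annual = np.array([simulate_annual_loss() for _ in range(N_YEARS)])
# Annual probability that the loss factor exceeds 0.2:
print((annual > 0.2).mean())
```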
Parametric forward PBEE analysis
Following the forward PBEE analysis procedure presented in the previous section, a parametric study (i.e., a one-at-a-time perturbation-based sensitivity analysis) is performed in order to explore the effects of parametric changes on the forward PBEE analysis results. For the system considered in this paper, the yield strength (Fy) and the initial stiffness (k0) of the nonlinear SDOF system are each perturbed by −25% and +50%. The effects of varying the yield strength on the demand hazard curves for the relative displacement ductility and the peak absolute deck acceleration are shown in Figs. 12(a) and 12(b), respectively. Note that an increase in the yield strength reduces the demand hazard for the relative displacement ductility, while it increases the demand hazard for the peak absolute deck acceleration. Consequently, varying the yield strength affects the loss hazard curve as well, as shown in Fig. 13(a). By comparing Figs. 13(a) and 13(b), it is worth noting that the initial stiffness and the yield strength have opposite effects on the loss hazard curve. The sensitivity study of the forward PBEE analysis results indicates that the loss hazard changes as a function of the system parameters, thus giving rise to an inverse PBEE problem. For example, it is of interest to the various stakeholders and the owner of the structure to tune the system design (i.e., the design parameters) such that an expected performance, expressed in terms of a target or desired probabilistic loss hazard curve, is achieved.
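As a concrete illustration of the one-at-a-time perturbation idea, the following Python sketch re-runs a (placeholder) forward PBEE analysis for the baseline design and for each perturbed value of k0 and Fy; the loss_hazard function is a stand-in for the full FE-plus-MMCS pipeline, and all numerical values are assumptions for illustration, not results from the paper.

```python
import numpy as np

THRESHOLDS = np.linspace(0.0, 1.0, 50)          # loss-factor thresholds

def loss_hazard(k0, Fy):
    """Placeholder for the full forward PBEE analysis (FE response + MMCS).
    Returns annual exceedance probabilities over THRESHOLDS; dummy closed form."""
    return 0.01 * np.exp(-THRESHOLDS * 8.0 * (Fy / 1.4e4) ** 0.5 * (k0 / 1.0e5) ** 0.2)

baseline = {"k0": 1.0e5, "Fy": 1.4e4}           # assumed nominal design (kN/m, kN)
curves = {"baseline": loss_hazard(**baseline)}
for name in ("k0", "Fy"):
    for factor in (0.75, 1.50):                 # one-at-a-time -25% / +50% perturbations
        design = dict(baseline, **{name: baseline[name] * factor})
        curves[f"{name} x {factor}"] = loss_hazard(**design)

# e.g. compare the annual probability of exceeding a loss factor of 0.2
idx = THRESHOLDS.searchsorted(0.2)
for label, curve in curves.items():
    print(f"{label:>12s}: {curve[idx]:.2e}")
```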
Inverse PBEE problem
The aforementioned inverse PBEE problem, i.e., achieving a probabilistic performance objective, is highly challenging as the design objective is probabilistic and defined based on the loss hazard (i.e., result of the last step of the forward PBEE analysis). As such, this design or inverse PBEE problem can be solved using the PPBOSD framework newly proposed in this paper.
For validation purposes, a well-posed inverse PBEE problem needs to be set up such that the solution to this problem is known a priori. Thus, the target loss hazard is defined as the probabilistic loss hazard corresponding to a priori selected design parameters. The objective function defined here is based on the total loss hazard curve, which involves a complicated implicit function evaluated by executing the simulator (e.g., the finite element model of the structure of interest subjected to an ensemble of earthquake excitations) and evaluating the performance objectives (i.e., the forward PBEE analysis). The proposed PPBOSD framework is expected, by using as a starting point an arbitrary but reasonable initial design (e.g., k0 = 100,000 kN/m, Fy = 14,000 kN), to steer the design process such that the loss hazard curve gets as close as possible to the target loss hazard curve and, in this validation example, to recover the optimum design parameters which are known a priori. The validation problem for the proposed PPBOSD framework is illustrated in Fig. 14. Note that in this validation case, the optimum parameters are selected a priori and the corresponding loss hazard curve is taken as the target loss hazard curve. However, in a regular (real-world) problem, the optimum design is not known in advance and instead is expected to be determined using the PPBOSD framework presented here.
Figure 14:
Illustration of validation problem for the proposed PPBOSD framework
Solution to the inverse PBEE problem
In the PPBOSD framework, tuning the initial design parameters requires computer-aided adjustment in an iterative way through mathematical optimization. Different optimization algorithms can be integrated in the PPBOSD framework, but this issue is beyond the scope of this study. Instead, the sparse nonlinear optimization software SNOPT, which was linked with OpenSees into the extended framework denoted as OpenSees-SNOPT [Gu, Barbato, Conte et al. (2012)], is used in the current version of the PPBOSD framework. For the validation example considered here, the gradient-based sequential quadratic programming (SQP) algorithms in SNOPT are used to tune the system parameters of the SDOF structural bridge model so that its loss hazard curve optimally matches the target loss hazard curve. The optimization process (which stopped when the relative reduction in the objective function value was less than 1.0 × 10^-5) and its results are summarized in Fig. 15, including the iteration path over the plot of the objective function (both the 3D surface plot and the contour plot); the corresponding evolution of the hazard curves is shown in Fig. 16. It is observed that over six iterations, both the loss hazard and demand hazard curves are driven closer and closer to their respective target hazard curves corresponding to the a priori selected optimum design parameters. Thus, the proof-of-concept example presented successfully illustrates and validates the proposed PPBOSD framework. The proposed PPBOSD framework is expected to be applied to more complex real-world problems in the field of earthquake engineering, and to support the decision-making process in structural design/retrofit with probabilistic performance objectives highly pertinent to the various stakeholders. Note that the illustrative example considers a continuous range of hazard levels, while in practice a finite (small) set of discrete hazard levels can be used to define practical probabilistic performance objectives, e.g., focusing on a low hazard level and a high hazard level. Additionally, the objective functions of the optimization problems solved using the PPBOSD framework can also be defined in terms of the conditional demand hazard, unconditional demand hazard, and damage hazard, instead of the loss hazard exemplified in this paper.
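To make the optimization loop concrete, the sketch below shows how a gradient-based optimizer can be wrapped around a forward PBEE analysis to match a target loss hazard curve; it uses SciPy's SLSQP routine purely as an accessible stand-in for SNOPT, and the loss_hazard function and all numbers are illustrative assumptions rather than the paper's actual model.

```python
import numpy as np
from scipy.optimize import minimize

THRESHOLDS = np.linspace(0.0, 1.0, 50)           # loss-factor thresholds

def loss_hazard(k0, Fy):
    """Placeholder forward PBEE analysis (in the paper: OpenSees FE model + MMCS)."""
    a = 6.0 * (Fy / 1.4e4)                        # dummy dependence on the yield strength
    b = 3.0 * (k0 / 1.0e5)                        # dummy dependence on the initial stiffness
    return 0.01 * np.exp(-a * THRESHOLDS - b * THRESHOLDS ** 2)

target = loss_hazard(1.3e5, 1.6e4)                # target curve from a priori "optimum" design

def objective(x):
    """Discrepancy between the current and the target loss hazard curves."""
    return float(np.sum((np.log(loss_hazard(x[0], x[1])) - np.log(target)) ** 2))

res = minimize(objective, x0=[1.0e5, 1.4e4], method="SLSQP",
               bounds=[(5.0e4, 3.0e5), (5.0e3, 3.0e4)], options={"ftol": 1.0e-5})
print(res.x)   # should land close to the a priori optimum (1.3e5, 1.6e4) for this convex toy
```

In a full application, each objective evaluation would trigger an ensemble of nonlinear time-history analyses, which is why the paper emphasizes the computational cost of wrapping optimization around the forward analysis.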
Conclusions and discussion
The well-established probabilistic performance-based earthquake engineering (PBEE) methodology has been mainly used for performance evaluation of existing or newly designed structural, geotechnical or soil-foundation-structural systems, thus referred to herein as forward PBEE analysis. In contrast, the use of the PBEE methodology for design purposes in the presence of uncertainty is more limited, because design is strictly a more challenging inverse PBEE analysis problem. To address the performance-based design issue, this paper proposes a probabilistic performance-based optimum seismic design (PPBOSD) framework as an extension to the existing PBEE assessment methodology. In the PPBOSD framework, a decision layer supported by computational optimization is wrapped around the forward PBEE analysis methodology, which is aimed to tune the design parameters of the civil infrastructure system of interest to achieve seismic performance objectives expressed in probabilistic terms. As a first step of promoting the proposed PPBOSD framework, this paper focuses on illustrating and validating the framework using a simple proof-of-concept example, i.e., a nonlinear inelastic SDOF model representing macroscopically the longitudinal or transverse behavior of a bridge structure with a priori selected optimum design parameters. The PPBOSD framework in conjunction with the combined structural modeling and optimization software OpenSees-SNOPT successfully recovered the a priori selected optimum design parameters from a set of initial parameter values purposely taken away from the optimum values. It shows that a complicated and implicit probabilistic performance objective (e.g., defined in terms of a targeted probabilistic loss hazard curve) can be achieved using the PPBOSD framework. Note that the illustrative example used in this paper is based on a simple macroscopic structural model with two primary design variables for the purpose of clearly demonstrating the concepts and procedure. However, the design of complex real-world civil infrastructure systems (with more design variables) can also utilize the proposed PPBOSD framework, with the computational cost issue appropriately addressed (e.g., by using cloud and/or high-performance computing). However, when applying the proposed PPBOSD framework to real-world structures, the following potential difficulties or limitations will need to be addressed in future research.
(1) A numerically robust nonlinear model of the real-world structure is required. Noncollapse related non-convergence issues during the seismic response analysis need to be resolved using, for example, adaptive switching between nonlinear solution algorithms, integration methods, convergence criteria, etc., or explicit integration. If a physical collapse related convergence issue occurs, the collapse probability needs to be considered in the overall methodology [Zhang (2006); Romano, Faggella, Gigliotti et al. (2018)]. However, distinguishing between lack of convergence due to numerical issues or due to imminent physical failure (collapse) of the structure being analyzed is challenging and can possibly be addressed by artificial intelligence.
(2) To render the PBEE analysis and the PPBOSD more practical, fragility curves for various structural members as well as the associated repair/replacement costs need to be developed and compiled for various types of structures as in the FEMA P-58 PACT tool [FEMA (2012)] for buildings.
(3) The gradient-based optimization algorithms currently available in OpenSees-SNOPT may lead to a local minimum, and this issue can be addressed by using multiple starting points or using other global optimization methods. It is worth noting that from a practical viewpoint, a local minimum could already be highly beneficial, representing a significant improvement over the initial design. All the challenging issues mentioned above and possibly others can be appropriately tackled and implemented in the versatile architecture of the proposed framework. The probabilistic design objective is expressed in terms of the target loss hazard curve in this study, but it can be defined to closely reflect the design objectives of decision-makers in practice. More importantly, this framework provides the proper tool needed to develop, calibrate and validate simplified probabilistic performance-based design procedures for engineering practice. Finally, the proposed PPBOSD framework can be extended to other natural and man-made hazards (e.g., tsunami, wind/hurricane/tornadoes, storm surge, fire, blast), as well as multi-hazard design problems. | 9,023 | 2019-01-01T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Terrestrial Laser Scanner Resolution: Numerical Simulations and Experiments on Spatial Sampling Optimization
An empirical approach is proposed in order to evaluate the largest spot spacing that still provides the resolution needed to recognize the required surface details in a terrestrial laser scanner (TLS) survey. The suitable combination of laser beam width and spot spacing for the effective scanning angular resolution has been studied by numerical simulation experiments with an artificial target scanned from distances between 25 m and 100 m, and by observations of real surfaces. The tests have been performed using the Optech ILRIS-3D instrument. Results show that the discrimination of elements smaller than a third of the spot diameter (D) is not possible and that the ratio between the used spot spacing (ss) and the element size (TS) is linearly related to the acquisition range. The zero- and first-order parameters of this linear trend are computed and used to solve for the maximum efficient ss at defined ranges for a defined TS. Although the parameters are obtained for the Optech ILRIS-3D scanner, and depend on its specific technical data and performance, the proposed method has general validity and can be used to estimate the corresponding parameters for other instruments. The obtained results allow the optimization of a TLS survey in terms of acquisition time and surface detail recognition.
Introduction
Terrestrial laser scanning (TLS) is a remote sensing technique for high density acquisition of the physical surface of scanned objects, leading to the creation of accurate digital models. For this reason, the TLS technique is currently used in geologic surveys, engineering practice, cultural heritage, and mobile mapping [1].
A time-of-flight TLS instrument is a laser rangefinder equipped with a radar-like beam orientation system where oscillating and rotating mirrors allow a fast scan of the observed surface.Long-range instruments (i.e., instruments with range larger than 100 m) with pulse repetition frequency of 2.5-10 kHz or more are available, and instruments with 30-50 m range and pulse repetition frequency of 200 kHz also exist [2].The typical beam width of the available instruments is 0.15-0.25 mrad, i.e., 0.009°-0.015°,whereas angular sampling steps up to 0.001° can be applied.In other words, a spot spacing (or sampling step) of ~1.8 mm at 100 m acquisition distance can be applied, but the corresponding footprint diameter can reach 20 mm, ten times higher.The actual spatial resolution of a TLS instrument, i.e., its ability to detect two objects on adjacent lines-of-sight (LOSs), depends on both sampling step and laser beam width and is in-between them, as shown by several authors [3][4][5].The fact that the minimum sampling step can be much smaller than the beam width implies that a reliable evaluation of the spatial resolution is necessary in order to optimize the amount of acquired data and the corresponding acquisition time.If the sampling step is excessively low, a useless, excessively large amount of data could be acquired in an excessive amount of time, whereas if the sampling step is larger than the resolution, information loss can occur and an accurate modeling of some features of the observed object could be hard or impossible.In other words, the key factor is the choice of a sampling step able to ensure the required resolution.
Lichti and Jamtsho [4] discussed this fact by evaluating the Effective Instantaneous Field of View (EIFOV) for some instruments, by assuming that the probability governing the angular position of a range measurement is uniform over the projected laser footprint. Their results show that the stronger constraint on the EIFOV, which is assumed to be the spatial resolution, is due to the beam width. The distribution of the irradiance along a cross-section of a laser rangefinder beam is typically Gaussian. This fact does not necessarily imply that the corresponding probability governing the angular position is Gaussian itself, but a decrease of the effect of the beam width with respect to the case of a uniform probability distribution could occur. On the other hand, [5] observed a significant effect of the sampling step. Nevertheless, some assertions that can be read in instrument brochures or in some papers [6,7], which simply identify the resolution with the sampling step, seem to be too optimistic. The Nyquist-Shannon sampling theorem [8] implies that the spatial sampling frequency must be at least two times the maximum spatial frequency occurring in the observed surface. For this reason, ideally, the minimum size of the observable features should be two times the sampling step.
This paper presents the results of both numerical simulations and observations aimed at the optimization of a TLS survey at typical acquisition distances in architectural and cultural heritage applications of the TLS technique, i.e., up to about 100 m. In particular, an artificial target with differently spaced wood bricks has been scanned from 25 m, 50 m, 75 m and 100 m distance, the corresponding numerical simulations have been carried out and, finally, a brick wall and a historic building, with architectural details and a bust, have been observed. Besides the analysis of the obtained data, insights about the spot spacing necessary to obtain the required resolution (if achievable) in a general case are provided. This paper mainly focuses on the evaluation of the sampling step necessary to obtain the required resolution. In this way, operational suggestions for the TLS practitioner are provided. An analysis of the effects of the material reflectance on the results completes this study.
Laser Scanner Spatial Resolution
Two kinds of resolution can be defined for a TLS instrument: The range resolution, which accounts for its ability to differentiate two objects on the same LOS; and the angular resolution, which is the ability to distinguish two objects on adjacent LOSs.The first specification is governed by pulse length and typically is 3-4 mm for a long range instrument, whereas the second one depends on spatial sampling interval and laser beam width and should lead to a corresponding spatial resolution of ~10-15 mm at 50 m distance [4].Besides these specifications, also the intensity resolution, i.e., the ability to differentiate adjacent areas, having similar but not equal reflectance, can be defined.In this paper the angular resolution, in particular the spatial resolution as a function of the acquisition distance, is the main topic.
In order to maximize the localization performance of a laser rangefinder, in particular of a TLS instrument, the geometry of the laser resonator is such that a Gaussian beam is generated [9]. Figure 1 shows the power distribution of a Gaussian beam on a cross-section normal to the propagation direction. The beam diameter is commonly defined as the diameter that encircles 86% of the total beam power, which corresponds to e^-2 of the axial power value (with regard to the field amplitude, at the beam diameter it is e^-1 times the axial field value). Let w_0 be the waist of a Gaussian beam, i.e., the minimum spot size. For a range r and wavelength λ, the beam width is

w(r) = w_0 [1 + (λ r / (π w_0^2))^2]^(1/2). (1)

Clearly, w_0 = w(0). The waist is related to the full divergence angle for the fundamental mode by

Θ = 2λ / (π w_0). (2)

Besides Θ, for a plane wave front incident upon a circular aperture of diameter D_F, which is the case in focusing optics, the full-cone angle of the central disc is defined as

Θ_p = 2.44 λ / D_F. (3)

This full-cone angle encircles about 84% of the total energy transmitted by the aperture. The angles Θ and Θ_p are very similar, but not equal, and are sometimes confused. If r is sufficiently large, which is the case for a long range TLS, a linear approximation of (1) can be used. For example, in the case of the Optech ILRIS-3D instrument, a linear law of the form D(r) = a r + b holds (Equation (4); the numerical coefficients are those given by the manufacturer), where the spot diameter D(r) = 2w(r) and the range r are both expressed in meters [7]. The constant term is negligible for r larger than 300 m.
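As a quick numerical check of Equation (1), the following short Python snippet evaluates the spot diameter D(r) = 2w(r) for a few ranges; the waist and wavelength values are purely illustrative and are not the ILRIS-3D specification.

```python
import numpy as np

def spot_diameter(r, w0, wavelength):
    """Gaussian-beam spot diameter D(r) = 2*w(r), all quantities in metres (Eq. (1))."""
    rayleigh = np.pi * w0 ** 2 / wavelength      # Rayleigh range
    return 2.0 * w0 * np.sqrt(1.0 + (r / rayleigh) ** 2)

w0, lam = 6.0e-3, 1.5e-6                         # illustrative waist radius and wavelength
for r in (25.0, 50.0, 75.0, 100.0):
    print(f"{r:5.0f} m -> {spot_diameter(r, w0, lam) * 1e3:5.1f} mm")
```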
According to [4], a good measure of the angular resolution of the instrument can be the EIFOV, which can be computed taking into account the Average Modulation Transfer Functions (AMTFs), see [10], related to: (i) laser beam width; (ii) spatial sampling; (iii) quantization effects, (iv) focusing optics, where the latter in general can have negligible effects if the diameter of the optics is larger than the beam width.In general, the Modulation Transfer Function (MTF) is the magnitude of the Fourier transform of the point spread function and therefore is able to account for the response of an imaging system to an infinitesimal source of light.An AMTF is the spatially averaged MTF by assuming adequate probability density distributions of the independent variables, in this case the angular coordinates of a spherical reference frame.The AMTF of the whole system is the product of the component AMTFs (note that the variables in the Fourier space are spatial frequencies).Finally, the EIFOV is the length that corresponds to the cut-off spatial frequency threshold, i.e., the frequency for which the whole system AMTF is 2/π.To obtain a reliable EIFOV, reasonable hypotheses on the component AMTFs are necessary.With this very interesting approach, the authors obtained a resolution of 17.7 mm at 50 m for the ILRIS-3D instrument.Nevertheless, the practice of this instrument shows that 17.7 mm seems to be a too pessimistic value and that a ~10 mm value seems to be a better estimate for the instrumental resolution at 50 m distance if an adequate spatial sampling is used.Such a discrepancy could be related to the modeling of the beam width's AMTF, where a uniform probability density function was assumed instead of a Gaussian one.
The EIFOV estimation is not in the scope of this paper, where the resolution is evaluated by means of numerical simulations and observation of an artificial target. The main aim of the paper is instead to search for a way to optimize the sampling step. Note that in all the studied cases, the observations are simulated or performed at normal incidence. If the incidence is not normal, the corresponding effects must be taken into account (e.g., an elliptical spot instead of a circular spot).
Numerical Simulations
In order to evaluate the effects of sub-sampling and beam width, several series of numerical simulations have been carried out by defining virtual bricks with different spacing. As in the case of the artificial target described in Section 4, each simulated brick is 100 mm long, 50 mm high and 18 mm in visible width with respect to the cement joint, and three systems of parallel bricks are modeled, with 10 mm, 15 mm and 20 mm spacing respectively (Figure 2). As in the experiments described in Section 4, the simulations are related to 25 m, 50 m, 75 m and 100 m acquisition distances. To carry out the numerical simulations, it is necessary to model: (a) the geometry of the objects, and (b) the TLS-based observation of these objects. The calculations are performed in the MATLAB environment.
The geometry of each system of bricks is represented by a 2.5D elevation model z_hk = z(x_h, y_k), where x_h and y_k are defined on a regular grid with 0.1 mm spacing whose plane is parallel to the simulated wall (Figure 2). The indices h and k vary from 1 to N_x and N_y respectively, depending on the system size. For example, if the three bricks are 10 mm spaced and simulated together with a 30 mm edge around them, then N_x = 2300 and N_y = 1600. This grid size is 1-2 orders of magnitude smaller than the simulated sampling step (the minimum ss taken into account in the simulations is 1 mm), spot size (the minimum spot size is 16.3 mm, which corresponds to the 25 m acquisition distance) and brick spacing (the minimum value is 10 mm). For this reason, according to the Nyquist-Shannon sampling theorem [8], such a grid does not affect the results and is therefore reasonable. The TLS acquisitions are modeled according to the technical specifications of the ILRIS-3D instrument (Table 1) by means of: (i) generation of Gaussian noise to model the error of the TLS rangefinder unit; (ii) introduction of a Gaussian filter to model the effect of the power distribution of the laser beam; (iii) decimation of the obtained points to model the spatial sampling. (Footnotes to Table 1: b, the field of view can be extended to 110° × 360° by using a rotating/tilting head; c, to obtain reliable radiometric information the minimum range is about 15 m, see [12], while the geometric information is good even if the range is lower than 15 m; d, long range divergence, see Equation (2).)
The effect of the uncertainties due to the rangefinder unit is modeled by means of Gaussian noise with 5 mm standard deviation (SD). The elevation of the hk-th point therefore becomes z_n,hk = z_hk + n_hk, where n_hk is a random entry drawn from a Gaussian probability density function with zero mean and SD σ = 5 mm, i.e. the noisy elevation is Gaussian, centered at z_hk, with SD σ = 5 mm. The chosen SD comes from both the ILRIS-3D technical specifications and the comparison between the results of numerical simulations and observations of the artificial target.
As stated above, the power distribution of a TLS laser beam generally has a Gaussian profile. In the case of the ILRIS-3D instrument, this fact is also confirmed by [11]. For each point (x_h, y_k, z_n,hk), a Gaussian filter with standard deviation equal to the beam radius, i.e., D/2 with reference to Equation (4), is therefore introduced to blur the model. The filter weights for the hk-th point are proportional to exp(-((i·gs)^2 + (j·gs)^2) / (2 (D/2)^2)) for integer offsets i, j (Equation (6)), and the blurred elevation z_b,hk is obtained as the corresponding normalized weighted average of the noisy elevations z_n,(h+i)(k+j) over |i|, |j| ≤ N_b (Equation (7)), where N_b = 3D/(2gs), with gs = 0.1 mm (reference grid spacing). In other words, the weighted averages in Equation (7) are computed within three SDs and, in this way, the fact that all the points within the spot contribute to the pulse reflection is taken into account. Equation (7) is implemented in MATLAB in a very efficient way by means of the 'fspecial' function, which generates the 2D filter kernel.
Finally, the points of the noised and filtered model are decimated in order to simulate the spatial sampling of a TLS instrument. In these calculations, sampling steps of 1 mm, 2 mm, etc., up to 20 mm are used. For each acquisition distance, 20 simulations are therefore carried out. The model is very simple, but the technical data provided by the instrument manufacturer do not allow a more sophisticated modeling.
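For readers who prefer a quick reproduction of the noise-blur-decimation pipeline outside MATLAB, the following NumPy/SciPy sketch mirrors the three modeling steps described above; the geometry and parameter values are simplified stand-ins (one brick spacing, one range) and are not the authors' original scripts.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
gs = 0.1e-3                                  # reference grid spacing: 0.1 mm
nx, ny = 2300, 1600                          # 230 mm x 160 mm patch (10 mm spaced bricks case)
z = np.zeros((nx, ny))                       # elevation of the cement-joint plane

gap, protrusion = 10e-3, 18e-3               # brick spacing and protrusion along the LOS
for i in range(3):                           # three 50 mm x 100 mm bricks, `gap` apart
    x0 = 0.03 + i * (0.05 + gap)
    ix = slice(int(x0 / gs), int((x0 + 0.05) / gs))
    iy = slice(int(0.03 / gs), int(0.13 / gs))
    z[ix, iy] = protrusion

D, ss, sigma = 16.3e-3, 7e-3, 5e-3           # spot diameter at 25 m, spot spacing, range noise SD
z_noisy = z + rng.normal(0.0, sigma, z.shape)                        # (i) rangefinder noise
z_blur = gaussian_filter(z_noisy, sigma=(D / 2) / gs, truncate=3.0)  # (ii) beam-width blur
step = int(round(ss / gs))
cloud = z_blur[::step, ::step]               # (iii) decimation = simulated spatial sampling
print(cloud.shape)
```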
The results for some selected cases are shown in Figure 3, and the main results are summarized in Table 2.In this table, in particular, the cases where two different bricks can be easily recognized as distinct objects are shown together with the cases where the recognition is very difficult (borderline cases, corresponding to object spacing like the spatial resolution) and the ones where no recognition as a distinct object can be performed because the object spacing is below the spatial resolution.If a typical notebook with a 32 bit operating system is used, the simulation for an assigned acquisition distance is carried out in a few seconds, and all the results presented here can be obtained in a few minutes.This fact confirms that the 0.1 mm spacing for the reference matrix is a good choice.Since these simulations allow an evaluation of the effects on spatial resolution of changes in beam width and in spatial sampling, they can be used to simulate the behavior of an instrument on the basis of its technical specifications.In particular, the results could be used for choosing instrument type for a specific application.Finally, the MATLAB scripts developed to perform the numerical simulations can be requested from the authors, see contact information.
Artificial Target
The observations are carried out using the Optech ILRIS-3D instrument, whose main technical specifications are summarized in Table 1. In order to allow an evaluation of the instrumental resolution in controlled conditions, an artificial target is used (Figure 4). The artificial target is composed of 10 wood bricks, each one measuring 100 mm in length, 50 mm in height and 18 mm in visible width, placed on a wood panel that represents the plane of the cement joint. Three bricks are 10 mm spaced, three are 15 mm spaced and, finally, another three are 20 mm spaced. In this way, the spatial resolution can be experimentally evaluated at several acquisition distances (here 25 m, 50 m, 75 m and 100 m) in conditions like the ones considered in the already described numerical simulations. Since the depths along the LOS of the acquired objects are significantly larger than the range resolution (18 mm compared with 3-4 mm), only effects related to the angular resolution are observed here. Figure 5 shows the 25 m range point clouds acquired with different settings for the spot spacing. In particular, the acquisition with a 7 mm sampling step still allows the recognition of the target shapes, but it is a borderline case. The main qualitative results are summarized in Table 3, and the quantitative ones in Table 4, where the maximum spatial sampling (ss, in mm) for which the target separation (TS, in mm) can be observed is provided together with the corresponding ss/TS ratio. For each acquisition range, a maximum ss/TS value is therefore available in order to correctly set the sampling step to achieve the required resolution. Since the range r and spot diameter D are related as in Equation (4), the ss/TS ratios can also be considered as functions of D. These ss/TS values are provided for the 25 m, 50 m, 75 m and 100 m acquisition ranges and for three target (bricks on the artificial target) separations, i.e., 10 mm, 15 mm and 20 mm. It is noteworthy that at 100 m range the achieved point cloud resolution is not good enough to allow the detection of the 10 mm gap between bricks, which is about a third of the spot diameter. For this reason, and under the reasonable hypothesis of a linear increase of the spot diameter with range, the assumption that the lower limit for detail detection is one third of the laser beam diameter is used later on. Figure 6 shows the qualitative comparison between results from real and synthetic point clouds. The '2' and '0' values are used to denote success or failure, respectively, according to the symbols used in Tables 2 and 3. Good agreement is found, but the synthetic model appears in some cases to be too optimistic, while results from real data seem to be reasonable. At long distance, the relation between D and r should be linear, as shown by Equations (1) and (4), and non-linearity effects are improbable. The discrepancies therefore are probably due to the choice of the Gaussian window for the model blurring. Further investigations will be carried out to clarify this fact for distances over 100 m, to evaluate the tendency of the observed behavior. In any case, the operational indication for a TLS user whose choices are driven by the numerical simulations only is that the set sampling step should be about 30% lower than the one directly suggested by these simulations, if the acquisition distance is over 100 m and the desired resolution is ~2D/3. Moreover, the experimental results show that a resolution better than D/3 cannot be reached even if the simulations show that such a result is possible.
Resolution Estimation and Optimization of the Sampling Step
The ss/TS ratios are simply analyzed by means of a linear fit (Figure 7), showing a clear linear trend over the studied ranges. In this way, a simple empirical formula can be obtained in order to estimate the maximum sampling step that must be selected to ensure the desired resolution, under the conditions in which such a resolution can be effectively achieved (TS ≥ D/3). The linear fit takes the form ss/TS = q_0 + q_1 r (Equation (8)), where r is expressed in m and q_0, q_1 are the fitted zero- and first-order parameters. With this formula, the sampling step can be chosen so as to allow the acquisition of all the desired geometric information by means of a TLS acquisition at range r with the minimum possible scanning time. The fact that the beam diameter D does not appear explicitly in Equation (8) should be noted. This formula is valid for the ILRIS-3D instrument only. However, the same experimental procedure can also be carried out for other instruments, leading to the estimation of the zero- and first-order parameters of the equations corresponding to Equation (8). The results obtained from the described experiments, in particular the empirical Equation (8), allow the creation of tabular values to easily define the most suitable ss for a specific application. For each range from 20 to 100 m and for three desired target separations, i.e., desired spatial resolutions, Table 5 reports the necessary maximum sampling step, expressed both in mm and in terms of the minimum sampling step of the ILRIS-3D instrument. The cases where the desired resolution cannot be reached, because it is similar to or lower than a third of the beam width, are also listed, while significant results are shown in bold. (Table 5 legend: the EMPIR column is the ss/TS ratio obtained with Equation (5) at defined ranges r from 20 m to 100 m; D is the spot diameter; ss is the minimum achievable spot spacing at the given r; TS is the target separation; max ss is the maximum spot spacing usable to detect the desired TS; and unit*ss is the corresponding number of spot spacing units in the case of the ILRIS-3D instrument. The distances for which the desired resolution can be obtained are highlighted in bold. A column with D/3 is also shown, because it is the lower limit for detection of small structures, see Section 4.1. Columns: r (m), D (mm), ss (mm), EMPIR, ...)
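A small Python sketch of this workflow is given below: it fits the linear trend of Equation (8) to measured ss/TS ratios and then returns the largest usable sampling step for a required target separation at a given range. The numerical ss/TS values and the example call are placeholders for illustration only; the actual fitted parameters are those reported with Equation (8).

```python
import numpy as np

# ss/TS ratios observed at the test ranges (placeholder values, not the published fit)
ranges = np.array([25.0, 50.0, 75.0, 100.0])      # m
ss_over_ts = np.array([0.7, 0.5, 0.4, 0.3])       # hypothetical measurements

slope, intercept = np.polyfit(ranges, ss_over_ts, 1)   # ss/TS = intercept + slope * r

def max_sampling_step(r, ts, spot_diameter):
    """Largest usable spot spacing (same units as ts) for target separation ts at range r."""
    if ts < spot_diameter / 3.0:                  # below ~D/3 the detail cannot be resolved
        return None
    return (intercept + slope * r) * ts

print(max_sampling_step(40.0, 10.0, 19.0))        # e.g. 10 mm detail at 40 m (illustrative)
```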
Brick Wall
To validate the results obtained with the artificial target, the brick wall of a building is observed from a distance of about 35 m. Figure 8 shows a wall portion. The brick length varies from 130 mm to 260 mm, the mean brick height is 55 mm and the mean visible width, with respect to the mean plane of the cement joint, is 5 mm (the SDs in height and width are 3 mm). The mean spacing between the bricks is 10 mm, with 3 mm SD. Using the empirical table (Table 5) and the reference range of about 40 m, the maximum spot spacing for an efficient acquisition of the wall shapes (10 mm) turns out to be no larger than 5.8 mm, while the minimum detectable size is 6.3 mm. For this reason, the distance between bricks (10 mm) should be measurable. The results confirm that an efficient scan is possible up to the 5.8 mm threshold (i.e., 4.9 mm at about 35 m). Moreover, the fact that no good results can be obtained if a sampling step significantly larger than the threshold is set should be noted, which confirms the validity of the proposed empirical method. Finally, image enhancement methods like those based on anisotropic diffusion could be used to further improve the results, leading to a better resolution (e.g., [13,14]). Nevertheless, such a step is out of the scope of the present paper, which instead focuses on a fast method for TLS survey optimization.
Historic Building
The artificial target and the brick wall are both composed of parallel elements, the simulated and real bricks respectively. In these cases, the performance of a TLS instrument can be easily evaluated, but in TLS practice such simple shapes are not commonly observed. For this reason, investigations on TLS performance are generally carried out by using both simple and complex artificial targets, for example the 3D Siemensstern (see e.g., [15]).
In order to check the validity of the proposed optimization method in the case of a complex surface, characterized by relatively fine details, the front of a historic building is observed from about 40 m. The typical size of the architectural details and busts that should be detected is 10 mm. The ss value, i.e., the maximum spatial sampling step that can be chosen in order to ensure the required resolution, is 5.8 mm in the case of parallel lines (Table 5).
Figures 9 and 10 show the results (the upper part of a window of the building facade and a zoom of it, respectively). The main results are: (i) no significant differences can be found if the sampling step is 2.4 mm (high oversampling case), 3.6 mm or 4.2 mm; (ii) a large part of the details can be recognized if the sampling step is 5.4 mm; (iii) if the sampling step is 7.8 mm, some details are lost. The general validity of the method is confirmed, even though it should be noted that the values obtained in the case of parallel lines are slightly optimistic. In order to reliably use Table 5 in the case of a general geometry, the ss values should be reduced by about 10%.
Discussion and Conclusions
Long range TLS instruments generally work at ranges from a few tens of meters to some hundreds of meters, and are characterized by a laser beam whose diameter increases linearly with distance. The chosen spot spacing is generally smaller than the spot diameter, providing a partial spot overlap. In the case of the Optech ILRIS-3D, for example, a ratio between spot diameter (D) and spot spacing (ss) equal to 10 can be set. However, the actual resolution of a point cloud depends on both D and ss. Numerical modeling and experiments were carried out in order to study the ability of the ILRIS-3D instrument to capture surface shape details for ranges lower than or equal to 100 m. The analyses described here were driven by the necessity of planning a series of TLS surveys in the city of Bologna, hence for architectural survey and cultural heritage purposes. To carry out these surveys in an efficient way, knowledge of the actual limitations imposed by ss and D on target detection was very important. An approach to sampling step optimization in a TLS survey, aimed at reliably estimating the maximum sampling step that can be used to guarantee the acquisition of a surface with the desired level of detail, was therefore developed. In other words, the aim was the computation of the optimal ratio between the sampling step and the desired resolution, since a larger sampling step carrying the same amount of information leads to a shorter acquisition time.
The results show that the resolution estimates provided by [4] seem to be quite pessimistic. For example, at 50 m distance their estimate for the ILRIS-3D instrument is 17.7 mm, but the experimental and numerical results obtained here show that a 10 mm interstice between two bricks can be easily acquired and modeled at such an acquisition distance. On the other hand, overly optimistic claims can also be found, as in the case of [6], which identifies the resolution with the sampling step thanks to correlated sampling, i.e., sampling with overlapped laser spots. If a 0.9 mm sampling step is used at 50 m acquisition distance, i.e., the minimum possible ILRIS-3D spot spacing at such a distance, this should lead, according to [6], to a point cloud with a similar resolution, but this is incorrect. The resolution is 10 mm instead, and it can be achieved by applying a 5.0 mm sampling step (ss/TS = 0.5). If the approach suggested by [6] is used, a much larger amount of data is acquired in a correspondingly longer time, without any gain in the actual amount of acquired information.
A very interesting result is that an empirical relation between the range and the ss/TS ratio exists, Equation (8). With this equation, or the corresponding Table 5, the user can set the optimal sampling step, ss, for which a required object separation, TS, is obtained. The equation holds for the ILRIS-3D instrument, but the proposed method can easily be applied to other instruments to obtain the corresponding empirical equations. The method has general validity, since the physical processes involved in the TLS-based acquisition are the same for each time-of-flight instrument, but a data fit based on the specific characteristics of the used instrument is necessary. The comparison between the experimental data and the simulation results shows that, starting from the beam width and the sampling step, and assuming realistic noise levels, a relatively accurate estimation of the spatial resolution as well as of the optimal ss/TS ratio can be carried out. In this way, a user can evaluate the required sampling step for the available instrument, or can choose between two or more instruments for a specific observation if they are available. The fact that the obtained empirical law is related to a simple geometry of the target should be noted. This is an ideal limit for surveying and therefore the corresponding ss/TS values are upper thresholds that cannot be exceeded. If an ss/TS value larger than this threshold is set, the required resolution is surely not achieved. Nevertheless, in the case of a complex surface, the sampling step should be about 10% smaller than the threshold, as shown in the case of the historic building observation.
The experiments and the numerical simulations deal with acquisition distances up to 100 m, and the results cannot be directly extrapolated to significantly longer distances. Other experiments, with artificial targets having different shapes and sizes, will be carried out in order to extend the results to distances up to one kilometer. Moreover, other experiments are planned to study the real relation between r and D for some commercially available long range TLS instruments.
The fact that the model resolution is beyond the scope of this paper should be noted. Although the final result of a TLS survey is sometimes a digital model, this study deals with instrumental performance, in particular spatial resolution, not with model resolution. If the modeling of sharp features like edges, thin tubes, wires and others is necessary, the obtained ss/TS values could be overestimated, because a larger number of points would be required.
Figure 1. Power distribution around the laser beam axis (left panel), where z indicates the propagation direction, and radial power profile (right panel). The x, y and radial coordinates are normalized to the beam width w(z), which corresponds to exp(−2) of the axial power value.
Figure 3. Targets modeled from simulated point clouds at 50 m range. Besides the depicted case, the simulations at the 25 m, 75 m and 100 m modeled distances have also been performed.
Figure 4. Artificial target for resolution testing.
Figure 5. Point clouds at 25 m range. The blue plane is parallel to the target and is used to highlight the detection of shapes and details.
Figure 6. Black and gray columns represent the experiment's success at a defined range (r) and spot spacing (ss) using synthetic and real point clouds, respectively. The red lines show complete failure.
Figure 7. Ratio between the maximum sampling step (ss, in m) for which a target separation (TS, in m) can be achieved, for 25 m, 50 m, 75 m and 100 m acquisition distance.
Figure 8. Four point clouds acquired at about 35 m distance with different spot spacings, on the wall portion of the house in front of the INGV office; the last one (7.6 mm) is not able to provide the target shapes. The difference in gray level is due to the different intensity of the reflected pulses and has no effect on the geometric data.
Figure 9. Details of a historic building observed from 40 m distance, with five different sampling steps.
Figure 10. Details of a historic building observed from 40 m distance, with five different sampling steps: detail of the upper part of a window.
(Table 1, footnote a: a version with 10-kHz pulse repetition frequency is available.)
Table 2. Qualitative results of the target detection at 25, 50, 75 and 100 m. Legend: ss: spatial sampling; 2: recognition of the two simulated bricks as distinct objects is easy; 1: borderline recognition case, i.e., recognition difficult; 0: recognition not possible.
Table 4. For each range r and the corresponding spot diameter D, the table shows the maximum sampling step (ss) that allows a correct separation between two targets whose spacing is TS, as well as the corresponding ss/TS ratio.
Table 5 .
The EMPIR column provides the TS ss / | 7,561.4 | 2011-01-14T00:00:00.000 | [
"Engineering",
"Physics"
] |
THE TASEP ON GALTON–WATSON TREES
We study the totally asymmetric simple exclusion process (TASEP) on trees where particles are generated at the root. Particles can only jump away from the root, and they jump from x to y at rate r_{x,y} provided y is empty. Starting from the all empty initial condition, we show that the distribution of the configuration at time t converges to an equilibrium. We study the current and give conditions on the transition rates such that the current is of linear order or such that there is zero current, i.e. the particles block each other. A key step, which is of independent interest, is to bound the first generation at which the particle trajectories of the first n particles decouple.
Introduction
The one-dimensional totally asymmetric simple exclusion process (TASEP) is among the most studied particle systems.It is a classical model which describes particle movements or traffic jams, studied by scientists from statistical mechanics, probability and combinatorics over several decades.The model is simple but shows a variety of phase transitions and phenomena such as the formation of shocks [17,26].It can be briefly described as follows.A set of indistinguishable particles are individually placed on distinct integer sites.Each site is endowed with a Poisson clock, independently of all others, which rings at rate 1.Should a particle occupy a given site, the particle attempts to jump one unit to the right when the site clock rings, and the jump is performed if and only if the target site is unoccupied, otherwise it is suppressed.This last condition is the exclusion rule.One-dimensional TASEP is only a particular example of an exclusion process, with a degenerate jump kernel on Z × Z given by p(x, x + 1) = 1 for all x ∈ Z.When different jump kernels are considered, exclusion processes can be defined on any graph, including higher dimensional lattices or trees and they have also been studied extensively; see for example [26].
In this article we define the TASEP on (directed) rooted trees. This way the particle system retains the total asymmetry of its one-dimensional analogue, while having more space to explore. Figure 1 shows a snapshot of the evolution. In our setup, particles jump only in the direction pointing away from the root under the exclusion rule and choose their target site according to some jump kernel that puts mass only on the children of their current location. In addition, we create particles at the root at a constant rate through a reservoir. Our underlying tree may be random, as long as it does not have leaves, so that particles cannot be eternally trapped. We will restrict our attention to the TASEP on supercritical Galton-Watson trees without leaves, including the special case of regular trees. Moreover, we will assume that the tree is initially empty.
Figure 1. Snapshot of a Tree-TASEP evolution. Particles enter at the root at rate λ and then move down the tree, i.e. their distance from the root can only grow. They attempt a jump when the Poisson clock of an edge in front of them rings, and the target site is the child associated to that edge. The jump is suppressed if the target site is occupied (e.g. look at particle 1 attempting to jump to the occupied child); otherwise the jump is performed (e.g. particle 2).
Ideas to investigate the TASEP on trees can already be found in the physics literature, as a natural way to describe transport on irregular structures such as blood, air or water circulation systems; see [3,31,44]. Exclusion processes on trees, but with no forbidden directions, were studied when the particles perform symmetric simple random walks; see [9,18].
One-dimensional TASEP provided an early connection between interacting particle systems and last passage percolation (LPP) on the two dimensional lattice, in an i.i.d.exponential environment.Viewing the particle system as queues in series, one can utilize Burke's theorem to find a family of invariant LPP models; see [1].These models can be exploited to obtain, for example, sharp variance bounds for last passage times.Burke-type theorems usually imply that the model in question is an integrable example of the KPZ universality class; see [13] for an overview and articles [2,11,14,32,42] for other lattice examples having Burke's property.In particular, the exponential corner growth model and the one-dimensional TASEP, which are linked through specific initial conditions and a height function representation, provably exhibit the correct scalings and Tracy-Widom weak limits associated with the KPZ class [22].Recently, it was shown that for a large class of initial conditions, TASEP converges to the KPZ fixed point [30].
Coupling the TASEP to a growth model can be done via the current (or aggregated current) of the particle system. The current states how many particles pass through a certain site (or generation) by a given time. Our interests are two-fold. On the one hand, we fix a time window and we want to know the current across a given generation by that time. The dual question is to fix a generation window and see how many particles occupy sites in there by a given time. We study both of these questions.
Finally, we investigate the law of the process in a finite region for large times to derive properties of the limiting equilibrium measures. An important observation is that once two particles are on distinct branches of the tree, they do not affect each other's transitions. We make use of this observation by locating where the particle trajectories disentangle and the particles start to move independently. Quantifying the location of disentanglement is a key step in our analysis. The proof utilizes combinatorial, geometric and probabilistic arguments.
In the next subsection we give a formal introduction to the TASEP on trees and present our results on the disentanglement, the current and the large time behaviour of the particles. Our main results are Theorem 1.5, Theorem 1.12, Theorem 4.1, Theorem 4.2, Lemma 6.3, and Lemma 6.6.
1.1.1. TASEP on trees. We will work with Galton-Watson trees; see [28, Chapter 4] for a general introduction. Let T = (V, E, o) be an infinite, locally finite, rooted tree with directed edges pointing away from the root o, and let T be the set of all such trees. Definition 1.1. Let µ be a distribution on N_0 = N ∪ {0} and set p_ℓ := µ(ℓ) for all ℓ ∈ N_0. A Galton-Watson tree with offspring distribution µ is a tree in T sampled as follows. We start with the root o and draw a number of children according to µ. Then for each child, we again draw a number of children according to µ, independently, and iterate. All edges in the tree are directed edges from parents to their respective children.
Note that the Galton-Watson branching process with offspring distribution µ induces a probability measure GW on T; see [28, Chapter 4]. This includes the special case of regular trees, when µ is a Dirac measure.
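The sampling procedure of Definition 1.1 is easy to implement; the Python sketch below draws a Galton-Watson tree truncated at a fixed depth, storing each vertex as the tuple of child indices along the path from the root. The chosen offspring distribution and depth are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gw_tree(offspring_probs, depth):
    """Sample a Galton-Watson tree truncated at a fixed depth.

    Returns a dict mapping each vertex (a tuple encoding the path from the root)
    to the list of its children; offspring_probs[i] = mu(i)."""
    children = {}
    frontier = [()]                         # the root is the empty tuple
    for _ in range(depth):
        nxt = []
        for v in frontier:
            k = rng.choice(len(offspring_probs), p=offspring_probs)
            children[v] = [v + (i,) for i in range(k)]
            nxt.extend(children[v])
        frontier = nxt
    for v in frontier:
        children[v] = []                    # truncate at the chosen depth
    return children

# e.g. mu with p_2 = p_3 = 1/2 (supercritical, no leaves), truncated at depth 6
tree = sample_gw_tree([0.0, 0.0, 0.5, 0.5], depth=6)
```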
Next, we fix a tree T = (V, E, o) ∈ T drawn according to GW. On this tree T, the totally asymmetric simple exclusion process (TASEP) (η_t)_{t≥0} with a reservoir of intensity λ > 0 and transition rates (r_{x,y})_{(x,y)∈E} is given as follows. A particle at site x tries to move to y at rate r_{x,y} provided that (x, y) ∈ E. However, this move is performed if and only if the target is a vacant site. Moreover, we place a particle at the root at rate λ whenever the root is empty. We will choose the transition rates (r_{x,y})_{(x,y)∈E} such that (η_t)_{t≥0} is a Feller process; see [27] for an introduction. More precisely, (η_t)_{t≥0} will be the Feller process on the state space {0, 1}^V with generator (1.2) for all cylinder functions f. Here, we use the standard notation η^{x,y} for the configuration obtained from η by swapping the values at x and y, i.e. η^{x,y}(z) = η(z) for z ∉ {x, y}, η^{x,y}(x) = η(y) and η^{x,y}(y) = η(x), and η^x for the configuration obtained by flipping the value at x, i.e. η^x(z) = η(z) for z ≠ x and η^x(x) = 1 − η(x), for a configuration η ∈ {0, 1}^V and sites x, y ∈ V.
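For orientation, a generator consistent with the dynamics and the swap/flip notation just described would read as follows; this is a hedged reconstruction of the type of expression meant by (1.2), not a verbatim quote of the paper's equation.

```latex
(\mathcal{L}f)(\eta) = \lambda\,\bigl(1-\eta(o)\bigr)\bigl[f(\eta^{o})-f(\eta)\bigr]
  + \sum_{(x,y)\in E} r_{x,y}\,\eta(x)\bigl(1-\eta(y)\bigr)\bigl[f(\eta^{x,y})-f(\eta)\bigr].
```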
The following statement gives a sufficient criterion on the transition rates under which the totally asymmetric simple exclusion process on T is indeed a Feller process. Proposition 1.2 (c.f. Proposition A.1 in [18]). Assume that for GW-almost every tree in T, the transition rates (r_{x,y}) are uniformly bounded from above. Then for GW-almost every tree T, the TASEP on T is a Feller process.
For a tree T ∈ T, let P_T denote the law of the TASEP on T. Furthermore, we set P = GW × P_T to be the semi-direct product where we first choose a tree T ∈ T according to GW and then run the TASEP on T. For x ∈ V, let |x| denote the shortest path distance to the root. We set (1.4) Z_ℓ := {x ∈ V : |x| = ℓ} and we will refer to Z_ℓ as the ℓ-th generation of the tree, for ℓ ∈ N_0.
Throughout this article, we will consider the d-regular tree for some d ≥ 3 with common rates per generation as an example. In this case, the offspring distribution µ is the Dirac measure on d − 1 and we let the rates (r_{x,y})_{(x,y)∈E} depend only on the generation of x, so that they are constant within each generation; see (1.6).
1.1.2. Conditions on rates. In the following, let the rates be bounded uniformly from above for GW-almost every Galton-Watson tree, and let the tree be initially empty. We start with an upper bound on the first generation at which the first n particles are located in different branches of the tree, and hence behave like independent random walks. Throughout this section, we will impose the following two conditions on the transition rates. Our first assumption on (r_{x,y}) is a non-degeneracy condition, which ensures that the particle system can in principle explore the whole tree.
Note that (UE) guarantees that the first n particles will eventually move on different subtrees of T and behave as independent random walks after a certain generation; see Proposition 2.1. To state our next assumption, we define r_ℓ^min and r_ℓ^max to be the minimal and maximal transition rates in generation ℓ, for all ℓ ∈ N_0. The following assumption guarantees that the rates do not decay too fast, which could cause certain branches of the tree to become blocked for the particles. At this point we already want to keep four quintessential examples in mind, which we will use to highlight the results and to show different regimes of behaviour. They are all on the d-regular tree, which can be viewed as a Galton-Watson tree with µ ∼ δ_{d−1}, and the rates are equal within each generation, making the tree endowed with the rates a spherically symmetric object. Equipped with these two assumptions, we will now introduce some notation to state our main results. In the following, (1.8) defines d_min := min{i : p_i > 0}, the minimal number of offspring, together with the mean number of offspring when conditioning on having at least two offspring. Given the quantity in (1.9), we define the integer sequence (D_n)_{n∈N} for all n ∈ N, where we use the convention inf ∅ = ∞. In words, (D_n)_{n∈N} denotes a sequence of generations along which all rates decay at least polynomially fast. The order of the underlying polynomial depends on the structure of the tree. In particular, for exponentially fast decaying rates, D_n will be of order log n. We are now ready to quantify the generation at which decoupling of the first n particles is guaranteed.
Theorem 1.5 (The disentanglement theorem). Consider the TASEP on a Galton-Watson tree and assume that the transition rates satisfy assumptions (UE) and (ED). Recall ε ∈ (0, 1] from (UE). Let δ > 0 be arbitrary, but fixed, and define M_n for all n ∈ N as follows.
(1) When lim sup (…) holds. Then P-almost surely, the trajectories of the first n particles decouple after generation M_n for n large enough, i.e. the first n particles visit distinct sites at level M_n. Remark 1.6. If in Theorem 1.5 neither (1) nor (2) is satisfied, one could either pass to subsequences which satisfy (1) or (2), or instead apply Proposition 2.1 from Section 2, which gives a coarse bound of order n on the generation M_n.
Example 2 (Disentanglement generations). For the four examples on the d-regular tree, and for δ > 0 arbitrarily small: (C) (Constant rates) r_{x,y} = 1. Here D_n = +∞ and so M_n follows from (1.12). (E) (Exponentially decaying, homogeneous rates) D_n is of logarithmic order, and so is the corresponding M_n. (S) (Slow rates) r_{x,y} = (d − 1)^{−|x|−1} g(|x|), where g(s) → 0 as s → ∞ at most exponentially fast. Because of g, we cannot solve for D_n explicitly, but D_n is still of logarithmic order as above. If for example g(s) = s^{−p} for some p > 0, we may bound M_n for n large enough. (P) (Polynomially decaying rates of power p) r_{x,y} = (|x| + 1)^{−p}, where p > 0. D_n is of polynomial order, so M_n is polynomial as well, for c_low > 0 arbitrarily small.
Remark 1.8.A similar result on the disentanglement of the particles holds when we replace the reservoir by any dynamic that generates almost surely a number of particles which grows linearly in time.This may for example be a TASEP on a half-line attached to the root and started from a Bernoulli-ρ-product measure for some ρ ∈ (0, 1).
1.1.3.Currents.Using Theorem 1.5, we now study the current for the TASEP on Galton-Watson trees.For any pair of sites x, y ∈ V , we say that y is below x (and write x ≤ y) if there exists a directed path in T connecting x to y.Let the starting configuration η 0 be either the empty configuration -as we will mostly assume in the following -or contain finitely many particles.Then we define the current (J x (t)) t≥0 across x ∈ V by (1. the sum of outgoing rates at site x.For m ≥ ℓ ≥ 0, we define and set R min ℓ := R min ℓ,ℓ as well as R max ℓ := R max ℓ,ℓ .Intuitively, R min ℓ,m and R max ℓ,m are the expected waiting times to pass from generation ℓ to m + 1 when choosing the slowest, respectively the fastest, rate in every generation.In the following, we state our results on the current in Theorems 1.9 and 1.11 only for exponentially decaying rates, i.e. if there exists some c up > 0 such that holds for all ℓ ∈ N. We provide more general statements in Section 4 and 5 from which Theorems 1.9 and 1.11 as well as Examples 3 and 4 follow.Fix now some integer sequence (ℓ n ) n∈N with ℓ n ≥ M n for all n ∈ N, where M n is taken from Theorem 1.5.For every n ∈ N, we define a time window [t low , t up ] in which we study the current through the ℓ th n level of the tree, and where we see a number of particles proportional to n passing through Z ℓn .
Theorem 1.9 (Time window for positive current under (1.18)). Suppose that (UE) and (ED) hold, and that the rates satisfy (1.18). Then for any δ ∈ (0, 1), there exists some c > 0 such that for all choices of t_low = t_low(n) and t_up = t_up(n) with …, the bound (1.20) holds.
Example 3 (Exponentially decaying, homogeneous rates (E)). On the d-regular tree for d ≥ 3 with homogeneous rates from (1.6), let ℓ_n = M_n = c_d log n, with M_n from (1.13) and c_d > 0. Then there exists a constant c > 0 such that (1.20) holds when we set … Note that the precision of the bounds on the current depends strongly on the transition rates and the structure of the tree; see Examples 7-11 in Section 5 for polynomially decaying rates. Theorem 1.9 will be deduced from the more general Theorem 4.1, while Example 3 follows directly from Theorem 1.9 and Example 2.
Next, we let t be a fixed time horizon and define an interval [L_low, L_up] of generations. Recall the generation M_n from Theorem 1.5 for the first n particles, and define n_t by (1.23). Note that for exponentially decaying rates, the quantity n_t is polynomial in t. For large times t, the next theorem gives a window of generations in which we expect to see the first n_t particles that have entered the tree.
Theorem 1.11 (Generation window for positive current under (1.18)). Suppose that (UE) and (ED) hold, and that the rates satisfy (1.18). Then there exists a constant c > 0 such that for all L_low = L_low(t) and L_up = L_up(t) with …, for t ≥ 0, we have P-almost surely … We obtain Theorem 1.11 as a special case of Theorem 4.2 in Section 4. For the d-regular tree with homogeneous rates, we have the following current estimate in a sharp window of generations; see Section 5.2 for the proof.
Example 4 (Exponentially decaying, homogeneous rates (E)). On the d-regular tree for d ≥ 3 with rates from (1.6), we have P-almost surely, for every α = α(t) going to 0, …, where we can choose L_low and L_up to be of the form …
1.1.4. Large time behaviour. We study the law of the TASEP on trees for large times. Again, we let the TASEP start from the all-empty initial configuration, distributed according to ν_0, where for ρ ∈ [0, 1], ν_ρ denotes the Bernoulli-ρ-product measure on {0, 1}^V. In contrast to the previous results, the geometry of the tree does not play an important role here; however, we need assumptions on the transition rates, and we assume that the rates are bounded uniformly from above. For x ∈ V(T), recall r_x from (1.16), and let the net flow q(x) through x be
q(x) := r_x − r_{x̄,x},
where x̄ is the unique parent of x. The rates satisfy a superflow rule if q(x) ≥ 0 holds for all x ∈ V(T) \ {o}. In particular, when q(x) = 0 for all x ≠ o, we say that the rates satisfy a flow rule with a flow of strength q(o); see Figure 2 for an example of a flow with homogeneous rates on the binary tree. The rates satisfy a subflow rule if (1.27) holds. For the examples on the d-regular tree:
(E) (Exponentially decaying, homogeneous rates) r_{x,y} = (d − 1)^{−|x|−1}. These rates satisfy a flow rule.
(S) (Slow rates) r_{x,y} = (d − 1)^{−|x|−1} g(|x|), where g(s) → 0 as s → ∞. The condition on g implies that the rates satisfy a subflow rule.
(P) (Polynomially decaying rates of power p) r_{x,y} = (|x| + 1)^{−p}, where p > 0. These rates satisfy a superflow rule if log 2 ≤ p^{−1} log(d − 1).
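To illustrate the flow conditions, here is a small Python check (our own illustration). It uses the net-flow expression q(x) = r_x − r_{x̄,x} reconstructed above, applied level by level on the d-regular tree, where spherical symmetry makes q depend only on the generation; the (P) case with d = 3, p = 1/2 satisfies log 2 ≤ p^{−1} log(d − 1) and is correctly classified as a superflow.

```python
def net_flow_profile(rate_at_level, d, levels=6):
    """q at a level-l vertex of the d-regular tree: (d-1) outgoing edges of
    rate rate_at_level(l) minus one incoming edge of rate rate_at_level(l-1)."""
    return [(d - 1) * rate_at_level(l) - rate_at_level(l - 1) for l in range(1, levels)]

d, p = 3, 0.5
for name, r in [("(C) constant",    lambda l: 1.0),
                ("(E) exponential", lambda l: (d - 1) ** (-l - 1)),
                ("(P) polynomial",  lambda l: (l + 1) ** (-p))]:
    q = net_flow_profile(r, d)
    kind = ("flow" if all(abs(v) < 1e-12 for v in q) else
            "superflow" if all(v >= 0 for v in q) else "subflow (or neither)")
    print(name, "->", kind)
```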
We endow the set of probability measures on {0, 1}^V with the topology of weak convergence.
Theorem 1.12 (Fan and shock behaviour). Let (S_t)_{t≥0} be the semigroup of the TASEP (η_t)_{t≥0} on a fixed tree T ∈ T, where particles are generated at the root at rate λ > 0.
We assume that T is infinite and without leaves. For all choices of (r_{x,y}), there exists a stationary measure π_λ of (η_t)_{t≥0} with … Then, under a superflow rule, π_λ ≠ ν_1 and the current (J_o(t))_{t≥0} through the root is P_T-almost surely linear in t, … Moreover, if in addition λ < r_o as well as …, then the system exhibits a fan behaviour, i.e. we have … Under a subflow rule, the system exhibits a shock behaviour, i.e. π_λ = ν_1 and … We direct the reader to Section 6 for the proof of Theorem 1.12, a more detailed discussion, and further results on the TASEP on trees in equilibrium.
1.2. One-dimensional TASEP and parallels with TASEP on trees. A great strength of various particle systems is their explicit hydrodynamic limits, connecting macroscopic and microscopic behaviour; see, for example, [16] for a beautiful survey on TASEP and references therein. Hydrodynamic limits for the homogeneous TASEP, in the sense of a rigorous connection to the Burgers equation, were originally established by Rost in the rarefaction-fan case [34]. This result was extended in various ways in [36, 37, 38, 39]. The particle density was shown to satisfy a scalar conservation law with an explicit flux function, which turns out to be the convex dual of the limiting level curve of the last passage limiting shape. The density is the almost sure derivative of the aggregated current process, which, when appropriately scaled, satisfies a Hamilton-Jacobi-Bellman equation.
A key endeavour is to understand the equilibrium measures. For the homogeneous one-dimensional TASEP, the extremal invariant measures are Bernoulli-product measures and Dirac measures on blocking states; see [6, 23, 25]. In Lemma 6.3 we verify for the TASEP on trees that a flow rule implies the existence of non-trivial invariant product measures.
When the jump rates are deterministic but not equal, we have a spatially inhomogeneous TASEP. In this case, even in dimension one, less is known, and the results usually impose conditions on the rates. For example, consider the one-dimensional particle system where we alter the rate of the Poisson process at the origin only: any particle at 0 jumps to site 1 at rate r < 1, while all other jump rates remain 1. The question is whether this local microscopic change affects the macroscopic current for all values of r < 1. This is known as the 'slow bond problem', introduced in [20, 21] on a finite segment with open boundaries. On Z, progress was made in [40], where a coupling with the corner growth model in LPP showed that the current is affected for r less than roughly 0.425, and a hydrodynamic limit for the particle density was proven. A positive answer to the question appeared in [4].
When the inhomogeneities are not local but macroscopic, several articles show hydrodynamic limits for TASEP (or study the equivalent inhomogeneous LPP model) with increasing degrees of complexity for admissible deterministic rates coming from a macroscopic speed function [8, 10, 19, 33]. The commonality between them is that the rates need to behave in a nice way, so that the current of TASEP at position ⌊nx⌋ at time ⌊nt⌋ remains linear in n. In this article, we use a coupling with the corner growth model to bound the current; see Section 3.3. We only make Assumptions (UE) and (ED) on admissible rates. As such, evidenced by Theorems 4.1 and 4.2, we have different regimes for the order of the current up to a given time, where the current is not necessarily linear in time. Our results, such as the sharp order of magnitude for the time window, assume more on the decay of the jump rates across the tree; see Section 5 for explicit calculations on the d-regular tree.
Depending on the initial particle configuration, the macroscopic evolution of the particle density in one-dimensional TASEP may exhibit a shock or a rarefaction fan, as one can see from the limiting partial differential equation. In a simple two-phase example, even starting from macroscopically constant initial conditions, one can see the simultaneous development of shocks and fans, depending on the common value of the density [19]. In this article, we can still describe the shock or fan behaviour of the limiting particle distribution (see Theorem 1.12), even without a hydrodynamic limit. In particular, we show that this behaviour in fact occurs in the limit, starting from the all-empty initial condition. One tool we use is to approximate the TASEP by a finite system with open boundaries; see Section 6. Stationary measures for the one-dimensional TASEP with reservoirs and deaths of particles were studied using elaborate tools such as the matrix product ansatz (see [5]) and combinatorial representations such as staircase tableaux and Catalan paths [12, 29, 35].
1.3. Outline of the paper. In the remainder of the paper, we give proofs for the results presented in Section 1.1. We start in Section 2 with the proof of the disentanglement theorem. The proof combines combinatorial arguments, geometric properties of Galton-Watson trees, and large deviation estimates on the particle movements. In Section 3, we introduce three couplings for the TASEP on trees, which will be helpful in the proofs of the remaining theorems. These include the canonical coupling for different initial configurations, a coupling to independent random walks, and a comparison to a slowed-down TASEP on the tree, which can be studied using inhomogeneous last passage percolation. These tools are then applied in Section 4 to prove Theorems 4.1 and 4.2 on the current, which in turn yield Theorems 1.9 and 1.11. We show in Section 5 that the current bounds can be sharpened for specific rates on the d-regular tree. In Section 6, we turn our focus to the large time behaviour of the TASEP and prove Theorem 1.12; this uses ideas from [24] as well as the canonical coupling. We conclude with an outlook on open problems.
The disentanglement theorem
The proof of Theorem 1.5 is divided into four parts. First, we give an a priori bound on the level at which the particles disentangle, requiring assumption (UE). We then study geometric properties of Galton-Watson trees. Afterwards, we estimate the time it takes n particles to enter the tree; this requires only assumption (ED). In a last step, these ideas are combined to prove Theorem 1.5.
2.1. An a priori bound on the disentanglement. In this section, we give an a priori bound on the disentanglement of the trajectories within the exclusion process. This bound relies on a purely combinatorial argument, in which we count the number of times a particle performing TASEP has a chance to disentangle from a particle ahead of it. Recall that we start from the configuration where all sites are empty. For a given infinite, locally finite rooted tree T and x, y ∈ V(T), recall that we denote by [x, y] the set of vertices on the shortest path in T connecting x and y. We set
(2.1) F(o, x) := |{z ∈ [o, x] \ {x} : deg(z) ≥ 3}|,
the number of vertices in [o, x] \ {x} with degree at least 3. For any fixed tree (T, o) ∈ T, let d_T be the smallest possible number of offspring a site can have. Note that when T is a Galton-Watson tree, d_T = d_min holds GW-almost surely, with d_min from (1.8). For all i, m ∈ N, let z^m_i ∈ Z_m denote the unique site at generation m which is visited by the i-th particle entering the tree.
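The quantity F(o, x) is easy to compute on a finite tree; the following Python sketch (our own illustration, with hypothetical vertex labels) counts the branching vertices on a given root-to-x path, where a vertex with at least two children has degree at least 3 once the parent edge is counted.

```python
def F(tree, path):
    """Number of vertices on the root-to-x path (x itself excluded) with at least
    two children, i.e. degree >= 3 including the parent edge; `tree` maps a
    vertex to the list of its children."""
    return sum(1 for v in path[:-1] if len(tree[v]) >= 2)

# Toy tree: the root 'o' has a unary segment before the tree branches.
tree = {'o': ['a'], 'a': ['b', 'c'], 'b': ['d', 'e'], 'c': [], 'd': [], 'e': []}
print(F(tree, ['o', 'a', 'b', 'd']))   # -> 2: the vertices 'a' and 'b' branch
```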
Proposition 2.1. For (T, o) ∈ T, consider the TASEP on T where n particles are generated at the root according to an arbitrary rule. Assume that (UE) holds for some ε > 0. Then …, where for all m, n ∈ N, we set … We will use Proposition 2.1 to control, in a summable way, the probability that two particles have the same exit point at Z_m, provided that F_n(m) ≥ c log n for some c = c(ε) > 0. Note that this bound can in general be quite rough: for example, on the regular tree with rates as in (1.6), we expect n particles to disentangle already after order log n generations.
Proof of Proposition 2.1. Consider the j-th particle, for some j ∈ [n] := {1, …, n}, which enters the tree. We show that the probability that particle j exits from x ∈ Z_m satisfies (2.4) for all j ∈ [n]. Note that if particle j exits through x, it must follow the unique path [o, x]; see also Figure 3. Our goal is to find a generation m large enough to guarantee that, on any ray, the particle has enough opportunities to escape this ray.
Figure 3. Visualization of the key idea in the proof of the a priori bound on the disentanglement. When (UE) holds, the probability that particle i follows the blue trajectory of particle j is at most …
For d_T ≥ 3, we argue that any particle encounters at least F_n(m) many locations on [o, x] which have at least two holes in front when the particle arrives. To see this, suppose that particle j encounters at least n(d_T − 1)^{−1} generations among the first n generations with no two empty sites in front of it upon arrival. In other words, particle j sees at least d_T − 1 particles directly in front of its current position when reaching such a generation. Since particle j may follow the trajectory of at most one of these particles, this implies that particle j encounters at least (d_T − 1) · n/(d_T − 1) = n different particles in total before reaching level n. This is a contradiction, as j ≤ n and the tree was initially empty.
For d_T ∈ {1, 2}, we apply a similar argument. We need to find m large enough so that every possible trajectory has min_{x∈Z_m} F(o, x) ≥ n locations where, when a particle arrives, there are at least two children and no particle ahead. By definition, every possible trajectory has at least F(o, x) ≥ F_n(m) + n sites with at least two children. Observe that in order to follow the trajectory [o, x] for some x ∈ Z_m, the first accepted transition at every stage must be along [o, x]. But there can be at most n sites at which the first attempt was not to follow [o, x] and this attempt was suppressed. This is because, in order to block an attempt to leave [o, x], the blocking particle cannot itself be on [o, x], and can thus block only a single attempt of particle j to jump. Hence, there must be at least F_n(m) sites of degree at least 3 accepting the first attempted transition. Now we prove (2.4). Suppose that particle j is at one of the F_n(m) many locations, say y ∈ Z_ℓ, on [o, x] where two different children z_1, z_2 of y are vacant. At most one of them belongs to [o, x], say z_1. Using (UE), the probability of selecting z_1 is bounded from above by (1 + ε)^{−1}. To stay on [o, x], we must pick the unique site in [o, x] at least F_n(m) many times, independently of the past trajectory. This shows (2.4).
Figure 4. A core (left-hand side) and one of its corresponding Galton-Watson trees (right-hand side). We obtain the Galton-Watson tree from the core (respectively, the core from the Galton-Watson tree) by adding (removing) the smaller vertices depicted in gray.
Since particle i is not
influenced by the motion of particle j for all j > i, we conclude …, applying (2.4) for the last inequality.
2.2. Geometric properties of the Galton-Watson tree. Next, we give an estimate on the number F(o, x), defined in (2.1), which will be essential in the proof of Theorem 1.5 when there is a positive probability of having exactly one offspring.
We define the core of a Galton-Watson tree to be the Galton-Watson tree obtained by conditioning the offspring distribution (p_k)_{k∈N} on producing at least 2 offspring. Intuitively, we obtain the core from a given tree by collapsing all linear segments into single edges. Conversely, given a core T̃ distributed according to the conditioned offspring distribution, we can recover a Galton-Watson tree with the original offspring distribution (p_k)_{k∈N} by extending every edge ẽ into a line segment of size G_ẽ, where the (G_ẽ)_{ẽ∈E(T̃)} are i.i.d. Geometric-(1 − p_1)-distributed random variables supported on N_0. Moreover, we attach a line segment [o, õ] of Geometric-(1 − p_1) size to the root õ of T̃ and declare o to be the new root of the tree.
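The reverse construction is short to implement; the following Python sketch (our own illustration, with a hypothetical toy core) inserts a Geometric-(1 − p_1) number of unary vertices on each core edge, and one geometric line segment above the core root, exactly as described in the text.

```python
import random

def geom0(p, rng):
    """Geometric-(p) random variable on {0, 1, 2, ...}: failures before success."""
    g = 0
    while rng.random() >= p:
        g += 1
    return g

def expand_core(core, core_root, p1, rng):
    """Stretch each core edge (and the edge above the core root) into a line
    segment with G ~ Geom(1 - p1) intermediate unary vertices."""
    tree, cnt = {}, [0]
    def fresh():
        cnt[0] += 1
        return ("seg", cnt[0])          # fresh labels for the inserted vertices
    def path_attach(parent, child):
        cur = parent
        for _ in range(geom0(1 - p1, rng)):
            v = fresh()
            tree.setdefault(cur, []).append(v)
            cur = v
        tree.setdefault(cur, []).append(child)
    root = ("o",)                        # new root o, joined to the core root
    path_attach(root, core_root)
    stack = [core_root]
    while stack:
        v = stack.pop()
        for c in core.get(v, []):
            path_attach(v, c)
            stack.append(c)
    return root, tree

core = {"r": ["a", "b"], "a": ["c", "d"], "b": [], "c": [], "d": []}  # toy core
root, t = expand_core(core, "r", p1=0.5, rng=random.Random(1))
```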
An illustration of this procedure is given in Figure 4. We now estimate how much the tree is stretched when extending the core, with the conditioned offspring distribution, to a Galton-Watson tree with offspring distribution (p_k)_{k∈N}.
Lemma 2.2. Let (H_n)_{n∈N} be an increasing sequence tending to infinity and assume that p_1 ∈ (0, 1). Recall m from (1.8). Set M_n := ⌈αH_n⌉ for all n ∈ N, where …
Proof. Note that all sites in the core T̃ other than the root have degree at least 3. Hence, it suffices to bound the probability that all sites at generation H_n of T̃ are mapped to a generation less than or equal to M_n in the corresponding Galton-Watson tree. Using Markov's inequality, we see that … Note that each site x at level H_n in T̃ is mapped to a generation given as the sum of H_n many independent Geometric-(1 − p_1)-distributed random variables (G_i)_{i∈[H_n]}. Using Chebyshev's inequality, we see that …, when we set t = log((1 + p_1)/(2p_1)). Fix some site x ∈ Z_{M_n}. Now condition on the number of sites at level H_n in T̃ and apply (2.7), together with a union bound, to see that …, using (2.8) and the definition of M_n for the last two steps.
2.3. Entering times of the particles in the tree. We now define an inverse of the current. For any n ∈ N and m ∈ N_0, we set
(2.9) τ^n_m := inf{t ≥ 0 : J_m(t) ≥ n}.
In words, τ^n_m is the time at which the aggregated current across generation m reaches n, or equivalently, at which precisely n particles have reached Z_m. Hence, the following two events are equal: {τ^n_m ≤ t} = {J_m(t) ≥ n}. The main goal of this section is to bound the first time τ^n_0 at which n particles have entered the tree. Note that this random time τ^n_0 depends on the underlying tree as well as on the evolution of the exclusion process.
Proposition 2.3. Fix a number of particles n. Consider a supercritical Galton-Watson tree without extinction and assume that (ED) holds for some constant c_low. Recall c_o from (1.9). Then there exists a constant c > 0 such that …, for all n sufficiently large.
In order to prove Proposition 2.3, we require some setup. Let Z^{(x)}_m be the m-th generation of the subtree T_x rooted at x. For a tree (T, o) ∈ T and a site x, we say that the exclusion process on T has depth of traffic D_x(t) ∈ N_0 at site x at time t, where
(2.11) D_x(t) = inf{m ≥ 0 : η_t(z) = 0 for some z ∈ Z^{(x)}_m}.
In words, D_x(t) is the distance to the first generation ahead of x which contains an empty site. Note that for any fixed x, the process (D_x(t))_{t≥0} is a non-negative integer-valued process. It takes the value 0 when η_t(x) = 0 and is positive when η_t(x) = 1. Note that (D_x(t))_{t≥0} can only decrease in steps of one, except at 0, from where it jumps to some positive integer.
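The definition (2.11) is a breadth-first search for the nearest empty generation; the following Python sketch (our own illustration, on a hypothetical finite toy tree) computes it directly from a configuration.

```python
def depth_of_traffic(tree, occupied, x):
    """D_x: distance from x to the first generation of the subtree T_x that
    contains an empty site (0 if x itself is empty); `tree` maps vertex to
    children, `occupied` is the set of occupied vertices."""
    level, m = [x], 0
    while level:
        if any(v not in occupied for v in level):
            return m
        level = [c for v in level for c in tree.get(v, [])]
        m += 1
    return m   # finite toy subtree exhausted while fully occupied

tree = {'o': ['a', 'b'], 'a': ['c', 'd'], 'b': [], 'c': [], 'd': []}
print(depth_of_traffic(tree, {'o', 'a', 'b'}, 'o'))   # -> 2: first hole at level 2
```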
The following lemma bounds the depth of traffic at the root of a Galton-Watson tree.
Lemma 2.4. Let H_n = log_2 n and recall M_n and m from Lemma 2.2. Then (2.12) … In words: with probability at least 1 − 2n^{−2}, the depth at the root is smaller than M_n whenever no more than n particles have entered the tree.
Proof of Lemma 2.4. Observe that the root can only have depth ℓ if all vertices up to level ℓ are occupied, and note that there are at most n particles in the tree until time τ^n_0. Lemma 2.2 guarantees, with our choice of H_n, that with probability at least 1 − 2n^{−2}, the tree up to generation M_n contains more than n sites. Hence, there is at least one empty site up to generation M_n, by the definition of τ^n_0.
Next, we bound the renewal times of the process (D_o(t))_{t≥0}. For t ≥ 0 and x ∈ V, we define the first availability time ψ_x(t) after time t to be … This is the time it takes until x is empty, observing the process from time t onwards.
Lemma 2.5. Fix a tree (T, o) ∈ T with root o, and assume that (ED) holds for some c_low, κ > 0. Moreover, let t = t(ℓ) ≥ 0 satisfy 0 ≤ D_o(t) ≤ ℓ. Then for all c > 0, (2.13) …
Proof of Lemma 2.5. Since D_o(t) ≤ ℓ, there exists a site y with |y| ≤ ℓ + 1 and η_t(y) = 0 such that the ray connecting y to o is fully occupied by particles. Thus, ψ_o(t) is bounded by the time a hole at level ℓ + 1 needs to travel to o. By (ED) and Cramér's theorem, this yields an upper bound on the left-hand side of (2.13); see Theorem 2.2.3 in [15].
Figure 5. Visualization of the TASEP on trees and the different generations D_n and M_n involved in the proof, for n = 4. The particles are drawn in red. Whether the first 4 particles are disentangled at generation M_n = 4 depends on the next successful jump of the particle at generation 3: they disentangle if the particle jumps to the location indicated by the arrow.
Proof of Proposition 2.3. Recall that a particle can enter the tree if and only if the root is empty, and that particles are created at the root at rate λ. Thus (2.14) … Together with Lemma 2.4 and Lemma 2.5 for ℓ = c_o log n, we obtain that … holds for some c > 0, with GW-probability at least 1 − 2n^{−2}, for all n sufficiently large.
2.4. Proof of the disentanglement theorem. For the proof of Theorem 1.5, we use the following strategy. We wait until all n particles have entered the tree. We then consider a level of the tree not yet reached by any particle. For every vertex at that level, taken as a starting point, we apply the a priori bound on the disentanglement from Proposition 2.1; see also Figure 5.
Starting from the empty initial configuration, we study the maximal generation reached by time τ^n_0. The next lemma estimates the degrees of the vertices along the possible trajectories of the particles.
Lemma 2.6. Let (L_n)_{n∈N} be an integer sequence with L_n ≥ c log n for some c > 0 and all n ∈ N. Then we can find a sequence (δ_n)_{n∈N} with δ_n → 0 as n → ∞ such that the following statement holds with GW-probability at least 1 − n^{−2} for all n large enough: for every site x ∈ Z_{⌈L_n(1+δ_n)⌉}, there exists a site y ≤ x, i.e. y on the directed path from the root to x, with |y| ≥ L_n and deg(y) ≤ log log n.
Proof. It suffices to consider the case where the offspring distribution has infinite support. Using Markov's inequality, we see that with GW-probability at least 1 − (2n)^{−2}, the Galton-Watson tree contains at most (2n)^2 m^{L_n} sites at generation L_n. We denote by (T_i)_{i∈[|Z_{L_n}|]} the trees with roots o_i attached to these sites. We claim that with GW-probability at least 1 − (2n)^{−4} m^{−L_n}, every ray [o_i, x], for x at level ⌈δ_n L_n⌉ of T_i, contains at least one vertex which has at most log log n neighbours. To see this, we use a comparison to a different offspring distribution. Recall that the mean of the offspring distribution is m < ∞, and that p_i is the probability of having precisely i offspring. We define another offspring distribution with weights (p̃_i)_{i∈{0,1,…}}, where p̃_i := … Let m̃_n denote the mean of the distribution given by (p̃_i)_{i∈{0,1,…}}, and note that m̃_n → 0 as n → ∞. Observe that the probability that all rays up to generation ⌈δ_n L_n⌉ contain at least one vertex of degree at most log log n equals the probability that the tree with offspring distribution drawn according to (p̃_i)_{i∈{0,1,…}} dies out by generation ⌈δ_n L_n⌉. Using a standard estimate for Galton-Watson trees, this probability is at least 1 − m̃_n^{⌈δ_n L_n⌉}. Set δ_n according to (2.15), and note that δ_n → 0 as n → ∞. From this, and L_n ≥ c log n for some c > 0, it follows that m̃_n^{⌈δ_n L_n⌉} ≤ (2n)^{−4} m^{−L_n} for all n large enough. We conclude with a union bound over all trees T_i at level L_n.
Next, for all t ≥ 0, we let S(t) denote the maximal generation reached by time t, when starting from the configuration where all sites are empty.
Lemma 2.7. …
Proof. By Lemma 2.6, with GW-probability at least 1 − n^{−2}, there exists some generation ℓ ≥ D_n such that, for every i ∈ [n], the i-th particle has at most log log n neighbours at this generation. Let ζ_i be the holding time at this generation for particle i, and note that ζ_i stochastically dominates ω_i ∼ Exp(r^max_{D_n} log log n). Set t = cn^{c_low c_o + 1} log n, for c > 0 sufficiently large, such that for all n large enough …, using that … is monotone increasing for the first inequality, and Proposition 2.3 for the second step. For the same choice of t, and using the definitions of D_n and S(t), … holds for some constant c_1 > 0 and all n sufficiently large, with GW-probability at least 1 − n^{−2}. An integral test shows that all error terms in the above estimates are summable in n, and we obtain (2.16) by the Borel-Cantelli lemma.
Proof of Theorem 1.5. Note that on the event of Lemma 2.7, P-almost surely no ray contains more than D_n(1 + δ_n) of the first n particles, for all n sufficiently large. We use this observation to apply the a priori bound from Proposition 2.1 to all trees (T_i) rooted at generation D_n(1 + δ_n) which eventually contain at least one of the first n particles. In the following, we assume that D_n < n; for D_n ≥ n, we directly apply Proposition 2.1 to the original tree T with n particles.
We start with the case d_min ≥ 2. Let δ ∈ (0, 1) be fixed and set (2.17) M̃_n := … Moreover, we fix a tree T_i rooted at generation D_n(1 + δ_n) which eventually contains a particle. We claim that, by Proposition 2.1, all of the at most D_n(1 + δ_n) particles entering T_i are disentangled after M̃_n generations in T_i, with P_T-probability at least 1 − cn^{−2−δ} for some constant c > 0. To see this, recall (2.3) and observe that … We then apply (2.2) to obtain the claim. Note that this holds for GW-almost every tree (T, o) ∈ T. Moreover, the events that the particles disentangle on the trees (T_i) are mutually independent, and we conclude using a union bound over the trees (T_i).
Now suppose that d_min = 1. Recall c_o from (1.9) and that δ ∈ (0, 1) is fixed. Note that δ_n ≤ δ holds for all n sufficiently large, and set … Observe that (2 + δ) log_{1+ε} n ≥ log_2 n for all n, using the definition of ε in (UE). Let …
Couplings
In this section, we discuss three ways of comparing the TASEP on trees to related processes via couplings. We start with the canonical coupling, which allows us to compare the TASEP on trees for different initial configurations. Next, we introduce a comparison to independent random walks. This coupling is used to prove a lower bound on the time window in Theorem 4.1 and an upper bound on the window of generations in Theorem 4.2. Our third tool is a slowed-down TASEP, which we study using an inhomogeneous LPP model; it is used to give an upper bound on the time window in Theorem 4.1 and a lower bound on the window of generations in Theorem 4.2. In all cases, we fix a tree T = (V, E, o) ∈ T and a family of rates (r_{x,y})_{(x,y)∈E} such that the TASEP is a Feller process.
For every edge e = (x, y) ∈ E, consider independent rate-r_{x,y} Poisson clocks. Whenever a clock rings at time t for an edge (x, y), we attempt in both processes to move a particle from x to y, provided that the move is allowed in the respective process, i.e. x is occupied and y is empty. We place a rate-λ_1 Poisson clock at the root; whenever this clock rings, we try to place a particle at the root in both processes. Furthermore, if λ_1 < λ_2, we place an additional independent rate-(λ_2 − λ_1) Poisson clock at the root; whenever this clock rings, we try to place a particle at the root in (η^2_t)_{t≥0} only.
Let ⪯ denote the component-wise partial order on {0, 1}^V, and denote by P the law of the canonical coupling.
Lemma 3.1. Let (η^1_t)_{t≥0} and (η^2_t)_{t≥0} be two TASEPs on trees within the above canonical coupling, and suppose that λ_1 ≤ λ_2. Then … Similarly, we can define the canonical coupling for the TASEP on trees when we allow reservoirs of intensities λ^v_1 and λ^v_2 at all sites v ∈ V, respectively. The canonical coupling preserves the partial order provided that λ^v_1 ≤ λ^v_2 holds for all sites v ∈ V.
3.2. A comparison with independent random walks. We start by comparing the TASEP (η_t)_{t≥0} on T to independent biased random walks on T. Assume that the TASEP is started from some state η which is, in contrast to our previous assumptions, not necessarily the all-empty configuration. We enumerate the particles according to an arbitrary rule and denote by z^i_t the position of the i-th particle at time t ≥ 0. We define the waiting time σ^{(i)}_ℓ at level ℓ, for all i ∈ Z and ℓ ∈ N, to be the time particle i spends at generation ℓ once it sees at least one empty site. Recall R^max_ℓ from (1.17) and, with a slight abuse of notation, let ⪯ also denote stochastic domination for random variables. Then … holds for all i ∈ [n] and ℓ ≥ 0, where the ω^{(i)}_ℓ are independent Exponential-1-distributed random variables. We now define the independent random walks (η̃_t)_{t≥0} started from η.
Each particle at level ℓ waits according to an independent rate-(R^max_ℓ)^{−1} Poisson clock and, when the clock rings, jumps to a neighbour at generation ℓ + 1 chosen uniformly at random. Whenever a particle is created in (η_t)_{t≥0}, we create a particle in (η̃_t)_{t≥0} as well.
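Since the walks are independent, their level-crossing times are just sums of exponentials. The following Python sketch (our own illustration) estimates, by Monte Carlo, the probability that one such walk crosses the first levels within a given time; by the domination in Lemma 3.3 below, the number of walks past a level bounds the TASEP current from above. The normalisation R^max_ℓ = (d − 1)^{ℓ+1} for the rates (E) relies on our reconstructed reading of (1.17) and is an assumption.

```python
import random

def walk_passage_time(R_max, levels, rng):
    """Time for one independent walk to cross generations 0..levels-1; the
    holding time at level l is Exponential with mean R_max(l)."""
    return sum(rng.expovariate(1.0 / R_max(l)) for l in range(levels))

d, rng = 3, random.Random(0)
R_max = lambda l: (d - 1) ** (l + 1)   # assumed form of R^max_l for rates (E)
samples = [walk_passage_time(R_max, 10, rng) for _ in range(1000)]
print(sum(s <= 2 * (d - 1) ** 10 for s in samples) / 1000)
```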
Note that under these dynamics, a site can be occupied by several particles at a time. Let z̃^i_t denote the position of the i-th particle in (η̃_t)_{t≥0} at time t ≥ 0, and denote by (J̃_ℓ(t))_{t≥0} the aggregated current of (η̃_t)_{t≥0} at generation ℓ ∈ N_0. The following lemma is immediate from (3.2) and the construction of the random walks (η̃_t)_{t≥0}.
Lemma 3.3. There exists a coupling P between the TASEP (η_t)_{t≥0} on T and the corresponding independent random walks (η̃_t)_{t≥0} such that … In particular, J_ℓ(t) ≤ J̃_ℓ(t) holds for all ℓ ∈ N_0 and t ≥ 0.
Using the comparison to independent random walks, we can bound the current via estimates on weighted sums of exponential random variables. We will frequently use the following estimates.
Lemma 3.4. For ℓ ∈ N and c_0, c_1, …, c_ℓ, t ≥ 0, set S := Σ_{i=0}^{ℓ} c_i^{−1} as well as c := min_{i∈{0,1,…,ℓ}} c_i. Let (ω_i)_{i∈{0,1,…,ℓ}} be independent Exponential-1-distributed random variables. Then for any δ ∈ (0, 1), …
Proof. By Chebyshev's inequality, we see that … holds. Since the logarithm is increasing, we can rearrange the sums to obtain the second upper bound. For the first upper bound, again apply Chebyshev's inequality to get … Using concavity of the logarithm, we obtain for all i ∈ {0, 1, …, ℓ} and all x > −1 that … Taking x = δ in (3.5), together with (3.4), yields the first upper bound. For the lower bound, we again use Chebyshev's inequality and (3.5) with x = −δ to get … This finishes the proof of the lemma.
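The exact constants in the displayed bounds are elided in the text, so we only illustrate the quantity being bounded: the following Python sketch (our own illustration) estimates, by Monte Carlo, the upper tail of the weighted sum Σ_i ω_i/c_i around its mean S, here for geometric weights of the kind produced by the rates (E).

```python
import random

def tail_prob(cs, delta, trials=100_000, seed=0):
    """Monte Carlo estimate of P( sum_i omega_i / c_i >= (1 + delta) * S ),
    with S = sum_i 1/c_i and omega_i ~ Exp(1)."""
    rng = random.Random(seed)
    S = sum(1.0 / c for c in cs)
    hits = sum(sum(rng.expovariate(1.0) / c for c in cs) >= (1 + delta) * S
               for _ in range(trials))
    return hits / trials

cs = [2.0 ** (i + 1) for i in range(12)]   # geometric weights, as for rates (E)
print(tail_prob(cs, delta=0.5))
```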
3.3. A comparison with an inhomogeneous LPP model. In this section, we compare the TASEP on T to a slowed-down exclusion process, which we study using last passage percolation (LPP) in an inhomogeneous environment. To describe this model, we now give a brief introduction to last passage percolation, and refer the reader to [41, 43] for a more comprehensive discussion.
Consider the lattice N × N, and let (ω_{i,j})_{i,j∈N} be independent Exponential-1-distributed random variables. Let π_{m,n} be an up-right lattice path from (1, 1) to (m, n), i.e. … The set of all up-right lattice paths from (1, 1) to (m, n) is denoted by Π_{m,n}. The last passage time in an environment ω is defined as
G^ω_{m,n} := max_{π∈Π_{m,n}} Σ_{(i,j)∈π} ω_{i,j}
for all m, n ∈ N. Equivalently, the last passage times are defined recursively by
G^ω_{m,n} = ω_{m,n} + max(G^ω_{m−1,n}, G^ω_{m,n−1}),
with the convention that G^ω vanishes on the boundary. In the following, we restrict the space of lattice paths: we consider the set of paths … For any (i, j) ∈ N × N, we define …, where Π_{i,j}(A_m) contains all up-right paths from (1, 1) to (i, j) that do not exit A_m, i.e. …
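The recursion just stated is a standard dynamic program; the following Python sketch (our own illustration) computes the full table of last passage times in an i.i.d. Exponential-1 environment, where the classical law of large numbers G(n, n)/n → 4 gives a quick sanity check.

```python
import random

def last_passage(omega):
    """Last passage times via G(m,n) = omega(m,n) + max(G(m-1,n), G(m,n-1)),
    with G = 0 on the boundary; omega is a 2D list of weights."""
    M, N = len(omega), len(omega[0])
    G = [[0.0] * (N + 1) for _ in range(M + 1)]
    for m in range(1, M + 1):
        for n in range(1, N + 1):
            G[m][n] = omega[m - 1][n - 1] + max(G[m - 1][n], G[m][n - 1])
    return G

rng = random.Random(0)
omega = [[rng.expovariate(1.0) for _ in range(50)] for _ in range(50)]
G = last_passage(omega)
print(G[50][50] / 50)   # close to 4 for i.i.d. Exp(1) weights as n grows
```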
Based on the environment ω, we define an environment ω̄ = {ω̄_{i,j}}_{i,j∈N} by (3.9) ω̄_{i,j} := …; see Figure 6 for a visualization. The next lemma shows that the last passage times in ω̄ can be used to study the entering time of the n-th particle in the TASEP on trees.
Lemma 3.5. Let m, n ∈ N be such that m ≤ M_n holds, where M_n is defined in Theorem 1.5. Then there exists a coupling between G^{ω̄}_{n,n+m} and the time τ^n_m of the TASEP on trees, defined in (2.9), such that P-almost surely, for all n large enough, …, which holds for all n large enough by Theorem 1.5. In particular, note that on the event D_n, whenever one of the first n particles reaches generation M_n, it no longer blocks any of the first n particles. Moreover, observe that when particle i can jump from generation ℓ, the time σ_ℓ until this jump is performed is stochastically dominated by an Exponential random variable with the smallest possible rate out of generation ℓ. In other words, the inequality … holds for all i, ℓ ∈ N.
We now construct a slowed-down TASEP (η̂_t)_{t≥0} in which the i-th particle waits a time (r^min_ℓ)^{−1} ω_{ℓ+i+1,i} to jump from generation ℓ to ℓ + 1, but only after particle i − 1 has left generation ℓ + 1. Moreover, we assume without loss of generality that all particles follow the trajectories of the original dynamics (η_t)_{t≥0}. As before, let z^i_t and ẑ^i_t denote the position of the i-th particle in (η_t)_{t≥0} and (η̂_t)_{t≥0}, respectively. The following lemma is immediate from the construction of the two processes.
Lemma 3.6. There exists a coupling P between the TASEP (η_t)_{t≥0} on T and the corresponding slowed-down dynamics (η̂_t)_{t≥0} such that, for any common initial configuration, …
Proof of Lemma 3.5. It suffices to show that the time at which the n-th particle reaches generation m in the slowed-down dynamics has the same law as G^{ω̄}_{n+m,n}(A_{M_n}). Let Ĝ_{m,n} be the time at which the n-th particle has jumped m − n times in the slowed-down process, and note that for all m, n, … The right-hand sides of the last three stochastic equalities are the recursive equations and initial conditions for the one-dimensional TASEP in which particle i waits at site ℓ for (r^min_ℓ)^{−1} ω_{ℓ,ℓ+1} units of time after ℓ + 1 becomes vacant. Note that any maximal path from (0, 1) up to (n, n + M_n) never touches the sites at which the environment is 0, so the passage times in the environments (3.7) and (3.8) coincide with those in the environment (3.9), as long as we restrict the set of paths so as not to cross the line ℓ − i = M_n. For any time t ≥ 0, on the event D_n, this yields …, and we conclude, as D_n holds P-almost surely for all n large enough.
We use this comparison to an inhomogeneous LPP model to give a rough estimate on the time τ^n_m for general transition rates. Note that this bound can be refined when we have more detailed knowledge of the structure of the rates.
Lemma 3.7. …
Proof. Let G^{(1)}_{m,n} be the passage time to (m, n) in an i.i.d. environment with Exponential-1-distributed weights. Observe that we have the stochastic domination … ⪯ G^{(1)}_{n+M_n,n+M_n}.
For all α > 0, we obtain from Theorem 4.1 in [37] that …
Proof of the current theorems
We now have all the tools to prove Theorem 1.9 and Theorem 1.11. In fact, we prove more general theorems which allow for arbitrary transition rates (r_{x,y}) satisfying assumptions (UE) and (ED). We start with a generalization of Theorem 1.9 on the current in a time window [t_low, t_up]. Recall the notation from Section 1.1; in particular, recall (1.17), and set (4.1) … For the lower bound of the time window, we define … Note that either term in the maximum can give the main contribution in the definition of t_low, depending on the rates. For the upper bound, we define (4.4) … and fix some δ ∈ (0, 1). We let t_up = t_up(δ) be given by (4.5), with some sequence (θ_n)_{n∈N} tending to 0 and satisfying (4.6). Consider the first n particles which enter the tree, starting from the configuration containing only empty sites. The following theorem states that in [t_low, t_up] we see an aggregated current of order at least n.
Theorem 4.1. Suppose that (UE) and (ED) hold, and let (ℓ_n)_{n∈N} be a sequence of generations with ℓ_n ≥ M_n for all n ∈ N. Fix δ ∈ (0, 1) and let t_low and t_up = t_up(δ) be given by (4.2), (4.3) and (4.5). Then P-almost surely … Note that Theorem 4.1 indeed implies Theorem 1.9 for rates satisfying (1.18).
Proof. We start with the lower bound involving t_up. Recall D_n from (3.11) as the event that the first n particles are disentangled at generation M_n, and τ^n_{M_n} from (2.9) as the first time at which the first n particles have reached generation M_n. Set t_1 := … and define t_2 := t_up − t_1. Combining Theorem 1.5, Lemma 3.5 and Lemma 3.7, we see that (4.8) D_n ∩ {τ^n_{M_n} ≤ t_1} holds P-almost surely for all n sufficiently large. In words, all particles have reached generation M_n by time t_1 and perform independent random walks after level M_n. We claim that it suffices to show that (4.9) … holds, where the (ω_i) are independent Exponential-1-distributed random variables. To see this, let B_i be the indicator of the event that the i-th particle has not reached level ℓ_n by time t_up. From (4.9), we obtain that the (B_i)_{i∈[n]} are stochastically dominated by independent Bernoulli-p random variables when conditioning on the event in (4.8). Hence, we obtain that … holds, using Chebyshev's inequality in the second step. Together with a Borel-Cantelli argument and (4.8), this proves the claim.
In order to verify (4.9), we distinguish two cases according to the value of θ defined in (4.4). Suppose first that θ < ∞. Then by Lemma 3.4 and a short calculation, we obtain that … holds for all n large enough and δ ∈ (0, 1), using the Taylor expansion of the logarithm in the second step. Similarly, when θ = ∞, we apply Lemma 3.4 to see that (4.10) … holds for all n large enough and some sequence (θ_n)_{n∈N} chosen according to (4.6). In this case, for any fixed δ ∈ (0, 1), the right-hand side of (4.10) converges to 0 as n → ∞. Thus (4.9) holds in both cases, which gives the lower bound.
Next, for the upper bound, we use the comparison to the independent random walks (η̃_t)_{t≥0} defined in Section 3.2. By Lemma 3.3, … holds for all δ > 0, where (J̃_t)_{t≥0} denotes the current with respect to (η̃_t)_{t≥0}. Fix some δ > 0 and let (ω_i)_{i∈N_0} be independent Exponential-1-distributed random variables. We claim that the probability for a particle in (η̃_t)_{t≥0} to reach level ℓ_n is bounded from above by (4.11) … for all n sufficiently large, where we recall that particles enter the tree at rate λ > 0. To see this, we distinguish two cases. Recall the construction of t_low in (4.2) and (4.3), and assume first that t_low = t^low_1. By the first upper bound in Lemma 3.4, … holds for all δ ∈ (0, 1). For δ = (ρ_{ℓ_n} R^max_{0,ℓ_n})^{−1/2}, and using the Taylor expansion of the logarithm, the right-hand side of (4.11) converges to 0 as n → ∞. Similarly, for t_low = t^low_2, the second upper bound in Lemma 3.4 yields …, where the right-hand side converges to 0 as n → ∞, using the definition of t^low_2 and comparing the leading-order terms. Since particles enter both dynamics at the root at rate λ, note that for all n large enough, at most (5/4)λ t_low particles have entered by time t_low. By Chebyshev's inequality together with (4.11), P-almost surely no particle has reached generation ℓ_n by time t_low, for all n sufficiently large.
Example 6 (d-regular tree, constant rates (C)). Choose ℓ_n = 1 + 2M_n for all n ∈ N, where we recall from Example 2 that M_n is of order n for constant rates. Then R^min_{M_n,ℓ_n} = …, which gives θ = ∞. Hence, choosing θ_n = 1/log n, we have … For t_low, we have … For more examples where the rates decay polynomially or exponentially, we refer to Section 5. Now let t be a fixed time horizon and define an interval [L_low, L_up] of generations. Recall M_n from Theorem 1.5 and define the generations
(4.12) L_low := M_{n_t} and L_up := min(L^up_1, L^up_2) + 1,
for n_t from (1.23) and, recalling (1.7), …
Since r^max_i is bounded from above uniformly in i, both L^up_1 and L^up_2 are finite. The following theorem is the dual of Theorem 4.1. Recall n_t from (1.23). We are interested in a window of generations [L_low, L_up] in which we can locate the first n_t particles.
Theorem 4.2. Suppose that (UE) and (ED) hold. Then the aggregated current through generations L_low and L_up satisfies, P-almost surely, … Note that Theorem 4.2 implies Theorem 1.11 for rates satisfying (1.18), keeping in mind that in the setup of Theorem 1.11 there exists some c > 0 such that n_{5t} ≤ c n_t for all t ≥ 0.
Proof. We start with the bound involving L_up. Let (ω_i)_{i∈N_0} be independent Exponential-1-distributed random variables. Note that P-almost surely, no more than 2λt particles have entered the tree by time t, for all t large enough. Using a similar argument as after (4.11) in the proof of Theorem 4.1, it suffices to show that … By Lemma 3.4, and using the definition of L^up_1, …, where the right-hand side converges to 0 as t → ∞. Moreover, by Lemma 3.4, … holds for any δ ∈ (0, 1), which may also depend on t. Note that sup_{ℓ∈N} ρ_ℓ < ∞ by our assumption that the transition rates are uniformly bounded from above. Set δ = 2(t^{2/3} ρ_{L^up_2})^{−1} log t for all t large enough. Using the definition of L^up_2 and the Taylor expansion of the logarithm, we conclude that the right-hand side of (4.15) converges to 0 as t → ∞. Since L_up = min(L^up_1, L^up_2), we obtain (4.14).
For the remaining bound in Theorem 4.2, recall the slowed-down exclusion process from Section 3.3. By Lemma 3.5 and Lemma 3.7, note that for some c > 0, … holds P-almost surely when t is sufficiently large. Consider a sequence of times (t_i)_{i∈N} such that t_i → ∞ as i → ∞ and (4.17) lim … By possibly removing some of the t_i's, we may assume without loss of generality that n_{t_i} < n_{t_{i+1}}; in this way, n_{t_i} ≥ i for all i ∈ N. Therefore, by (4.16) and the Borel-Cantelli lemma, we obtain that J_{L_low}(5t_i) ≥ n_{t_i} holds almost surely for all i large enough. Theorem 4.2 then follows from (4.17).
Remark 4.3. Note that the bound in Theorem 4.2 involving L_low continues to hold when we replace n_t by any n with n_t ≥ n > c′ log t.
Current theorems for the TASEP on regular trees
In this section, we let the underlying tree be a d-regular tree, i.e. we assume that the offspring distribution is the Dirac measure on d − 1 for some d ≥ 3. Our goal is to show how the results of Theorems 4.1 and 4.2 can be refined when the structure of the tree and of the rates is known. This is illustrated in Section 5.1 for polynomially decaying rates, and in Section 5.2 for exponentially decaying rates, including the homogeneous rates from (1.6).
5.1. The regular tree with polynomially decaying rates. Consider the d-regular tree with homogeneous polynomial rates, i.e. we assume that there is some p > 0 such that the rates satisfy (5.1). … Thus, we see that … holds. Therefore, combining (5.5) and (5.6), we obtain a sharp time window in which we see a current of order n when b = 0, and the correct leading order of the time window in which a current linear in n is observed when 0 < b < ∞.
We conclude this section by discussing some examples of the sequence (ℓ_n)_{n∈N} for the d-regular tree with d ≥ 3 and rates satisfying (5.1), with the tree-TASEP started from the all-empty initial condition.
In the following examples, we take δ → 0 when estimating M_n in Example 2. Moreover, because the rates decay polynomially, c_low can be taken arbitrarily close to 0.
Example 7. Fix some c > max(2, 3/(1 + p)). Then for every δ′ > 0, we have P-almost surely … This is because the condition on c guarantees b = 0.
Proposition 5.2. Consider the TASEP on the d-regular tree with exponentially decaying rates, and fix some δ ∈ (0, 1). We set … Then there exists some C = C(δ, c_up) > 0 such that if c_exp ≤ C, then
(5.13) lim_{t→∞} J_{L̃_up}(t) = 0 and lim …
In particular, for c_exp = 0, we can choose L̃_up and L̃_low such that L̃_up ∼ L̃_low holds.
Proof. We start with the lower bound L̃_low. Observe that by Theorem 1.5 there exists some C = C(δ, c_up) ∈ (0, 1) such that the first ⌈t^C⌉ particles are P-almost surely disentangled at generation L_low, for all t sufficiently large. Since J_m(t) is decreasing in the generation m, and M_n is increasing in the number of particles n, we apply Theorem 4.2 and Remark 4.3 to conclude the second statement in (5.13).
For the first statement, we follow the proof of Theorem 4.2.
Since L̃_up t^{−1} exp(c_up i) ≥ 0 holds for all i ∈ N, we see that … Plugging in the definition of L̃_up from (5.12), a computation shows that the right-hand side converges to 0 as t → ∞. This yields (5.14).
We conclude this paragraph by revisiting the d-regular tree with homogeneous rates from (1.6) in Section 1.1.
Proof of Example 4. The first bound, involving L_up, follows immediately from Proposition 5.2. For the second bound, involving L_low, note that we have c_exp = 0 for N_t = t^α, which allows us to conclude.
Invariant distributions and blockage
In this section, our goal is to prove Theorem 1.12; its different parts are established in Propositions 6.1, 6.5, 6.7 and 6.9, respectively. Let T = (V, E, o) ∈ T be a locally finite rooted tree on which the TASEP is a Feller process with respect to a given family of rates (r_{x,y}). For a pair of probability measures π, π̃ on {0, 1}^V, we say that π is stochastically dominated by π̃ (and write π ⪯ π̃) if
(6.1) ∫ f dπ ≤ ∫ f dπ̃
holds for all increasing functions f. Moreover, recall that for ρ ∈ [0, 1], ν_ρ is the Bernoulli-ρ-product measure on {0, 1}^V, and that we consider the TASEP on T with initial distribution ν_0.
In order to prove Proposition 6.1, we adapt a sequence of results from Liggett [24]. Let T^n denote the tree restricted to level n, where particles exit the tree from x ∈ Z_n at rate r_x. For every n, let π^n_λ denote the invariant distribution of the dynamics (η^n_t)_{t≥0} on T^n with semigroup (S^n_t)_{t≥0}. We extend each measure π^n_λ to a probability measure on {0, 1}^{V(T)} by taking the Dirac measure on 0 for all sites x ∈ V(T) \ V(T^n).
Lemma 6.2 (cf. Proposition 3.7 in [24]). For any initial distribution π, the laws of the TASEPs (η^n_t)_{t≥0} and (η^{n+1}_t)_{t≥0} on T^n and T^{n+1}, respectively, satisfy … for all t ≥ 0. In particular, π^n_λ ⪯ π^{n+1}_λ holds for all n ∈ N.
Proof. We follow the arguments in the proof of Theorem 2.13 in [24]. Note that for all n ∈ N, the generators L_n and L_{n+1} of the TASEPs on T^n and T^{n+1} satisfy …, for any increasing function f depending only on V(T^n), and for all η ∈ {0, 1}^{V(T)}. Using the extension arguments from Theorem 2.3 and Theorem 2.11 in [24], we obtain that (6.4) … holds for any increasing function f depending only on V(T^n), for all t ≥ 0. It now suffices to show that (6.4) holds for all increasing functions f which depend only on V(T^{n+1}). This follows verbatim the proof of Theorem 2.13 in [24], by decomposing f according to its values on V(T^{n+1}) \ V(T^n).
Lemma 6.2 implies that the probability distribution π_λ given by
(6.5) π_λ := lim_{n→∞} π^n_λ
exists; see also Theorem 3.10(a) in [24]. More precisely, Lemma 6.2 guarantees for every increasing cylinder function f that … Since the set of increasing functions is a determining class, (6.5) follows. Furthermore, since S^n_t f converges uniformly to S_t f for any cylinder function f, π_λ is invariant for (η_t)_{t≥0}; see Proposition 2.2 and Theorem 4.1 in [24]. We now have all the tools to prove Proposition 6.1.
Proof of Proposition 6.1. Since we know that π_λ is invariant, we apply the canonical coupling from Lemma 3.1 to see that ν_0 S_t ⪯ π_λ for all t ≥ 0. Moreover, by Lemma 6.2, ν_0 S^n_t ⪯ ν_0 S_t for all t ≥ 0 and all n ∈ N.
To prove Proposition 6.1, it suffices to show that lim_{t→∞} ∫ f d[ν_0 S_t] = ∫ f dπ_λ holds for any increasing cylinder function f. Combining the above observations, … holds for every n ∈ N and for any increasing cylinder function f. We conclude the proof by recalling (6.5); see also the proof of Lemma 4.3 in [24].
Next, we show that if the rates satisfy a flow rule, then the TASEP on the tree has an invariant Bernoulli-ρ-product measure for some ρ ∈ (0, 1); see Theorem 2.1 in [27, Chapter VIII].
Lemma 6.3. Let T be a locally finite rooted tree with rates satisfying a flow rule for a flow of strength q. Assume that particles are generated at the root at rate λ = ρq for some ρ ∈ (0, 1). Then ν_ρ is an invariant measure for the TASEP (η_t)_{t≥0} on T.
Proof. We have to show that … holds for all cylinder functions f. By linearity of L, it suffices to consider f of the form …, with η ∈ {0, 1}^{V(T)} and A some finite subset of V(T). A calculation shows that if o ∉ A, … We conclude using the flow rule, noting that r_o = q = λ/ρ and recalling the definition of r_o.
Remark 6.4. Note that the measure ν_1 is always invariant for the TASEP on trees. Theorem 1 of [7] shows that the TASEP on T with a half-line attached to the root, where all edges point towards the root, has an invariant Bernoulli-ρ-product measure with ρ ∈ (0, 1) if and only if a flow rule holds. If a flow rule holds, a similar argument as for Theorem 1.17 in [26, Part III] shows that ν_ρ is extremal invariant for all ρ ∈ [0, 1].
Next, we consider the case where the rates do not necessarily satisfy a flow rule. In the following, we assume without loss of generality that λ < q(o) holds: when λ ≥ q(o), the canonical coupling in Lemma 6.2 yields that the current stochastically dominates the current of any TASEP with rate λ′ for λ′ < q(o). We now characterize the behaviour of the TASEP in the superflow case.
Proposition 6.5. Assume that a superflow rule holds. Let (J_o(t))_{t≥0} be the current at the root for the TASEP on a tree T with a reservoir of rate λ = ρq(o) at the root, for some ρ ∈ (0, 1), and initial distribution ν_0. Then the current (J_o(t))_{t≥0} through the root satisfies … almost surely, where π_λ is given by (6.2).
In order to prove Proposition 6.5, we use the following lemma, which shows that the law of the TASEP on trees is always dominated by a suitable Bernoulli-ρ-product measure on the tree.
Lemma 6.6. Assume that the rates satisfy a superflow rule, and consider the TASEP (η_t)_{t≥0} with a reservoir of rate λ = ρq(o) for some ρ ∈ (0, 1). If P(η_0 ∈ ·) ⪯ ν_ρ holds, then
(6.9) P(η_t ∈ ·) ⪯ ν_ρ
for all t ≥ 0. In particular, the measure π_λ from (6.2) satisfies π_λ ⪯ ν_ρ.
Proof. In order to show (6.9), we decompose the rates satisfying a superflow rule into flows starting at different sites. More precisely, we claim that there exists a family of transition rates ((r^z_{x,y})_{(x,y)∈E(T)})_{z∈V(T)} with the following two properties. For every fixed z ∈ V(T), the rates (r^z_{x,y})_{(x,y)∈E(T)} satisfy a flow rule for a flow of strength q(z) on the tree rooted at z. Moreover, Σ_{z∈V(T)} r^z_{x,y} = r_{x,y} for all (x, y) ∈ E(T); see also Figure 7. We construct such a family of transition rates as follows. We start with the root o and choose a set of rates (r^o_{x,y})_{(x,y)∈E(T)}, according to an arbitrary rule, such that the rates satisfy a flow rule for a flow of strength q(o) starting at o, and r^o_{x,y} ≤ r_{x,y} for all (x, y) ∈ E(T). Next, we consider the neighbours of o in the tree. For every z ∈ V(T) with |z| = 1, we choose a set of rates (r^z_{x,y})_{(x,y)∈E(T)}, according to an arbitrary rule, such that the rates satisfy a flow rule for a flow of strength q(z) starting at z; moreover, we require that r^z_{x,y} ≤ r_{x,y} − r^o_{x,y} holds for all (x, y) ∈ E(T). The existence of the flow is guaranteed by the superflow rule. More precisely, we use the following observation: whenever the rates satisfy a superflow rule, we can treat the rates as maximal capacities and find a flow (r^o_{x,y}) of strength q(o) which does not exceed these capacities. Note that the reduced rates (r_{x,y} − r^o_{x,y}) again satisfy a superflow rule, now on the connected components of the graph with vertex set V(T) \ {o}; this is because the net flow vanishes at all sites in V(T) \ {o}. We then iterate this procedure to obtain the claim.
Let (η̃_t)_{t≥0} be the exclusion process with rates (r_{x,y})_{(x,y)∈E(T)} where, in addition, we create particles at every site x ∈ V(T) at rate q(x)ρ. Due to the above decomposition of the rates and Lemma 6.3, we claim that the measure ν_ρ is invariant for (η̃_t)_{t≥0}. To see this, we define a family of generators (L_z)_{z∈V(T)} on the state space {0, 1}^{V(T_z)}. Here, the trees T_z are the subtrees of T rooted at z, consisting of all sites which can be reached from z along a directed path. For all cylinder functions f, we set … Note that … holds for all cylinder functions f on {0, 1}^{V(T)}, and that at most finitely many terms in the sum in (6.11) are non-zero, since f is a cylinder function. Hence, ν_ρ is an invariant measure of (η̃_t)_{t≥0}, by combining (6.10) and (6.11). Using Remark 3.2, we see that the canonical coupling P of the TASEP on trees satisfies P(η_t ⪯ η̃_t for all t ≥ 0 | η_0 ⪯ η̃_0) = 1.
Proposition 6.7. Consider the TASEP (η_t)_{t≥0} on the tree T = (V, E) for some λ = ρq(o) > 0 with ρ ∈ (0, 1). Moreover, assume that a superflow rule holds and that (1.29) is satisfied. Then the measure π_λ from Proposition 6.1 satisfies …
Proof. Let δ > 0 be arbitrary and fix some m ∈ N such that ρ^m ≤ δ/2. Moreover, for every x ∈ Z_n, fix a sequence of sites (x = x_1, x_2, …, x_m) with (x_i, x_{i+1}) ∈ E for all i ∈ [m − 1]. Note that the sites (x_i)_{i∈[m]} are disjoint for different x ∈ Z_n, and that by Lemma 6.6, … Hence, combining (6.13), (6.15) and (6.16), we see that for all n sufficiently large, … Since δ > 0 was arbitrary, we conclude.
We use a similar argument to determine when the averaged density is positive.
Corollary 6.8. Suppose that a superflow rule holds. Consider the TASEP (η_t)_{t≥0} on the tree T = (V, E) for some λ = ρq(o) > 0 with ρ ∈ (0, 1). Moreover, assume that T has maximum degree ∆ and that (6.17) … holds. Together with (6.17), … Since the rates satisfy a superflow rule, we conclude by applying Proposition 6.5.
Next, we consider the case where the rates in the tree decay too fast, i.e. when a subflow rule holds; see (1.27). We show that the current is then sublinear.
Proposition 6.9. Suppose that the rates satisfy a subflow rule. Then the current (J_o(t))_{t≥0} of the TASEP (η_t)_{t≥0} on a tree T = (V, E) with a reservoir of rate λ > 0 satisfies
(6.19) lim_{t→∞} J_o(t)/t = 0
almost surely. Moreover, the limit measure π_λ of Lemma 6.1 is the Dirac measure ν_1. In particular, (η_t)_{t≥0} has a unique invariant measure.
Proof. By (6.14), in order to prove (6.19) it suffices to show that for every ε > 0, there exists some m = m(ε) such that the aggregated current (J_m(t))_{t≥0} at generation m satisfies lim sup_{t→∞} J_m(t)/t ≤ ε.
Recall r_x from (1.16) for all x ∈ V, and let (X^x_t)_{t≥0} be a rate-r_x Poisson process, indicating how often the clock of an outgoing edge from x has rung by time t. In order to bound (J_m(t))_{t≥0}, recall that we start with all sites empty, and observe that the current can only increase by one when a clock on an edge connecting level m − 1 to level m rings. Thus … Hence, we obtain that π_λ(η(z) = 1) = 1 holds for all z ∈ V with |z| = 1 as well. We iterate this argument to conclude.
Open problems
We saw that, under certain assumptions on the rates, the first n particles in the TASEP eventually disentangle and then continue to move as independent random walks. Intuitively, one expects that for small times the particles in the exclusion process block each other. This raises the following question.
Question 7.1. Consider the TASEP (η_t)_{t≥0} on T started from the all-empty configuration. Let (η̃_t)_{t≥0} be the dynamics on T where we start n independent random walks at the root. Let p_{n,ℓ} and p̃_{n,ℓ} denote the P_T-probability that the first n particles are disentangled at level ℓ in (η_t)_{t≥0} and (η̃_t)_{t≥0}, respectively. Does p̃_{n,ℓ} ≤ p_{n,ℓ} hold for all ℓ, n ∈ N?
It is not hard to see that this is true for n = 2; however, already the case n = 3 is not clear (at least not to us). The next question concerns the behaviour of the current. The last open problem concerns the properties of the equilibrium measure π_λ from Theorem 1.12. In Lemma 6.6, we saw that π_λ is stochastically dominated by some Bernoulli product measure. In analogy with the TASEP on the half-line (see Lemma 4.3 in [24]), we expect the following behaviour of π_λ.
Conjecture 7.3. Consider the TASEP with a reservoir of rate λ = ρq for some ρ ∈ (0, ∞) such that a flow rule holds for a flow of strength q. Recall π_λ from (6.2). Then for ρ ≤ 1/2, we have π_λ = ν_ρ. For ρ > 1/2, it holds that …
Figure 2. The 3-regular (or binary) tree satisfying a flow rule with r^min_j = r^max_j …
Example 5 (Flow conditions). For the four examples on the d-regular tree, µ({d − 1}) = 1, d ≥ 3, we have: (C) (Constant rates) r_{x,y} = 1. This is a superflow rule but not a flow rule.
…for some sequence (ω_i)_{i∈[n]} of i.i.d. Exponential-1-distributed random variables. Recall (1.9), where for d_min > 1 we take c_o such that M_n = c_o log n, namely c_o = 1/log d_min, and choose c_o accordingly otherwise. Rewriting τ^n_0 as a telescopic sum yields …
Figure 6. Visualization of the environment used to describe the slowed-down TASEP as a last passage percolation model. The numbers in the cells are the parameters of the respective Exponential-distributed random variables. The square at the bottom left of the grid corresponds to the cell (1, 1).
(5.1) j^{−p} = r^min_j = r^max_j for all j ∈ N. For this choice of the rates, we want to show how the bounds of Theorem 4.1 on the time window can be improved. In the following, we write a_n ∼ b_n if lim_{n→∞} a_n (b_n)^{−1} = 1. Note that D_n and M_n from (1.10) and (1.12) satisfy
D_n ∼ (n^{2+c_o c_low} log^3 n)^{1/p} and M_n ∼ ((d − 1 + δ)/(d − 2)) min(D_n, n)
for all p > 0 and δ > 0; see Example 2. Recall that we are free in the choice of the sequence of generations (ℓ_n)_{n∈N} with ℓ_n ≥ M_n for all n ∈ N, along which we observe the current created by the first n particles entering the tree. We assume that (ℓ_n)_{n∈N} satisfies (5.2) for some a ∈ [0, 1) and b ∈ [0, ∞). We now apply Theorem 4.1 in this setup.
Proposition 5.1. Consider the TASEP on the d-regular tree with polynomial rates from (5.1) for some p > 0, and a and b as in (5.2) for some sequence of generations (ℓ_n)_{n∈N}. Let t_up, t_low be taken from (4.2), (4.3) and (4.5). For a ∈ [0, 1) and b = 0, … For a ∈ [0, 1) and b ∈ (0, ∞), … holds for some constants c, c′ > 0.
Proof. Recall the notation from Section 1.1. For b ∈ (0, ∞), we observe that for the above choice of transition rates,
(min_{M_n<|x|≤ℓ_n} r_x) R^min_{M_n,ℓ_n} ∼ ((1 − a)/(1 + p)) ℓ_n
holds, and hence θ = ∞ in (4.4). For b = 0, a computation shows that t_up ∼ (1 − a)((d − 1)(1 + p))^{−1} ℓ_n^{p+1} holds. For the lower bound t_low, we use that t_low ≥ t^low_1 …
Example 8. Let p = 2(2 + c_o c_low), and note that

(5.7) D_n ∼ n log^{3/p} n and M_n ∼ d log^{3/p} n

holds. Choosing ℓ_n = n^{(2+p)/(2+2p)} log^{3/(1+p)} n for all n ∈ N yields that a = 0 and b ∈ (0, ∞) in (5.2). Hence, we can choose t_up and t_low in Proposition 5.1 to be both of order n^{1+p/2} log^3 n.

Example 9. Let p = 1, and note that

(5.8) D_n ∼ n and M_n ∼ ((d − 1)/(d − 2)) n

holds. Choosing ℓ_n = n for all n ∈ N yields that a ∈ (0, 1) and b ∈ (0, ∞) in (5.2). Hence, we can choose t_up and t_low in Proposition 5.1 to be both of order n.

Example 10. Let p = 1/2 and d ≥ 4. Then D_n and M_n satisfy (5.8). Choosing ℓ_n = n^2 for all n ∈ N yields that a = 1/(d − 2) and b = 0 in (5.2). Hence, we can choose

(5.9) t_up ∼ t_low ∼ (2(d − 3))/(3(d − 1)(d − 2)) n^3.

Example 11. Let p = 3/4 and d ≥ 3. Then D_n and M_n satisfy (5.8). Choosing ℓ_n = n^2 for all n ∈ N yields that a = 0 and b = 0 in (5.2). Hence, we can choose

(5.10) t_up ∼ t_low ∼ (4/(7(d − 1))) n^{7/2}.

5.2. The regular tree with exponentially decaying rates. We now study the d-regular tree with exponentially decaying rates, i.e. the rates satisfy κe^{−c_up ℓ} = r_ℓ^min = r_ℓ^max for all ℓ ∈ N and some constants κ, c_up > 0. In this setup, our goal is to improve the bounds on the window of generations in Theorem 4.2. Let (N_t)_{t≥0} be some integer sequence and assume that

(5.11) lim_{t→∞} log N_t / log t = c_exp

holds for some c_exp ∈ [0, 1).
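As an arithmetic cross-check of (5.9) (our own computation, plugging Example 10's parameters into the asymptotic t_up ∼ (1 − a)((d − 1)(1 + p))^{-1} ℓ_n^{p+1} from the proof of Proposition 5.1):

```latex
% Example 10: p = 1/2, \ell_n = n^2, a = 1/(d-2).
t_{\mathrm{up}} \sim \frac{1-a}{(d-1)(1+p)}\,\ell_n^{\,p+1}
= \frac{(d-3)/(d-2)}{\tfrac{3}{2}(d-1)}\,\bigl(n^{2}\bigr)^{3/2}
= \frac{2(d-3)}{3(d-1)(d-2)}\,n^{3},
```

which matches the right-hand side of (5.9).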
(ω_i) are independent Exponential-1-distributed random variables. Using Chebyshev's inequality, we obtain that
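For concreteness, a standard instance of this Chebyshev step (our sketch, writing S_n for the sum of the ω_i; the paper's exact constants may differ):

```latex
% E[S_n] = Var(S_n) = n for i.i.d. Exponential-1 variables, so for any eps > 0,
\mathbb{P}\bigl(\,|S_n - n| \ge \varepsilon n\,\bigr)
\;\le\; \frac{\operatorname{Var}(S_n)}{\varepsilon^{2} n^{2}}
\;=\; \frac{1}{\varepsilon^{2} n} \;\longrightarrow\; 0 .
```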
Figure 7. Visualization of the superflow decomposition used in Lemma 6.6. The superflow given on the left-hand side is decomposed into two flows of strengths 5 and 2, respectively, shown on the right-hand side.
n^{c c_low} + n exp(c_low ℓ_n) for some c > 0. This upper bound can be used as a simple potential value for t_up in (1.19).
holds almost surely. Using the subflow rule, we can choose m = m(ε) sufficiently large to conclude (6.19). To prove that π_λ is the Dirac measure on all sites being occupied, use Proposition 6.5 to see that (6.19) holds if and only if π_λ(η(o) = 1) ∈ {0, 1}. Since the rate λ at which particles are generated is strictly positive and π_λ is an invariant measure, we conclude that π_λ(η(o) = 1) = 1. Using the ergodic theorem, we see that almost surely for all neighbors z of o,

π_λ(η(o) = 1, η(z) = 0) r_{o,z} = lim_{t→∞} J_z(t)/t ≤ lim_{t→∞} J_o(t)/t = 0. | 20,328 | 2020-08-24T00:00:00.000 | [
"Mathematics"
] |
Towards Profile-Based Document Summarisation for Interactive Search Assistance
This paper presents an investigation into the utility of profile-based summarisation in the context of site and enterprise search. We employ log analysis to acquire continuously updated profiles to provide profile-based summarisations of search results with the intention of highlighting aspects of the documents that best fit the general user profile. We first introduce the wider context of the research and then focus on a first task-based evaluation using TREC Interactive Track guidelines that compares a search system that uses the outlined profile-based summarisation with two baseline systems to assess whether such summaries could be helpful in a realistic search setting.
MOTIVATION
Finding a specific document on a company Web site or within a university intranet can be non-trivial because, unlike in Web search, such document collections tend to be sparse: there might be only a single matching document for a user query, which could be difficult to find. A search for the exam timetable on the University of Essex Web site, for example, will only satisfy the (most common) user need if it results in a specific Excel file that might not be easy to identify when navigating the site. This is a typical problem of enterprise search (Hawking 2011). The growth of document collections on the Web sites of universities, companies and other institutions, i.e., collections other than the Web as such, will make this an increasingly common problem. Finding solutions that address this problem is considered a challenging task (Hawking 2011).
Generally speaking, there appears to be a trend towards assisting the user in the search process, and this can be done in a variety of forms: apart from query suggestions (Kato et al. 2012), for example, faceted search (Koren et al. 2008) has become very popular in recent years. However, in this process of guiding/assisting the searcher, no dominant paradigm has emerged that would be comparable to the Google-style paradigm of ad hoc search. This is the (wider) area that we will explore.
One technique that plays an important role in coping with the exponential growth of document collections is traditional (generic) summarisation (Nenkova and McKeown 2011). In this setting, however, the interests of individual readers have seldom been considered. Traditional summarisation generates the same summary for different users by applying the same methodology without taking into account who is reading. As most current summarisation systems generate a uniform version of a summary of a document for all users, most traditional summarisation methods fail to capture user interests because they treat their outputs as static, plain texts. However, users need personalisation because they have individual preferences with regard to a particular source document collection; in other words, each user has a different perspective on the same text. Traditional summarisation methods are therefore to some extent insufficient, because a universal summary for all users might not always be satisfactory (Yan et al. 2011). As such, personalised text summarisation could be useful in order to present different summaries corresponding to reader interests and preferences (Zhang and Ma 1990). We are interested in personalised summarisation. One of the techniques used to achieve personalisation is user profiling. User profiles may include the preferences or interests of a single user or a group of users and may also include demographic information (Gauch et al. 2007). Typically, a user profile contains topics of interest to a single user. We are interested in capturing profiles not of single users but of groups of users.
Broadly speaking, we try to address the following research questions with our work:
1. Can site search and enterprise search benefit from the automated summarisation of results?
2. Will a continuously updated model capturing search and navigation behaviour of a user (or groups of users) be beneficial for the summarisation process?
3. Will such methods result in measurable (quantifiable) benefits, such as shorter search sessions, fewer interactions, etc.?
In this paper we report on initial experiments we have conducted to address these questions.
RELATED WORK
Automatic summarisation (Nenkova and McKeown 2011) is the process of creating a shortened version of one or more texts that contains the most important points of the original text and is both concise and comprehensive. Hassel (2004) divides summarisation approaches into two main groups: abstractive summaries and extractive summaries.
Here, we focus on extractive summarisation.
Some systems generate a summary based on multiple source documents (for example, a cluster of news stories on the same topic) and are known as multi-document summarisation systems (Maybury and Mani 1999), while others use a single source document and are known as single-document summarisation systems. Radev et al. (2002) use abstraction and extraction methods to apply the text summarisation process to single and multiple documents. The most frequently used methods to build extractive generic summaries use position, thematic words, indicative expressions, text typography, proper nouns and title, as described by (Hahn and Mani 2000; Díaz and Gervás 2007). An ongoing issue in this field is evaluation; human judgements often vary widely on what is considered a good summary, making the automatic evaluation process particularly difficult (Lin and Hovy 2002).
The above algorithms do not involve interactive mechanisms to capture reader interests, nor do they utilise user preferences for personalisation in summarisation; they are usually straightforward extensions of generic summaries. According to Díaz and Gervás (2007), experiments have shown that personalised summarisation is important, as the summary sentences match the user's interests. Personalisation techniques seek to adapt to individual users, aiming to improve user satisfaction by adapting future interactions and predicting user needs through building models/profiles of user behaviour (Gauch et al. 2007). One approach to building adaptive community profiles is a biologically inspired model based on ant colony optimisation applied to query logs as an adaptive learning process (Albakour et al. 2011). This approach has been adopted in our work to build a domain model/profile that is then applied for summarisation.
A number of personalised summarisation methods have been explored, e.g., (Berkovsky et al. 2008; Wang et al. 2007; Park 2008; Zhang et al. 2003). Wang et al. (2007) focused on query-based summarisation for Web pages based on sentence extraction and ranking. Park (2008), on the other hand, generated personalised summaries in which the sentences relevant to user interests were extracted for query-based and generic summaries with regard to a given query, based on non-negative matrix factorisation (NMF). The potential of personalised summarisation over generic/traditional summaries has already been demonstrated, e.g., (Díaz and Gervás 2007), but summarisation of Web documents is typically based on the query rather than a full profile, e.g., (Wang et al. 2007; Park 2008). Such scenarios may not be sufficient in that they depend only on the currently submitted query, which might not contain much information that accurately describes user interests in the generated summaries. Our specific interest lies in site search and enterprise search, which is different from Web search and has attracted less attention (Hawking 2011). Our approach can effectively and implicitly learn the user profile, which contains user interests, keep it up to date, and then generate personalised summaries that reflect those user interests in a wider practical scenario. The benefit of this context is that we can expect a more homogeneous population of searchers who are likely to share interests and information needs.
We then apply the acquired profiles to generate summaries that support users who are searching a document collection.
LOG-BASED PROFILES
We use query logs to build profiles that represent not an individual user but the entire population of users who access a Web site. The intuition behind this is that, for example, on a university Web site or in a company's intranet, people tend to share information needs, and we assume that learning from one user (e.g., a new student trying to find out how to register) might benefit a whole range of future users. We utilise query logs to acquire a profile that is automatically updated in a continuous learning cycle using an ant colony optimisation (ACO) analogy, as adopted from (Albakour et al. 2011). For the specific experiments, we use the log files collected by an existing site search engine over a period of three years to bootstrap such a model, i.e., our group profile. The idea is that the query logs are segmented into sessions and then turned into a graph structure (Kruschwitz et al. 2011). We then apply this profile in the summarisation process. Our search system architecture and the relevant data structures are discussed in more detail in (Alhindi et al. 2013a,b).
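To make the learning cycle concrete, here is a minimal sketch of an ACO-style profile graph (our illustration only: the class and parameter names are invented, and the actual deposit/evaporation scheme of Albakour et al. (2011) may differ):

```python
from collections import defaultdict

class ProfileGraph:
    """Sketch of an ACO-style group profile built from query logs.

    Nodes are query phrases; directed edge weights are reinforced whenever
    one query is refined into another within a session, and all weights
    decay ("evaporate") at the end of each learning cycle.
    """

    def __init__(self, deposit=1.0, evaporation=0.1):
        self.weights = defaultdict(float)   # (query, refinement) -> weight
        self.deposit = deposit              # pheromone added per observation
        self.evaporation = evaporation      # fraction lost per cycle

    def learn_session(self, queries):
        # Reinforce consecutive query reformulations within one session.
        for q1, q2 in zip(queries, queries[1:]):
            if q1 != q2:
                self.weights[(q1, q2)] += self.deposit

    def evaporate(self):
        # Decay all edges so the profile adapts to changing interests.
        for edge in list(self.weights):
            self.weights[edge] *= 1.0 - self.evaporation
            if self.weights[edge] < 1e-6:
                del self.weights[edge]

    def refinements(self, query, k=5):
        # Top-k weighted refinements for a query, later used to bias summaries.
        cands = [(w, q2) for (q1, q2), w in self.weights.items() if q1 == query]
        return [q2 for w, q2 in sorted(cands, reverse=True)[:k]]

# Example: two logged sessions, then a query-specific profile lookup.
g = ProfileGraph()
g.learn_session(["fees", "tuition fees", "tuition fee payment"])
g.learn_session(["fees", "tuition fees"])
g.evaporate()
print(g.refinements("tuition fees"))
```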
INITIAL EXPERIMENT
To explore whether summarisation of documents in a site search context, and profile-based summarisation in particular, offers any measurable benefits, we conducted a pilot study (Alhindi et al. 2013a). In that experiment, we simply assessed how users perceive summaries generated using a profile compared with different baselines. We found that a profile-based summarisation process significantly outperformed a centroid-based baseline that did not utilise any profile. We further identified that a query-specific profile gave the best results among a range of profile-based summarisation methods. Hence, we found that there is potential in utilising profiles of either users or groups of users in the summarisation process.
The next step is to investigate whether the results obtained in the pilot study can also be demonstrated in actual search applications.
TASK-BASED EVALUATION
Motivated by the findings of the pilot, we designed a task-based evaluation that uses summarisation techniques to generate snippets for matching documents returned for a user query, similar to the setup of Tombros and Sanderson (1998) with respect to the use of summaries for search of non-Web content. Based on commonly used standards in task-based evaluations (Kelly 2009; Yuan and Belkin 2010; Diriye et al. 2010), we constructed search tasks from random samples of representative search requests on the domain of choice and conducted a within-subjects laboratory evaluation to compare three different information retrieval (IR) systems. The evaluation process started with exploring the potential of an adaptive search system by capturing feedback from real users. In line with (Kelly 2009), we recruited 18 subjects, each of whom attempted to complete 6 search tasks that asked them to find documents relevant to pre-determined topics. Each subject completed two searches on each system (a within-subjects design). In accordance with the TREC-9 Interactive Track guidelines (Hersh and Over 2001; Hersh 2002), subjects had 10 minutes for each task, and they were asked to complete a number of questionnaires. The questionnaires we used in this study are based on the ones suggested by the TREC-9 Interactive Track guidelines (using a 5-point Likert scale where appropriate): an entry questionnaire, post-search and post-system questionnaires, and an exit questionnaire. We first discuss the experimental setup in more detail; then we discuss the results.
Experimental Setup
We have developed an integrated Solr-based search system applying a number of different methods for building summaries of search results, using the Web site of the University of Essex. One of the methods we have applied is our adaptive summarisation method, which we would like to test against two baselines. These three IR systems will be called System A (baseline 1), System B (baseline 2) and System C (the adaptive system). All three systems looked identical to the user, but each one is characterised as follows:
1. System A is a copy of a standard search engine that users would normally use to locate the information they need. The query is submitted by the user, our search engine returns results, and the top 100 matches are displayed (10 results per page) using the snippets returned by the search engine.
2. System B is the same as System A but uses a centroid-based approach (Radev et al. 2004) instead of snippets to summarise each document. This algorithm is designed for traditional (generic) summarisation and represents a widely used baseline, e.g., (Yan et al. 2011).
3. System C is the same as System B but uses the ACO query refinement technique, which is profile-based and query-specific, instead of snippets to summarise each document, as described in (Alhindi et al. 2013a); the sketch after this list illustrates the scoring idea behind Systems B and C.
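As an illustration (a minimal sketch, not the actual system code; the function names, scoring weights, and the example document are our own assumptions), centroid-based extraction scores each sentence against the document's term-frequency centroid, and a profile-based variant additionally boosts terms drawn from the profile's query refinements:

```python
import math
import re
from collections import Counter

def sentences(text):
    # Naive sentence splitter; adequate for a sketch.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def tokens(sentence):
    return re.findall(r"[a-z]+", sentence.lower())

def centroid_summary(text, profile_terms=(), n_sentences=2, boost=2.0):
    """Illustrative centroid-based extractive summariser (after Radev et al.).

    Sentences are scored by overlap with the document's term centroid; terms
    from a (query-specific) profile receive an extra weight, which is roughly
    how System C differs from System B. Parameter values are assumptions.
    """
    sents = sentences(text)
    centroid = Counter(t for s in sents for t in tokens(s))
    def score(sent):
        sc = sum(math.log1p(centroid[t]) for t in set(tokens(sent)))
        sc += boost * sum(1 for t in set(tokens(sent)) if t in profile_terms)
        return sc
    ranked = sorted(sents, key=score, reverse=True)[:n_sentences]
    # Preserve original sentence order for readability.
    return " ".join(s for s in sents if s in ranked)

doc = ("Tuition fees are payable at registration. The policy sets out the "
       "rules for fee deposits. Campus parking permits are issued separately.")
print(centroid_summary(doc, profile_terms={"fee", "deposits", "tuition"}))
```

In this sketch, System B corresponds to calling centroid_summary with no profile terms, while System C passes in the refinement terms retrieved for the submitted query.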
Example summary produced by System C (from Table 1): "Our Tuition Fee Payment and Liability Policy sets out the University's regulations regarding payment of fees and our Tuition Fee Deposit Policy offers guidance on who is required to pay fee deposits and the rules for doing so."
Note that in System B and System C we sometimes cannot generate a summary for a document, for a number of reasons: there may be no text within the document, the document may not be parsable (e.g., it is a PDF rather than an HTML document), or there may be no query refinements for the submitted query (as in System C). In such cases, we present the snippets provided by the search engine (as in System A). Table 1 shows an example of a document extracted from the three systems but with different snippets/summaries (according to the characteristics of each system).
Protocol and Search Tasks
We followed the procedure adopted in (Craswell et al. 2003) to guide the subjects during the task-based evaluation, which was conducted in an office in a one-on-one setting. System and task orders were rotated and counterbalanced. Tasks were assigned to subjects based on a Graeco-Latin square design (Kelly 2009) to avoid task bias and potential learning effects. At the beginning, subjects were asked to fill in the entry questionnaire. After that, subjects were given a 5-minute introduction to the three systems without being told anything about the technology behind them. Then, each subject performed 2 search tasks on each system according to the matrix in (Kelly 2009). After each task, subjects filled in the post-search questionnaire. When they had completed both search tasks on one system, they were asked to fill in the post-system questionnaire. Finally, when subjects had finished all the search tasks, they filled in the exit questionnaire.
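For illustration, the following sketch (hypothetical; it reproduces the spirit of a counterbalanced rotation, not the exact Graeco-Latin square matrix from Kelly (2009)) shows how 18 subjects can be assigned system orders and task pairs so that orders are balanced:

```python
from itertools import permutations

# Every subject sees all 3 systems (2 tasks each); system order and the
# task pairs rotate so that, across 18 subjects, orders are balanced.
systems = ["A", "B", "C"]
task_pairs = [(1, 2), (3, 4), (5, 6)]
orders = list(permutations(systems))          # 6 possible system orders

for subject in range(18):
    sys_order = orders[subject % 6]           # rotate system orders
    pair_shift = (subject // 6) % 3           # rotate which tasks meet which system
    plan = [(sys_order[i], task_pairs[(i + pair_shift) % 3]) for i in range(3)]
    print(f"subject {subject + 1:2d}: " +
          ", ".join(f"{s}->tasks {t}" for s, t in plan))
```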
We constructed the search tasks in line with the brief review guidelines suggested by (Kules and Capra 2008). Tasks were tailored based on commonly submitted queries in the logs of the existing Web site to make the search tasks as realistic as possible (Dignum et al. 2010).
Subjects
In order to get a good selection of different types of users and to avoid bias in the selection process, we sent an e-mail to the local university mailing list. All subjects declared that they use the Internet on a regular basis. The average time subjects had been searching online was 7.38 years (9 of them between 3 and 15 years, but there was also a user who stated 0 years). When asked about their searching behaviour, 15 (or 83%) of the participants selected 'daily'. Note that our users (whom we would consider typical target users of the system) tend to have a lot of experience using Web search systems (mean: 4.94) but little experience using commercial search engines (mean: 2.78).
Completion Time / Number of Turns
Table 2 gives a picture of the average completion time (derived from the logged data) broken down by task. We measured the time between presenting the search task to the user and the submission of the result. Overall, the average time spent on a search task was 204 seconds on System A, 203 seconds on System B and 202 seconds on System C, with no statistically significant difference between any pair of them.
We also investigated the number of turns, i.e., the number of steps required to find the answer for a search task (Table 3). A turn can be inputting a query, following the link to the next 10 matches, or following a hyperlink to open a document. On average, users needed 3.83 turns on System A, 3.68 turns on System B and 3.65 turns on System C. For five out of six tasks, the average number of turns taken was lower on the profile-based system than on either of the two baselines, although overall the differences are not significant. After participants finished each search task, they filled in a post-search questionnaire and answered a number of common questions. One question was 'Are you satisfied with your search results?' Overall, users were satisfied with the results returned by the three systems. A pairwise t-test over the average ratings of the tasks on each system indicates that there is no significant difference between any pair of the three systems.
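For reference, the kind of significance test reported here can be run as below (a sketch with invented numbers, not the measured data from Table 2; scipy's ttest_rel performs the paired test appropriate to a within-subjects design):

```python
from scipy import stats

# Hypothetical per-subject completion times in seconds (illustration only).
times_A = [210, 195, 220, 180, 205, 214]
times_C = [200, 190, 215, 185, 200, 222]

t_stat, p_value = stats.ttest_rel(times_A, times_C)   # paired t-test
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# p >= 0.05 -> no statistically significant difference, as reported.
```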
Questionnaires
In the post-search questionnaire, after the common questions, users were also asked to state whether they were able to complete their search task successfully. For System A, 5 answered 'No'; for System B, 3 answered 'No'; and for System C, there were 2 such cases altogether. We also looked at the documents submitted during the search session after a user had finished a task and judged whether we would consider a document a match for the task (as we already knew the required documents for each task). We found that a large number of submitted documents exactly matched the information request as specified by the task (34 on System C, 33 on System B and 31 on System A). Only 10 of the 108 search tasks did not result in exact matches, and that includes partial matches. There was no significant difference between the three systems in this respect, but there were two particularly difficult tasks, tasks 2 and 6 (clearly reflected in user satisfaction). Only 12 of the 18 users found a correct document for task 6, and 14 submitted a correct document for task 2. If we look at those two tasks in detail and compare them to the results reported earlier, we find that they have a higher number of turns: on average, users needed 4.64 turns for task 2 and 5.2 turns for task 6 (see Table 3). We also find a higher average completion time for those two tasks (see Table 2). The success rate was comparable across all systems (with only 10 cases of incomplete tasks).
After two search tasks had been performed on one system, participants filled in a post-system questionnaire. No statistically significant difference between the three systems can be observed from the overall results regarding learning to use, ease of use and understanding of each system. This is what we would expect, as the three systems look identical and differ only in the snippets/summaries they produce.
In the exit questionnaire, users were asked 'Which of the three systems did you like the best overall?' There were marginal differences between systems, and most users found no difference: 5 users preferred System C, 2 users preferred System B, 2 users preferred System A, and 9 found no difference. A large majority of users also judged that there was no difference between the three systems in two other categories: ease of use (C: 4 users, B: 2, A: 2, no difference: 10) and ease of learning to use (C: 3 users, B: 1, A: 2, no difference: 12).
CONCLUSION AND FUTURE DIRECTIONS
Our initial experiment suggested that there is certainly potential in using profile-based summarisation in a site search context. In the task-based evaluation we conducted, we found that our profile-based summarisation approach was marginally better than the two baselines on nearly every criterion we investigated. This is a good starting point for further work that will aim to exploit the full potential of the approach. It needs to be pointed out that significant improvements over strong baselines such as the ones chosen here will not be easy to achieve (note that search engines tend to generate good snippets for queries).
Apart from exploring the full potential of profiles in the given context, our future work will focus on multi-document summarisation and the application of the profiles in navigation rather than search, an area where profile-based suggestion of links has already been demonstrated to work well (Saad and Kruschwitz 2013).
Table 1: An example of different snippets for the same document returned for the query "tuition fees".
Table 2: Average completion time (in seconds).
Table 3: Average number of turns to complete a task.
"Computer Science"
] |
Inhibited Insulin Signaling in Mouse Hepatocytes Is Associated with Increased Phosphatidic Acid but Not Diacylglycerol*
Background: The mechanism underlying the association of triacylglycerol storage and insulin resistance is unclear. Results: Increasing phosphatidic acid (PA) in primary hepatocytes via de novo synthesis or action of phospholipase D or diacylglycerol kinase-θ disrupts insulin signaling. Conclusion: PA derived from different sources inhibits insulin signaling. Significance: Increases in hepatocyte PA may mechanistically link lipid storage and insulin action. Although an elevated triacylglycerol content in non-adipose tissues is often associated with insulin resistance, the mechanistic relationship remains unclear. The data support roles for intermediates in the glycerol-3-phosphate pathway of triacylglycerol synthesis: diacylglycerol (DAG), which may cause insulin resistance in liver by activating PKCϵ, and phosphatidic acid (PA), which inhibits insulin action in hepatocytes by disrupting the assembly of mTOR and rictor. To determine whether increases in DAG and PA impair insulin signaling when produced by pathways other than that of de novo synthesis, we examined primary mouse hepatocytes after enzymatically manipulating the cellular content of DAG or PA. Overexpressing phospholipase D1 or phospholipase D2 inhibited insulin signaling and was accompanied by an elevated cellular content of total PA, without a change in total DAG. Overexpression of diacylglycerol kinase-θ inhibited insulin signaling and was accompanied by an elevated cellular content of total PA and a decreased cellular content of total DAG. Overexpressing glycerol-3-phosphate acyltransferase-1 or -4 inhibited insulin signaling and increased the cellular content of both PA and DAG. Insulin signaling impairment caused by overexpression of phospholipase D1/D2 or diacylglycerol kinase-θ was always accompanied by disassociation of mTOR/rictor and reduction of mTORC2 kinase activity. However, although the protein ratio of membrane to cytosolic PKCϵ increased, PKC activity itself was unaltered. These data suggest that PA, but not DAG, is associated with impaired insulin action in mouse hepatocytes.
The accumulation of triacylglycerol in non-adipose tissues is highly associated with insulin resistance, but the mechanistic relationship remains unclear. Both diacylglycerol (DAG) and phosphatidic acid (PA) have been implicated as modulators of insulin signaling that contribute to insulin resistance, but controversy exists as to their sources, their specific roles, and their mechanism(s) of action (1). PA and DAG are produced via the pathway of de novo glycerolipid biosynthesis, and PA can also be produced by phospholipase D (PLD)-mediated phospholipid hydrolysis and by phosphorylation of DAG by DAG kinase (DGK); the latter enzyme simultaneously diminishes the DAG substrate. DAG is believed to cause insulin resistance when it activates conventional and novel PKC isoforms that inhibit insulin signaling by serine phosphorylation of insulin receptor substrate (IRS), thereby decreasing insulin-stimulated tyrosine phosphorylation (2-4).
The link between elevated DAG and hepatic insulin resistance is strong (2,5,6), but exceptions remain in which hepatic insulin resistance is unaffected despite elevated DAG content (7-9) or in which insulin resistance occurs without an increase in DAG (10). These discrepancies could result from sequestration of DAG in lipid droplets, in which the DAG may be unable to activate PKC because it is poorly accessible (11).
We have reported that PA derived from overexpressing glycerol-3-phosphate acyltransferase (GPAT) or acylglycerol-3-phosphate acyltransferase (AGPAT) disrupts the association of mTOR and rictor and inhibits insulin signaling to Akt, thereby contributing to hepatic insulin resistance (12,13). In contrast, PLD-mediated hydrolysis of phosphatidylcholine produces PA that is said to activate mTORC1 signaling and inhibit insulin signaling by enhancing IRS1 phosphorylation at serine sites (14,15). PLD-derived PA also enhances the association of mTOR and rictor in kidney and breast cancer cells (16). In addition to synthesis by PLD, PA can also be generated by phosphorylation of DAG by one of ten DGK isoforms (17). DGK alters the cellular content of DAG and PA simultaneously and in opposite directions (18). Although one study showed an association between DGKδ inhibition and insulin resistance in skeletal muscle, that study examined changes in DAG, but not PA (19). Thus, it is not known whether a DGK-mediated increase in PA content leads to impaired insulin signaling.
We asked whether one or both of the lipid intermediates, PA and DAG, is involved in regulating hepatic insulin sensitivity and whether lipid intermediates that originate from different enzymatic sources exert different effects. When we overexpressed DGKθ and isoforms of PLD and GPAT in mouse primary hepatocytes, our results showed that, regardless of its source, elevated PA, particularly PA containing 16:0 fatty acid species, was associated with impaired insulin action.
Mouse Liver Perfusion, Hepatocyte Isolation, and Culture-Hepatocytes were isolated from perfused livers of 8-15-week-old C57BL/6J mice (12) and cultured overnight in Williams' medium E supplemented with 10% FBS, 1% penicillin/streptomycin, and 4 mM glutamine. Hepatocytes were infected with adenovirus or lentivirus constructs of the different enzymes in FBS-free Williams' medium E supplemented with 1% penicillin/streptomycin and 4 mM glutamine for 24 h, followed by treatments for specific experiments.
cDNA Cloning, Adenovirus Production, and Hepatocyte Infection-The construction, generation, and purification of recombinant Gpat1-FLAG adenovirus and Ad-enhanced green fluorescent protein (EGFP) were described previously (21). Gpat3 cDNA was cloned from an adipocyte cDNA library, and a FLAG tag (Clontech) was added at the carboxyl terminus. Gpat4 cDNA was cloned as described (22). EGFP, Gpat1-FLAG, Gpat3-FLAG, and Gpat4-FLAG constructs were subcloned into adenoviral vectors using the AdEasy adenoviral vector system (Stratagene). EGFP, Gpat1, Gpat3, and Gpat4 adenoviruses were packaged and titered by the UNC Gene Therapy Core Facility. Adenoviral constructs of PLD1, PLD2, and the catalytically inactive mutants of PLD1 and PLD2 were produced as described (23). MOIs of 10-20 were used for mouse hepatocyte adenoviral infection, based on pilot experiments, to obtain high efficiency of infection without toxic effects on the cells.
cDNA Cloning, Lentiviral Packaging, and Hepatocyte Infection-Lentiviral constructs for GFP and mouse DGKθ were generous gifts from Dr. Daniel M. Raben (Johns Hopkins University, Baltimore, MD). The constructs were transfected into HEK293T cells together with the lentiviral packaging plasmids pCMV delta-R8.2 and pCMV-VSV-G (Addgene, Cambridge, MA). The crude lentiviruses (~1-2 × 10^8 transduction units/ml) produced in DMEM containing 4.5 g/liter glucose, 30% FBS, 0.1% penicillin/streptomycin, and 4 mM glutamine were concentrated by ultracentrifugation for 2 h at 25,000 × g at 4 °C. The lentivirus pellet was reconstituted in phosphate-buffered saline to obtain concentrated lentivirus with a titer of 1-2 × 10^10 TU/ml. An MOI of 1 was used for mouse hepatocyte lentiviral infection, based on pilot experiments, to obtain high efficiency of infection without toxic effects on the cells.
Lipid Extraction and PA and DAG Assays-Total cell lipid was extracted (24), and PA and DAG were analyzed by LC/MS on a Shimadzu Prominence ultra-fast liquid chromatography system with a C8 column and detected with an Applied Biosystems 4000 QTRAP triple quadrupole LC/MS/MS system equipped with an electrospray ionization source as described (12,25). The amount of each glycerolipid species in the biological samples was calculated from the peak areas obtained using the software that controls the LC/MS system (Analyst 1.5; Applied Biosystems). Raw peak areas were corrected for recovery and sample loading and then transformed into amounts of analyte using standard curves made with commercially obtained glycerolipids. To correct for recovery, 0.1 nmol of 17:0 lysophosphatidic acid was added as an internal standard, and amounts were normalized to the protein concentrations of the cellular lysates.
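The quantification arithmetic described above can be summarised as follows (a sketch: the peak areas, standard-curve slope, and protein amount are invented placeholders, and the exact recovery correction implemented in Analyst may differ):

```python
IS_SPIKED_NMOL = 0.1                     # 17:0 LPA internal standard per sample

def nmol_per_mg(analyte_area, is_area, is_expected_area, std_curve, mg_protein):
    """Convert a raw LC/MS peak area into nmol analyte per mg protein."""
    recovery = is_area / is_expected_area        # fraction of spiked IS recovered
    corrected_area = analyte_area / recovery     # correct for recovery/loading
    nmol = std_curve(corrected_area)             # external standard curve
    return nmol / mg_protein

# Hypothetical linear standard curve fitted to commercial PA standards.
std_curve = lambda area: 2.0e-6 * area
print(nmol_per_mg(5.2e5, 2.1e5, 2.6e5, std_curve, mg_protein=0.4))
```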
GPAT Activity-Mouse hepatocytes were homogenized in buffer (10 mM Tris-HCl, pH 7.4, 250 mM sucrose, 1 mM DTT, 1.0 mM EDTA). Total membrane fractions were isolated by centrifuging at 100,000 × g for 1 h. Protein concentrations were determined by the bicinchoninic acid method using bovine serum albumin as the standard. GPAT specific activity was assayed for 10 min at 25 °C with 800 µM [3H]glycerol-3-phosphate and 100 µM palmitoyl-CoA. The reaction was initiated by adding 10-30 µg of total membrane protein to the assay mixture after incubating the membrane protein on ice for 15 min in the presence or absence of 1 mM NEM, which inhibits GPAT2, GPAT3, and GPAT4 (26). Microsomal activity (primarily GPAT3 and GPAT4) (22) was calculated as the activity inactivated by NEM.
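The NEM arithmetic amounts to a simple subtraction (the values below are hypothetical, for illustration only):

```python
# Microsomal (NEM-sensitive, GPAT3/GPAT4) activity is total activity minus
# the NEM-resistant (GPAT1) activity measured after NEM pre-incubation.
total_activity = 4.8        # nmol/min/mg, without NEM (hypothetical)
nem_resistant = 1.5         # nmol/min/mg, with 1 mM NEM (hypothetical)
nem_sensitive = total_activity - nem_resistant
print(f"microsomal (NEM-sensitive) GPAT: {nem_sensitive:.1f} nmol/min/mg")
```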
PLD Activity-PLD activity was measured as previously described, via the [3H]phosphatidylbutanol produced in [3H]palmitate-labeled hepatocytes through PLD-catalyzed transphosphatidylation (27). Briefly, 24 h after adenoviral infection, hepatocytes were labeled for 24 h with [3H]palmitate in the absence of serum. The labeling medium was then replaced with fresh medium containing 0.3% 1-butanol with or without 100 nM PMA, and 30 min later total cellular lipids were extracted and separated on Whatman LK5D silica gel thin-layer chromatography plates (with an authentic phosphatidylbutanol positive control) (27,28) in a developing solvent system of ethyl acetate:2,2,4-trimethylpentane:acetic acid:water (110:50:20:100, v/v/v/v). The plate was dried and exposed to iodine. The iodine-stained phosphatidylbutanol spot was marked and scraped, as was the remainder of the lane. Relative PLD activity was expressed as the ratio of the scintillation counts in the phosphatidylbutanol spot to those of the entire lane (27).
PKC Activity-Hepatocytes were washed with cold buffer I (20 mM Tris-HCl, pH 7.5, 2.0 mM EDTA, 0.5 mM EGTA, 1 mM phenylmethylsulfonyl fluoride, 1 mM dithiothreitol, and 0.33 M sucrose), scraped from the dish, centrifuged, and resuspended in cold buffer I containing 25 µg/ml leupeptin and 0.1 mg/ml aprotinin. The cells were homogenized with 20 up-and-down strokes at 4 °C in a motorized Teflon-glass homogenizer. The soluble fraction, obtained after centrifuging at 100,000 × g for 60 min, was defined as the cytosolic extract. The pellets were washed with cold buffer I, resuspended in cold buffer II (buffer I without sucrose) containing 1% Triton X-100, and homogenized with 10 up-and-down strokes at 4 °C. This fraction was defined as the membrane fraction. PKC activity was assayed with a protein kinase C assay system (Promega) according to the manufacturer's instructions. PKC activity was measured as the calcium-, phosphatidylserine-, and DAG-stimulated transfer of 32P from [γ-32P]ATP to the substrate peptide (AAKIQAS*FRGHMARKK; the asterisk indicates that the preceding serine is the site phosphorylated by PKC) and expressed as pmol 32P/min/mg protein.
Immunoprecipitation and Kinase Activity Assay-Immunoprecipitation of rictor and the mTORC2 kinase assay were performed as described (31). Western blots were probed with phospho-Akt (Ser-473) antibody to assess mTORC2 activity.
Western Blot Analysis-Hepatocytes were harvested in lysis buffer (20 mM Tris-HCl, pH 7.5, 0.1 mM Na3VO4, 25 mM NaF, 25 mM glycerophosphate, 2 mM EGTA, 1 mM dithiothreitol, 0.5 mM phenylmethylsulfonyl fluoride, and 0.3% Triton X-100). The lysates were mixed 1:1 with 2× Laemmli sample buffer and boiled before loading onto SDS-PAGE. Western blotting was carried out according to the procedures recommended by the suppliers of the antibodies. Horseradish peroxidase-conjugated secondary antibodies were detected with SuperSignal West Pico chemiluminescent substrate and exposure to x-ray film. The film was converted to digital images with an Epson scanner (Perfection 2400), and the images were cropped using Photoshop CS2.
Statistical Analysis-Values are expressed as means ± S.E. Comparisons were made using Student's t test (EGFP or GFP set as the control). The data represent at least three independent experiments, each performed in triplicate. p < 0.05 was considered significant.
RESULTS
Overexpressed PLD1 or PLD2 Impaired Insulin Signaling to Akt in Mouse Hepatocytes-The PLD-mediated hydrolysis of phosphatidylcholine to produce PA has taken center stage in mTOR signaling (32,33). PA derived via PLD stimulates mTORC1/S6K1 signaling and increases the phosphorylation of IRS-1 on serines 307, 632, and 636/639, residues that inhibit insulin signaling to Akt (34,35). In contrast to the inhibition of insulin signaling by mTORC1, insulin signaling is enhanced by mTORC2, which phosphorylates and activates Akt at serine 473 (31). Because the PA derived via PLD in human kidney and mammary carcinoma cells stabilizes both the mTORC1 and mTORC2 complexes (16), PA could produce the paradoxical outcomes of both enhanced and inhibited insulin signaling.
To investigate the role of PLD in hepatic insulin signaling, we overexpressed PLD1 and PLD2 in primary mouse hepatocytes. Although the antibody for PLD1 also recognizes PLD2, overexpression was confirmed by Western blotting (Fig. 1A); by 2-fold (PLD1) and 25-fold (PLD2) increases in basal activity relative to EGFP; and by 10-fold (PLD1) and 32-fold (PLD2) increases in PMA-stimulated PLD activity (Fig. 1B). Overexpressing catalytically inactive PLD1 and PLD2 constructs did not increase PLD activity (Fig. 1B). Overexpressing PLD1 and PLD2 in mouse hepatocytes completely blocked insulin signaling, as shown by absent insulin-stimulated Akt phosphorylation at serine 473 and threonine 308. The specificity of these results was confirmed by the absence of inhibition when the inactive mutants of PLD1 and PLD2 were used (Fig. 1C). Thus, PLD-derived signals disrupted insulin signaling in hepatocytes. Overexpression of PLD1, PLD2, or the inactive PLD constructs did not alter basal or insulin-stimulated phosphorylation of IRS1 (Ser-612), consistent with an effect occurring downstream of the insulin receptor (Fig. 1, D and E).
Overexpressing PLD1 and PLD2 Increased the Cellular Content of Total PA and di16:0 PA, but Did Not Change Total DAG or di16:0 DAG Content-Our previous study showed that overexpression of GPAT1 in primary mouse hepatocytes diminished insulin signaling, blocked phosphorylation of Akt at the mTORC2 site serine 473, and markedly increased PA species, particularly di16:0 PA. Furthermore, of several molecular species of PA and LPA tested, only di16:0 PA disrupted the association of mTOR and rictor (12). To determine which lipids produced by PLD1 and PLD2 were associated with disrupted insulin signaling in mouse hepatocytes, we examined the cellular content of PA and DAG species. Compared with cells expressing EGFP, overexpressed PLD1 increased total PA, 16:0-containing PA, and the remaining PA species 7.9-, 10-, and 5.8-fold, respectively, and increased di16:0 PA 4.4-fold (Fig. 2, A and B, and Table 1). Overexpressed PLD2 increased total PA, 16:0-containing PA, and the remaining PA species 1.7-, 1.9-, and 1.5-fold, respectively, and increased di16:0 PA 3.2-fold (Fig. 2, A and B, and Table 1). Overexpressed PLD1 or PLD2 did not alter the total DAG content or the content of any DAG species (Fig. 2, C and D, and Table 2). These results indicate that PA, but not DAG, was associated with impaired insulin signaling. The eight PA species identified were increased 2.6-13-fold by PLD1 overexpression; however, PLD2 overexpression increased di16:0 PA to the greatest extent (3.2-fold) (Fig. 2, A and B, and Table 1). Because the block in Akt phosphorylation was similar with overexpression of either PLD1 or PLD2 (Fig. 1C), these results suggest that di16:0 PA is most strongly associated with impaired insulin signaling.
Overexpressed DGKθ Impaired Insulin Signaling to Akt-Because PLD overexpression resulted in impaired insulin signaling, we asked whether PA produced by DGK-mediated phosphorylation of DAG would similarly inhibit insulin signaling. The DGK reaction is of particular interest because it reciprocally modulates the relative concentrations of DAG and PA, two lipids involved in signal transduction. Although DGKθ is expressed in liver (36,37), little is known about its physiological function related to insulin signaling. Overexpression of the DGKθ protein was confirmed by Western blotting (Fig. 3A) and by a 4-fold increase in DGK activity (Fig. 3B). Overexpressed DGKθ almost completely blocked insulin signaling, as shown by the absence of insulin-stimulated Akt phosphorylation at serine 473 and threonine 308 (Fig. 3C). Overexpressing DGKθ did not alter the phosphorylation of IRS1 (Ser-612) (Fig. 3, D and E), suggesting that the lipid signal derived from DGKθ had disrupted the insulin signaling pathway downstream of IR/IRS1.
Overexpressing DGKθ Increased the Cellular Content of Total PA and di16:0 PA, and Decreased Total DAG and di16:0 DAG-To learn which lipid signals produced by DGKθ were associated with impaired hepatic insulin signaling, we overexpressed DGKθ in mouse hepatocytes and examined the cellular content of PA and DAG. Compared with the GFP control, overexpressing DGKθ increased total PA, 16:0-containing PA species, and the remaining PA species by 2.3-fold and increased di16:0 PA 9.4-fold (Fig. 4, A and B, and Table 1). Overexpressing DGKθ decreased total DAG, 16:0-containing DAG, and other DAG species by 71, 91, and 27%, respectively, and decreased di16:0 DAG by 95% (Fig. 4, C and D, and Table 2). These results show a strong association between diminished insulin signaling and the hepatocyte content of PA, especially di16:0 PA, but not with DAG content or species.

Overexpressing Gpat1 or Gpat4, but Not Gpat3, Impaired Insulin Signaling to Akt-In addition to PLD and DGK, increasing flux through the glycerolipid biosynthetic pathway can increase cellular PA content. Overexpressing GPAT1 in rat liver increases hepatic triacylglycerol content and induces hepatic insulin resistance (36), and overexpressing GPAT1 in cultured mouse hepatocytes diminishes insulin-mediated suppression of glucose production by disrupting the association between mTOR and rictor (12). Of the four GPAT isoforms found in mammals, GPAT1 and GPAT2 are located on the outer membrane of mitochondria, and GPAT3 and GPAT4 reside on the endoplasmic reticulum (26) and on lipid droplets (37). GPAT1 has a strong substrate preference for 16:0-CoA, and GPAT4 shows a mild preference for acyl-CoAs that contain 16 and 18 carbons, but GPAT2 and GPAT3 exhibit no acyl-CoA specificity (22,38). To investigate the roles of different GPAT isoforms in regulating insulin sensitivity, we overexpressed GPAT1, GPAT3, and GPAT4 in mouse primary hepatocytes by infecting the cells with adenoviral GPAT isoform constructs, each containing a carboxyl-terminal FLAG tag. Overexpression was confirmed by Western blotting (Fig. 5A) and by 5-fold (GPAT1), 4-fold (GPAT3), and 5.5-fold (GPAT4) increases in total GPAT activity (Fig. 5B), consistent with our previous studies (13). Overexpressing either GPAT3 or GPAT4 increased NEM-sensitive GPAT activity, whereas overexpressing GPAT1 increased NEM-resistant GPAT activity (Fig. 5B), consistent with the fact that GPAT1 is NEM-resistant and GPAT3 and GPAT4 are NEM-sensitive (26). Overexpressing GPAT1 or GPAT4 completely blocked insulin signaling to Akt, as shown by absent Akt phosphorylation at serine 473 and threonine 308, the sites that are essential for full Akt activation. In contrast, overexpressing GPAT3 did not result in diminished Akt phosphorylation (Fig. 5C). These results replicate our previous report (13) and suggest (a) that signals from the GPAT-initiated pathway of lipid synthesis impair insulin signaling and (b) that the location of the GPAT isoform does not determine its effect on insulin sensitivity.
Overexpressing GPAT4, but Not GPAT3, Increased the Cellular Content of Total PA and DAG, and di16:0 PA, but Not di16:0 DAG-In mouse hepatocytes that overexpress GPAT1, inhibition of insulin signaling correlates with increases in the cellular content of both DAG and PA, particularly 16:0-containing PA and di16:0 PA (12). To learn whether this correlation exists for other GPAT isoforms, we examined the cellular content of PA and DAG in mouse hepatocytes that overexpressed GPAT3 or GPAT4. GPAT4 overexpression increased the content of total PA 4-fold, di16:0 PA species 3.6-fold, and non-16:0 PA species 4.9-fold (Fig. 6A and Table 1), consistent with a previous study (13). In contrast, GPAT3 overexpression increased the content of total PA ~2.5-fold, but this increase was in non-16:0 PA species (Fig. 6, A and B, and Table 1). GPAT4 overexpression also increased the total DAG content and the content of DAG species containing 16:0, but GPAT3 overexpression did not alter the content of DAG (Fig. 6, C and D, and Table 2). Thus, the cellular content of both PA and DAG, especially the di16:0 PA and DAG species, correlated with diminished insulin signaling.
Overexpression of PLD1, PLD2, or DGKθ Diminished the mTOR-Rictor Association and Inhibited mTORC2 Activity-To test the mechanism by which overexpressing PLD1/2 and DGKθ blocked insulin signaling, we immunoprecipitated rictor and assayed mTORC2 kinase activity using inactive Akt as the substrate. The expression of any of the three enzymes, PLD1, PLD2, or DGKθ, disrupted the mTOR-rictor association (Fig. 7, A and B) and inhibited mTORC2 activity (Fig. 7, A and C), indicating that, similar to the overexpression of GPAT1 or GPAT4 in mouse hepatocytes (12,13), the overexpression of PLD1, PLD2, or DGKθ blocked insulin signaling by disrupting the mTOR-rictor association and inhibiting mTORC2 activity.

FIGURE 5. Overexpressed GPAT1 and GPAT4 in mouse hepatocytes, but not GPAT3, impaired insulin signaling to Akt. Mouse primary hepatocytes were infected for 24 h with EGFP, FLAG-GPAT1, FLAG-GPAT3, or FLAG-GPAT4 adenoviruses. Cells were lysed and subjected to Western blotting with anti-FLAG antibodies (A); scraped from the dish, centrifuged to obtain total particulate preparations, and assayed for total and NEM-resistant (NEM-R) GPAT activity (B); or treated with or without insulin (100 nM) for 10 min, followed by cell lysate preparation for Western blotting (C). A and C show representative Western blots from three independent experiments; B shows data from three independent experiments, each done in triplicate. * and #, p < 0.05 compared with EGFP total and NEM-resistant GPAT activity, respectively.
Overexpression of GPAT1 and GPAT4 Increased the Protein Ratio of Membrane to Cytosolic PKCε but Did Not Alter PKC Activity-DAG is believed to inhibit insulin signaling in liver by recruiting and activating PKCε at the membrane (5). However, as with a previous report that focused on GPAT1 (12), expressing the GPAT4 isoform was not helpful in distinguishing between the roles of PA and DAG as regulators of insulin signaling. Therefore, to determine whether the increased DAG content present in GPAT4-overexpressing cells had interfered with insulin signaling, we asked whether the location and activity of PKCε had changed. The locations of PKCα and the other PKC isoforms examined did not change, but, as previously reported for overexpression of GPAT1 in rat liver (36), overexpressing either GPAT1 or GPAT4 increased the ratio of membrane to cytosolic PKCε protein (Fig. 8A), a ratio that is often accepted as a proxy for PKC activation. However, similar to a previous report (39), this change resulted from a reduction in the cytosolic content of PKCε; the membrane PKCε content did not change (Fig. 8B). More importantly, PKC activity itself did not increase, in contrast to the 2.5-fold increase in activity produced by the PMA control (Fig. 8C). The decrease in cytosolic DAG may have resulted from sequestration of DAG in lipid droplets where it would not be accessible to membrane-associated PKCε (11). These data suggest that when GPAT is overexpressed in cultured mouse hepatocytes to increase glycerolipid synthesis, PA molecules that contain 16:0 acyl groups are the lipid intermediates most likely to inhibit the insulin signaling pathway.
DISCUSSION
In the current study, we manipulated the cellular content of PA and DAG by overexpressing key enzymes that catalyze the production of these two lipid intermediates and demonstrated that the cellular content of PA, but not DAG, correlates closely with the inhibition of insulin signaling in mouse hepatocytes. Lipin, which synthesizes DAG from PA, was not included in this study because we previously reported that overexpressing Lipin2 does not inhibit insulin signaling in mouse hepatocytes; the overexpression of Lipin2 results in only a 1.24-fold increase in total PA with no change in di16:0 PA or total DAG (12).
PLD-derived PA appears to promote mTOR-raptor assembly, thereby activating mTORC1, which inhibits insulin signaling by enhancing IRS1 phosphorylation at serine sites (14,16). We have reported that GPAT1-derived PA disrupts the assembly of mTOR and rictor, thereby diminishing phosphorylation of Akt (Ser-473) and inhibiting insulin signaling (12). Others, however, have shown that PLD-derived PA promotes the assembly of both mTOR-raptor and mTOR-rictor (16,33). How can chemically identical molecules have opposite functions? It was proposed that, because different PA-producing enzymes are located on or within specific subcellular organelles, the PA molecules produced by these enzymes might function differently because of their cellular compartmentalization (1,26,41). PA derived from GPAT (via AGPAT) is produced at the outer mitochondrial membrane, at the lipid droplet, or at the ER surface (26,42,43); the PA produced by PLD1 is present in the Golgi and the nucleus (44); and DGKθ produces PA at the plasma membrane, ER, and nucleus (45). However, because our previous and current data show that each of three different catalytic pathways, GPAT/AGPAT, PLD, and DGK, provides PA that inhibits insulin signaling to Akt, the ability of PA to disrupt insulin signaling suggests that the effect of the PA may be unrelated to its source. Instead, the ability of PA to stabilize or disrupt mTORC may depend on the length and saturation of its fatty acid chains (1,41). Our work has shown that overexpressing any of five enzymes in mouse hepatocytes, GPAT4, PLD1, PLD2, DGKθ, or GPAT1, impaired insulin signaling (Figs. 1, 3, and 5) and significantly increased the content of PA, particularly di16:0 PA (Figs. 2, 4, and 6 and Table 1). Although the hepatic PA content produced by GPAT3 was similar to that produced by DGKθ, it did not inhibit Akt phosphorylation, supporting the idea that the individual PA species are more critical than the total amount of PA. This result is consistent with our previous report that, of 10 lysoPA, DAG, and PA species tested, only di16:0 PA interfered with mTORC2 assembly (13).
Although many studies have concluded that DAG-mediated activation of PKC inhibits insulin signaling (2), divergent findings have been reported. In both animal and human studies, elevated DAG content caused by lipid infusion or overload does not necessarily lead to suppressed insulin action or insulin resistance, and insulin resistance is not always accompanied by increases in DAG content (1,7-9). The current study provides data from cultured primary mouse hepatocytes showing that insulin resistance does not always correlate with DAG content and that the ratio of membrane to cytosolic PKC protein is not invariably a proxy for PKC activation.
Of the nine PKC isoforms, PKCε has been most widely reported as activated by DAG in liver to suppress insulin action (5). However, the different DAG species vary in their ability to activate PKC (41). Activation of PKC requires stereospecific 1,2-sn-diacylglycerols (46). 1,2-sn-di18:1 DAG and 1,2-sn-di20:4 DAG show the strongest stimulatory effect on PKC activity, and sn-1-18:0-2-18:1 DAG, sn-1-18:0-2-18:2 DAG, and sn-1,2-di18:2 DAG are also effective (47). However, most studies in which an increase in tissue content of DAG has been reported to activate PKC and cause insulin resistance have not distinguished among DAG species, making it unclear whether the activation was caused by an increase in total DAG or by a particular DAG species. We previously reported that overexpressing GPAT1 or AGPAT2 in mouse hepatocytes inhibited insulin signaling; this inhibition was accompanied by increases in the cellular content of both PA and DAG, suggesting that both these lipids might impair insulin signaling (12). When we identified the lipid species, we determined that overexpressing GPAT1 or AGPAT2 did not increase the reportedly effective DAG species, di18:1 DAG and 18:1-18:2 DAG. Our data suggested that, although total DAG content increased 1.5-2.2-fold when GPAT1 or AGPAT2 was overexpressed, DAG might not have been the major inhibitor of insulin signaling (12). In the current study, both overexpressed PLD1 and PLD2 impaired insulin signaling in mouse hepatocytes but did not increase total DAG content, while only minimally increasing di18:1 DAG and 18:2-18:2 DAG (Figs. 1 and 2 and Table 2). In addition, overexpressing DGKθ impaired insulin signaling in mouse hepatocytes and decreased, rather than increased, total DAG and di18:1 DAG (Figs. 3 and 4 and Table 2). These results dissociate inhibited insulin signaling from hepatocyte DAG content. Similarly, although overexpressing GPAT4 in mouse hepatocytes impaired insulin signaling, PKC was not activated despite a 2.1-fold increase in total DAG content, perhaps because the DAG was sequestered in lipid droplets or because di18:1 DAG content increased only 20% and di18:2 DAG decreased.
Although some early studies of hepatic insulin resistance directly measured the activity of PKC, these studies did not show a correlation between hepatic insulin resistance, PKC activation, and increased DAG content (48,49). Except for a single paper that measured PKC activity (50), the majority of recent studies of PKC activity and hepatic insulin resistance measured only the ratio of membrane to cytosolic PKCε protein. The current study shows that an increase in this ratio does not necessarily represent a change in PKC activity; in the present case, the ratio changed because the amount of cytosolic PKCε protein consistently decreased in numerous preparations. A similar discrepancy between the ratio of membrane to cytosolic PKCε and PKCδ proteins and PKC activity was reported in mice with deficient hepatic MGAT1 (39). Notably, overexpressing any of the five enzymes did not change IRS1 phosphorylation at serine 612 (Figs. 1, D and E, and 3, D and E), the site that is specifically phosphorylated by PKC (3), providing additional evidence that DAG does not mediate the inhibition of insulin signaling by these enzymes. The most direct evidence for DAG activation of PKC comes from in vitro studies in which purified PKC protein was added to a reaction system in which a synthesized peptide substrate was present together with phospholipids, Ca2+, and ATP, in the presence or absence of DAG. Results of in vitro studies showed 2-6-fold increases in PKC activity when particular species of DAG were present (40,47). Because conventional and novel PKC isoforms are activated by DAG, many commercial kits for PKC assays include DAG in the activation or coactivation buffer, which makes it unclear whether changes observed in PKC activity are caused by endogenous or exogenous DAG. Acute changes in DAG may not play the same role in live cells, in which PKC might have already interacted with cellular DAG and might not be further affected by the DAG provided in commercial kits.
Our study has two potential limitations. Although the enzyme activities are within normal ranges for mammalian tissues, they do result from the overexpression of enzymes whose subcellular locations are not known, so it is possible that PA produced endogenously by each of these pathways would not have the same effect. A second caveat is that the primary hepatocytes were cultured in Williams's medium E, which provides 5.5 mM glucose as the major energy source. The glucose may have enhanced ChREBP-controlled de novo lipogenesis, thereby increasing the production of 16:0 fatty acids that were then incorporated into PA via the GPAT pathway and into phospholipids available for PLD hydrolysis.
FIGURE 8. Overexpressing GPAT1 or GPAT4 increased the protein ratio of membrane to cytosolic PKCε but did not alter PKC activity. Membrane and cytosolic preparations from mouse primary hepatocytes overexpressing EGFP, Flag-GPAT1, Flag-GPAT3, or Flag-GPAT4 were subjected to Western blotting (A) and protein quantification (B) or assayed for PKC activity (C). A and B show representative Western blots and protein quantifications from three independent experiments; C shows data from three independent experiments, each done in triplicate. *, p < 0.05 compared with EGFP.
Finding a relationship between an increased cellular content of distinct PA species and impaired insulin signaling provides a new therapeutic target for the treatment of insulin resistance. However, the mechanism of PA action in regulating insulin signaling requires further study. Although we have reported that di16:0 PA disrupts mTOR/rictor assembly in vitro, the mechanisms that underlie this disruption and the interaction of di16:0 PA with mTORC2 are not known. Likewise, a full understanding of the molecular mechanism of lipid intermediate-mediated insulin resistance may also require the identification of particular DAG species that effectively activate PKC.
"Biology",
"Medicine",
"Computer Science"
] |
Realistic simulations of a cyclotron spiral inflector within a particle-in-cell framework
We present an upgrade to the particle-in-cell ion beam simulation code OPAL that enables us to run highly realistic simulations of the spiral inflector system of a compact cyclotron. This upgrade includes a new geometry class and field solver that can handle the complicated boundary conditions posed by the electrode system in the central region of the cyclotron, both in terms of particle termination and calculation of self-fields. Results are benchmarked against the analytical solution of a coasting beam. As a practical example, the spiral inflector and the first revolution in a 1 MeV/amu test cyclotron, located at Best Cyclotron Systems, Inc., are modeled and compared to the simulation results. We find that OPAL can now handle arbitrary boundary geometries with relative ease. Simulated injection efficiencies and beam shape compare well with measured efficiencies and a preliminary measurement of the beam distribution after injection.
I. INTRODUCTION
OPAL [1] is a particle-in-cell (PIC) code developed for the simulation of particle accelerators. It is highly parallel and comes in two distinct flavors: OPAL-CYCL (specific to cyclotrons and rings) and OPAL-T (general purpose). For the presented application, we focus on OPAL-CYCL, which has been used very successfully to simulate existing high intensity cyclotrons like the PSI Injector II [2] and the PSI Ring Cyclotron [2,3], as well as to design new cyclotrons like CYCIAE [4], DAEδALUS [5,6], and IsoDAR [7,8].
However, one piece has been missing so far: the axial injection using a spiral inflector or an electrostatic mirror. Both are electrostatic devices that bend the beam from the axial direction into the midplane of the cyclotron, where it is subsequently accelerated. A schematic view of this type of injection for a spiral inflector is shown in Fig. 1.
In order to enable OPAL-CYCL to track particles through a complicated set of electrodes like a spiral inflector, the following additions have been made: (i) The Smooth Aggregation Algebraic Multi Grid (SAAMG) solver [9] has been extended to include arbitrarily shaped boundaries. (ii) A geometry class has been implemented that provides boundary conditions to the field solver and handles particle termination should their trajectories intersect with the electrodes. (iii) A number of improvements have been made to the internal coordinate transformations and the handling of beam rotations in OPAL-CYCL in order to accommodate the injection off-midplane. These additions will be discussed in Sec. II and references therein. At the end of that section, a benchmark against the analytical solution of a coasting beam in a grounded beam pipe with variable axial offset will be presented.
FIG. 1. Schematic of a spiral inflector with particle trajectories from an OPAL simulation. The beam enters axially (from the top) through an aperture (grey) and is bent into the midplane by a combination of the electrostatic field generated by the spiral electrodes (green and blue, voltage typically symmetric at +V_spiral and −V_spiral) and the cyclotron's main magnetic field. It is then accelerated by the two Dees (copper; Dummy-Dees not shown). Color online.
A specific example of the usefulness of realistic spiral inflector simulations is the ongoing R&D effort for the DAEδALUS and IsoDAR experiments (described briefly in Sec. III). In both cases, a very high intensity beam (≈10-30 mA average) needs to be injected, which is higher than current state-of-the-art cyclotrons have demonstrated. Part of the R&D for these projects was an experiment in collaboration with Best Cyclotron Systems, Inc. (BCS) in Vancouver, Canada, to test a flat-field ECR ion source, transport through the Low Energy Beam Transport System (LEBT), and finally injection into a small test cyclotron through a spiral inflector. The results of this campaign are described in much detail in [10], and the most important points will be reiterated in Sec. III before OPAL simulation results are benchmarked against the experimental results.
II. THE PARTICLE-IN-CELL CODE OPAL
For this discussion we briefly introduce OPAL-CYCL [2], one of the flavors of OPAL. OPAL uses the PIC method to solve the collisionless Vlasov equation (cf. Sec. II A) in the presence of external and self-fields (cf. Sec. II B and Sec. II C). Particles are tracked using a 4th order Runge-Kutta (RK) integrator, in which the external fields are evaluated four times in each time step. Self-fields are assumed to be constant during one time step, because their variation is typically much slower than that of the external fields. Boundary conditions for the field solver and particle termination are provided by the geometry class (cf. Sec. II D).
A. Governing equation
Collisions between particles can be neglected in the cyclotron under consideration, because the typical bunch densities are low. The general equations of motion of charged particles in electromagnetic fields can then be expressed in the time domain by the Lorentz force law, dp/dt = q(E + cβ × B), with p = m_0 c γ β the mechanical momentum of the particle and m_0, q, γ, c the rest mass, charge, relativistic factor, and speed of light, respectively; β = (β_x, β_y, β_z) is the normalized velocity vector. The time- and position-dependent electric and magnetic vector fields E(x, t) and B(x, t) are written as E and B for brevity.
For M particles and the canonical variables x_j (the position) and P_j = p_j − qA_j (the momentum) of the jth particle, the evolution of the beam's distribution function f(x_j, P_j, t): (ℝ^3M × ℝ^3M × ℝ) → ℝ can be expressed by the collisionless Vlasov equation df/dt = 0, where A denotes the vector potential. In this particular case, the vector fields E and B include both external (applied) fields and space-charge (self) fields; all other fields are neglected.
B. External fields
With respect to the external magnetic fields there are two options: (1) OPAL can calculate the off-median-plane (z ≠ 0) components from a field map obtained for the cyclotron midplane, or (2) a 3D field map can be calculated externally and loaded into OPAL. For external electric fields only option two is available.
In option one, only the vertical magnetic field component B_z is known, and only in the median plane (z = 0). The reason for this mode is that the cyclotron field is often obtained by measurement, and this simplifies the process. Since the magnetic field outside the median plane is required to compute trajectories with z ≠ 0, the field needs to be expanded in the z-direction. Using a magnetic potential and the (measured) B_z on the median plane at the point (r, θ, z) in cylindrical coordinates, the field can be written to 3rd order in terms of B_z ≡ B_z(r, θ, 0) and its derivatives. All the partial differential coefficients are computed from the median-plane data by interpolation using Lagrange's 5-point formula.
In option two, a full 3D field map for the region of interest is calculated numerically by 3D CAD modeling and finite element analysis (FEA) using commercial software. In this case the calculated field will be more accurate, especially at large distances from the median plane, but it will not necessarily correspond as well to the field of an existing machine, which might include manufacturing errors. For all considerations in this paper, we use the second method. Fields (where applicable) were generated as 3D field maps with VectorFields OPERA [11].
In the case of RF fields, OPAL-CYCL varies the field with a cosine factor, cos(ω_rf t + φ_S), where t is the time of flight, ω_rf the rf frequency, and φ_S the starting phase of the particle.
C. Space-charge fields
The space-charge fields can be obtained by a quasistatic approximation. In this approach, the relative motion of the particles is nonrelativistic in the beam rest frame, so the self-induced magnetic field is practically absent and the electric field can be computed by solving Poisson's equation, −∇²φ = ρ/ε_0, in the computational domain Ω with (arbitrary) boundary surface ∂Ω. Here, φ and ρ are the electrostatic potential and the spatial charge density in the beam rest frame. The electric field can then be calculated as E = −∇φ and transformed back to yield both the electric and the magnetic fields (E_sc, B_sc) in the lab frame, as required in Eq. (1), by means of a Lorentz transformation. As beams through a spiral inflector are typically nonrelativistic, this step is omitted in the SPIRAL mode of OPAL-CYCL by setting γ = 1.
A parallel 3D Poisson solver for the electrostatic potential computation was presented in [9] for the special case of Ω and ∂Ω being a round beam pipe with open-end boundary conditions. In this section, we present a method for solving Eq. (2) in more complex geometries by taking into account arbitrary boundary shapes ∂Ω during the discretization of the problem and the computation of the space-charge forces within Ω.
Spatial discretization
To solve the Poisson problem in Eq. (2) in Ω, we use a cell-centered discretization for the Laplacian. A grid point is called an interior point if all its neighbors are in Ω, and a near-boundary point otherwise. The discrete Laplacian with mesh spacing h = (h_x, h_y, h_z) for interior points is defined as the regular seven-point finite difference approximation of −∇², where ê_i is the ith coordinate vector and h_i denotes the mesh spacing in the ith dimension [12]. At a near-boundary point (e.g., x in FIG. 2), where not all its neighbors are in the domain, we use a modified approximation for the Laplacian in which the stencil arms toward the boundary are shortened. With s_{L/R,i} denoting the six possible cases for a boundary near x ([Left, Right] × [x, y, z]) and 0 < s_{L/R,i} ≤ h_i, the value of φ at x_{L/R,i} is defined through one of the extrapolation methods described in [9].
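To make the near-boundary treatment concrete, here is a minimal, hypothetical sketch (not the OPAL SAAMG solver, and in 2D rather than 3D): a cell-centered Gauss-Seidel solve of −∇²φ = ρ/ε₀ on the part of a Cartesian grid lying inside a circular boundary, with φ = 0 on the wall. Near-boundary points use shortened stencil arms s ≤ h toward the boundary, in the spirit described above; the grid size, sweep count, and the radial-distance approximation for s are choices made only for this illustration.

```python
import numpy as np

def solve_poisson_disk(n=24, radius=0.9, rho0=1.0, eps0=1.0, sweeps=2000):
    """Gauss-Seidel solve of -laplacian(phi) = rho0/eps0 inside a circle,
    with phi = 0 on the circular (grounded) boundary."""
    h = 2.0 / n                                  # grid covers [-1, 1] x [-1, 1]
    x = (np.arange(n) + 0.5) * h - 1.0
    X, Y = np.meshgrid(x, x, indexing="ij")
    r = np.hypot(X, Y)
    inside = r < radius                          # computational domain Omega
    phi = np.zeros((n, n))

    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                if not inside[i, j]:
                    continue
                num, den = rho0 / eps0, 0.0
                for axis in (0, 1):              # x-arms and y-arms of the stencil
                    arms = []
                    for step in (-1, 1):
                        ii, jj = (i + step, j) if axis == 0 else (i, j + step)
                        if 0 <= ii < n and 0 <= jj < n and inside[ii, jj]:
                            arms.append((h, phi[ii, jj]))          # regular arm
                        else:
                            # shortened arm: (approximate) distance to the wall,
                            # where the boundary value is phi = 0
                            arms.append((max(radius - r[i, j], 1e-6), 0.0))
                    (sm, pm), (sp, pp) = arms
                    num += 2.0 * (pm / (sm * (sm + sp)) + pp / (sp * (sm + sp)))
                    den += 2.0 / (sm * sp)
                phi[i, j] = num / den
    return phi, X, Y

phi, X, Y = solve_poisson_disk()
center = phi[np.argmin(np.abs(X[:, 0])), np.argmin(np.abs(Y[0, :]))]
print("phi near center:", center, " analytic rho0*R^2/(4*eps0):", 0.9**2 / 4.0)
```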
Implementation
The query class, an interface to search for the points inside the irregular domain, was implemented. Once the points inside the irregular domain are detected, their intersection values in six different directions are stored in containers. The coordinate values are mapped onto their intersection values to be used as a fast look-up table. The distances between the near-boundary point and its intersection values are used for the linear extrapolation. The finite difference approximation of the Poisson problem requires solving a system of linear equations to compute the electrostatic potential. The resulting linear system is solved using the preconditioned conjugate gradient algorithm complemented by an algebraic multigrid preconditioner using the TRILINOS framework [13]. TRILINOS is a collection of software packages that support parallel linear algebra computations on distributed memory architectures, in particular the solution of linear systems of equations. EPETRA provides the data structures that are needed in the linear algebra libraries. AMESOS, AZTECOO, and BELOS are packages providing direct and iterative solvers. ML is the multilevel package that constructs and applies the smoothed aggregation-based multigrid preconditioners.
D. Geometry
For the precise simulation of beam dynamics, exact modeling of the accelerator geometry is essential. Usually a CAD model of the accelerator (or part of it) is already available [see Fig. 3(a)]. From these models we need to create a mesh to specify the boundary ∂Ω in Eq. (2). In the case of the spiral inflector, we have to add a cylinder as an outer boundary [see Fig. 3(b)]. This modified CAD model can then be used to create a triangle mesh T modeling the inner surface of the system [see Fig. 3(c)].
Provided with a closed, meshed boundary, OPAL is able to model arbitrary accelerator geometries and provides methods for (1) testing whether a particle will collide with the inner surface of the geometry (boundary ∂Ω) in the next time step, (2) computing the distance d = |X_0 − I| from a given point X_0 ∈ Ω to the boundary intersection point I ∈ ∂Ω (see FIG. 4), and (3) testing whether a given point X_0 ∈ Ω is inside the geometry. Only points inside the geometry are in the computational domain Ω.
The geometry can consist of multiple parts. Each part must be modeled as a 3D closed volume. The methods used are based on well-known techniques from computer graphics, especially ray tracing [15].
Initializing the geometry
To test whether a particle will collide with the boundary in the next time step, we can run a line segment/triangle intersection test for all triangles in the mesh. Even in the case of very simple structures, triangle meshes with thousands of elements are required. With a brute-force algorithm, this test would have to be run every time step for all particles, which renders the naive approach infeasible for performance reasons.
In computer graphics this problem is efficiently solved by using voxel meshes. A voxel is a volume pixel representing a value on a regular 3D grid. Voxel meshes are used to render and model 3D objects.
To reduce the number of required line segment/triangle intersection tests, a voxel mesh V covering the triangle mesh T is created during the initialization of OPAL. In this step, all triangles are assigned to their intersecting voxels, whereby a triangle usually intersects with more than one voxel.
For the line segment/triangle intersection test we can now reduce the required tests to the triangles assigned to the voxels intersecting the line segment. The particle boundary collision test can be improved further by comparing the particle momentum and the inward-pointing normal n of the triangles.
In the following, we use these definitions: T represents the set of triangles in the triangulated mesh; V represents the set of voxels modeling the voxelized triangle mesh; L represents a closed line segment bounded by the points X_0 and X_1 (see FIG. 4); R represents a ray defined by the starting point X_0 passing through X_1; T_v ⊂ T represents the subset of triangles t ∈ T which have intersections with v ∈ V; V_L ⊂ V represents the subset of voxels v ∈ V which have intersections with the line segment L; I_{t,L} represents an intersection point of a line segment L with a triangle t ∈ T; and T_L represents the set of tuples (t_L, I_{t,L}) where t_L ∈ T intersects with L.
Basic ray/line-segment boundary intersection test
In the first step, we have to compute V_L, the set of voxels in the voxel mesh V that have intersections with the given line segment or ray L. With T_v = {t ∈ T | t intersects with v} we can compute a small subset of triangles which might have intersections with L. We then have to run the ray/line-segment triangle test only for the triangles in T_{V,L}.
Particle boundary collision test
In the collision test we use a slightly modified intersection test. To test whether a particle will collide with the boundary, we have to test whether the line segment given by the particle position X_0 at time step s and the expected particle position X_1 at time step s + 1 intersects with the boundary. The closed line segment L given by X_0 and X_1 is used as the input parameter for the boundary collision test. However, if the particle moves away from a given triangle t, we do not have to run the line-segment/triangle intersection test for t. We know the inward-pointing normals n of all triangles, hence we can compute the dot product of the normal n and the vector defined by the current particle position X_0 and the position in the next time step X_1. If n · (X_1 − X_0) > 0, both vectors point in the same direction, so there cannot be a collision of the particle with that triangle.
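As an illustration of this per-time-step test, the following is a hypothetical sketch (not the OPAL source): it combines the inward-normal early-out described above with a standard Möller-Trumbore segment/triangle intersection. The triangle, its normal, and the particle path in the usage lines are invented for the example.

```python
import numpy as np

def segment_hits_triangle(x0, x1, v0, v1, v2, n_inward, eps=1e-12):
    """True if the path segment x0 -> x1 intersects triangle (v0, v1, v2)."""
    d = x1 - x0
    if np.dot(n_inward, d) > 0.0:        # moving along the inward normal: no hit possible
        return False
    # Moller-Trumbore segment/triangle intersection
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                   # segment parallel to the triangle plane
        return False
    inv = 1.0 / det
    t0 = x0 - v0
    u = np.dot(t0, p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(t0, e1)
    v = np.dot(d, q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    t = np.dot(e2, q) * inv
    return 0.0 <= t <= 1.0               # intersection must lie on the segment

# toy usage: a triangle in the z = 0 plane whose inward normal points along +z
tri = (np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.]))
n_in = np.array([0., 0., 1.])
print(segment_hits_triangle(np.array([.2, .2, .5]), np.array([.2, .2, -.5]), *tri, n_in))  # True: crosses
print(segment_hits_triangle(np.array([.2, .2, .5]), np.array([.2, .2, 1.5]), *tri, n_in))  # False: moves away
```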
Compute distance from point to boundary
To compute the distance from a point P inside the geometry to the boundary, given a directional vector d, the same algorithm as for the particle boundary collision test can be used with the ray L = (P, d).
Inside test
This test checks whether a given point P is inside the geometry.
E. Bunch rotations
In OPAL-CYCL, the coordinate system is different from OPAL-T. The x and y coordinates are the horizontal coordinates (the cyclotron mid-plane) and z is the vertical coordinate. Internally, both Cartesian (x, y, z) and cylindrical (r, Θ, z) coordinate systems are used. For simplicity, in the past the injection of bunches in OPAL-CYCL had to happen on the cyclotron midplane, with the option to include an offset in the z direction within the particle distribution itself. The global coordinates of the beam centroid were thus restricted to r and Θ. In order to accommodate the injection of a bunch far away from the mid-plane with a mean momentum aligned with the z-axis, the handling of rotations in OPAL-CYCL was updated from 2D rotations in the mid-plane to arbitrary rotations/translations in three-dimensional (3D) space. Quaternions [16] were chosen to avoid gimbal lock [17] and a set of rotation functions was implemented. These are now used throughout OPAL-CYCL. Outside of the new SPIRAL mode, quaternions are also used to align the mean momentum of the bunch with the y axis before solving for self-fields, in order to simplify the inclusion of relativistic effects through Lorentz back-transformation into the lab frame. In the SPIRAL mode, no relativistic effects are taken into account due to the low injection energy (typically < 100 keV).
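As a brief, hypothetical illustration of the kind of rotation involved (not OPAL code), the sketch below builds a unit quaternion that rotates a bunch's mean momentum onto the +y axis and applies it to a vector; the example momentum vector is made up.

```python
import numpy as np

def quat_from_vectors(a, b):
    """Unit quaternion (w, x, y, z) rotating unit vector a onto unit vector b."""
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    w = 1.0 + np.dot(a, b)
    if w < 1e-12:                        # a antiparallel to b: rotate 180 deg about any orthogonal axis
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-12:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        q = np.array([0.0, *axis])
    else:
        q = np.array([w, *np.cross(a, b)])
    return q / np.linalg.norm(q)

def quat_rotate(q, v):
    """Rotate vector v by the unit quaternion q = (w, x, y, z)."""
    w, u = q[0], q[1:]
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

p_mean = np.array([0.3, 0.1, -0.95])               # made-up mean momentum, mostly along -z
q = quat_from_vectors(p_mean, np.array([0.0, 1.0, 0.0]))
print(quat_rotate(q, p_mean))                       # approximately (0, |p_mean|, 0)
```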
F. Simple test case
In order to test the proper functionality of the SAAMG field solver in combination with an external geometry file, we compared the calculated fields and potentials of a FLATTOP distribution (a uniformly populated cylinder), placed inside a geometry generated according to the previous subsection, with the analytical solution of a uniformly charged cylindrical beam inside a conducting pipe. The geometry file we used contained a 1 m long beam pipe with 0.1 m radius. For a concentric beam, a regular fast Fourier transform (FFT) field solver is sufficient. However, if the beam is off-centered by an amount ξ (see Fig. 5), and especially when it is close to the conducting walls of the beam pipe, the electric field calculated by the FFT solver no longer reproduces reality. This is even more true for complicated geometries like a spiral inflector. To compare the OPAL results with the analytical solution presented below, all simulations were run in a way where the bunch frequency f_b was adjusted such that, for a given bunch length l_b and beam velocity v_b, the given beam current I corresponded to the equivalent DC beam current (i.e., subsequent bunches are head-to-tail, f_b = v_b/l_b). The bunch radius r_b was chosen to be 0.01 m and the length l_b to be 0.6 m, so that in the center of the bunch the conditions of an infinitely long beam hold to very good approximation.
For such an infinitely long beam, E and φ are independent of z, E_z(x, y) = 0, and E_{x,y}(x, y) and φ(x, y) can be calculated from Poisson's equation using the method of image charges. With the line charge density λ = I/v_b (where I is the beam current and v_b the beam velocity), the resulting expressions for φ and E inside and outside (superscripts "in" and "out") of the beam envelope follow from the image-charge construction [Eqs. (3)]. A wide parameter space was mapped in terms of mesh size, number of particles, beam length, and position of the beam inside the beam pipe. As can be expected, the agreement between theory and simulation improves with higher resolution (i.e., a higher number of mesh cells) and a larger number of particles. The reduced chi-square was chosen to compare the simulated results to the theoretical prediction, and a plot of χ²_red for variations of mesh size and particle number is shown in Fig. 6. It can be seen that the agreement is better for a centered beam, and so it is especially important to choose a high enough resolution when the beam is close to the beam pipe (or other electrodes in the system). For this particular case, it was found that a total of 256 mesh cells in the x-direction (≈25 across the beam diameter), together with ≈2 × 10^6 particles, gave excellent agreement even when the beam was touching the pipe, with only slight or no further improvement at larger numbers.
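For reference, here is a small, hypothetical sketch of such an analytical model (not the code used for the paper's benchmark): an off-center, uniformly charged beam of radius r_b and line density λ = I/v_b inside a grounded pipe of radius a. Outside the beam, the field on the x axis is that of a line charge at x = ξ plus its image (−λ) at x = a²/ξ; inside the beam, the beam's own contribution grows linearly with distance from the beam axis. φ is obtained by integrating E_x inward from the grounded wall at x = a, where φ = 0. The numeric values in the usage line are placeholders only.

```python
import numpy as np

EPS0 = 8.8541878128e-12

def e_x_on_axis(x, xi, r_b, a, lam):
    """E_x on the x axis for a beam offset by xi (0 < xi < a - r_b)."""
    k = lam / (2.0 * np.pi * EPS0)
    s = x - xi
    # beam contribution: linear inside the beam, line-charge-like outside
    safe = np.where(np.abs(s) < r_b, r_b, s)      # avoid 0-division in the unused branch
    e_beam = np.where(np.abs(s) < r_b, k * s / r_b**2, k / safe)
    # image line charge -lam at x_img = a^2 / xi (outside the pipe)
    x_img = a**2 / xi
    e_img = -k / (x - x_img)
    return e_beam + e_img

def phi_on_axis(x, xi, r_b, a, lam, n=20001):
    """phi(x) = integral from x to a of E_x dx', with phi = 0 on the grounded wall."""
    xs = np.linspace(x, a, n)
    ex = e_x_on_axis(xs, xi, r_b, a, lam)
    return float(np.sum(0.5 * (ex[1:] + ex[:-1]) * np.diff(xs)))   # trapezoid rule

# placeholder values: ~6 mA beam, guessed velocity, 1 cm beam in a 5 cm radius pipe
lam = 6e-3 / 2.0e6                                 # lambda = I / v_b  [C/m]
print(phi_on_axis(0.0, xi=0.03, r_b=0.01, a=0.05, lam=lam))
```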
As another representative example, the OPAL results of a 0.6 m long beam in a 10 cm diameter beam pipe, using 2048000 particles and a mesh of dimensions 256 × 128 × 512, are plotted together with the analytical solution from Eqs. (3) for a beam with varying offset ξ in x-direction from the center of the beam pipe in FIG. 7.
In summary, the SAAMG solver performed as expected when tested with the simple test case of a quasi-infinite uniform beam in a conducting pipe. In the next section, the solver is applied to a real-world problem and the results are compared to measurements.
III. BENCHMARKING AGAINST EXPERIMENTS
An important step in benchmarking new simulation software is the comparison with experiments. During the summers of 2013 and 2014, a measurement campaign was held at Best Cyclotron Systems Inc. (BCS) to test the production of a high intensity H2+ beam in an off-resonance ECR ion source and its injection into a compact (test) cyclotron through a spiral inflector. These tests were performed within the ongoing R&D effort for the DAEδALUS and IsoDAR experiments (cf. next section) and provided a good opportunity to compare results of injected beam measurements with OPAL simulations using the new spiral inflector capability.
FIG. 6. χ²_red of the calculated φ compared to the simulated values. It can be seen that the results are slightly worse for a beam close to the beam pipe. Also, for numbers of mesh cells in the x-direction > 256, only marginal improvement can be seen. These simulations were performed with a total number of particles NP = 8000 · MX, where MX denotes the number of mesh cells in the x-direction (the independent variable in the plot).
A. DAEδALUS and IsoDAR
The decay at-rest experiment for δ_CP studies at a laboratory for underground science (DAEδALUS) [5,6] is a proposed experiment to measure CP violation in the neutrino sector. A schematic view of one DAEδALUS complex is shown in Fig. 8. H2+ is produced in an ion source, transported to the DAEδALUS Injector Cyclotrons (DIC), and accelerated to 60 MeV/amu. The reason for using H2+ instead of protons is to overcome space charge limitations due to the required high beam intensity of 10 emA of protons on target. H2+ gives 2 protons for each unit of charge transported, thus mitigating this limitation. The ions are subsequently extracted from the cyclotron and injected into the DAEδALUS Superconducting Ring Cyclotron (DSRC), where they are accelerated to 800 MeV/amu. During the highly efficient stripping extraction, the 5 emA of H2+ become 10 emA of protons which impinge on the neutrino production target (carbon), producing a neutrino beam virtually devoid of electron antineutrinos. In a large detector, one can then look for electron-antineutrino appearance through neutrino oscillations. The injector stage (ion source and DIC) of DAEδALUS can be used for another experiment: the Isotope Decay At Rest experiment, IsoDAR [7,8]. In IsoDAR, the 60 MeV/amu H2+ will impinge on a beryllium target, creating a high neutron flux. The neutrons are captured on 7Li surrounding the target. The resulting 8Li beta-decays produce a very pure, isotropic electron-antineutrino source which can be used for disappearance experiments. IsoDAR is a definitive search for so-called "sterile neutrinos," proposed new fundamental particles that could explain anomalies seen in previous neutrino oscillation experiments.
At the moment, OPAL-CYCL is used for the simulation of three very important parts of the DAEδALUS and IsoDAR systems: (1) the spiral inflector, (2) the DAEδALUS Injector Cyclotron (DIC), which is identical to the IsoDAR cyclotron, and (3) the DAEδALUS Superconducting Ring Cyclotron (DSRC) for final acceleration. For the topic of benchmarking, we will restrict ourselves to item 1, the injection through the spiral inflector.
FIG. 7. φ and E_x along the x-axis for different offsets ξ. Dots are values calculated by OPAL, while solid lines are the analytical solution. Excellent agreement can be seen, with 0.01 < χ²_red < 0.02 for the potential and 0.03 < χ²_red < 1.3 for the field.
B. The test stand
As mentioned before, the results of the injection tests are reported in detail in [10]. Here, we will summarize the items pertinent to a comparison with OPAL, specifically how we obtain the particle distribution at the end of the LEBT (entrance of the cyclotron) that is used as the initial beam in the subsequent injection simulations with the SAAMG solver.
The test stand comprised the Versatile Ion Source (VIS) [19], an off-resonance electron cyclotron resonance (ECR) ion source, followed by the LEBT and the test cyclotron with its spiral inflector (cf. Sec. I). During the experiment, it was possible to transport up to 8 mA of H2+ as a DC beam along the LEBT to the cyclotron and transfer 95% of it through the spiral inflector onto a paddle probe. The 4-rms normalized emittances stayed below 1.25 π-mm-mrad. Capture efficiency into the rf "bucket" was 1%-2% because of reduced dee voltage (V_dee) due to an underperforming rf amplifier (cf. discussion in [10]).
C. Initial conditions
The quality of any simulation result depends on the initial conditions. In the case of the OPAL SAAMG simulations of the injection through the spiral inflector, the initial particle distribution consisted of 66021 particles obtained by carefully simulating the ion source extraction (using KOBRA-INP [20]) and the subsequent LEBT (using WARP [21]) and comparing the simulation results to measurements, with good agreement as reported in [10]. During the WARP simulations of the LEBT, the "xy-slice mode" was used, in which the self-fields are calculated only for the transverse direction (assuming only very slow changes in beam diameter along the z-axis compared to the length of each simulation step) and longitudinal self-fields are neglected (which is a sensible approach for DC beams). Space charge compensation played a big role in obtaining good agreement and was taken into account using a semianalytical formula [22]. The final particle distribution that was obtained for the set of parameters recorded during the measurements is shown in Fig. 9. It should be noted that the bunch was generated from the xy-slice at a position 13 cm away from the cyclotron midplane and coaxial with the cyclotron center, by randomly backward- and forward-projecting particles according to their respective momenta. It can be seen that this beam enters the spiral inflector converging, which has been found experimentally to give the best injection efficiency. The important parameters of the injected beam are listed in Table I.
D. Results
The beam described in the previous section was then transported through the spiral inflector using the standard FFT and the new SAAMG field solvers described in Sec. II C, with the geometry shown in Fig. 3 in place. Inside the cyclotron, 45° after the exit of the spiral inflector, a radial probe was placed which had 5 fingers of ≈5 mm vertical extent and ≈1 mm radial extent each. On these fingers, the electrical beam current was measured.
FIG. 10. Measurement and simulation of a 5-finger probe (for description cf. text) in the first turn of the BCS test cyclotron central region, 45° after the exit of the spiral inflector. Each finger has a main peak and a tail toward lower radius; these are the particles that are not sufficiently accelerated. To guide the eye and to compare with the simulations, the data for each finger are fitted with two Gaussians.
The probe was slowly moved from a position blocking the beam completely to just outside of the radial extent of the first turn, thereby giving the beam current distribution shown in the top plot of Fig. 10. In the same plot, the results from OPAL simulations using the same parameters as recorded during the measurement (see Table I) are plotted. Good qualitative agreement can be seen for both the FFT and the SAAMG solver. Due to the tail towards low radius, two Gaussians are used to fit each peak in the left column. The full widths at half maximum (FWHM) of the dominant Gaussians are listed in Table II. There seems to be a slight shift towards higher vertical position that is better reproduced by the SAAMG solver, but this is well within the systematic errors of the measurement and the uncertainty in how well the initial parameters like magnetic field, spiral inflector voltage, and beam distribution were known. The conclusion is therefore that both FFT and SAAMG reproduce the measured radial-vertical beam distribution equally well for a 6 mA beam. This shows that the SAAMG solver is working as expected.
Higher beam currents
For initial beam currents up to 36 mA, the results of the FFT and SAAMG solvers start to show stronger (but still fairly subtle) discrepancies, which can be attributed to the image charges on the electrodes that are only included with the SAAMG solver. An example is shown in Fig. 11, where the expected spreading of the peak positions is accompanied by a noticeable overall shift toward smaller radii in the case of the SAAMG solver.
IV. CONCLUSION
A. Summary and discussion
A comprehensive spiral inflector model within the particle-in-cell framework OPAL has been presented that allows injection from the axial direction through the combined 3D electric and magnetic fields of the spiral inflector (or mirror) and the cyclotron main field, respectively. Key features of the model are the flexible handling of the complex geometry (for boundary conditions and particle termination) and the field solvers for the space charge calculation. In comparison with first measurements, both the FFT and the SAAMG solver perform well, with hints that image charge effects become more important at higher currents, where use of the SAAMG solver allows including the complicated boundary conditions posed by the electrode system. These ingredients, the geometry handling and the field solvers (FFT and SAAMG), are now included with OPAL. Validation of the model included simple test cases and comparison to measurements from a dedicated cyclotron test stand, injecting a DC beam of 7 mA of H2+. Both yielded good agreement. The level of detail available in this model now allows us to obtain a better understanding of, and to predict, the complicated beam dynamics in the high current compact IsoDAR cyclotron.
B. Outlook
Given this benchmarked model, a full start-to-end simulation of the IsoDAR cyclotron is ongoing, using first the detailed geometry for acceleration up to 1 MeV/amu, and then a simplified model with FFT only for the subsequent acceleration up to 60 MeV/amu. Recently, a proposal was put forward to test direct injection into the compact IsoDAR cyclotron using a radio frequency quadrupole (RFQ) [23]. For the design of this device, OPAL-CYCL and the new SPIRAL mode will play an essential role.
ACKNOWLEDGMENTS
FIG. 2. Representative example of a boundary left of x in the x-direction. x_{L,x} = x − h_x·ê_x, x_{B,x} = x − s_{L,x}·ê_x, x, and x_{R,x} = x + h_x·ê_x are the point outside of Ω, the boundary point, the near-boundary point, and the interior point, respectively. Similarly for the y- and z-directions.
FIG. 3. The four stages of OPAL geometry preparation: (a) Initial CAD model of the electrodes in the system. (b) An inverted solid is created by subtraction of all elements in (a) from a "master volume" (in this case a cylindrical chamber). (c) The inverted geometry is saved in stp format and a mesh is generated using GMSH [14]. (d) During initialization, OPAL creates a voxel mesh to speed up the tests that have to be performed every time step (cf. text).
FIG. 4. The boundary ∂Ω is discretized by a set of triangles T. Shown is the line segment/triangle intersection test with the line segment (X_0, X_1), the triangle t ∈ T, and the intersection at I ∈ ∂Ω.
FIG. 8. Schematic of the DAEδALUS facility. H2+ is produced in the ion source, transported to the DAEδALUS Injector Cyclotron (DIC) and accelerated to 60 MeV/amu. Ions are subsequently extracted from the cyclotron and injected into the DAEδALUS Superconducting Ring Cyclotron (DSRC), where they are accelerated to 800 MeV/amu. During the highly efficient stripping extraction, 5 emA of H2+ becomes 10 emA of protons which impinge on the neutrino production target.
FIG. 9. Initial distribution for injection through the spiral inflector, obtained by carefully simulating the LEBT. The beam is slightly hollow and has a clearly visible halo. Both are caused by overfocused protons, as discussed in [10]. The length of the bunch corresponds to one full rf period at 49.2 MHz and an injection energy of 62.7 keV, centered at the synchronous phase, thus, to first order, representing a DC beam.
This work was supported by the U.S. National Science Foundation under Grant No. 1505858, and the corresponding author was partly supported by the Bose Foundation. The research at PSI leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant No. 290605 (PSI-FELLOW/COFUND). Furthermore, the authors would like to express their gratitude to Best Cyclotron Systems, Inc. in Vancouver, for hosting the 1 MeV cyclotron injection tests, and the INFN-LNS ion source group in Catania, for the loan of the VIS.
TABLE I. Beam and cyclotron parameters for inflection and acceleration studies.
TABLE II. Full width at half maximum (FWHM) for the measured distribution and the OPAL results using the FFT and the SAAMG solver. | 7,658 | 2017-12-11T00:00:00.000 | [
"Physics"
] |
The Effectiveness of Reading Aloud Application on the Ability to Read the QS. Al-Ma’un and Translations in English
This research was conducted to find out how effective the application of the reading-aloud method is on the ability to read Q.S. Al-Ma'un and its English translation. This study aims to determine whether the reading-aloud method affects students' ability to read Q.S. Al-Ma'un of the Quran and its English translation in the second semester of the Management Study Program at Muhammadiyah University of Pringsewu. This study employs a quasi-experimental methodology. The results showed that the experimental class's post-test score was 69.1, with a standard deviation of 8.22, while the control class's pre-test score was 56.5, with a standard deviation of 8.53. The control class's post-test score was 60.40, with a standard deviation of 9.68. The outcomes of the two tests differ. Additionally, the t-test result is 0.00. This indicates that the hypothesis is correct and that the reading-aloud strategy can help students become more proficient readers.
INTRODUCTION
A significant current issue for the Muslim community is the decreasing number of young Muslims who can read the Qur'an, and the resulting decline in their interest in studying and memorizing the holy verses of the Qur'an (Nasirudin et al., 2021).
Reading and loving the Qur'an is an essential prerequisite for achieving a deep understanding of it. Therefore, we cannot ignore the message of Allah SWT in the 4th verse of Surah Al-Muzammil, which emphasizes the importance of reading the Al-Qur'an. Through this verse, Allah SWT ordered the Prophet Muhammad SAW to read the Qur'an carefully and think about its meaning (Ali Mahfud & Sobar Al Ghazal, 2022). This command emphasizes the importance of reading the Qur'an well and correctly, listening to each word slowly and clearly, and understanding the meaning of each verse (Syafei et al., 2020). It can also serve as a guide for all Muslims in reading the Qur'an properly and correctly.
As the Word of God, the Qur'an has many teachings and advantages (Marwati, 2021).
Several advantages are obtained when we read, study, and teach the Qur'an, including: 1. A reward for those who read it, as in the words of the Prophet Muhammad: "Whoever reads one letter from the Book of Allah (Al-Quran) will receive a good deed for it, and one good deed is rewarded tenfold." (At-Tirmidhi) 2. A reward for those who study, memorize, and are proficient in reading the Qur'an.
3. A reward for those who gather to read and study it.
4. Attainment of a high rank in heaven.
Reading is a complex process that involves many factors inside and outside the reader. Reading is increasingly important in increasingly complex community life (Syamsir et al., 2021). So many benefits can be obtained by reading the Al-Quran and understanding its translation. Therefore, it is highly recommended for Muslims to read the Al-Quran because, apart from being a guide in this world and the hereafter, it is also a means to shape Muslim personalities and increase faith. Translation of the Al-Qur'an is a simple way to understand its contents (Hanafi, 2011). One of the obligations in Islam is to read the Al-Quran and its translation. When reading the Qur'an, we must understand the meaning of each verse in the translation. By reading translations in English, students are also able to increase their vocabulary.
Various learning strategies have been developed to make it easier for students to improve their ability to read the Qur'an, one of which can be used to realize active learning. One such strategy is reading aloud. This strategy can help students concentrate. In this strategy, students are instructed to read one of the surahs of the Al-Quran aloud; the teacher pays attention to incorrect readings and provides examples of correct reading so that students can spread the knowledge of recitation well (Marwati, 2021). The strategies teachers use in the learning process play a very important role. This is recognized because teachers play a special role in developing children's learning abilities (Zainuddin et al., 2022). Therefore, if the teacher plays a good role in learning, the results will also be good.
In the educational context, Reading Aloud is a learning technique in which teachers or students read texts aloud. According to Trelease (2017), reading aloud is the most effective method of teaching reading to children, because it conditions children's brains to associate reading with a pleasurable activity. Both teachers and parents can do this, creating a bond with students. The Reading Aloud strategy helps students concentrate, ask questions, and trigger discussions, which can increase student participation in understanding the material being taught (Mufid, 2016). Thus, this strategy can increase student involvement in the learning process.
Based on the author's observations of semester 2 students of the Management Study Program in the 2022/2023 academic year, students' ability to read the Qur'an and its English translation is still low. This condition is not due to the students' low absorption capacity but to other factors that influence it. It could be because the method used is inappropriate for learning to read the Qur'an and translate it into English. However, among these factors, based on the initial observations the researchers made, the learning method stood out as the factor that had to be improved: the methods used previously were limited to theory and the active role of students was not attended to, so learning outcomes were less than optimal.
The reading-aloud method is a method that can help students concentrate and ask questions. This method can stimulate student activity; besides that, it can increase student enthusiasm.
Discussion
This result is in line with the observation of Huda et al. (2015) that the process of learning reading strategies seems to work. This indicates that students' success, including their work, atmosphere, and quality, will be enhanced using this learning process. Furthermore, in learning to read, students not only have to be able to read but also have to like what they read. That way, teachers must have a unique learning method, one that can develop children's creativity, especially motivation, curiosity, and memory. Reading the Al-Quran is essential in a child's learning process because it is a basic ability that children must have (Abdul Rauf, 2012). Reading the Quran is a very important form of worship. Therefore, reading the Al-Quran, beginning with studying its letters, is an obligation and must be accompanied by understanding the translation of the Al-Quran itself. This is because the ability and love to read the Qur'an is the first step in efforts to understand and practice the contents of the Qur'an in everyday life.
For semester 2 students of the Management Study Program in the 2022/2023 academic year, the ability to read the Qur'an is still low, and the ability to read translations of the Qur'an in English is not yet fluent. The classical learning proficiency outcomes from the previous year were not met, with an average value of 6.5, while the expected completeness is 76%.
Therefore, semester 2 students of the Management Study Program are expected to be able to read the Qur'an and its English translation fluently and well. One aim is to find out the ability of semester 2 students of the Management Study Program to read the Qur'an and its translation in English, especially in practicing reading verses from Q.S. Al-Ma'un, reading the translation correctly and well, and using the reading-aloud method.
METHODS
This study uses a quasi-experimental design with two class groups: the experimental group was one group, and the control group was another. In both classes, the researchers used pre- and post-testing. The research population consisted of semester 2 students of the Management Study Program; there are 40 students, both girls and boys. The researchers then conducted this research using the reading-aloud method. The research applied Q.S. Al-Ma'un of the Quran with its English translation as the research instrument. This examination was used to determine the students' ability in the reading test of Q.S. Al-Ma'un with its English translation. In this case, the researchers used a reading method to administer a reading test to the students (HJ Wahid & A. Thais, 2020). SPSS-16 was used in the data analysis to interpret and analyze the data from the output of the data tabulation in the final stage. The researchers attempt to explain the procedures involved in data collection to conduct the investigation. The researchers use triangulation in the data analysis technique: (a) the mean formula, used to calculate mean scores and an effective way of determining central tendency; (b) the standard deviation, which summarizes the variation in scores around the mean; and (c) categorization, which includes the pre- and post-tests. Before the treatment, a pre-test was administered to assess the students' reading abilities, and a post-test was administered to assess the students' reading abilities after the researchers completed the process.
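To illustrate the kind of computation described above, here is a minimal Python sketch of the descriptive statistics (mean, sample standard deviation) and an independent-samples t statistic of the sort SPSS-16 reports. The score arrays are made-up placeholders, not the study's data, and the pooled-variance formula is one common choice of t-test, used here only as an assumption for the example.

```python
import numpy as np

def describe(scores):
    """Return (mean, sample standard deviation) of a list of test scores."""
    scores = np.asarray(scores, dtype=float)
    return scores.mean(), scores.std(ddof=1)

def t_statistic(group_a, group_b):
    """Pooled-variance independent-samples t statistic (equal variances assumed)."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))

experiment_post = [72, 65, 80, 70, 68, 75, 62, 66, 73, 60]   # placeholder scores
control_post    = [60, 58, 66, 55, 62, 59, 64, 57, 61, 63]

print("experiment: mean=%.1f sd=%.2f" % describe(experiment_post))
print("control:    mean=%.1f sd=%.2f" % describe(control_post))
print("t = %.2f" % t_statistic(experiment_post, control_post))
```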
Figure 1. Pre-test diagram in the experimental class.
Figure 2. Post-test diagram in the experimental class. Figure 2 shows that students made significant progress after receiving the materials from the teacher. Teachers describe read-aloud strategies for improving reading and comprehension.
Figure 3. Pre-test diagram in the control class. Figure 3 shows that students continue to achieve mean reading scores; students should use reading strategies to improve their reading skills.
Figure 4. Post-test diagram in the control class.
This strategy involves teachers and students saying the words aloud in front of the class and obtaining the information. Because the words are spoken aloud and stored in the student's memory, this strategy helps students concentrate on the text's content. The findings above show that students can comprehend, analyze, and interpret texts using reading strategies. The experimental class's pre-test score of 61, with a standard deviation of 8.02, and post-test score of 69.1, with a standard deviation of 8.22, serve as evidence. On the other hand, the control group had a pre-test score of 56.05, with a standard deviation of 8.53, and a post-test score of 60.4, with a standard deviation of 9.68. The t-test result is 0.00 < 0.05. Reading aloud in class is therefore a valuable and effective teaching and learning strategy.
Table 1. Pre-test scores in the experimental class.
Table 2. Post-test scores in the experimental class.
Table 3. Pre-test scores in the control class.
Table 4. Post-test scores in the control class.
Table 5. Results of the total t-test | 2,436 | 2023-12-22T00:00:00.000 | [
"Education",
"Linguistics"
] |
Simulated Annealing Approach onto VLSI Circuit Partitioning
Decomposition of interconnected components to achieve modular independence poses the major problem in VLSI circuit partitioning. The problem is intractable in nature, and solutions in computational science are possible only through appropriate heuristics. This paper addresses the reduction of the cost that arises from the interconnectivity between VLSI components. Results derived by classical iterative procedures are modified with probabilistic methods. Verification has been done on ISCAS-85 benchmark circuits. The proposed design tool shows a remarkable improvement in results compared to the traditional one when applied to standard benchmark circuits like ISCAS-85.
INTRODUCTION
The technology of VLSI consists of fabricating a large number of components within a tiny space. Complex systems like this raise the challenge of designing them in an efficient way, so the systems are required to be partitioned into smaller sub-systems that can be configured properly. Decomposition of interconnected components to achieve modular independence creates the major problem of circuit partitioning. In VLSI design, this problem is intractable in nature. The major objective of partitioning a circuit is to achieve concurrency in VLSI system design while preserving the original functionality of the system. In VLSI circuit partitioning, large circuits are partitioned into smaller components to attain minimal connectivity amongst them. The cost of inter-module connectivity and the associated circuit delay are the key parameters in designing a complex VLSI system. The interconnections between different modules are referred to as cutsizes. The goal of partitioning is to minimize the cutsize.
Solving this kind of problem exactly is impractical: as the number of permutations grows exponentially, the problem is computationally hard and hence requires heuristics to arrive at a near-optimal solution.
Kernighan and Lin (1970) proposed an iterative solution for bipartitioning. The approach swaps the most suitable pair of components per iteration until no further improvement is found. This yields locally optimal solutions for the standard benchmark circuits.
A few years later, Fiduccia and Mattheyses (1982) improved on the earlier approach by reducing the algorithmic complexity. It is also an iterative method, but it moves a single vertex at a time: the most suitable vertex of one partition is chosen, then the most suitable one from the other partition is chosen next, and the method continues until no vertex is left.
Apart from these two well-known heuristics, a number of other heuristics have been applied to solve the partitioning problem; Genetic Algorithms, Ant Colony Optimization, and Simulated Annealing are some of them.
In this work, a probabilistic version of simulated annealing is used. The FM method is used to obtain the feasible solutions needed in this approach. When applied to the standard benchmark circuits (like ISCAS-85), the heuristic is found to give better results compared to other heuristics.
PROBLEM FORMULATION
The problem is related to the reorientation of components in a circuit, which is a tedious task to do physically. Mapping the problem into the domain of graph theory makes it easier to deal with. To represent a circuit as a graph, the components and the interconnections between them are considered as vertices and edges. A graph G = (V, E) consists of V as the set of vertices and E as the set of edges. Here, the components form the vertex set V = {v_1, v_2, ..., v_n} and their interconnections the edge set E = {e_1, e_2, ..., e_n}. Thus, the issue is essentially to divide the elements of V into disjoint subsets, i.e. V_i ∩ V_j = ∅ for i ≠ j, so as to obtain a reduced cutsize. The total cut cost is the sum of C_ij, where i ≠ j and C_ij stands for the cut cost between partitions V_i and V_j. This approach obtains a bi-equipartition through the FM algorithm, starting with a random partition. It then modifies that solution by Simulated Annealing with various initial temperatures, cooling rates, and equilibrium conditions. The final outcome is the best among them.
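As a concrete, hypothetical illustration of this cost model (not the authors' tool), the snippet below counts the edges of a tiny made-up netlist whose endpoints fall in different blocks of a bipartition; real inputs would be ISCAS-85 netlists rather than this toy graph.

```python
def cut_cost(edges, part_of):
    """Total cut cost of a bipartition.

    edges:   iterable of (u, v) vertex pairs (the circuit interconnections)
    part_of: dict mapping each vertex to block 0 or 1
    """
    return sum(1 for u, v in edges if part_of[u] != part_of[v])

# made-up example netlist and an equal-size bipartition
edges = [("g1", "g2"), ("g1", "g3"), ("g2", "g4"), ("g3", "g4"), ("g4", "g5"), ("g5", "g6")]
partition = {"g1": 0, "g2": 0, "g3": 0, "g4": 1, "g5": 1, "g6": 1}
print(cut_cost(edges, partition))   # (g2,g4) and (g3,g4) cross the cut -> 2
```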
METHODOLOGY
Simulated Annealing (SA) is a probabilistic search technique that maps the physical annealing process onto the algorithmic domain. The annealing process starts at a high temperature, which is then gradually lowered so that crystals can form. Analogously, an entity from the solution space of the partitioning problem is taken as the initial state and a number of iterations are carried out. At every temperature, the process continues until equilibrium is reached at that temperature. In the course of the process, new solutions are generated and checked to see whether a better solution has arrived; if a better solution is found, it is kept, otherwise a selection procedure takes place. The selection procedure depends on probabilistic criteria: solutions that fail to match the criteria are rejected, otherwise they are given scope to reorient themselves. Thus, as swapping events are carried out at each step, there is a high possibility of jumping out of one local optimum and stepping into another. The entire procedure is highly dependent on parameters such as the initial temperature, initial permutation, cooling rate, and equilibrium condition.
Initial temperature: This corresponds to the value to which the material would be heated in the annealing process. Together with the cooling rate and the stopping criterion, it determines the number of iterations of the outer loop in the heuristic. This parameter has a crucial effect on the acceptance of solutions, so it must be chosen carefully: the initial temperature must be large enough to enable the algorithm to move off local minima. In this work, multiple starting values have been tried, considering the nature of the benchmark under observation.
Figure 1: Refiner Flowchart
Initial permutation: This defines the instance from the solution space with which the procedure commences. Modifications made to that instance lead to a technically good solution. In our work, the initial permutation is the solution derived from the FM algorithm.
Cooling rate: The decrement of temperature per iteration is controlled by the cooling rate, which is also very significant for a successful search. It is implemented by the rule t ← s·t, where s < 1. In our work, the choice of s has been made at three levels, low, medium, and high, and was ultimately fixed after a series of observations. Equilibrium condition: At a particular temperature the process attains an equilibrium state. In SA, this is the number of iterations of the inner loop at each specific temperature. It can be constant, or it can be varied with the size of the search space.
The idea of the approach is represented in Figure 1. The important feature of SA is its ability to move away from a local optimum based on the acceptance rule, i.e., by probabilistic transitions. If the trial solution reduces the cutsize, it is accepted; otherwise, a probabilistic check, r < exp(−(trial score − current score)/t), where r is a random number in [0, 1] and t is the temperature, is made to assess the potential of the newly arrived solution.
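To make the refinement loop concrete, here is a minimal, hypothetical sketch (not the authors' tool): starting from an initial equal-size bipartition (for example, the FM result), a random pair of vertices from opposite blocks is swapped, and the move is kept if it lowers the cutsize or passes the Metropolis-style test above. The parameter values, the toy netlist, and the swap-based neighborhood are assumptions made only for this illustration.

```python
import math, random

def cut_cost(edges, part_of):
    """Number of edges whose endpoints lie in different blocks (as in the earlier sketch)."""
    return sum(1 for u, v in edges if part_of[u] != part_of[v])

def sa_refine(edges, part_of, t0=50.0, t_final=0.1, cooling=0.95, sweeps=200, seed=1):
    rng = random.Random(seed)
    part = dict(part_of)                          # start from the initial bipartition
    cost = cut_cost(edges, part)
    best, best_cost = dict(part), cost
    t = t0
    while t > t_final:
        for _ in range(sweeps):                   # "equilibrium condition" at this temperature
            u = rng.choice([v for v in part if part[v] == 0])
            w = rng.choice([v for v in part if part[v] == 1])
            part[u], part[w] = 1, 0               # trial swap keeps both blocks equal-sized
            trial = cut_cost(edges, part)
            if trial <= cost or rng.random() < math.exp(-(trial - cost) / t):
                cost = trial                      # accept: always if better, probabilistically if worse
                if cost < best_cost:
                    best, best_cost = dict(part), cost
            else:
                part[u], part[w] = 0, 1           # reject: undo the swap
        t *= cooling                              # geometric cooling, t <- s * t
    return best, best_cost

# toy netlist and initial equal-size bipartition (made up, not ISCAS-85 data)
edges = [("g1", "g2"), ("g1", "g3"), ("g2", "g4"), ("g3", "g4"), ("g4", "g5"), ("g5", "g6")]
initial = {"g1": 0, "g2": 0, "g3": 0, "g4": 1, "g5": 1, "g6": 1}
print(sa_refine(edges, initial))
```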
RESULTS AND DISCUSSION
Combinational circuits of reasonable size from the ISCAS-85 benchmark family have been put under observation. Files with an even number of components have been tested, as we are searching for bipartitions of equal size. The final temperature, i.e. the stopping criterion of the SA procedure, has been changed according to the cardinality of the circuits to control the number of outer-loop iterations.
Other parameters remained constant in this experiment for now. As the process progresses, we have noticed that the initial permutation, the outcome of FM, is modified to produce better cutsizes. The result table in Figure 2 stands for the efficiency of our concept. The initial cutsize and the final one have been compared after each round of study. The parameters and an approximate time of convergence have been tracked as well. In each case we can see that there is a successful refinement of the FM output. The slope depicted in the graphs conveys the working principle of SA at a glance: cutsizes increase in the first phases and are then reduced. The horizontal axis signifies time and the vertical axis represents cutsize.
The converging time of this SA approach is somewhat higher than that of its counterparts. Other parameters, like the equilibrium condition and cooling rate, could also be changed to carry the process further. In fact, being an empirical study, these approaches demand that several values of the different parameters be tried. However, with the motivation to check more and more entities from the solution space, we have achieved the satisfactory results intended in this venture.
CONCLUSION
In this work, circuit partitioning in VLSI has been addressed. The results show that, on average, savings of 31.027 percent are made, which is quite satisfactory. Further improvements can be made by changing the parameters. We would like to extend this project to suggest a good set of parameters for metaheuristics of this kind. A comparison with, and combination of, existing heuristics for the addressed problem would also be worth examining.
"Computer Science",
"Engineering"
] |