Toxoplasma type II effector GRA15 has limited influence in vivo

Toxoplasma gondii is an intracellular parasite that establishes a long-term infection in the brain of many warm-blooded hosts, including humans and rodents. Like all obligate intracellular microbes, Toxoplasma uses many effector proteins to manipulate the host cell to ensure parasite survival. While some of these effector proteins are universal to all Toxoplasma strains, some are polymorphic between Toxoplasma strains. One such polymorphic effector is GRA15. The gra15 allele carried by type II strains activates host NF-κB signaling, leading to the release of cytokines such as IL-12, TNF, and IL-1β from immune cells infected with type II parasites. Prior work also suggested that GRA15 promotes early host control of parasites in vivo, but the effect of GRA15 on parasite persistence in the brain and on the peripheral immune response has not been well defined. We therefore sought to address this gap by generating a new IIΔgra15 strain and comparing outcomes at 3 weeks post infection between WT and IIΔgra15 infected mice. We found that the brain parasite burden and the number of macrophages/microglia and T cells in the brain did not differ between WT and IIΔgra15 infected mice. In addition, while IIΔgra15 infected mice had a lower number and frequency of splenic M1-like macrophages and a lower frequency of PD-1+ CTLA-4+ CD4+ T cells and NK cells compared to WT infected mice, the IFN-γ+ CD4 and CD8 T cell populations were equivalent. In summary, our results suggest that in vivo GRA15 may have a subtle effect on the peripheral immune response, but this effect is not strong enough to alter brain parasite burden or parenchymal immune cell number at 3 weeks post infection.

Introduction

To successfully establish a persistent infection, a microbe must take a "Goldilocks" route. The microbe must evade host defenses enough to avoid elimination while also preventing host death from an overwhelming microbial burden or immune response. Thus, successful persistent microbes evolve mechanisms for provoking "the right" amount of a host response [1,2]. Toxoplasma gondii is a eukaryotic intracellular parasite that persistently infects many warm-blooded animals, from birds to humans, including approximately 10-15% of the United States population [3]. Toxoplasma has achieved such success, in part, by manipulating host cell signaling pathways through a variety of secreted effector proteins. These secreted effector proteins are often known as ROPs and GRAs and are delivered by specialized secretory organelles. Different ROPs and GRAs directly block immune clearance, alter the host cell cycle, drive cytoskeletal remodeling, and alter apoptotic pathways [4-11]. While many of these effector proteins are the same in all Toxoplasma strains, some are polymorphic and show Toxoplasma strain-specific effects [12,13]. One such polymorphic effector protein is GRA15 [14].
During in vitro infection with type II Toxoplasma strains, but not with type I or type III strains, GRA15 activates the NF-κB pathway, which leads to IL-12, IL-1β, and TNF release by macrophages [15-18]. GRA15 also limits parasite growth in IFN-γ stimulated human and murine fibroblasts in vitro by recruiting host defense proteins to the parasite's intracellular niche [15]. Consistent with GRA15 stimulating pro-inflammatory host responses that limit parasite expansion, during acute infection, mice inoculated with type II parasites that lack GRA15 have lower local IFN-γ levels and higher parasite burdens compared to mice infected with wild-type type II parasites [14]. While such findings might be expected to result in a higher systemic and brain parasite burden during later stages of infection, the data are mixed. One paper found that IIΔgra15 parasites showed no difference in cyst counts at 21 days post infection (dpi) compared to parental parasites, while another paper found that IIΔgra15 parasites showed a trend toward a decrease in cyst count compared to WT parasites [19,20]. Given these discrepant studies, we sought to re-address the role of GRA15 in outcomes of type II infection, including assessing the systemic and brain immune response as well as the brain parasite burden.

Ethics statement

All procedures and experiments were carried out in accordance with the Public Health Service Policy on Humane Care and Use of Laboratory Animals and approved by the University of Arizona's Institutional Animal Care and Use Committee (#12-391). All mice were bred and housed in specific-pathogen-free University of Arizona Animal Care facilities.

Parasite maintenance and generation of IIΔgra15 and IIΔgra15::GRA15

All parasite strains were maintained through serial passage in human foreskin fibroblasts (HFFs) in DMEM supplemented with 10% FBS, 100 I.U./ml penicillin/streptomycin, and 2 mM glutagro. All parasite strains were generated from a type II strain Prugniaud (PruΔhpt) in which the endogenous hypoxanthine-xanthine-guanine phosphoribosyl transferase gene is deleted. The wild-type (WT) strain used throughout the paper expresses a Cre fusion protein that is injected into host cells prior to parasite invasion [21]. The IIΔgra15 strain used throughout this work also expresses the Cre fusion protein. To disrupt GRA15 in IIΔhpt parasites, GRA15-targeting CRISPR plasmids (sgGRA15Up and sgGRA15Down) were generated from a sgUPRT plasmid (plasmid #54464) using a Q5 mutagenesis protocol [22]. To generate a plasmid for inserting hpt and toxofilin:cre into the GRA15 locus, 500-bp regions upstream and downstream of the sgGRA15Up and sgGRA15Down target sequences were used to flank hpt and toxofilin:cre. We then transfected the IIΔhpt parasites with (1) the sgGRA15Up CRISPR plasmid, (2) the sgGRA15Down CRISPR plasmid, and (3) the pTKO plasmid [23] with GRA15 homology regions flanking hpt and toxofilin:cre. These parasites underwent selection in media containing 25 μg/ml mycophenolic acid and 50 μg/ml xanthine prior to dilution to individual clones [24]. Single clones were then screened for disruption of the gra15 locus and confirmed to have lost NF-κB activation by immunofluorescence. Clones were also confirmed to express toxofilin:cre by causing Cre-mediated recombination as previously described [25].
A complemented IIΔgra15::GRA15 strain was made by inserting the GRA15 coding sequence, together with 1000 bp upstream of the GRA15 transcription start site (TSS), into a plasmid containing the bleomycin resistance selectable marker [26]. The plasmid was linearized and transfected into IIΔgra15 parasites. These parasites were placed under selection in complete DMEM supplemented with 5 μg/ml zeocin until lysing out. Lysed-out parasites were incubated in 50 μg/ml zeocin media for 4 hours before being transferred to HFFs in 5 μg/ml zeocin media. This process was repeated three times prior to cloning by limiting dilution. Single clones were then screened for expression of gra15 by Q-PCR and for the ability to activate the NF-κB pathway by immunofluorescence.

Mice

Unless specifically noted, mice used in this study are Cre-reporter mice on a C57BL/6J background. Cells in these mice express GFP after undergoing Cre-mediated recombination [27]. These mice were purchased from Jackson Labs (stock #007906) and bred in the University of Arizona Animal Center. BALB/cJ mice (stock #000651) were used for one experiment. Male and female mice were intraperitoneally inoculated with 10,000 syringe-lysed parasites resuspended in 200 μl of USP-grade PBS. Unless otherwise stated, two cohorts were used for each experiment. For 3 weeks post infection (wpi) studies, cohort one included 4-5 mice per infection, aged 12-16 weeks, with initial weights between 18 and 33 grams. Cohort two included 9-12 mice per infection, aged 6-10 weeks, with initial weights between 16 and 32 grams. For acute time points of 2 and 5 days post infection, each cohort contained 4-5 mice per infection. Mice were given food and water ad libitum and provided moist chow to alleviate suffering.

Tissue preparation for histology and DNA extraction

At the appropriate time points, mice were euthanized with CO2, without use of anesthesia, and transcardially perfused with 20 ml cold PBS. Brains were removed and divided into two hemispheres. The left hemisphere was drop fixed in 4% paraformaldehyde (PFA). The next day, PFA was removed and replaced with 30% sucrose. After sucrose embedding, brains were sagittally sectioned into 40 μm sections using a microtome (Microm HM 430) and stored in cryoprotectant media at -20°C until staining. The anterior quarter of the right half of the brain was sectioned coronally, placed in an Eppendorf tube, and flash frozen until used for DNA extraction.

NF-κB activation assay

Syringe-lysed parasites were added to confluent HFF monolayers grown on glass coverslips at an MOI of 7.5 and spun down at 300 rpm for 1 minute. At 24 hours post infection, cells were washed and fixed for 15 minutes with 4% PFA followed by 5 min in ice-cold methanol. Cells were then blocked in 3% goat serum for 1 hour at room temperature and incubated in mouse anti-SAG1 (DG52) [28] (gift from John Boothroyd, 1:5000) and anti-NF-κB (p65) (Santa Cruz Biotechnology, sc-372, 1:1000) antibodies overnight at 4°C. The next day, cells were washed to remove excess antibody and incubated in goat anti-mouse AF568 (Thermo Fisher Scientific, A-11004, 1:500) and goat anti-rabbit AF488 (Life Technologies, A-11008, 1:500) secondary antibodies for one hour. Coverslips were then washed 3 times in PBS, with the first wash containing Hoechst (1:5000) to stain host cell and parasite nuclei. Images were then obtained on an ECHO Revolve fluorescence microscope to analyze nuclear NF-κB localization.
To measure NF-κB activation at early time points, syringe-lysed parasites were filtered and washed in 40 ml of cDMEM prior to addition to confluent HFF monolayers grown on glass coverslips at an MOI of 5. Cells were fixed in 4% PFA at 1, 3, or 24 hrs post infection, blocked in 3% goat serum for 1 hour at room temperature, and incubated in mouse anti-SAG1 (DG52) [28] and anti-NF-κB (Cell Signaling Technology, 8242S, 1:1000) antibodies overnight at 4°C. The subsequent steps followed the protocol described above.

Growth assay

Syringe-lysed parasites were added to confluent HFF monolayers grown on glass coverslips at an MOI of 1 and spun down at 300 rpm for 1 minute. At 24 hours post infection, cells were washed in PBS and fixed for 20 minutes in 4% PFA. Cells were then permeabilized, blocked, and stained using an anti-Toxoplasma antibody (Thermo Fisher, PA17252, 1:5000) followed by goat anti-rabbit 568 (Thermo Fisher, A11011, 1:500). To enumerate the number of parasites per vacuole, coverslips were analyzed using an ECHO Revolve fluorescence microscope.

Plaque assay

Confluent monolayers of HFF cells were infected with 250 parasites of the indicated strains in cDMEM. After 10 days, media was removed, cultures were washed with PBS, and the monolayers were fixed in ice-cold methanol for 10 minutes. Fixed monolayers were then stained with crystal violet for 10 minutes at room temperature.

Immunofluorescence

For identification of T cells, free-floating brain sections were washed in TBS, blocked with 3% goat serum diluted in TBS for 1 hour, and then incubated overnight with hamster anti-CD3ε antibody (BD Biosciences, 550277) diluted in 1% goat serum/0.3% Triton X-100/TBS. The next day, samples were washed in TBS and incubated at room temperature in goat anti-hamster 647 (Life Technologies, A-21451). After a 4-hour incubation in secondary antibody, samples were washed for 5 minutes in TBS/Hoechst (1:5000) followed by 2 subsequent washes in TBS. Brain sections were then mounted, cover slipped with Fluoromount-G (Southern Biotech, 0100-01), and z-stacks were obtained on an ECHO Revolve microscope using a 10x objective.

Iba-1+ cell quantification

To quantify Iba-1+ cells in brain sections, stained sections were imaged using light microscopy. Eight images were obtained in a stereotyped pattern within the cortex of each brain section using a 20x objective. Three matched sections were imaged per mouse (24 images/mouse). Cells were quantified manually using FIJI software. Individuals quantifying cells were blinded to the infection status of mice.

T cell quantification

Imaris software was used to quantify the number of T cells within each 40 μm confocal image. The spots tool was used to generate a threshold of detectable T cells, which were then quantified by the program. Individuals quantifying cells were blinded to the infection status of mice.

Quantitative PCR

To quantify parasite burden, genomic DNA was isolated from the anterior quarter of the right hemisphere (brain), the left lobe of the liver, or the distal quarter of the spleen using the DNeasy Blood and Tissue kit (Qiagen, 69504), following the manufacturer's protocol. The Toxoplasma B1 gene was amplified using SYBR Green on the Eppendorf Mastercycler ep realplex 2.2 system. Gapdh was used to normalize parasite DNA levels.
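The text does not state the exact quantification model used for normalization. A minimal sketch of one common approach, relative quantification of B1 against Gapdh by the 2^-ΔCt method, is shown below in R; all sample names and Ct values are hypothetical.

# Hypothetical Ct values; relative parasite burden by 2^-deltaCt,
# one common model for normalizing a target gene (B1) to a host
# reference gene (Gapdh). Not necessarily the model used here.
qpcr <- data.frame(
  mouse    = c("m1", "m2", "m3"),
  ct_b1    = c(24.1, 25.3, 23.8),  # Toxoplasma B1
  ct_gapdh = c(18.2, 18.5, 18.0)   # host Gapdh (normalizer)
)
qpcr$rel_burden <- 2^-(qpcr$ct_b1 - qpcr$ct_gapdh)
print(qpcr)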
Cyst stain

Sagittal brain sections were blocked in 3% goat serum diluted in 0.3% Triton X-100/TBS for 1 hour. These sections were then incubated with biotinylated Dolichos biflorus agglutinin (DBA) (Vector Laboratories, 1031, 1:500) and a polyclonal rabbit anti-Toxoplasma antibody (Thermo Fisher Scientific, PA17252, 1:5000) overnight at 4°C. Samples were then washed and incubated with streptavidin Cy5 (Life Technologies, S21374, 1:500) and goat anti-rabbit 568 secondary antibody (Thermo Fisher Scientific, A11011, 1:500) for 4 hours at room temperature, after which samples were washed to remove residual antibody. Hoechst (Thermo Fisher Scientific, H3570, 1:5000) was added to the first TBS wash for 5 minutes to stain nuclei. Sections were then washed two more times, mounted on slides, and cover slipped using Fluoromount-G. The number of cysts (DBA+, anti-Toxoplasma antibody+) was enumerated using an ECHO Revolve fluorescence microscope.

Single cell suspension for flow cytometry

At the appropriate time points, mice were euthanized by CO2 and intracardially perfused with 20 ml cold PBS. Spleens were then harvested for flow cytometry, maintained in complete RPMI (86% RPMI, 10% FBS, 1% penicillin/streptomycin, 1% L-glutamine, 1% NEAA, 1% sodium pyruvate, and <0.01% β-mercaptoethanol), and processed to generate single cell suspensions. For single cell suspension, spleens were passed through a 40 μm strainer and centrifuged at 1200 rpm, 4°C, for 5 minutes. After removal of supernatant, red blood cells were lysed by addition of 1 ml ammonium-chloride-potassium (ACK) lysis buffer (Life Technologies, A1049201). ACK was neutralized by the addition of cRPMI, and the suspension was centrifuged at 1200 rpm, 4°C, for 5 minutes. The supernatant was removed and the pellet resuspended in cRPMI. The number of viable cells was quantified by diluting 10 μl of the single cell suspension in 90 μl trypan blue and counting on a hemocytometer. Samples for IFN-γ T cell panels were treated with PMA, ionomycin, and brefeldin for 4 hours in a 37°C incubator prior to washing, blocking, and staining.

Staining for flow cytometry

One million live cells of each sample were plated into a 96-well plate, washed in FACS buffer (1% FBS/PBS), and blocked with Fc block (Biolegend, 101302) to prevent nonspecific staining.

Peritoneal exudate cell isolation

Cre reporter mice were inoculated intraperitoneally with saline or 10,000 WT or IIΔgra15 parasites. At 2 and 5 dpi, peritoneal exudate cells (PECs) were collected by injecting 5 ml of cold PBS into the exposed peritoneal cavity, massaging the cavity, and recollecting the PBS/cell suspension. PECs were then incubated in Fc block, stained for CD45, and run on the LSRII.

Parasite RNA isolation

Confluent human foreskin fibroblasts were infected with the indicated strains for 48 hours. Monolayers were scraped, syringe lysed, and resuspended in TRIzol; RNA was extracted per the manufacturer's instructions (Thermo Fisher Scientific, 15596026). One μg of isolated RNA was converted to cDNA using the High-Capacity cDNA Reverse Transcription kit (Thermo Fisher Scientific, 4368814). Q-PCR was performed on cDNA using GRA15- and TgActin-specific primers.
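For transcript-level comparisons of this kind, fold change is often computed by the 2^-ΔΔCt method, normalizing GRA15 to TgActin and calibrating to the WT strain. A minimal sketch in R follows; the Ct values are invented for illustration, and the paper does not state which quantification model was used.

# Hypothetical Ct values; 2^-deltadeltaCt fold change of gra15,
# normalized to TgActin and expressed relative to the WT strain.
ct <- data.frame(
  strain   = c("WT", "complement"),
  ct_gra15 = c(26.0, 23.7),
  ct_actin = c(20.0, 20.0)
)
dct  <- ct$ct_gra15 - ct$ct_actin      # normalize to TgActin
ddct <- dct - dct[ct$strain == "WT"]   # calibrate to WT
data.frame(strain = ct$strain, fold_change = 2^-ddct)
# fold_change for "complement" is 2^2.3, i.e. roughly 4.9-fold WT levels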
Statistics

Graphs were generated and statistical tests were run using Prism software version 9.4.1. All in vivo experiments in C57BL/6 mice were repeated with two independent cohorts; unless otherwise noted, the data were analyzed with a two-way analysis of variance (ANOVA) with uncorrected Fisher's LSD. Infection of BALB/c mice was done once; these data were analyzed with a t-test. For intracellular growth assays and plaque assays, experiments were repeated three times and statistical analysis was conducted on the composite data. For the intracellular growth assay, a two-way ANOVA with uncorrected Fisher's LSD was used; for plaque assays, a one-way ANOVA was used. Analysis of parasite genomes in the liver at 5 dpi showed one mouse from one cohort to be an outlier as determined by the ROUT outlier test. Therefore, that mouse was removed from statistical analysis.

GRA15 does not influence parasite burden or macrophage/microglia and T cell abundance in the brain at 3 wpi

To probe the influence of GRA15 during early chronic infection, we generated a type II strain (Prugniaud or Pru) that lacked gra15 (IIΔgra15) and the appropriate complemented strain (IIΔgra15::GRA15) using previously described CRISPR-Cas9 methodology [29]. As the IIΔgra15 and IIΔgra15::GRA15 strains express a rhoptry::Cre recombinase fusion protein, for the wild-type (WT)/control strain, we used a Pru strain that has been engineered to express the same rhoptry::Cre fusion protein [21]. The IIΔgra15 strain was confirmed to lack NF-κB activation, and the complemented strain restored NF-κB activity at 24 hours post infection (hpi) (S1A and S1B Fig). At 1 and 3 hpi, none of the strains induced NF-κB nuclear localization, regardless of GRA15 expression (S1C Fig). To assess GRA15's effect in vivo, Cre reporter mice that express GFP only after Cre-mediated recombination [27] were inoculated with saline or with WT, IIΔgra15, or IIΔgra15::GRA15 parasites. At 3 weeks post infection (wpi), spleen and brain were harvested. To assess overall brain parasite burden, we performed Q-PCR for a Toxoplasma-specific gene (B1) on genomic DNA isolated from the brain [23,30]. We found no statistical difference between WT and IIΔgra15 infected brains, though the IIΔgra15::GRA15 infected brains consistently showed a lower parasite burden (Fig 1A). As a second method for assessing brain parasite burden, we quantified the number of cysts by staining brain sections with Dolichos biflorus agglutinin (DBA), a lectin that stains sugar moieties on components of the cyst wall (Fig 1B) [31]. Consistent with the Q-PCR data, cyst counts from WT and IIΔgra15 infected brains were not statistically different, while cyst counts from IIΔgra15::GRA15 infected brains were lower (Fig 1C). Given that the IIΔgra15::GRA15 strain consistently appeared to be less capable of establishing an in vivo infection in multiple cohorts of mice, we performed in vitro studies to determine if this strain had a growth defect and/or unusual expression of gra15. Indeed, the IIΔgra15::GRA15 strain showed a replication defect at 24 hours post infection (S1D Fig), though this difference did not translate into a defect in plaque formation (S1E-S1G Fig). In addition, we determined that the complemented strain expressed approximately 5-fold more gra15 than the WT strain (S1H Fig). Given the lytic cycle defect, which we expect would be exacerbated in vivo, and the increased expression of gra15 in the IIΔgra15::GRA15 strain, we decided to move forward without the complement, as these phenotypes introduce variables for which we cannot control.
As GRA15 influences macrophage phenotypes in vitro, and a change in macrophage skewing might affect the neuroinflammatory response without altering brain parasite burden, we next sought to evaluate the brain immune response. We focused on macrophages/microglia and T cells because these are the primary immune cells to infiltrate and/or be activated in the brain upon Toxoplasma infection [29]. To quantify the number of macrophages/microglia, we stained tissue sections with anti-Iba1 antibodies, which recognize a cytoskeletal protein (Iba1) expressed by both macrophages and microglia. We then quantified the number of Iba1+ cells manually [29], finding no difference in the number of Iba1+ cells in brain sections from WT and IIΔgra15 infected mice (Fig 1D and 1E). To quantify T cells, we performed immunofluorescence assays using an anti-CD3ε antibody. We then imaged the stained tissue sections and analyzed the images with Imaris software, which is capable of segregating and counting the stained T cells in an automated manner (Fig 1F and 1G). We found no difference in the number of CD3ε+ cells in brain sections from WT and IIΔgra15 infected mice. Collectively, these data suggest that GRA15 does not affect Toxoplasma's dissemination to or persistence in the brain at 3 wpi. GRA15 also does not appear to alter the number of macrophages/microglia or T cells present in the brain at 3 wpi.

GRA15 may influence M1-like polarization of macrophages at 3 wpi

While IHC allows us to quantify infiltrating immune cells, it cannot assess the polarization state of immune cells, which can be done by flow cytometry. Given that GRA15 induces an M1-like phenotype in infected macrophages in vitro [17], we were interested in determining how this gene influences macrophage phenotypes in vivo. As prior data from our lab have shown that the immune response within the spleen mirrors the immune response found in the brain at 3 wpi [29], we used splenocytes for our analyses. To that end, we used the following markers to segregate macrophages into pro-inflammatory (M1-like) macrophages (CD45+, F4/80+/CD11b hi CD11c lo/int/CD80+ CD86+) and wound-healing (M2) macrophages (CD45+, F4/80+/CD11b hi CD11c lo/int/CD206+/F4/80+) (gating scheme shown in S2 Fig). Given that we did not use iNOS staining, which is required to identify a true M1 macrophage, we refer to these cells as M1-like macrophages. Mice infected with IIΔgra15 parasites showed a lower number and frequency of splenic M1-like macrophages compared to WT infected mice (Fig 2A and 2B).

GRA15 does not affect IFN-γ producing T cell populations at 3 wpi

M1/M1-like macrophages are expected to produce IL-12 [17,18,29]. As IL-12 is one of many signals that polarize naïve CD4+ T cells to become T-bet+, IFN-γ producing Th1 cells, we hypothesized that the lower number of M1-like macrophages provoked by IIΔgra15 parasites might result in decreased IFN-γ production by T cells [32,33]. To test this possibility, we profiled the splenic T cell compartment, assessing CD4 and CD8 numbers as well as their capability to produce IFN-γ (gating scheme shown in S3 Fig). We found no differences between the groups in terms of the number or frequency of Th1, Th2, or Treg cells (Fig 3A-3F). The number of IFN-γ producing CD4 and CD8 T cells was also not different. Together, these data suggest that, at 3 wpi, GRA15 does not influence IFN-γ production in CD4 or CD8 T cells, despite potentially influencing M1-like macrophage number and frequency.
GRA15 may influence the frequency of peripheral "exhausted" T cells and NK cells at 3 weeks post infection in C57BL/6 mice

As work from other labs has identified T cell exhaustion during chronic time points of Toxoplasma infection [34,35], and because such analysis has not been done with Δgra15 strains, we assessed the T cell compartment for exhausted T cells by looking for co-expression of the inhibitory markers PD-1 and CTLA-4 (FMO shown in S4 Fig). We found that mice infected with IIΔgra15 parasites generated a lower frequency of exhausted CD4+ T cells, though the total number of exhausted CD4+ T cells only trended down in IIΔgra15 infected mice (Fig 5A and 5B). As NK cells have been shown to contribute to T cell exhaustion in the chronic phase of disease [36], we also quantified NK cell number and frequency, finding a lower frequency of NK cells in IIΔgra15 infected mice.

GRA15 may influence parasite dissemination during acute infection

Given the published data suggesting a difference in parasite burden between WT and IIΔgra15 parasites at 5 dpi [14], we were surprised that we did not see a difference in parasite burden in the brain at 3 wpi (Fig 1A and 1C). Therefore, we wondered if the previously reported GRA15-associated phenotypes could only be seen early in infection. To address this question, we inoculated Cre reporter mice intraperitoneally with saline or WT or IIΔgra15 parasites and collected peritoneal exudate cells (PECs) and peritoneal fluid. Following the protocol from the previously published report [14], we measured IFN-γ in the peritoneal fluid, finding no difference in IFN-γ levels at 2 dpi (Fig 6A). While the prior study used bioluminescent imaging to quantify parasite burden, our parasites were not compatible with such measurements (i.e., our parasites do not express luciferase). Instead, as our parasite strains express a rhoptry::Cre fusion protein and, in Cre reporter mice, the number of GFP+ cells correlates with the parasite burden [29], we used the number of green fluorescent protein-expressing (GFP+) PECs as an indirect measure of peritoneal parasite burden. Unlike the prior work, at 2 and 5 dpi we found no difference in the frequency of GFP+ CD45+ PECs between the two groups (Fig 6B). Though the GFP+ PEC numbers were equivalent between WT and IIΔgra15 infections at 2 and 5 dpi, Q-PCR for Toxoplasma B1 on genomic DNA isolated from liver and spleen at 5 dpi showed a lower burden in IIΔgra15 infected mice (Fig 6C and 6D). In summary, unlike previously published data, we did not find a decrease in IFN-γ within the peritoneal cavity at 2 dpi, nor did we find evidence of an increase in the number of IIΔgra15 parasites compared to WT parasites at 2 or 5 dpi. On the contrary, if anything, our Q-PCR data suggest the opposite.
Given that our findings were inconsistent with the prior work, we speculated that these discrepancies arose from our using C57BL/6 mice while the prior work used BALB/c mice. We were particularly interested in this possibility because BALB/c mice and C57BL/6 mice are known to generate very different immune responses, with BALB/c mice being predisposed to a Th2 response and C57BL/6 mice being predisposed to a Th1 response [37-39]. To determine if differences in mouse strain explained the discrepancy between our work and the prior work, we repeated the peritoneal IFN-γ measurement in BALB/c mice, again finding no difference between WT and IIΔgra15 infected mice at 2 dpi (Fig 6E).

Discussion

As GRA15 acutely modulates the secretion of IL-12 by infected macrophages in vitro and has been reported to affect parasite growth and local IFN-γ levels very early in vivo [15,18,40], here we sought to understand the biological relevance of these changes beyond the earliest days of infection by assessing brain outcomes at 3 wpi. We found that the brain parasite burden, the number of macrophages/microglia and T cells in the brain, and splenic CD4 and CD8 IFN-γ+ T cells did not differ between WT and IIΔgra15 strains. We did find several subtle differences in splenocytes from WT and IIΔgra15 infected mice (decreased M1-like macrophages and frequency of PD-1+ CTLA-4+ CD4+ T cells and NK cells), but the biological significance of these findings is unclear given the other equivalent outcomes. In summary, the work presented here suggests that despite GRA15's well documented effects in vitro [14,15,17,18], for the outcomes we measured, GRA15 has little effect on cerebral toxoplasmosis and peripheral immune cell polarization in C57BL/6 mice at 3 wpi.

Our finding that GRA15 does not influence brain parasite burden, at least early in brain infection, is consistent with a prior publication that also used an independently generated IIΔgra15 strain [19]. Conversely, a different publication that used strains generated by the lab that originally identified the link between GRA15 and NF-κB found a trend (p > 0.05) toward a lower cyst burden at 4 wpi in mice infected with that IIΔgra15 strain [14,20]. Collectively, these data suggest that GRA15 likely does not influence cyst burden early in brain infection, though variation can be seen with knockouts from different labs.
Though we did not find GRA15-related differences in the brain parasite burden or in immune cells in the brain parenchyma, our identification of a lower number and frequency of M1-like macrophages in mice infected with IIΔgra15 (Fig 2A and 2B) is consistent with the in vitro data suggesting that GRA15 plays a role in polarizing macrophages to an M1-like phenotype [14]. However, the rest of the results indicate that this difference in the M1-like compartment is not sufficient to alter parasite abundance in the brain at 3 wpi. Our finding that IIΔgra15 infected mice have a lower frequency of "exhausted" CD4+ T cells is novel and interesting, especially when viewed in the context that IIΔgra15 infected mice had the same number of IFN-γ producing CD4 and CD8 T cells as WT infected mice (i.e., no IIΔgra15 effect on these populations). While several possibilities might explain this discrepancy, one possibility is that these PD-1+ CTLA-4+ CD4 T cells are not exhausted. Recent work suggests that identifying exhausted cells using surface markers alone is likely inadequate, as PD-1 hi cells that also express other inhibitory markers (e.g., TIM-3, CTLA-4) can be highly activated effector cells (i.e., express IFN-γ) that have not yet fully differentiated [41]. As our flow panel that included PD-1 and CTLA-4 did not include IFN-γ, we cannot determine if these cells were truly exhausted or maintained effector function. Future studies will be required to definitively determine the status of these cells.

The major limitation of this study, and of every study that has examined type II gra15 knockout strains in mice [14,18-20], is the lack of an appropriate complemented strain in which GRA15 has been ectopically expressed at the same level as in wild-type parasites. While a complemented strain is not necessary for negative results (i.e., no difference between the wild-type and KO strain), complemented strains are important for phenotypes that differ between wild-type and KO strains or between studies of independently generated KOs. For example, as noted above, we found differences between mice infected with the wild-type and IIΔgra15 strains in the M1-like, "exhausted" CD4 T cell, and NK cell populations. Similarly, unlike prior work, we did not find a decrease in peritoneal supernatant IFN-γ at 2 dpi in C57BL/6 or BALB/c mice infected with IIΔgra15 parasites compared to mice infected with WT parasites (Fig 6A and 6E). While several possibilities might explain these differences, appropriate complemented strains would help distinguish between GRA15-driven effects and effects driven by idiosyncratic differences of individual knockout strains.
Why does GRA15, which has clear and consistent phenotypes in vitro (e.g., NF-κB activation, IL-1β production), have such a limited phenotype in mice? This discrepancy may relate to Toxoplasma having evolved to survive across a range of intermediate hosts, leading to redundancies in its mechanisms for manipulating host signaling pathways. For example, the Toxoplasma proteins GRA83, GRA24, profilin, GRA7, and GRA15 are all linked to IL-12 production from infected murine DCs and macrophages, initiated through different ligand-receptor interactions [6,14,15,18,42-44]. Such redundancies in how parasites trigger IL-12 production in mice may compensate when parasites lack GRA15. On the other hand, species that lack the receptors that murine cells use to detect Toxoplasma proteins (e.g., humans lack TLR11/12, which recognize the Toxoplasma protein profilin) may have a stronger dependency on GRA15 signaling to generate IL-12 during infection. Thus, while GRA15 may not play an essential role in mice up to 3 wpi, in a different host GRA15 may be the difference between type II parasite survival and clearance.

Fig 1. GRA15 does not influence parasite burden in the brain at 3 weeks post infection. Mice were intraperitoneally (i.p.) inoculated with saline (control) or 10,000 WT, IIΔgra15, or IIΔgra15::GRA15 parasites. Brains and spleens were harvested at 3 weeks post infection (wpi). Mice from these infections were used in Figs 1-5. A. Graph of Toxoplasma brain burden as assessed by Q-PCR for the Toxoplasma-specific B1 gene. B. Representative images of a brain tissue cyst stained with Dolichos biflorus agglutinin (DBA). Top image is DBA staining; middle image is staining

Fig 6. GRA15 may increase parasite dissemination during acute infection. A-D. Cre reporter mice were inoculated with 10,000 WT or IIΔgra15 parasites. At the denoted time points, peritoneal lavage was performed to isolate peritoneal exudate cells (PECs). At 5 dpi, liver and spleen tissue was also collected for B1 analysis. A. ELISA of IFN-γ found in the peritoneal cavity at 2 dpi. B. Frequency of GFP+ CD45+ cells found within the peritoneal cavity at 2 and 5 dpi. C. Q-PCR of parasite genomes on DNA isolated from liver at 5 dpi. D. Q-PCR of parasite genomes on DNA isolated from spleen at 5 dpi. E. BALB/c mice were intraperitoneally inoculated with 5,000 WT or IIΔgra15 parasites. Graph shows levels of IFN-γ detected by ELISA using peritoneal lavage fluid at 2 dpi. Bars, mean ± SEM. N = 4-5 mice/infection strain. B-D. Each dot represents a mouse. A-D. Data are representative of two independent experiments, 4-5 mice/infection strain/cohort. Statistics: two-way ANOVA, Fisher's LSD multiple comparisons test. E. Statistics: t-test. N = 5 mice per group, one experiment. https://doi.org/10.1371/journal.pone.0300764.g006
The Investment in Scent: Time-Resolved Metabolic Processes in Developing Volatile-Producing Nigella sativa L. Seeds

The interplay of processes in central and specialized metabolism during seed development of Nigella sativa L. was studied by using a high-throughput metabolomics technology and network-based analysis. Two major metabolic shifts were identified during seed development: the first was characterized by the accumulation of storage lipids (estimated as total fatty acids) and N-compounds, and the second by the biosynthesis of volatile organic compounds (VOCs) and a 30% average decrease in total fatty acids. Network-based analysis identified coordinated metabolic processes during development and demonstrated the presence of five network communities. Enrichment analysis indicated that different compound classes, such as sugars, amino acids, and fatty acids, are largely separated and over-represented in certain communities. One community displayed several terpenoids and the central metabolites shikimate-derived amino acids, raffinose, xylitol and glycerol-3-phosphate. The latter are related to precursors of the mevalonate-independent pathway for VOC production in the plastid; the plastidial fatty acid 18:3n-3, abundant in "green" seeds, also grouped with several major terpenes. The findings highlight the interplay between the components of central metabolism and the VOCs. The developmental regulation of Nigella seed metabolism during seed maturation suggests a substantial re-allocation of carbon from the breakdown of fatty acids and from N-compounds, probably towards the biosynthesis of VOCs.

Introduction

During seed development, carbon metabolism is committed to three different directions, namely, accumulation of storage reserves, preparation for germination, and acquisition of desiccation tolerance [1-3]. In parallel to driving the development of seeds on the mother plant, central carbon metabolism provides the building blocks for the production of specialized metabolites, including fatty acids, pigments, phenolic compounds, and alkaloids, as well as volatile organic compounds (VOCs) in VOC-producing seeds. The seeds of most species do not commonly accumulate volatiles, but those of the Brassicaceae and of some other families do accumulate non-volatile glucosinolates, the precursors of sulfur volatiles, which are degraded into volatile compounds upon tissue disruption [4]. The so-called "seeds" that accumulate essential oils in species like fennel, caraway and anise are in fact mericarps (fruits) [5]. In contrast, Nigella sativa L. (Ranunculaceae), popularly known as black cumin, accumulates essential oil in its true seeds, thus providing a model system to study the inter-regulation between the production of VOCs and the accumulation of the storage reserves that are characteristic of seed development and maturation [6]. Nigella seeds have been widely used since antiquity both as a medicine and as a spice in the Middle East, India and Europe [7]. These seeds contain major pharmacoactive components, including the monoterpene thymoquinone, the saponin α-hederin, and unique alkaloids [7]. The seeds also contain relatively high levels of fixed oil, triacylglycerols composed mainly of unsaturated fatty acids (oleic and linoleic acids), palmitic acid and, unusually, eicosadienoic acid (20:2n-6), which rarely accumulates in seeds [7-10].
Although the genetics underlying the production of VOCs has been documented, knowledge of the biochemistry of the volatiles, which include more than 30,000 compounds, remains fragmented [11,12]. Moreover, profiles of volatiles can change swiftly as a consequence of environmental and herbivore pressure or in response to developmental cues [13,14]. Therefore, it is likely that there is a highly dynamic interplay between central metabolism and the biosynthesis of volatiles. We hypothesize that during the period of seed development, which generally requires tight regulation of metabolic processes [15-17], there is probably a balance between incorporation of carbon and nitrogen into storage reserves and production of volatiles. The recent development of advanced analytic tools enables comprehensive phenotyping of plant tissue and molecular characterization of developmental processes [1]. The metabolic phenotyping of seeds has aided in the description and identification of the processes central to seed physiology [17-19]. To understand the organization of relational ties between metabolites, reflecting not only substrate-product relationships but also regulatory effects, one may apply various similarity measures to (normalized) metabolic profiles. The resulting similarity matrices can, in turn, be effectively used to generate hypotheses and descriptive analyses of metabolism [20,21]. The analysis of the relationships between time-resolved profiles is usually performed by applying symmetric similarity measures (e.g., Pearson, Spearman, and partial correlation), eventually extracting undirected relationships [22]. However, cellular networks spanning different molecular levels (e.g., gene regulation, signaling, and metabolism) are in fact inherently directed, implying the existence of driving and responding biochemical entities (e.g., feedback regulation, transcription factors and signaling proteins). The use of similarity measures for determining directed (causal) relationships is precluded by the need for very long time-series data, exceeding 100 time points [23]. Here, we gathered and analyzed, using different analytical platforms, metabolite data sets from developing VOC-producing seeds of Nigella. We then integrated the time-series metabolite dataset via a network-based analysis to study the coordinated interplay between the metabolism of volatile and non-volatile products during seed development. To this end, we employed a recently introduced similarity measure to identify directed coordinated patterns of change between metabolites during seed development [24]. The results are discussed against the background of the current understanding of seed metabolism and of the biosynthesis of volatiles.

Chemicals

All chemicals were purchased from Sigma-Aldrich Israel Ltd.

Extraction and Analysis of Total Protein and Chlorophyll-a Content

Total protein was determined by the Bradford method [25] using Protein Assay reagent (Sigma-Aldrich Israel Ltd., Jerusalem, Israel). Protein extraction was performed by crushing the seeds with 0.1 M NaOH at 95°C for 1 h. The samples were mixed with the reagent and measured at 595 nm after incubation at room temperature. To measure the amount of chlorophyll-a (Chl-a), seeds were crushed, placed in 80% ethanol, and then held at 4°C for two days in tightly closed tubes in the dark. The chlorophyll-a content was estimated from the measurement of the supernatant absorbance at 665 nm by using the equation Chl-a (µg/mg) = (OD665 × 13.9) × 2 / weight [26].
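A worked example of this equation, with invented input values, can be written in R as:

# Chl-a (ug/mg) = (OD665 * 13.9) * 2 / weight; values below are invented
chl_a <- function(od665, weight_mg) (od665 * 13.9) * 2 / weight_mg
chl_a(od665 = 0.42, weight_mg = 25)   # ~0.47 ug Chl-a per mg of seed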
Extraction, Derivatization and Analysis of Primary Metabolites Using GC-MS

Material collected as described above was extracted according to the protocol described in Lisec et al. [27] and analyzed using a sqGC-MS (Thermo Scientific Ltd), adjusting the extraction protocol to seed material as described in Fait et al. [17]. Relative metabolite content was calculated as described in Roessner et al. [28] following peak identification using Xcalibur software. Metabolites were annotated by comparison to mass spectra in the NIST library and the Golm database [29,30].

Extraction and Analysis of Fatty Acids

Seeds were ground and transmethylated with 2% H2SO4 in dry methanol (v/v) at 70°C for 1 h. Heptadecanoic acid (C17:0) was added as the internal standard. Gas chromatographic analysis was performed according to Cohen et al. [31]. Fatty acid methyl esters were identified by co-chromatography with authentic standards.

Multivariate and Statistical Analysis

Principal component analysis (PCA) was performed on the data sets obtained from metabolite profiling with the software package tMEV [32]. Prior to the analysis, data were log-transformed and normalized to the median of the entire sample set for each metabolite. Differences between means were tested for significance by the sum of squares simultaneous test procedure (SS-STP) [33] to reduce the number of multiple tests. Hypothesis testing was carried out at a significance level of 0.05. Model-based clustering was conducted by using the "cluster" package in R version 2.15.1. The Bayesian information criterion was used to determine the number of clusters.

Directed Network Generation

For directed network generation, we used the recently introduced asymmetric measure known as iota (denoted by ι). Iota is a permutation-based measure, which relies on sorting a time series in increasing order and quantifying how the implied order affects the monotonicity of the remaining time series [24,34]. The monotonicity in a re-ordered time series, based on the order-inducing permutation of the other, is quantified by the normalized number of crossing points. For instance, two time series, illustrated in the upper left corner of Figure S2, are denoted as red and black. Sorting of the black time series in increasing order induces a re-ordering of the red time series, which results in 4 crossing points and a value for iota of 4/10 (where 10 is the maximum number of crossings in a time series on 6 time points). If this value is statistically significant, then a directed edge is established originating in the node described by the black time series and ending in the node whose behavior is characterized by the red time series. To reconstruct the network, we first determined the threshold value for ι to ensure a q-value of 0.05. For a threshold value tι, the q-value is defined as the minimum false discovery rate (FDR) attained at or above the given threshold score. The q-value can be readily determined from an empirically estimated null distribution. Here, the null distribution is obtained by 500 shufflings of the profiles independently of each other, followed by re-estimation of the iota values. For the case of the Nigella metabolomics data set, there are 100 metabolites, for which the threshold value tι = 0.966 implies a q-value of 0.05. Finally, the network, in which the nodes represent metabolites, includes directed edges only for those pairs whose ι value is above the threshold tι.
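The pipeline can be sketched in R as follows. This is our reading of the description above, not the reference implementation of [24]: in particular, we interpret the "crossing points" of a re-ordered profile as the pairs of consecutive-point segments whose value ranges overlap, which reproduces the stated maximum of (n-1)(n-2)/2 = 10 crossings for n = 6 time points. The 500-shuffle null distribution and q-value machinery are reduced here to a fixed threshold, and igraph's cluster_walktrap stands in for the walk-trap community algorithm described in the next paragraph. All data are toy values.

library(igraph)

# iota(x -> y): re-order y by the permutation that sorts x, then count
# "crossing" pairs among the n-1 consecutive-point segments of the
# re-ordered profile (pairs whose open value ranges overlap), normalized
# by the maximum choose(n-1, 2). A monotone re-ordered profile gives 0.
iota <- function(x, y) {
  y <- y[order(x)]
  n <- length(y)
  lo <- pmin(y[-n], y[-1]); hi <- pmax(y[-n], y[-1])
  crossings <- 0
  for (i in 1:(n - 2)) for (j in (i + 1):(n - 1))
    if (max(lo[i], lo[j]) < min(hi[i], hi[j])) crossings <- crossings + 1
  crossings / choose(n - 1, 2)
}

# Toy profiles: rows = metabolites, columns = 14 developmental time points.
set.seed(1)
profiles <- matrix(rnorm(6 * 14), nrow = 6,
                   dimnames = list(paste0("met", 1:6), NULL))

# One directed edge per ordered pair whose iota exceeds the threshold.
t_iota <- 0.5  # the paper derives 0.966 from a 500-shuffle null
               # distribution; lowered here so the toy data yield edges
edges <- data.frame(from = character(), to = character())
mets <- rownames(profiles)
for (a in mets) for (b in setdiff(mets, a))
  if (iota(profiles[a, ], profiles[b, ]) > t_iota)
    edges <- rbind(edges, data.frame(from = a, to = b))

g <- graph_from_data_frame(edges, directed = TRUE,
                           vertices = data.frame(name = mets))
edge_density(g)   # 0.0958 reported for the full data set
diameter(g)       # 7 reported for the full data set

# Walk-trap communities: short random walks tend to stay inside densely
# connected sub-networks.
membership(cluster_walktrap(as.undirected(g)))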
The directed edges can be regarded as capturing putative substrate-product and regulatory relationships, as well as the dependence between biochemical pathways. To further reveal the clustering structure of this network, communities were determined by performing short random walks on the network and imposing a limit of k = 20 on the number of nodes in each of the identified communities. The idea behind this procedure is that short random walks tend to remain in the same community. The robustness of the community results was established by varying the parameter k in the interval [10, 30]. The resulting communities were visualized using Cytoscape version 2.8.3.

Results

Nigella seeds remain green up to 46 days after anthesis (DAA); at 55 DAA the color turns gradually to black (Figure 1A). The content of chlorophyll-a at late maturation (Figure 1B) suggests that light reactions are taking place, probably reaching a maximum at 43 DAA and subsequently decreasing gradually to the minimum observed at 70 DAA. After this time, seeds lose chlorophyll and acquire black pigmentation (Figure 1B).

Metabolic Profiling Analysis of Nigella Seeds Identifies Distinct Developmental Milestones in Central and Specialized Metabolism

To investigate the regulation between central metabolites during seed development, we utilized an established gas chromatography-mass spectrometry (GC-MS)-based protocol [28]. The relative contents of over 70 annotated metabolites from seeds at 14 different time points, from early through mid to late development, were quantified (Materials and Methods). The resulting data set is presented in Table S1. The Bayesian information criterion in combination with model-based clustering of the developmental time series (Materials and Methods) was used to estimate the number of clusters, with each cluster exhibiting similar metabolite profiles distinct from the samples occupying other clusters. This resulted in three clusters (Figure S1), in line with the dispersion suggested by the PCA (Figure 2, Table S3), which shows that samples from early, mid, and late maturation belong to different clusters. The three clusters corresponding to the different developmental periods are characterized by two distinct shifts in metabolite abundance. By using the conservative SS-STP [33] to test for differences between groups of samples, we identified the metabolites that contribute significantly to the shift between the determined clusters (Table S3). The first shift occurs at 35 DAA, and it is characterized by a significant drop in the content of sugars and glycolysis intermediates, of the triacylglycerol precursor glycerol-3-phosphate, and of myo-inositol, malate and nicotinate. Exceptions among the sugars were galactinol, raffinose and sucrose, whose abundance increased significantly. Among the N-compounds, Asn (at 35 DAA) followed by dopamine (at 35-40 DAA) displayed an exceptionally high, but transient, accumulation during this period (10- and 1000-fold change, respectively; Figure 3A-C), although the majority of amino acids had increased transiently earlier in development (25 DAA). The second shift, from 55 to 60 DAA, was characterized by decreased contents not only of the hexoses and sugar alcohols, but also of shikimate and the shikimate-pathway-related metabolites 3,4-dihydroxyphenyl-acetate, dopamine and β-Ala. In addition, an increase in raffinose, a dehydration-associated sugar, was prominent at this stage (Table S3).
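For reference, the preprocessing and clustering workflow described in Materials and Methods can be sketched in R as follows. mclust is used here for the BIC-driven model-based clustering (an assumption: the text names the "cluster" package, but BIC-based model selection is what mclust implements), prcomp stands in for the PCA performed in tMEV, and the data matrix is invented.

library(mclust)  # BIC-driven model-based clustering

# Toy data: rows = samples (time points), columns = metabolites.
set.seed(2)
dat <- matrix(abs(rnorm(14 * 10, mean = 5)), nrow = 14)

# Preprocessing as described: log-transform, then normalize each
# metabolite to the median of the entire sample set.
logd  <- log2(dat)
normd <- sweep(logd, 2, apply(logd, 2, median), "-")

# PCA on the normalized profiles.
pca <- prcomp(normd)
summary(pca)$importance[, 1:3]   # variance captured by PC1-PC3

# Model-based clustering; Mclust selects the number of clusters by BIC
# (three clusters were obtained for the real data set).
fit <- Mclust(pca$x[, 1:3])
fit$G   # selected number of clusters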
Metabolites contributing to the overall distribution of the samples on the principal components (PCs) were derived from the loadings of the first three components and include the sugars galactinol, raffinose, glucopyranose and cellobiose, together with 3,4-dihydroxyphenyl-acetate, glycerate, threonate 1,4-lactone, succinate, nicotinate, malate, fumarate, and the amino acids Ser, Pro, Gln, GABA, and Tyr and the related catecholamine, dopamine (Table S2). Analysis of the pattern of changes of the individual metabolites across the different developmental stages resulted in the trends depicted in Figure 3A-C, as described in the following sections.

Glycolysis and sugar metabolism. Most glycolytic intermediates and other sugars displayed an increase between day 10 and day 25-30, followed by an abrupt decrease. This pattern of change included a sharp increase in the abundance of these sugars, as well as of arabinose, lyxose and xylose, between 20 and 30 DAA. The general decrease in sugars was coupled to the accumulation of galactinol and raffinose, which are desiccation-associated sugars. Raffinose increased 100-fold, a finding that emphasizes the functional relevance of this sugar in seed late maturation, as suggested for Arabidopsis [16,17,35].

TCA cycle and amino acid metabolism. TCA cycle intermediates such as succinate, fumarate and, to some extent, malate, as well as the associated metabolite GABA, were shown to increase markedly between 25 and 35 DAA. In contrast, the levels of itaconate, an aconitate-derived metabolite, dropped dramatically following anthesis and then increased transiently at 25 DAA and between 35 and 43 DAA. The increased activity of the TCA cycle, reflected by the abundance of TCA intermediates, was associated with the production of pyruvate-derived 2-isopropylmalate, a precursor in the biosynthesis of the branched-chain amino acids Val, Leu and Ile. Similarly to itaconate, 2-isopropylmalate was shown to increase transiently but dramatically at 25 DAA and later at 43 DAA, a time point that was also characterized by major changes in N-compounds (see below).

Amino acids. Rapid changes characterized the level of amino acids in the Nigella seeds during development. The contents of the amino acids Val, Leu, Ile, Thr and particularly of Tyr and Ser increased transiently by more than 10-fold between 20 and 30 DAA. The majority of the amino acids, including Asp, Glu, Ala, β-Ala, Ser and homo-Ser, accumulated at 25 DAA. Stages subsequent to 30 DAA were characterized by a gradual but steady decrease in the content of all amino acids, with the exception of Asp, Asn, Ala, Val, and β-Ala and, to a lesser extent, Glu and Gln. The latter showed a second transient increase in content around 43-46 DAA (Figure 3A). A succession of changes in the content of different amino acids during seed development could be indicative of inter-convertibility between amino acids. For example, Pro showed the most pronounced change, a 20-fold decrease in abundance between the first and second sampling dates, probably as a result of its catabolism to Glu. The content of Pro continued to decrease significantly for the next 2 time points. Increases in Asn and particularly in Gln preceded the first wave of general increases in amino acid content; at day 20, the abundances of Gln and Asn increased transiently (by 5-fold and 2-fold, respectively) and then decreased to initial levels at the subsequent time points (25 and 30 DAA).
Among the N-containing compounds, the content of Asp-derived nicotinate doubled at 20 DAA and dropped five-fold at 35 DAA, when comparatively sharp increases in dopamine, Asn, Ala-CO2 and Trp were measured (Figure 3A). Moreover, during this period, the nicotinate-derived compound 6-hydroxynicotinate accumulated to about 25-fold of its level during early development. At the very end of the developmental period under investigation, namely, between 70 and 82 DAA, significant accumulation of the amino acids Glu, Asp, and Ala-CO2 was found in the dry seeds.

Shikimate-associated Changes

An increase in the abundance of Trp followed that of Phe and Tyr at 20-30 DAA. As seed development proceeded, Trp showed an additional peak in relative content at 46 DAA. Major changes were also observed for shikimate-derived dopamine and for Phe-derived 3,4-dihydroxyphenyl-acetate from 40-43 DAA to 55 DAA. Shikimate, a precursor of chorismate, accumulated transiently during early seed development at 0-25 DAA and later between 46 and 55 DAA. Chorismate-derived 4-amino-benzoate (anthranilate) decreased gradually throughout seed development. The cinnamate-derived 4-hydroxy-benzoate (from Phe metabolism) displayed a 10-fold increase at the end of seed maturation (70 DAA).

Fatty Acid Analysis of Nigella Seeds

The composition of fatty acids was determined in mature Nigella seeds from the representative accession (EH), and the relative proportions [expressed as percentage of total fatty acids (TFA)] and absolute concentrations of fatty acids (mg per gram of seeds) were determined in Nigella seeds at different stages of their development (Figure 4, Table 1). Both the polyunsaturated fatty acid α-linolenic acid (18:3n-3), a substrate of lipoxygenase [LOX (linoleate:oxygen oxidoreductase, EC 1.13.11.12)], and C16 unsaturated fatty acids were present during early stages of seed maturation (the "green" stage), when the immature seeds were still rich in photosynthetic pigments (Figure 1A). These unsaturated fatty acids are components of the chloroplast membrane lipids of flowers and of "green developing" seeds. Of note was the sharp decrease in the concentration of the 18:3n-3 fatty acid between 20 and 25 DAA and its continued low levels throughout maturation. In contrast, the rapid increase in TFA content, which had occurred by 35 DAA, was accompanied by the accumulation of the unsaturated fatty acids 18:1n-9 and 18:2n-6 and the long-chain fatty acids 20:1 and 20:2, which increased to more than 10-fold of their initial level, in accord with oil deposition. The above processes were associated with the first rise in VOC production (Figure S3), in accordance with the recently revealed patterns of developmental change in VOCs for the same material as that used in this study [7]. The content of TFA increased between 35-50 DAA, reaching an apex at 50 DAA, indicating the accumulation of storage lipids (Table 1). This trend was observed for the absolute contents of the major fatty acids (16:0, 18:0, 18:2, 18:1n-9). A different pattern of change was shown for C16 unsaturated fatty acids and for 18:3n-3, which were present at low levels in mature seeds. A coordinated decrease in TAG-associated fatty acids [36] occurred after 50 DAA, particularly in linoleic acid (18:2), with this decrease being associated with the accumulation of VOCs (Figure S3).
Directed Network Analysis Highlights Coordinated Metabolic Shifts

To understand the relationships between the metabolic processes occurring during the development of Nigella seeds, time-resolved profiles of volatiles, of central metabolism compounds, and of fatty acids were subjected to network-based analysis. To create directed network edges, we used the asymmetric similarity measure iota [24] (see Materials and Methods) in combination with a threshold value ensuring statistical soundness; here, the threshold was selected to guarantee a false discovery rate of 5%, corresponding to a threshold value of 0.995. A directed edge is indicative of the dependence of the originating node on the receiving node, which may indicate either a regulation-associated association or a product-substrate relationship. In both cases, monotonic changes in the time-series profile attributed to the receiver node are expected to relate to a monotonic change in the profile of the originator node. Following this approach, the resulting network contains 107 nodes, corresponding to the annotated metabolites, connected by 1087 directed edges. The relative density of this directed network (compared to all possible edges that could be established on the given number of nodes) was 0.0958. The diameter of the network, i.e., the length of the longest of all the shortest paths connecting any two nodes, was equal to 7, suggesting the presence of denser sub-networks. To characterize the locally dense parts of the obtained network, suggesting coordinated changes in metabolic content, we next identified network communities. A network community corresponds to a sub-network of nodes that are more connected to each other than to other nodes in the network (Figure 5). We used the walk-trap community algorithm and found five communities by bounding their size between 2 and 20 nodes. Robustness analysis was conducted to test the stability of the identified communities (see Materials and Methods). Generally, the network communities highlighted both the tight coordination between metabolite classes and the crosstalk between central and specialized metabolism. Figure 5 shows the communities containing more than one node. Community 1 is enriched with amino acids and organic acids, most of which show crosstalk (i.e., in-coming and out-going edges) with each other. Fumarate and GABA have an increased number of in-coming edges, while 4-OH-benzoate displays a high number of out-going edges. Thymoquinone, the only volatile in this community, has mostly in-coming edges. Community 2 is predominantly characterized by sugars; sucrose has mainly out-going edges, while the other sugars have mainly in-coming edges, suggesting that sucrose is the source and probable regulator of the biosynthesis of the other sugars. The rest of the community is composed of a mixture of different compound classes of central metabolism, i.e., amino acids, organic acids, polyols, N-compounds, polyhydroxy compounds and a 16:0 fatty acid. This fatty acid has only in-coming edges. Community 2 does not contain any volatiles. Community 3 is enriched with fatty acids. Community 4 represents the transitory stage between central metabolism and VOCs, reflected by a balanced distribution of central metabolites and volatiles. In this community, the central metabolites proline, lyxonate, phosphoric acid, sinapate and 3-hydroxyglutarate are clearly receiving nodes.
Community 1 is enriched with amino acids and organic acids, most of which show crosstalk (i.e., incoming and outgoing edges) with each other. Fumarate and GABA have an increased number of incoming edges, while 4-OH-benzoate displays a high number of outgoing edges. Thymoquinone, the only volatile in this community, has mostly incoming edges. Community 2 is predominantly characterized by sugars; sucrose has mainly outgoing edges, while the other sugars have mainly incoming edges, suggesting that sucrose is the source and probable regulator of the biosynthesis of the other sugars. The rest of the community is composed of a mixture of different compound classes of central metabolism, i.e., amino acids, organic acids, polyols, N-compounds, polyhydroxy compounds and a 16:0 fatty acid. This fatty acid has only incoming edges. Community 2 does not contain any volatiles. Community 3 is enriched with fatty acids. Community 4 represents the transitory stage between central metabolism and VOCs, reflected by a balanced distribution of central metabolites and volatiles. In this community, the central metabolites proline, lyxonate, phosphoric acid, sinapate and 3-hydroxy-glutarate are clearly receiving nodes. Community 5 is characterized by the predominant presence of terpenes and terpenoids, all showing balanced crosstalk with each other. As in Community 4, in Community 5 xylitol and glycerol-3-phosphate are sink nodes, having only incoming edges, a finding that highlights the interplay between the components of central metabolism and the VOCs. Raffinose appears to occupy a transitory position in this community, with a balanced number of incoming and outgoing edges.

Discussion

To date, studies of the metabolic processes occurring during seed development and maturation have largely been dedicated to understanding the accumulation of storage reserves of proteins, starch or TAGs [6], the imposition of dormancy, and the acquisition of desiccation tolerance, the processes to which the maturation of orthodox seeds is indeed dedicated [37]. Nonetheless, during desiccation seeds can recycle a significant percentage of their storage reserves [38] and accumulate unbound metabolites [17] to sustain long-term storage of reserves and in preparation for the early events of germination [2]. Against this background, the phenomenon of orthodox true seeds accumulating volatiles during maturation has not received adequate attention. By using developing Nigella seeds as a model system, we investigated the central metabolic processes in VOC-producing seeds and the interaction between VOCs and central metabolites during seed development.

Nigella Seed Development is Characterized by Two Key Metabolic Shifts

The integration of the data from the current analysis of central metabolites and acyl moieties of complex lipids [determined as fatty acid methyl esters (FAMEs)] with the volatile profiles measured in our earlier study [7] revealed that Nigella metabolism undergoes two metabolic shifts as the seed transitions from photosynthetic development to maturation and pigmented desiccation. PCA and model-based clustering suggest that, from a metabolic standpoint, Nigella development can be divided into three phases during which the seeds exhibit considerable differences in metabolism, as controlled by two shifts in C-N metabolism. The first shift during the maturation of the seed was marked by significant changes in the levels of metabolites such as sugars and sugar alcohols and precursors of TAG metabolism; these findings suggest intensive activity in storage resource accumulation and glycolysis to support the production of fatty acids (Figure 4) and their incorporation into TAGs. Outstanding among the amino acids, Asn and the N-containing compound dopamine were found to increase significantly, but transiently, during this period, reflecting their role as key sources of N nutrition for the developing seeds. For Asn, these findings are in keeping with the peak intake from the phloem to the soluble nitrogen pool in developing white lupin seeds [39]. The second shift was characterized by a reduction in hexoses, sugar alcohols and fatty acids. Shikimate and the precursors of secondary shikimate metabolism, 3,4-dihydroxyphenyl-acetate and dopamine, also decreased at this stage, probably reflecting their incorporation into downstream secondary metabolic pathways. The initiation of VOC biosynthesis at this stage partly supports this suggestion.
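The three-phase division referred to above can be sketched computationally as follows; this is a minimal illustration of PCA followed by model-based clustering, with mock values standing in for the measured (time points x metabolites) matrix.

```python
# A minimal sketch, assuming log-transformed relative contents in a
# (time points x metabolites) matrix; the data here are mock values.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
daa = np.array([0, 10, 20, 25, 30, 35, 40, 46, 50, 55, 60, 70, 82])
profiles = rng.lognormal(size=(len(daa), 107))       # placeholder data

scores = PCA(n_components=2).fit_transform(
    StandardScaler().fit_transform(np.log2(profiles)))

# Model-based clustering (a Gaussian mixture) of the PCA scores; three
# components correspond to the three developmental phases discussed above.
phases = GaussianMixture(n_components=3, random_state=0).fit_predict(scores)
for t, p in zip(daa, phases):
    print(f"{t:3d} DAA -> phase {p}")
```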
Raffinose accumulation follows the earlier accumulation of galactinol, the galactosyl donor for the biosynthesis of raffinose family oligosaccharides (RFO) [40]. Raffinose and galactinol, along with sucrose, are desiccation-related compounds, which have long been known to be involved (by contributing to the formation of a glassy matrix) in the structural acquisition of desiccation tolerance [41-43] in orthodox seeds of different plant species [17,44]. These sugars also provide a carbon pool at the beginning of germination [16]. In Nigella, amino acids initially accumulated during early development of the seed, possibly reflecting the input of N-containing compounds from the phloem; thereafter they decreased, either gradually or abruptly, probably due to their incorporation into storage proteins. Our results show fluctuations in Phe and Trp concentrations during seed development, with the values increasing slightly above their median contents at 20-25 DAA, 43-46 DAA and during desiccation. Such "waves" of accumulation are common to most amino acids, albeit to different extents. In this study, we found the accumulation of amino acids between 25 and 30 DAA to be rather general (although a marked 10- to 100-fold change was observed in the contents of Tyr, Ser, Ile and Leu, Ala and beta-Ala) and probably associated with the earlier increase in Gln and Asn, and/or the recycling of N from ornithine and Pro, the content of the latter decreasing 100-fold between 0 and 20 DAA. During the later waves of increases in the concentrations of N-compounds, a sequential accumulation of Phe/Trp and an associated increase of the chorismate derivative 3,4-dihydroxyphenyl-acetate were observed. In parallel to the increased shikimate metabolism, dopamine increased transiently between 35 and 55 DAA. Dopamine is the product of the decarboxylation of L-DOPA (3,4-dihydroxy-phenylalanine), which is governed by Tyr decarboxylase (TYDC) downstream of the shikimate pathway. Noteworthy, directed network analysis generated communities that include the shikimate amino acids Phe and Tyr together with terpenes and terpenoids. The analysis also linked the Leu-derived kaempferol-3-O-glucopyranoside-6''-(3-hydroxy-3-methylglutarate) with terpenes. These lines of evidence strongly call for more work on the contribution of amino acids to Nigella seed volatiles.

C Partitioning Toward VOC Biosynthesis

Profiling and quantification of fatty acids in Nigella seeds showed that TFA accumulation occurred during early-mid maturation, probably due to the biosynthesis and deposition of storage TAGs in the developing seeds. Later in development, the levels of TFA decreased by about 30% of their maximum accumulation (Figure 4). Malate, a major precursor of fatty acid biosynthesis in the heterotrophic plastid of the developing seed [45], increased during early maturation (25 DAA), preceding the rise in TFA content, and decreased abruptly at mid maturation (35 DAA) and again at the onset of desiccation (60 DAA). These significant changes in the pattern of fatty acids during maturation could be representative of C repartitioning toward the biosynthesis of secondary metabolites. Indeed, polyunsaturated fatty acid catabolism can lead to the direct production of a series of volatiles, including the volatile aldehydes. However, an indirect relation between TAGs and volatiles is more likely in Nigella seeds. For example, terpenoids, such as carvacrol, are expensive (in carbon "currency") to produce, due to their chemical reduction and the need for dedicated enzymes [46]. Moreover, the terpenoids are synthesized from acetyl-CoA units, hence providing a metabolic process that competes with TAG production.
Among the fatty acids measured, we observed a continuous stepwise reduction of linolenic acid (18:3n-3), a substrate of LOX, which in the network analysis clustered tightly within the volatile module (Community 4). LOXs catalyze the regio- and stereo-specific dioxygenation of PUFAs (18:2 and 18:3) and are involved in many different developmental processes, including the production of plant-specific volatiles [47]. LOX has been shown to initiate the mobilization of TAGs in germinating cucumber seeds and to initiate the production of volatile aldehydes [48]. This putative dual role of fatty acids during the development of Nigella seeds is reflected in: (i) the developmentally alternating accumulation of fatty acids and VOCs, and (ii) the association between fatty acids and VOCs in the community analysis of the directed network.

Directed Network Analysis Identifies Coordinated Processes during Nigella Seed Development and Possible Metabolic Dependencies

When analyzing the data via directed network analysis, we identified significant relationships between metabolites closely related to precursors of known biochemical pathways for VOC biosynthesis. In the communities enriched with VOCs, the metabolites xylitol and glycerol-3-phosphate were characterized by incoming edges. These metabolites are closely related forms of precursors of the chloroplastic deoxyxylulose phosphate pathway, i.e., the mevalonate-independent pathway in the plastid, for VOC biosynthesis [12]. The analysis also confirmed the coordinated pattern of change between amino acids, sugars, fatty acids and volatiles, each enriching a different community. That having been said, the directionality of the edges, being based on a mathematical measure, is not trivially interpretable. An incoming edge suggests either a regulatory dependence of the originator node on the receiver node or, on a temporal scale, a product-precursor relationship. Such relationships cannot be obtained with the classically used measures, such as correlations, which usually fall into the category of symmetric measures. For example, the incoming edges on sucrose suggest the dependence of the other sugars on the sucrose pool, similar to the example given above for the chloroplastic deoxyxylulose phosphate pathway for VOC biosynthesis. More difficult to interpret, however, are the numerous incoming links to thymoquinone from amino acids. Importantly, within communities shared between central metabolites and VOCs, the relation usually involved central metabolites acting as receiving nodes, suggesting a dependence of VOCs on central intermediates. Nevertheless, beyond this generalized statement, a more specific interpretation of these results is not possible and would require further methodology-related developments. Finally, Figure 6 presents a schematic comparison of the broad developmental patterns of metabolite changes in abundance between current knowledge of Arabidopsis seeds and the current data for Nigella seeds. A striking difference is evident in the patterns of change and the major reductions in sucrose, fatty acids and proteins during Nigella seed maturation. While to some extent speculative, it is tempting to suggest a link between these differences and the metabolic investment in VOC production. Future work could test this hypothesis by using metabolic flux analysis during Nigella seed development.
In conclusion, the results of the metabolite profiling and directed network analysis presented here suggest that in Nigella, major degradation of fatty acids and N-compounds provides the building blocks for the biosynthesis of volatiles, as is known to be the case in several other plant species [11]. Amino acids, especially the aromatic amino acids, the branched-chain amino acids and methionine, serve as precursors for many aroma volatiles in fruits [49,50]. While these volatiles are generally absent in Nigella, an indirect metabolic link probably exists, e.g., via the Leu-derived kaempferol glycoside (glutarate-3-hydroxy-3-methyl in Community 4). Network analysis inferred a link between fatty acids and the fatty-acid-derived volatile caproic acid (2-ethylhexanoate) [51]. Moreover, the analysis supported the involvement of the LOX substrate fatty acid 18:3n-3 and metabolites of central metabolism closely related to terpenoid precursors [52] in the biosynthesis of volatiles during Nigella seed maturation. Future work should explore the biological meaning of directionality in iota-based networks; however, it is safe to suggest that the direction of an edge is at least to some extent a representation of metabolic dependence, e.g., carbon metabolism is largely dependent upon sucrose pools, as suggested by the edges directed toward the latter metabolite in Community 2. Finally, seed VOCs have been associated with the regulation of germination [53], with plant/pathogen interactions [54-56], and with the structure of pest communities [57]. Yet, our understanding of the role of VOCs in seeds is limited. The present study shows that VOC-producing seeds probably repartition their C-N metabolism during the stage of VOC production. Further functional research on VOC-producing seeds is required to address the open questions that still remain.

Total amino acid is the sum of the detected free amino acids. The unit for total protein content is mg/seed; all other compositions are given in ng/seed. Data are from [16].
Liquidity and dynamic leverage: the moderating impacts of leverage deviation and target instability

Purpose – We explore the impact of equity liquidity on a firm's dynamic leverage adjustments and the moderating impacts of leverage deviation and target instability on the link between equity liquidity and dynamic leverage in the UK market.

Design/methodology/approach – In applying the two-step system GMM, we estimate our model by exploring suitable instruments for the dynamic variable(s), i.e. lagged values of the dynamic term(s).

Findings – Our analyses document that a firm's equity liquidity has a positive impact on the speed of adjustment (SOA) of its leverage ratio back to the target ratio in the UK market. We also demonstrate that the positive relationship between liquidity and SOA is more pronounced for firms whose current position is relatively close to their target leverage ratio and whose target ratio is relatively stable.

Practical implications – This study provides important implications for both firms' managers and investors. Particularly, firms' managers who wish to increase the leverage SOA to enhance firm value need to give great attention to their equity liquidity. Investors who want to evaluate firms' performance could also consider their equity liquidity and leverage SOA.

Originality/value – We are the first to enrich the literature on leverage adjustments by identifying equity liquidity as a new determinant of SOA in a single developed country with many differences in the structure and development of capital markets, ownership concentration and institutional characteristics. We also provide new empirical evidence of the joint effect of equity liquidity, leverage deviation and target instability on leverage SOA.

Introduction

The managerial decision on corporate capital structure is one of the most debated topics among modern finance scholars and practitioners around the world. While the static trade-off theory of capital structure suggests that the value of a firm can be maximized by targeting a leverage ratio that minimizes its cost of capital (Fischer et al., 1989), more recently, dynamic trade-off models argue that firms have incentives to adjust their actual debt/equity ratio towards the optimal (target) ratio (Hovakimian and Li, 2011). However, if the adjustment is costly, then the speed of adjustment (hereafter SOA) tends to be slowed. Myers (1984) points out that where the costs of leverage adjustment are high, one might expect to see firms deviate from their target debt-equity ratios by large amounts for extended periods. Hence, an essential task is to explain the cross-sectional differences in the dynamics of corporate capital structure decisions, rather than only concentrating on refining the traditional static trade-off models (Graham and Leary, 2011). In this paper, we investigate the impact of equity liquidity on leverage SOA in the UK equity market.

Previous literature provides evidence that firms with greater liquidity face lower transaction costs, lower levels of information asymmetry, stronger corporate governance, lower costs of issuing both debt and equity financing, and ultimately lower costs of adjustment to the target leverage (Berkman and Nguyen, 2010; Dang et al., 2015). Stoll and Whaley (1983) and Amihud and Mendelson (1986) first suggest that illiquid firms have higher stock transaction costs, and thus a higher required rate of return from investors. Butler et al.
(2005) show that investment banking fees are lower for more liquid firms. Hennessy and Whited (2005) confirm that firms with high liquidity are more likely to have lower transaction costs, and thus a lower cost of equity. Cheung et al. (2019) indicate that firms with high liquidity not only have easier access to the equity market but also have lower costs of debt financing. Hence, one might expect that equity liquidity would reduce the cost of leverage adjustment, resulting in a faster SOA. Consistent with this argument, a recent study by Ho et al. (2021) finds that firms with high liquidity have significantly higher leverage SOA. The results of that study are based on an international sample that mixes firms from developed and emerging markets. It is not obvious whether or not these results can be applied to a single country with differences in the structure and development of capital markets, ownership concentration, and the severity of information asymmetry. In particular, emerging countries have less developed capital market financing, less sophisticated bond markets, more concentrated corporate ownership, and higher asymmetric information than developed markets, all of which significantly affect liquidity (Saleh et al., 2020, 2022). These differences potentially enhance or moderate the role of liquidity in leverage SOA decisions. The differing market structure of the UK and other countries also leads to large differences in liquidity characteristics (Huang and Stoll, 2001). For these reasons, it is not clear whether the results based on international studies can be readily applied to firms in a single country. Furthermore, the UK is considered a major worldwide economic market. It is large and has grown rapidly in recent years (IMF, 2011). The London Stock Exchange has a huge daily volume of transactions, competing with the major US stock exchanges, such as the NYSE and NASDAQ (Charitou et al., 2004). The UK thus provides a financial environment "ideal" for the examination of issues of equity liquidity and corporate capital structure decision-making. Therefore, in this study, we take a step in this direction by investigating the impact of liquidity on leverage SOA in the UK.

The findings of our study contribute to the corporate finance literature. First, while a prior study has documented the impact of equity liquidity on firms' leverage SOA using international data that mixes firms from countries with different market structures, market development, and national institutions (Ho et al., 2021), we focus on a single country that is one of the most developed economies outside the US, that is, the UK. While the UK has a developed capital market that pronounces the positive impact of liquidity on leverage SOA, it has a low-leverage policy that may moderate this relationship. The UK also has good institutional characteristics with better governance that may reduce the role of firm-level determinants of SOA, including liquidity. Corporate ownership is much less concentrated in the UK than in emerging markets, which also has a significant impact on liquidity (Heflin and Shaw, 2000; Rubin, 2007). For these reasons, it is not clear whether or not the results of an international study can be applied to a single country such as the UK.
Second, our study sheds new light on the literature explaining firms' financial policy and provides the first evidence on the association between equity liquidity and firms' capital structure adjustment in the UK market. Although several studies have examined the capital structure choices of UK firms (for example, Bevan and Danbolt (2002, 2004) examine the determinants of capital structure, Dang (2013) examines the zero-leverage phenomenon, and Ezeani et al. (2023) examine the association between corporate boards and capital structure), they do not investigate the association between liquidity and dynamic capital structure. Our study thus fills this important gap in the literature by examining the important role of equity liquidity in dynamic leverage adjustments in the UK.

Third, our study contributes to the empirical literature on the joint relationship among equity liquidity, leverage deviation, target stability and leverage SOA. Prior literature suggests that firms with greater leverage deviation or target instability confront higher financial risks, pay even higher costs of equity and have low equity liquidity (Ippolito et al., 2012; Zhou et al., 2016), while equity liquidity has been documented to have impacts on leverage adjustments. Given the evidence that equity liquidity, leverage deviation, target stability and leverage adjustments are associated, how the first three factors jointly influence leverage adjustments is still unexplored. Our study fills this gap.

The paper proceeds as follows. Section 2 provides the literature review and hypotheses development. Section 3 describes the sample, data collection and variable construction. The empirical methods are reported in Section 4 and the results are presented in Section 5. The study is concluded in Section 6.

Literature review and hypotheses development

Previous literature has shown the important role of liquidity in corporate finance decisions. For example, prior studies examine the impacts of stock liquidity on firm value (Batten and Vo, 2019; Pham et al., 2020) and various corporate policies, such as innovation (Fang et al., 2014), payout policy (Jiang et al., 2017; Nguyen, 2020), stock repurchases (Brockman et al., 2008), trade credit (Shang, 2020), risk-taking (Hsu et al., 2020) and corporate governance (Edmans et al., 2013).

Meanwhile, a stream of literature documents the role of equity liquidity in firms' capital structure decisions. Brennan and Subrahmanyam (1996) and Brennan et al. (1998) provide important evidence of the negative relationship between equity liquidity and the cost of capital, that is, higher equity liquidity means a lower cost of capital. The market microstructure literature shows that stock liquidity can alleviate agency problems (Edmans et al., 2013) and information asymmetry (Subrahmanyam and Titman, 2001). Companies with high stock liquidity have better credit ratings and lower credit risk compared to illiquid firms (Brogaard et al., 2017). Cheung et al. (2019) further highlight that more liquid firms are more likely to access debt financing and have lower debt costs compared to their counterparts. In sum, firms with higher liquidity have lower capital costs and find it easier to access external financing sources.
Liquidity can also influence the transaction costs associated with raising new external equity capital. First, an illiquid firm has to offer a discount on the current share price to attract the capital that it requires. This discount is reflected in the magnitude of the bid-ask spread and the price impact of issuing new equity (Bundgaard and Ahm, 2012). Thereby, illiquid stocks tend to be traded at a discount. Second, when a firm raises new equity capital, it incurs the issuance fees that an issuer must pay the institutions assisting it in the fund-raising process (Butler et al., 2005). The bottom line is that firms with higher equity liquidity will have lower transaction costs associated with issuing new equity and thus have greater incentives to rapidly correct any deviation of their actual leverage level from their target.

In addition, information is likely to be another important channel between equity liquidity and leverage SOA. This argument suggests that greater liquidity facilitates more informed trading and produces more information about the firm (Friewald et al., 2016; Fulghieri and Lukin, 2001). Consequently, stock liquidity helps to reduce adverse selection and equity mispricing, thus lowering agency costs, and thereby reducing leverage adjustment costs and increasing the speed of leverage adjustment (Öztekin, 2015; Öztekin and Flannery, 2012). We propose the first hypothesis as follows:

H1. Equity liquidity has a positive impact on leverage SOA.

It is argued that, due to a large deviation from or high instability of its target leverage ratio, a firm may pay a penalty in the form of a higher cost of equity capital. Specifically, a firm with higher deviation from or higher instability in its target level will confront higher financial risks, which influence the required rate of return on corporate equity capital, and hence leave the firm with greater costs of equity capital and lower equity liquidity. Consistent with this argument, Zhou et al. (2016) derive a theoretical link between leverage deviation and the cost of equity and confirm that a firm's cost of equity relates positively to the deviation from its target level of leverage. Ippolito et al. (2012) also suggest a significantly positive association between the deviation from target and the expected equity return (and hence the cost of equity capital). Investors require a higher expected equity return for firms that deviate further from the target leverage. These firms consequently confront a greater cost of equity, which leads to lower equity liquidity. Accordingly, the question that we raise here is whether the magnitude of the positive relationship between equity liquidity and leverage SOA will be impacted by the extent of the deviation between the actual and the target ratios and/or the stability of the target ratios of firms. Building on the above discussion, we investigate the following hypotheses:

H2. The positive impact of equity liquidity on leverage SOA is less pronounced for firms that deviate further from target ratios.

H3. The positive impact of equity liquidity on leverage SOA is less pronounced for firms that have higher instability in target ratios.
Data and variable construction

3.1 Data

The annual firm-level and industry-level accounting data are retrieved from Worldscope via the Datastream database. To estimate the liquidity measures, we collect daily data (e.g. bid/ask prices, trading volume and stock returns) from this database. Only data for firms with common securities are collected, whereas those with distinct characteristics, for instance warrants, trusts, funds, and non-equity stocks, are excluded. Financial and utility corporations are also eliminated from the sample since these corporations are subject to special regulations on financing policies. The final sample contains 20,090 firm-year observations for the UK market during the period from 1996 to 2016. Finally, to reduce the possible impact of extreme values, we winsorized both the dependent and independent variables at the 1st and 99th percentiles.

3.2 Variable construction

3.2.1 Leverage measurements. Based on existing studies (An et al., 2015; Halling et al., 2016), we use both the book ratio (BLEV) and the market ratio (MLEV) of leverage as dependent variables.

3.2.2 Equity liquidity. In the main analysis, we use the Amihud illiquidity score, which is the most popular measure of liquidity (Nadarajah et al., 2018). Specifically, the Amihud (2002) illiquidity measure is defined as the average ratio of the daily absolute stock return divided by the dollar value of volume:

Illiq_{i,t,d} = |R_{i,t,d}| / DVOL_{i,t,d}    (1)

where R_{i,t,d} is the stock return of firm i on day d in year t and DVOL_{i,t,d} is the daily volume in dollars of firm i on day d in year t. In this study, we use the annual average of this daily liquidity measure for each stock i:

Amihud_{i,t} = (1 / D_{i,t}) \sum_{d=1}^{D_{i,t}} |R_{i,t,d}| / DVOL_{i,t,d}    (2)

where D_{i,t} is the number of days for which the volume of stock i in year t is positive. We also employ three other measures of liquidity: the zero-return proportion (PropZero_{i,t}) (Goyenko et al., 2009), the daily closing percent quoted spread (Spread_{i,t}) (Fong et al., 2017), and turnover (Turnover_{i,t}) (Berkman and Nguyen, 2010) [1].

3.2.3 Target leverage. The current literature on capital structure suggests that the target level of a firm's leverage is a function of time-varying firm characteristics and industry elements (An et al., 2015; Devos et al., 2017):

LEV_{i,t+1} = \beta X_{i,t} + \epsilon_{i,t+1}    (3)

where each firm is indexed by i and time by t. X_{i,t} is a vector of firm and industry variables associated with the operating costs and benefits of different leverage levels, including SIZE, TANG, MTB, PROF, DEP, RD, RDDum, and INDMED [2]. The trade-off hypothesis predicts that β ≠ 0 and that the variation in LEV_{i,t+1} is nontrivial. We also note that by modeling optimal capital structure in period t+1 as a function of determinants observed in period t, endogeneity concerns are somewhat mitigated. We measure the target leverage ratio of each firm as the fitted value obtained from Equation (3):

LEV*_{i,t+1} = \hat{\beta} X_{i,t}    (4)

3.2.4 Leverage deviation. The deviation from the target level is measured as the absolute difference between the target and the observed leverage ratio:

LevDev_{i,t} = |LEV*_{i,t} - LEV_{i,t}|    (5)

where LEV*_{i,t} is the target leverage ratio defined above and LEV_{i,t} is the observed leverage ratio of firm i at time t.

3.2.5 Target instability. Following Kayhan and Titman (2007), the instability in the target leverage ratio is measured as

\Delta Target_{i,t} = |LEV*_{i,t} - LEV*_{i,t-1}|    (6)

where LEV*_{i,t} and LEV*_{i,t-1} are the target leverage ratios of firm i at time t and t-1, respectively. The higher the level of \Delta Target_{i,t}, the more unstable the target leverage.
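As a concrete illustration of Eq. (2), the sketch below computes the annual Amihud measure from daily data. The DataFrame layout (columns 'stock', 'year', 'ret', 'dvol') is an assumption for illustration, not the paper's data schema.

```python
# A minimal sketch of Eq. (2), assuming a daily panel with columns
# ['stock', 'year', 'ret', 'dvol'] (simple return and dollar volume);
# the column names are illustrative.
import pandas as pd

def annual_amihud(daily: pd.DataFrame) -> pd.Series:
    d = daily[daily["dvol"] > 0].copy()        # keep positive-volume days (D_it)
    d["ratio"] = d["ret"].abs() / d["dvol"]    # |R_itd| / DVOL_itd
    return d.groupby(["stock", "year"])["ratio"].mean()
```

Higher values of the measure indicate lower liquidity, which is why a positive coefficient on the Amihud-based interaction term in the models below maps to a faster SOA.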
Empirical methods

The standard partial adjustment model measures the rate at which a firm converges its leverage to the target ratio:

LEV_{i,t+1} - LEV_{i,t} = v (LEV*_{i,t+1} - LEV_{i,t}) + \delta_{i,t+1}    (7)

where v is a measure of the aggregate leverage SOA of firms that diverge from the next-period target. The target leverage estimated from Eq. (4) is substituted into Eq. (7) and rearranged to yield the following model:

LEV_{i,t+1} = (1 - v) LEV_{i,t} + v \hat{\beta} X_{i,t} + \delta_{i,t+1}    (8)

We follow previous literature (e.g. Devos et al., 2017; Zhou et al., 2016) and augment Eq. (8) with an equity liquidity variable (LIQ_{i,t}) and an interaction term to test the significance of LIQ_{i,t} for the leverage SOA (H1). In particular, LIQ_{i,t} is proxied by the Amihud illiquidity measure. The interaction term is the product of LIQ_{i,t} and the first lag of the firm's actual leverage ratio. We model this economic relation as follows:

LEV_{i,t+1} = (1 - v) LEV_{i,t} + \beta_2 (LIQ_{i,t} \times LEV_{i,t}) + \beta_3 LIQ_{i,t} + v \hat{\beta} X_{i,t} + \delta_{i,t+1}    (9)

In Eq. (9), our main focus is the coefficient on the interaction term LIQ_{i,t} × LEV_{i,t}. Since we hypothesize that equity liquidity has a positive impact on the SOA (H1), and the variable LIQ_{i,t} is proxied by the Amihud illiquidity measure, we expect the coefficient on the interaction term, \beta_2, to be positive [3]. This implies that the coefficient on the lagged leverage is smaller for firms with higher equity liquidity and, hence, that they exhibit a faster SOA (v).

Our next hypotheses (H2 and H3) relate to how the relationship between equity liquidity and SOA is conditional on leverage deviation and target stability. To examine this issue, following Devos et al. (2017), we include triple interaction terms among equity liquidity, the actual leverage ratio and leverage deviation/target stability in the SOA regression (Eq. (9)). Specifically, the augmented models take the following forms:

LEV_{i,t+1} = (1 - v) LEV_{i,t} + \beta_2 (LIQ_{i,t} \times LEV_{i,t}) + \beta_4 (LIQ_{i,t} \times LEV_{i,t} \times LevDev_{i,t}) + \beta_3 LIQ_{i,t} + v \hat{\beta} X_{i,t} + \delta_{i,t+1}    (10)

LEV_{i,t+1} = (1 - v) LEV_{i,t} + \beta_2 (LIQ_{i,t} \times LEV_{i,t}) + \beta_5 (LIQ_{i,t} \times LEV_{i,t} \times \Delta Target_{i,t}) + \beta_3 LIQ_{i,t} + v \hat{\beta} X_{i,t} + \delta_{i,t+1}    (11)

LEV_{i,t+1} = (1 - v) LEV_{i,t} + \beta_2 (LIQ_{i,t} \times LEV_{i,t}) + \beta_4 (LIQ_{i,t} \times LEV_{i,t} \times LevDev_{i,t}) + \beta_5 (LIQ_{i,t} \times LEV_{i,t} \times \Delta Target_{i,t}) + \beta_3 LIQ_{i,t} + v \hat{\beta} X_{i,t} + \delta_{i,t+1}    (12)

where Eq. (10) is used to examine hypothesis H2, Eq. (11) is used to examine hypothesis H3, and Eq. (12) combines both. We propose that firms with greater leverage deviation and/or target instability have higher financial risks, pay penalties in the form of higher costs of equity capital and thus have lower equity liquidity. Hence, we expect a positive sign on the interaction term LIQ_{i,t} × LEV_{i,t} and negative signs on the triple interaction terms LIQ_{i,t} × LEV_{i,t} × LevDev_{i,t} and LIQ_{i,t} × LEV_{i,t} × \Delta Target_{i,t}. We use leverage deviation and target instability as dummy variables, assigning "1" for high leverage deviation (high target instability) and "0" for low leverage deviation (low target instability) based on the median value. To further confirm the results, we also examine these relationships for over- and under-levered firms by re-estimating Eq. (12) for the two sub-samples.

Econometric method

Since all the main specifications in this paper are dynamic panel data models, traditional pooled OLS or firm fixed effects estimators would yield biased and inconsistent estimates (Baltagi, 2008). Specifically, whereas the pooled OLS estimator is likely to overestimate the coefficient of the dynamic variable (1 - v), and thus underestimate the SOA (v), the firm fixed effects model underestimates the coefficient of the dynamic variable and hence overestimates the SOA (Nickell, 1981). The inconsistency is more likely to occur when the time dimension of the sample is relatively short (Flannery and Hankins, 2013).
Due to the limitations of the pooled OLS and firm fixed effects models and the dynamic nature of our panel models, we follow recent research and use Blundell and Bond (1998)'s two-step system GMM. This is the most reliable method for estimating dynamic short panels with a lagged dependent variable and endogenous independent variables (Zhou et al., 2016). In applying the two-step system GMM, we estimate our model by exploring suitable instruments for the dynamic variable(s) (e.g. leverage ratios and interaction terms between leverage ratios and the main variables), i.e., lagged values of the dynamic term(s).

Descriptive statistics

The summary statistics for the entire sample are presented in Table 1, which includes descriptive statistics (Panel A) and correlation coefficients of the determinants of the target leverage (Panel B). In our sample, the mean book leverage ratio is 0.1793, and the mean market leverage is 0.1981. The extent of the cross-sectional variation is illustrated by the difference between the first quartile of the book (market) leverage ratio of 0.0211 (0.0138) and the third quartile at 0.2805 (0.3099). In terms of the liquidity measures, the means of the Amihud, zero-return days' proportion, turnover and daily quoted spread measures are 23.1187, 0.3495, 0.2820 and 0.0531, respectively. The mean book leverage deviation (0.1102) is lower than the mean market leverage deviation (0.1349). On average, the absolute change in target market leverage (0.0085) is higher than that in target book leverage (0.0031). In our sample, the average tangibility-to-total assets ratio is 27.35%, the market-to-book ratio is 2.528, the profitability-to-total assets ratio is 6.55%, the depreciation-to-total assets ratio is 4.57% and the R&D-to-total assets ratio is 2.22%. Panel B reports the correlations among the determinants of the target leverage ratio. These correlations are low, suggesting that there is little concern about multicollinearity.

Equity liquidity and SOA: baseline results

We present the results from the baseline regression (Eq. (9)), which determines the equity liquidity-SOA relationship (H1), in Table 2. All these regressions were estimated using the two-step system GMM method. The results are presented for both BLEV_{i,t} and MLEV_{i,t} separately. The variables of interest in this regression are the interaction terms between LEV_{i,t} and LIQ_{i,t} (Columns 1-2). The coefficients on LIQ_{i,t} × LEV_{i,t} are positive and highly significant at the 1% level for both the book and market leverage regressions. This suggests that firms with high (low) liquidity have lower (higher) overall adjustment costs, which results in a higher (lower) SOA. Regarding the economic significance, a one-standard-deviation increase in liquidity increases the SOA by 1.18-4.11%, compared with an average adjustment speed of 24.1% for book leverage and 17.9% for market leverage [4]. In other words, an average firm takes about 2.5-3.5 years to adjust half of the deviation between the actual and the target leverage. This duration decreases to about 2-3 years for firms with high liquidity [5]. In general, the results support our first hypothesis that liquidity boosts the leverage SOA. Firms with high liquidity are charged lower transaction costs in issuing financial capital and have lower information asymmetry, which leads to lower agency costs. Consequently, such firms have a higher leverage SOA. This result is consistent with Öztekin (2015), Cheung et al. (2019) and Ho et al. (2021), suggesting that stock liquidity helps firms to reduce transaction costs, lower agency costs and access external financing sources more easily, and thereby reduce leverage adjustment costs and increase the speed of leverage adjustment.
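The half-life figures quoted above follow directly from the partial adjustment model: with speed v, the remaining gap after n years is (1 - v)^n. A minimal check, using the reported SOA values (the third input adds the upper-bound liquidity effect of 4.11 percentage points to the book SOA, an illustrative combination):

```python
# A minimal check of the quoted durations: half_life = ln(0.5) / ln(1 - v).
import math

def half_life_years(v: float) -> float:
    return math.log(0.5) / math.log(1.0 - v)

for label, v in [("market", 0.179), ("book", 0.241), ("book + liquidity", 0.282)]:
    print(f"{label}: SOA {v:.3f} -> half-life {half_life_years(v):.2f} years")
# market ~3.5 years, book ~2.5 years, book + liquidity ~2.1 years
```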
We also present the results of two diagnostic tests: the AR(2) second-order serial correlation test and the Hansen J test of over-identifying restrictions. Specifically, the AR(2) tests show p-values of 0.4406 and 0.6778 for the book and market leverage regressions, respectively. These results imply that our system GMM specifications do not suffer from second-order serial correlation. Further, the p-values of the Hansen J tests of 0.3796 and 0.3121 for the book and market leverage regressions, respectively, confirm the validity of all our instruments. In sum, the results of these specifications imply that the dynamic system GMM model specification is appropriate [6].

5.3 Robustness checks

5.3.1 Two-step approach. In the baseline regression, following previous literature (Devos et al., 2017; Zhou et al., 2016), we use an interaction term between liquidity and the leverage ratio to test the significance of liquidity for the SOA. However, given that both liquidity and the first lag of the firm's actual leverage ratio have highly significant impacts on the leverage ratio, this method may not fully assess whether including the interaction variable improves the model. In this section, we check the robustness of our baseline results using the two-step approach (Çolak et al., 2018; Dang et al., 2019). To examine the relationship between liquidity and leverage SOA, we include liquidity in the regression that determines a firm's SOA. Öztekin and Flannery (2012) also suggest that firm accounting variables may affect both the target leverage and the SOA. We use the set of covariates employed in the target leverage estimation (vector X_{i,t,j}). Thus, v varies with liquidity and the control variables:

v_{i,t,j} = \lambda_0 + \lambda_1 LIQ_{i,t,j} + \lambda' X_{i,t,j}    (13)

Substituting Eq. (13) back into Eq. (7) yields the equation for a partial adjustment model with heterogeneity in the leverage SOA:

\Delta LEV_{i,t+1,j} = (\lambda_0 + \lambda_1 LIQ_{i,t,j} + \lambda' X_{i,t,j}) Dist_{i,t,j} + \epsilon_{i,t+1,j}    (14)

where \Delta LEV_{i,t+1,j} = LEV_{i,t+1,j} - LEV_{i,t,j} and Dist_{i,t,j} is the distance between the target and the actual leverage ratio. Eq. (14) is estimated as a pooled OLS regression of leverage changes on the products of Dist_{i,t,j} with liquidity and the control variables, with bootstrapped standard errors to account for the generated regressors (Çolak et al., 2018; Faulkender et al., 2012; Pagan, 1984). Table 3 reports the results. The coefficients on the interaction between liquidity and the distance from the target are positive and statistically significant across models, implying a positive relationship between liquidity and leverage SOA. This is consistent with our baseline findings.

5.3.2 Alternative measures of leverage. We test the robustness of our key findings by including two other definitions of the corporate leverage ratio: long-term debt to the book value of assets (LDA) and long-term debt to the market value of assets (LDM) (Devos et al., 2017; Zhou et al., 2016). We tabulate the robustness test for our baseline results in Table 4. For brevity, only the main coefficients of interest in examining our hypotheses are presented. With these alternative measures of financial leverage, Table 4 presents the regression results for the association between equity liquidity and leverage SOA (H1). Compared with the key findings from Table 2, the regression results in Table 4 confirm the significantly positive relationship (at the 1% level) between equity liquidity and leverage SOA for the book leverage regression, but the relationship is insignificant for the market leverage model.
5.3.3 Alternative measures of liquidity. In this subsection, we examine the robustness of our main finding using alternative measures of equity liquidity, namely the zero-return proportion (PropZero), turnover (Turnover) and the daily closing percent quoted spread (Spread). The results are reported in Table 5. Columns 1-2, 3-4 and 5-6 report the results for PropZero, Turnover, and Spread, respectively. We find results consistent with those in Table 2. Specifically, in columns 1-2, the coefficients on the interaction term LEV_{i,t} × PropZero_{i,t} are positive and statistically significant at the 1% level for both the book and market leverage regressions. As PropZero is an illiquidity measure, these results confirm that liquidity has a positive impact on leverage SOA. Next, as Turnover is a liquidity measure, the negative coefficients on the interaction term LEV_{i,t} × Turnover_{i,t} also indicate a statistically significant relationship at the 1% level between equity liquidity and leverage SOA (columns 3-4) for both the book and market leverage models. The results for Spread are similar, indicating a significantly positive liquidity-leverage SOA relation (columns 5-6) at the 1% level. These results further support our baseline finding (H1).

Effect of liquidity on SOA: conditional on leverage deviation and target change

Next, we investigate whether the positive relationship between equity liquidity and leverage SOA varies with low and high levels of leverage deviation (H2) and low and high levels of target instability (H3). The estimation results for Eqs. (10)-(12) are reported in Table 6. Panel A presents the results for the full sample. The coefficients on the interaction term LIQ_{i,t} × LEV_{i,t} are positive and highly significant (at the 1% level) for both book and market leverage in all regressions (columns 1-6), implying that equity liquidity has a positive effect on the leverage SOA. Columns 1 and 2 test hypothesis H2 by including the triple interaction term LIQ_{i,t} × LEV_{i,t} × LevDev_{i,t}. The results show that the coefficients on this triple interaction term are negative and highly significant at the 1% level, indicating that leverage deviation weakens the positive association between equity liquidity and SOA. Hypothesis H3 is tested in columns 3 and 4. Specifically, the coefficients on the triple interaction term LIQ_{i,t} × LEV_{i,t} × \Delta Target_{i,t} are negative and statistically significant at the 1% level, which implies that the positive relation between equity liquidity and leverage SOA is less pronounced for firms with higher target instability. To further confirm these findings, we include both triple interaction terms, LIQ_{i,t} × LEV_{i,t} × LevDev_{i,t} and LIQ_{i,t} × LEV_{i,t} × \Delta Target_{i,t}, in columns 5 and 6. The results confirm that both coefficients are significantly negative, suggesting that the impact of equity liquidity on SOA is greater for firms with a smaller deviation from the target and a more stable target leverage ratio. These results are consistent with the previous literature suggesting that a larger leverage deviation and greater target instability result in higher adjustment costs and higher uncertainty associated with adjusting back to the target, and consequently lower the leverage speed of adjustment (Zhou et al., 2016) [7].
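For replication, the median-split dummies and interaction regressors used in these specifications can be constructed as sketched below; the column names are illustrative assumptions, not the paper's variable names.

```python
# A minimal sketch of the regressors in Eqs. (9)-(12); the median splits
# follow the text above, and column names are illustrative.
import pandas as pd

def build_regressors(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["LIQxLEV"] = out["LIQ"] * out["LEV_lag"]                # Eq. (9) term
    out["HighDev"] = (out["LevDev"] > out["LevDev"].median()).astype(int)
    out["HighInstab"] = (out["dTarget"] > out["dTarget"].median()).astype(int)
    out["LIQxLEVxDev"] = out["LIQxLEV"] * out["HighDev"]        # Eq. (10) term
    out["LIQxLEVxInstab"] = out["LIQxLEV"] * out["HighInstab"]  # Eq. (11) term
    return out
```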
Conclusion

In this study, we investigate how equity liquidity, along with the deviation from the target leverage ratio and the instability of that target, affects a firm's SOA. Based on a sample of more than 2,000 UK firms over the period from 1996 to 2016, we find a positive association between equity liquidity and leverage SOA, indicating that firms with high equity liquidity adjust more quickly to their targets. This important finding proves to be robust to a battery of checks, including alternative empirical methods, alternative samples without data adjustment, and alternative proxies for leverage ratios and equity liquidity. We further observe that both the leverage deviation and the target instability have a negative impact on the strength of the relationship between equity liquidity and the SOA. Indeed, for firms with both a large leverage deviation and a large target change, any positive impact that equity liquidity has on their SOA is almost eliminated.

We contribute to the existing literature in several ways. First, given the theoretical prediction and empirical evidence on the relationship between liquidity and leverage SOA, we are the first to enrich the literature on leverage adjustments by identifying equity liquidity as a new determinant of SOA in a single developed country with many differences in the structure and development of capital markets, ownership concentration, and institutional characteristics that may vary the relationship between liquidity and a firm's dynamic capital structure decisions. Moreover, although several studies have examined the capital structure choices of UK firms without investigating dynamic leverage adjustments, our study contributes to the empirical literature on the association between equity liquidity and firms' capital structure decisions in the UK. Next, we provide new empirical evidence of the joint effect of equity liquidity, leverage deviation and target instability on leverage SOA. The positive impact of equity liquidity on the SOA is greater for firms that are relatively close to their target and whose target is relatively stable.

Our study has important implications at both the firm and country levels. Specifically, firm managers who wish to gain easier access to various financing sources and accelerate the speed of adjustment toward the target capital structure to enhance firm value need to pay more attention to driving up equity liquidity. From the policy makers' perspective, when establishing regulatory frameworks, policy makers should consider the impact of stock liquidity and financial market development on firms' financial policy, especially during periods of high uncertainty and volatility; while firms with high liquidity may have a better chance of accessing external sources, low-liquidity firms experience more financial difficulties. In such cases, policy makers should consider assistance programs for these constrained firms. Furthermore, investors need to take into account the significant impact of liquidity on firm financial policy. This might assist investors in choosing proper investment strategies.
Our current research has a potential limitation with regard to the data period. The sample period in this study is 2002-2016 [8]. It would be interesting to know whether our documented results remain valid in more recent years, especially after the COVID-19 pandemic. Future studies may extend our sample period to more recent years and examine whether the pandemic irregularity has any impact on the relationship between equity liquidity and leverage SOA. This interesting question awaits further examination.

Table 1. This table reports the descriptive statistics, including the mean, standard deviation, minimum, maximum, first quartile, median, and third quartile of the firm-level and industry-level variables for the entire sample in Panel A, and correlation coefficients in Panel B. The study period is from 1996 to 2016. The variable definitions are in the Appendix. Source(s): The table is created by the authors.

Table 2. This table reports the regression results for the effect of liquidity on the speed of adjustment using the two-step system GMM estimator for the baseline model. The variable definitions are in the Appendix. ***, **, * indicate significance at the 1, 5, and 10% levels, respectively. The p-values are in parentheses. Source(s): The table is created by the authors.

Table 3. This table reports the regression results for the effect of liquidity on the leverage speed of adjustment using the two-step approach. ***, **, * indicate significance at the 1, 5, and 10% levels, respectively. Standard errors are bootstrapped; t-statistics are reported in parentheses. The variable definitions are in the Appendix. Source(s): The table is created by the authors.

Table 4. Alternative leverage measures. This table reports the regression results for the effect of liquidity on the speed of adjustment using the two-step system GMM estimator. The variable definitions are in the Appendix. ***, **, * indicate significance at the 1, 5, and 10% levels, respectively. The p-values are in parentheses. Source(s): The table is created by the authors.

Table 5. Alternative liquidity measures. This table reports the regression results for the effects of other liquidity measures, including the proportion of zero-return days, turnover, and the daily quoted spread, on the association between equity liquidity and leverage SOA using the two-step system GMM. The variable definitions are in the Appendix. ***, **, * indicate significance at the 1, 5, and 10% levels, respectively. The p-values are in parentheses. Source(s): The table is created by the authors.

Table 6. This table reports the regression results for the effects of the liquidity measures on the leverage SOA in high and low leverage deviation firms, and high and low target-instability firms, based on whether the firm's leverage deviation/target instability is above or below the median for the full sample, using the two-step system GMM. The variable definitions are in the Appendix. ***, **, * indicate significance at the 1, 5, and 10% levels, respectively. The p-values are in parentheses. Source(s): The table is created by the authors.
Zeaxanthin dipalmitate-enriched wolfberry extract improves vision in a mouse model of photoreceptor degeneration

Zeaxanthin dipalmitate (ZD) is a chemical extracted from wolfberry that protects degenerated photoreceptors in the mouse retina. However, pure ZD is expensive and hard to produce. In this study, we developed a method to enrich ZD from wolfberry on a production line and examined whether the enriched extract may also protect the degenerated mouse retina. The ZD-enriched wolfberry extract (ZDE) was extracted from wolfberry by an organic solvent method, and the concentration of ZD was determined by HPLC. Adult C57BL/6 mice were treated with ZDE or solvent by daily gavage for 2 weeks; at the end of the first week, the animals were intraperitoneally injected with N-methyl-N-nitrosourea to induce photoreceptor degeneration. Optomotor testing, electroretinogram, and immunostaining were then used to assess visual behavior, retinal light responses, and retinal structure. The final ZDE product contained ~30 mg/g ZD, over 9 times higher than the content in the dry fruit of wolfberry. Feeding degenerated mice with ZDE significantly improved the survival of photoreceptors and enhanced the retinal light responses and visual acuity. Therefore, our ZDE product successfully alleviated retinal morphological and functional degeneration in the mouse retina, which may provide a basis for further animal studies toward possibly applying ZDE as a supplement to treat photoreceptor degeneration in the clinic.

Introduction

Photoreceptor degenerative diseases are a group of retinal diseases including retinitis pigmentosa (RP), age-related macular degeneration (AMD), and other inherited retinal dystrophies. In these diseases, photoreceptors degenerate and lose the ability to transduce light into electrical signals, ultimately leading to blindness [1]. Among these diseases, RP is one of the most common forms of inherited retinal disease; it causes degeneration and death of cone and rod cells and affects approximately 1 in 4,000 individuals [2]. Many strategies have been applied to preserve or replace photoreceptors, including antioxidant or anti-inflammatory agents, gene therapy, stem cell therapy, and retinal prosthesis therapy [3]. Some of these have entered clinical trials with promising results, with FDA-approved products such as Luxturna, a gene therapy for Leber congenital amaurosis type 2 [4], and the implantable Argus II retinal prosthesis [5]. While these strategies may offer effective long-term treatment, their high cost and the risk of surgery limit their application; therefore, new treatment options for the degenerating retina are still in great need.
Many researchers have explored the protective effects of plant extracts on the degenerated retina, such as curcumin, flavonoids including luteolin, Ginkgo biloba extract, green tea extract, resveratrol, forskolin, saffron, and Lycium barbarum [6]. Among them, Lycium barbarum, or wolfberry, is a traditional Chinese herb that nourishes the kidney, liver, and eyes [7]. Its extract has demonstrated protective effects in retinal diseases, including animal models of glaucoma [8], retinitis pigmentosa [9,10], and diabetic retinopathy [11], as well as in RP patients [12]. Zeaxanthin dipalmitate (ZD) is a major carotenoid in wolfberry extract (structure shown in Fig 1A) with a strong antioxidant function. Studies have shown that ZD has a strong ability to scavenge free radicals and protects liver cells against several liver diseases [13,14]. Our previous study proved that ZD can delay photoreceptor degeneration in rd10 mice, a genetic mouse model of RP, with just a single intravitreal injection [15]. ZD also reduced the expression of genes involved in inflammation, apoptosis and oxidative stress, and inhibited the STAT3, CCL2 and MAPK pathways. Therefore, ZD may serve as a potential candidate to treat RP. However, intravitreal injection damages the eye and cannot be applied repeatedly, so oral administration of ZD is preferred in the clinic. On the other hand, oral administration requires a large amount of ZD, whose purification is time-consuming and expensive. Therefore, a ZD-enriched agent may have advantages over pure ZD for oral application, owing to the possibility of mass production and therefore a lower price. In this study, we developed a method to enrich ZD from wolfberry, which enabled us to administer the ZD agent by daily oral feeding.

We further explored the protective effect of ZD-enriched wolfberry extract (ZDE) on the degenerated retina of mice induced by N-methyl-N-nitrosourea (MNU). MNU is an alkylating toxin that induces specific apoptosis of photoreceptors within a week after injection. Unlike rd10 mice, in which photoreceptors degenerate from postnatal day 17 (P17) and most rods die by P25, requiring strict timing of treatment, MNU induces fast photoreceptor degeneration in adult mice at any time after injection; the MNU-injured mouse therefore serves widely as a chemically induced photoreceptor degeneration model [16].

Animals

Male C57BL/6J (C57) mice were purchased from Liaoning Changsheng Biotechnology Co., Ltd. All mice were maintained under standard laboratory conditions in the animal facility at Jinan University (room temperature between 18-23˚C, humidity between 40-65%, dark/light cycle of 12/12 hours), and the animals had free access to regular food and water. All animal experiments were conducted following the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research, and the animal study protocol was approved by the Laboratory Animal Ethics Committee of Jinan University (#IACUC-20210706-03, July 6th, 2021). All efforts were made to minimize the number of animals used and their suffering, including handling by skillful researchers, careful experimental design, and special animal care.

Extraction of ZDE from wolfberry

The wolfberry fruit (Lycium barbarum L.) was the No.
7 product collected from a farm located at N 36˚45'-39˚30', E 105˚16'-106˚80' in Zhongning, Ningxia, China, with batch number WG20072815TY270C, and it passed the standard inspection GB/18672 Gouqi. A measured amount of the dried wolfberry was soaked in water at 5 times its volume (W/V) for 2 hours, then crushed with a cyclone universal pulverizer for 30 seconds, and the same volume of water was added again at room temperature. The mixture was ultrasonicated for 60 minutes and centrifuged at 5000g for 10 minutes to obtain the polysaccharide extract (in the supernatant) and the precipitate. The precipitate was collected, dried at 40˚C, and then crushed again to obtain the coarse residue of wolfberry. ZDE was extracted from the coarse residue of wolfberry by an organic solvent method with a self-developed optimal process condition. Briefly, the coarse wolfberry residue was mixed with a mixed solvent (hexane: ethanol = 2.6:1, V:V; patent formula, patent application No. CN 201911029950.6) at a solid-to-liquid ratio of 1:20 (W/V) and extracted at 50˚C for 1 hour. The extract was filtered through a 150-mesh sieve, the filtrate was collected, and the filter residue was extracted once more with the above protocol. The two filtrates were combined and concentrated by rotary evaporation at 60˚C under a negative pressure of 0.08 MPa until no more solvent came out. Finally, an oily orange solid was obtained, which was the ZDE product.

Measurement of ZD concentration by HPLC

The concentration of ZD in the wolfberry extracts was measured by high-performance liquid chromatography (HPLC) according to a group standard (T/NXFSA 004S-2020). Briefly, the ZD standard (purity >95%) was dissolved in a 1:1 (v/v) mixture of mobile phase A (methanol: acetonitrile: water, 81:14:5) and mobile phase B (dichloromethane) to make the standard solution.

To prepare the test sample from the dry fruit of wolfberry, 10 g of frozen crushed wolfberry was soaked in water at 5 times its volume (W/V) for 2 hours, then crushed with a pulverizer and centrifuged at 3000 r/min for 5 minutes. The precipitate was collected, dried, and crushed. A 0.2 g crushed sample was then weighed (accurate to 0.0001 g), placed in a mortar with quartz sand (1/4 of the sample weight), and ground and extracted with hexane: ethyl acetate: methanol (1:1:1, v/v/v) multiple times, with 1-3 ml of solution collected each time, until the solution became transparent. All the extracted solutions were then combined and brought to a volume of 50 ml with 100% ethanol. After centrifuging the solution at 1000 r/min for 3 minutes, the precipitate was discarded and the supernatant was collected. The supernatant from the ground dry fruit of wolfberry was then dissolved in n-hexane: ethyl acetate: methanol (1:1:1, v/v/v) and collected for the HPLC test.

To prepare the test sample from the ZDE product, 0.2 g of the sample (accurate to 0.0001 g) was dissolved in n-hexane: 100% ethanol (3:1, v/v) to a fixed volume of 50 ml. The solution was centrifuged at 1000 r/min for 3 min and the supernatant was collected for the HPLC test.
To run the HPLC test, a 10 μl sample was injected onto a C30 column (4.6 mm inner diameter, 250 mm length, 5 μm particle size) and chromatographic separation was performed at 30°C. The mobile phase consisted of mobile phase A (methanol:acetonitrile:water, 81:14:5) and mobile phase B (dichloromethane). The gradient started at 84% mobile phase A for 3 min, then 83% for 20 min, 45% for 15 min, and 25% for 18 min, before returning to 84% A. The flow rate was 1.0 ml/min and the detection wavelength was set at 450 nm.

Experimental design and drug application

The final ZDE product was dissolved in corn oil, with doses expressed as mg of ZD per kg body weight: to make a 1 mg/kg ZDE solution, 0.1 g of extract (which contained 3 mg ZD) was dissolved in 30 ml of pure corn oil, and mice were fed the solution orally at a volume of 10 ml/kg.

For the treatment, animals were randomly assigned to two groups: MNU-injured mice treated with ZDE or with solvent. Animals were pretreated with ZDE or solvent for a week by daily oral feeding, then MNU solution (40 mg/kg in normal saline) was injected intraperitoneally to induce specific photoreceptor degeneration [17-19]. Animals were then fed ZDE or solvent for another week before visual behavior and ERG were tested. Animals were sacrificed by cervical dislocation after ERG recording and the retinas were collected. Normal mice without any treatment were included as the control group. Normal mice treated with solvent or ZDE were also tested to exclude possible side effects on the normal retina. The detailed protocol is illustrated in Fig 1B. In a preliminary experiment, we screened for safe doses of ZDE at 1, 3, 9, 27, 54, 100, and 200 mg/kg body weight by daily oral feeding for 2 weeks and found that body weight remained stable at all doses (S1 Fig). We then tested protective doses of ZDE at 1, 3, 9, and 27 mg/kg body weight by examining the thickness of the ONL, where the photoreceptors are located, and identified the best protective effect at 9 mg/kg. For the following experiments we therefore applied ZDE at 9 mg/kg to examine its effect on visual behavior and retinal light responses.

Visual behavioral tests

The visual behavior of mice was examined with both the dark-light transition test and the optomotor system at the end of the treatment. The dark-light transition test measures the tendency of a mouse to stay in darkness rather than in an illuminated area and was conducted as previously described [20]. Briefly, the light and dark chambers were connected through an open door; a mouse was placed in the center of the illuminated white chamber and could move freely between the chambers. Movement was recorded by cameras installed in both chambers and connected to a recorder (Noldus, Wageningen, the Netherlands). The time a mouse spent in the dark chamber during a 5-minute test was quantified automatically by EthoVision XT 8.0 software (Noldus).
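The dosing scheme in "Experimental design and drug application" above can be made explicit with a short sketch. Doses are expressed as mg of ZD per kg body weight, the gavage volume is fixed at 10 ml/kg, and the extract is taken to contain 30 mg ZD per g (3 mg per 0.1 g, as stated); the helper names are ours.

```python
# Minimal sketch of the dosing arithmetic described above. Doses are
# expressed as mg of ZD per kg body weight; the extract is assumed to
# contain 30 mg ZD per g (3 mg per 0.1 g, as stated), and mice are
# gavaged at a fixed 10 ml/kg. Function and variable names are ours.

GAVAGE_ML_PER_KG = 10.0       # fixed oral volume
ZD_MG_PER_G_EXTRACT = 30.0    # ~3% ZD in ZDE (per the HPLC result)

def zde_solution(dose_mg_zd_per_kg: float, oil_ml: float = 30.0) -> float:
    """Grams of ZDE to dissolve in `oil_ml` of corn oil for a given dose."""
    zd_mg_per_ml = dose_mg_zd_per_kg / GAVAGE_ML_PER_KG  # required ZD conc.
    return zd_mg_per_ml * oil_ml / ZD_MG_PER_G_EXTRACT

for dose in (1, 3, 9, 27):
    print(f"{dose} mg/kg ZD -> dissolve {zde_solution(dose):.2f} g ZDE in 30 ml oil")
```

For the 1 mg/kg dose this reproduces the 0.1 g of extract per 30 ml of corn oil stated in the text.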
The optomotor test measures visual acuity by observing head turning in response to moving gratings, as we described before [21]. Briefly, mice were placed freely on an elevated central platform surrounded by computer screens displaying vertical rotating sine gratings. The gratings were programmed in Matlab (MATLAB 8.0, MathWorks, Natick, MA, USA) with 100% contrast and a drift speed of 12 cycles/s, at spatial frequencies increasing from 0.1 to 0.5 cycles/degree. Animals reflexively track the gratings with head movements (the optokinetic reflex) as long as they can see them. Head movements were videotaped, and the maximal spatial frequency at which an optokinetic response could be observed was recorded manually to reflect the visual acuity of the mouse.

Electroretinogram (ERG)

To evaluate retinal light responses, mice were dark-adapted overnight and ERG recordings were performed with the RETI-scan system (Roland Consult, Brandenburg, Germany) as we previously described [9]. Briefly, mice were anesthetized with tribromoethanol (0.2 ml/10 g of a 1.25% solution) and placed on a heated platform (37°C) under dim red light. Pupils were dilated with phenylephrine-HCl (0.5%) and tropicamide (0.5%). ERGs were recorded with gold-plated wire loop electrodes contacting the corneal surface as the active electrodes. Stainless steel needle electrodes inserted into the skin near the eye and at the tail served as the reference and ground leads, respectively. Dark-adapted mice were first stimulated with green flashes of 0.01, 0.1, and 3.0 cd·s/m² to record the scotopic responses. The mice were then light-adapted for 5 min with a green background (20 cd/m²), and photopic responses to green flashes of 3.0 and 10.0 cd·s/m² were recorded. ERG data were collected with the RETI-scan amplifier at a sampling rate of 2 kHz and subsequently analyzed with RETIport software (Roland Consult) after 50-Hz low-pass filtering. The a-wave amplitude was measured from the baseline to the first negative peak, and the b-wave amplitude from the a-wave trough to the subsequent positive peak. For each mouse, the responses of the two eyes were averaged to give one data point.

Tissue processing

After ERG testing, mice were sacrificed with an overdose of anesthetic (intraperitoneal injection of 100 mg/kg pentobarbital sodium; R&D Systems, Minneapolis, MN, USA). Both eyes were removed and fixed in 4% paraformaldehyde (PFA) at room temperature for 30 minutes. Tissues were washed 3 times with PBS for 5 min each and cryoprotected overnight in PBS containing 30% sucrose. The tissues were then embedded in optimal cutting temperature compound (Tissue-Tek, Torrance, CA, USA) and cryosectioned on a microtome (Leica Microsystems, Wetzlar, Germany) through the optic disc longitudinally at a thickness of 14 μm. Retinal sections were incubated at room temperature for 5 min with 4',6-diamidino-2-phenylindole (DAPI, 1:1000, Electron Microscopy Sciences, Hatfield, PA, USA), washed, mounted on glass slides, and sealed.
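A minimal sketch of the a- and b-wave amplitude conventions described in the ERG section above (a-wave: baseline to the first negative trough; b-wave: trough to the subsequent positive peak). The synthetic trace and function names are illustrative assumptions; the actual analysis was performed in RETIport.

```python
# Illustrative computation of a- and b-wave amplitudes following the
# conventions above. A synthetic trace stands in for real RETI-scan data
# sampled at 2 kHz; filtering details are omitted.

import numpy as np

def erg_amplitudes(trace_uV: np.ndarray, fs_hz: float = 2000.0):
    baseline = trace_uV[: int(0.01 * fs_hz)].mean()          # pre-flash 10 ms
    i_trough = int(np.argmin(trace_uV))                      # a-wave trough
    i_peak = i_trough + int(np.argmax(trace_uV[i_trough:]))  # later b-peak
    a_amp = baseline - trace_uV[i_trough]
    b_amp = trace_uV[i_peak] - trace_uV[i_trough]
    return a_amp, b_amp

t = np.arange(0, 0.3, 1 / 2000.0)
demo = -40 * np.exp(-((t - 0.02) / 0.008) ** 2) + 90 * np.exp(-((t - 0.06) / 0.02) ** 2)
a, b = erg_amplitudes(demo)
print(f"a-wave {a:.1f} uV, b-wave {b:.1f} uV")
```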
Image collection and processing

DAPI-stained tissues were imaged with a fluorescence microscope (Carl Zeiss). To assess photoreceptor survival, the thickness of the outer nuclear layer (ONL), where the photoreceptor somata are located, was measured with ImageJ software, and the number of nuclear rows in the ONL was counted. Because photoreceptor degeneration is uneven from center to periphery [16], we measured the ONL thickness and number of cell layers at 400, 800, 1200, and 1600 μm from the center of the optic nerve on both sides and averaged them to obtain values for each location of the section. For the dose-response curve, the ONL thickness at around 1000 μm was compared among all doses. For each retina, the ONL thickness and number of layers from 3-5 cryosections collected at different locations in the eye cup were averaged to obtain a data point for that animal, and these values were then averaged to obtain a mean value for the group.

Statistical analysis

All data are expressed as means ± SEMs and were analyzed with Prism 7 (GraphPad Software, San Diego, CA, USA). One-way or two-way analysis of variance (ANOVA) followed by post hoc tests was performed. A P value < 0.05 indicated a significant difference. N represents the total number of animals examined in each group.

ZDE contained 9-fold more ZD than raw fruit

By HPLC, we compared the concentration of pure ZD in the dry raw fruit of wolfberry and in our ZDE product. The ZD component was identified by the peak arising at around 28 min, matching the ZD standard (Fig 2A). The spectra showed that the area of the ZD peak was 9.34 times larger in the ZDE product than in wolfberry (Fig 2). Fitting the peak areas to a standard curve established with pure ZD gave ZD concentrations of 3.12 ± 0.06 mg/g in wolfberry and 28.61 ± 0.64 mg/g in the ZDE product, from 4 repeated measurements. The method we developed therefore enriched ZD to a 9-fold higher concentration than in the raw fruit.

ZDE improves the survival of photoreceptors in the MNU-injured retina

To examine whether ZDE can protect degenerating photoreceptors, we stained retinal sections with DAPI, measured the thickness of the outer nuclear layer (ONL), where the photoreceptors are located, and counted the rows of cells in the ONL. Examples of an enlarged region 1 mm from the optic nerve center of the C-cup of retinal sections from solvent-treated and 9 mg/kg ZDE-treated mice are shown in Fig 3A. ZDE increased the ONL thickness of the MNU-injured retina.

Across doses, the ONL thickness increased with ZDE dose, peaking at 9 mg/kg: from 7.2 ± 3.2 μm in the solvent group to 15.8 ± 5.4 μm at 1 mg/kg, 21.2 ± 3.2 μm at 3 mg/kg (p < 0.05 vs. solvent), and 29.4 ± 3.3 μm at 9 mg/kg (p < 0.001 vs. solvent), then 21.2 ± 1.6 μm at 27 mg/kg (Fig 3B). We further analyzed the ONL thickness and number of cell layers from center to periphery for ZDE at 9 mg/kg (Fig 3C). ZDE increased the number of cell layers as well as the ONL thickness in most regions along the C-cup, although both remained below those of normal mice. As 9 mg/kg gave the best protection of photoreceptor survival, we chose this dose for further experiments.

We further tested the safety of ZDE and of the solvent (corn oil) in normal mice. Both the number of ONL nuclear layers and the ONL thickness were similar among normal mice and those treated with solvent or ZDE at 9 mg/kg (S2A Fig).
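The conversion from HPLC peak area to ZD concentration described above follows a linear standard curve. The sketch below illustrates the arithmetic; the standard areas and concentrations are invented, and only the final sample values (3.12 mg/g in raw fruit vs. 28.61 mg/g in ZDE, a ~9.2-fold enrichment) come from the measurements reported here.

```python
# Sketch of converting HPLC peak areas to ZD concentrations via a linear
# standard curve, as done above. Standard-curve points are hypothetical.

import numpy as np

std_conc = np.array([5.0, 10.0, 20.0, 40.0])        # standard concentrations
std_area = np.array([1.1e5, 2.2e5, 4.4e5, 8.8e5])   # hypothetical peak areas

slope, intercept = np.polyfit(std_area, std_conc, 1)  # conc = f(area)

def area_to_conc(area: float) -> float:
    return slope * area + intercept

raw_fruit = area_to_conc(6.86e4)   # -> ~3.1 mg/g
zde = area_to_conc(6.29e5)         # -> ~28.6 mg/g
print(f"raw fruit {raw_fruit:.2f} mg/g, ZDE {zde:.2f} mg/g, "
      f"enrichment {zde / raw_fruit:.1f}x")
```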
ZDE enhances the retinal light responses of MNU-injured mice

To examine whether ZDE can further improve the visual function of MNU-injured mice, we first performed electroretinogram (ERG) recordings to examine retinal light responses. Compared with normal control mice, which showed large ERG responses, only weak ERG responses remained in the solvent-treated MNU-injured group under both dark-adapted (scotopic) and light-adapted (photopic) conditions. In contrast, clear a-waves (mass responses of photoreceptors) and b-waves (mass responses of bipolar cells) could be observed in ZDE-treated mice (Fig 4B and 4C) under scotopic conditions, though much weaker than in normal controls. For example, at scotopic 3.0 cd·s/m², the a-wave amplitude was significantly improved from 7.5 ± 1.8 μV with solvent to 43.9 ± 15.7 μV with 9 mg/kg ZDE (p < 0.001), and the b-wave amplitude from 5.7 ± 1.8 μV to 90.4 ± 31.2 μV (p < 0.001). However, ZDE hardly helped the photopic responses. This indicates that ZDE enhanced light responses from the rod pathway but not the cone pathway.

ZDE enhances the visual acuity of MNU-injured mice

As ZDE protected the degenerating photoreceptors, we next examined whether ZDE can protect MNU-injured mice against vision loss. We first applied the dark-light transition box, which monitors the ability of animals to discriminate luminance (Fig 5A). Normal mice tend to spend more time in the dark box than in the light box (~72% of the time in the dark box). The MNU-injured mice, however, spent ~65% of the time in the dark box (p = 0.096 vs. WT), and ZDE did not improve the time spent in the dark box (Fig 5B).

We further applied the optomotor test to monitor the visual acuity of the animals (Fig 5C). A mouse tracks the direction of rotating gratings with head movements if it can see them (the optokinetic reflex). The higher the spatial frequency of the grating (i.e., the finer the grating) that can trigger the optokinetic reflex, the higher the visual acuity of the animal [21]. Normal mice had a visual acuity of 0.4 cycles/degree (cpd); the visual acuity of MNU-injured mice was reduced and was slightly but significantly improved by ZDE treatment.

Discussion

In the current study, we developed a method to enrich ZD from the dry fruit of wolfberry and then used MNU-injured mice as a model of RP to test the enriched product. Our results showed that ZDE effectively improved the survival of photoreceptors and the visual function of injured mice. This work extends our previous study, in which a single intravitreous injection of ZD (95% purity) rescued the degenerating retina of rd10 mice [15]. Our study showed that, like purified ZD, our ZDE product exerts a neuroprotective effect in the photoreceptor-degenerating retina. Wolfberry extract has been shown to protect retinal neurons in various retinal diseases in both animal studies [8-10] and clinical studies [12]. Its major antioxidant components include the water-soluble Lycium barbarum polysaccharides (LBP) and the water-insoluble flavonoids and carotenoids. A recent clinical study using wolfberry extract (with LBP as the main active ingredient) to treat RP patients showed remarkable improvements in both the ERG and the visual acuity of these patients [12]. Lycium barbarum glycopeptide (LbGP), immunoreactive glycoproteins extracted from LBP, has also been shown to protect MNU-injured photoreceptors in mouse models [22]. LbGP is now on the market as a supplement to improve vision.
Another group of major constituents of wolfberry is the carotenoids, which include zeaxanthin and its di-ester, ZD. ZD is much more abundant in wolfberries than free zeaxanthin [23]: it accounts for 31%-56% of the carotenoids in wolfberry and 0.01-0.2% of the mature fruit [24]. With multiple conjugated double bonds in its chemical structure, ZD has strong antioxidant activity, demonstrated by its strong ability to scavenge free radicals and protect liver cells against several liver diseases [13,14]. The retinal protective effect of zeaxanthin (together with lutein) has been shown in many animal and clinical studies [25-27]. Our recent work further showed that intravitreous injection of pure ZD can effectively protect degenerating photoreceptors [15]. It not only improved the visual behavior of rd10 mice (a genetic mouse model of RP) but also improved the light responses of photoreceptors, bipolar cells, and retinal ganglion cells. ZD also reduced the upregulated expression of genes involved in inflammation, apoptosis, and oxidative stress in the rd10 retina. ZD further reduced the activation of two key factors, STAT3 and CCL2, down-regulated the expression of the inflammatory marker GFAP, and inhibited the ERK and p38 pathways [15]. This suggests that ZD might be the other most important component of wolfberry for RP treatment besides LbGP.

However, in the previous study only a single intravitreous injection was given to rd10 mice; a better protective effect would be expected with more frequent dosing. Because purifying ZD from wolfberry is a long and expensive procedure, the supply of pure ZD was limited. This is why we began this study by working with industry to extract large amounts of ZD from wolfberry, ensuring an adequate supply of ZDE for oral administration. It is worth noting that the metabolic route differs between oral feeding and intravitreous injection, so orally given ZDE may act through different pathways than intravitreally injected ZD.

In this study, the ZDE produced on the production line contained ~3% ZD, over 9 times more concentrated than the raw fruit. The protective dose of ZDE at 9 mg/kg (of ZD) was within the range of other studies, in which oral feeding of 2-10 mg/kg pure ZD effectively alleviated hepatic injury [13,28]. It is worth noting that, besides ZD, the ZDE may contain other beneficial components of wolfberry that help protect retinal neurons, since the product contains Gouqi oil consisting mainly of fatty acids such as linolenic and linoleic acid, with smaller amounts of palmitic, stearic, and oleic acid. ZDE may therefore have advantages over purified ZD as a supplement to treat RP, not only because of its much lower cost and production time but also because of this mixture of other potentially beneficial components.
The current study has several limitations. First, we evaluated photoreceptor survival only by DAPI staining; the structure of photoreceptors could be better revealed by immunostaining the outer segments of rods or cones with rhodopsin or opsin antibodies, and H&E staining could further disclose the detailed structure of the plexiform layers of the retina beyond the somata. Second, the protective effect of ZDE on visual behavior was not as pronounced as on the ERG responses: ZDE only slightly (though significantly) improved the visual acuity of the mice and failed to improve luminance discrimination. This may reflect a limited effect of ZDE on the inner retinal circuitry disrupted by MNU (which processes the visual information underlying the behavioral responses), even though it enhanced the impaired outer retinal structure (which contributes to the ERG responses). Immunostaining of the inner retinal structures could be carried out to explore this. Third, we examined the effect of ZDE on MNU-induced photoreceptor degeneration for only 14 days, so its long-term protective effect is unknown. Similarly, although we showed that daily orally administered ZDE does not affect normal retinal function, this was a short-term toxicity test; possible long-term toxicity or side effects remain unclear, which matters because RP is a chronic progressive disease and patients may need long-term treatment. Fourth, we pretreated the animals with ZDE before MNU injury to obtain a good protective effect; in the clinic, however, treatment can only be applied after diagnosis of the disease. It is therefore important to test whether ZDE given after MNU injury also delays photoreceptor degeneration. Fifth, it is important to examine the pharmacokinetics of ZDE after oral feeding, since the large molecular weight of ZD (over 1000) may limit its ability to penetrate the blood-retina barrier.

Indeed, we tried to collect the retina and vitreous of mice at various time points after feeding them ZDE but failed to obtain usable HPLC results from these samples. Experience in extracting ZD from tissues is needed for this test, and a more sensitive method such as mass spectrometry may be required to detect the low concentration of ZD in the eye, if it enters at all. This work is ongoing.

Conclusion

We developed a way to enrich ZD from wolfberry in large amounts, and this ZDE product can protect injured retinal photoreceptors in a mouse model.

Fig 1. Experimental protocol. (A) Chemical structure of ZD. (B) Adult C57 mice were fed ZD-enriched wolfberry extract (ZDE) orally every day for two weeks, during which 40 mg/kg MNU was injected intraperitoneally at 1 week to induce retinal photoreceptor degeneration. Visual behavior and ERG were then tested at 2 weeks after treatment, before the animals were sacrificed and the retinas collected for immunostaining. https://doi.org/10.1371/journal.pone.0302742.g001
Fig 3. ZDE improves photoreceptor survival in the MNU-injured retina. (A) Images of retinal sections in full length (C-cup) from solvent-treated and ZDE-treated (9 mg/kg) MNU-injured mice. Insets show the enlarged region 1 mm from the optic nerve center. (B) Average ONL thickness of normal control retinas and MNU-injured retinas after treatment with solvent or ZDE at increasing doses. The numbers of animals tested are 4, 7, 4, 6, 6, and 3 for control, solvent, and 1, 3, 9, and 27 mg/kg ZDE, respectively. (C) Average number of ONL cell layers (left) and ONL thickness (right) of normal control retinas and MNU-injured retinas treated with solvent or 9 mg/kg ZDE at various distances from the center of the optic nerve. ONL, outer nuclear layer; INL, inner nuclear layer; GCL, ganglion cell layer. *, p<0.05; **, p<0.01; ***, p<0.001; one-way ANOVA followed by Dunnett's multiple comparison test for B, two-way ANOVA followed by Tukey's multiple comparison test for C. https://doi.org/10.1371/journal.pone.0302742.g003

Fig 4. ZDE enhances the retinal light responses of MNU-injured mice. (A) Example ERG traces from normal control, solvent-treated, and ZDE-treated MNU-injured mice. Values on the left give the flash intensity in cd·s/m² under scotopic (dark-adapted) or photopic (light-adapted) conditions. Peaks of a- and b-waves are indicated in ZDE-treated animals. Note the different scales for the control and MNU-injured groups. (B, C) Average amplitudes of the a-wave (B) and b-wave (C) for all flash conditions. *, p<0.05; **, p<0.01; ***, p<0.001; repeated-measures two-way ANOVA followed by Tukey's multiple comparison test. https://doi.org/10.1371/journal.pone.0302742.g004
libcdict: fast dictionaries in C

A common requirement in science is to store and share large sets of simulation data in an efficient, nested, flexible and human-readable way. Such datasets contain number counts and distributions, i.e. histograms and maps, of arbitrary dimension and variable type, e.g. floating-point number, integer or character string. Modern high-level programming languages like Perl and Python have associative arrays, known as hashes and dictionaries, respectively, to fulfil this storage need. Low-level languages used more commonly for fast computational simulations, such as C and Fortran, lack this functionality. We present libcdict, a C dictionary library, to solve this problem. Libcdict provides C and Fortran application programming interfaces (APIs) to native dictionaries, called cdicts, and functions to load and save cdicts as JSON, hence allowing easy interpretation in other software and languages like Perl, Python and R.

Statement of need

Users of high-level languages such as Perl or Python have access to associative-array data structures through hashes and dictionaries, respectively. These allow arbitrary data types to be stored in array-like structures. These are in turn accessed through key-value pairs which allow the value to be a further, nested associative array, allowing arbitrary nesting of data. Compiled low-level languages, like C and Fortran, are better suited to the high-speed, repeated calculations typical in science, but lack native associative-array functionality. While there are pure hash-table solutions out there, such as glib (Glib, 2022) and uthash (Hansen, 2022), these do not combine a simple API for setting and adding to nested structures, a small library footprint, fast input and output, and standardised JSON output to easily interface with other languages and tools. libcdict provides an API for such functionality which allows cdicts to be nested in cdicts, hence arbitrarily-nested dictionaries of variables in C just as in Perl or Python.

libcdict is written in C and provides an API through a set of C macros. Values in nested cdict structures are set with a single line of code. libcdict has been used for the last year in the binary_c single- and binary-star population nucleosynthesis framework (Izzard et al., 2004, 2006, 2009, 2018). Recent works (Hendriks & Izzard, 2023b; Izzard & Jermyn, 2023; Mirouh et al., 2023; Yates et al., 2023) compute the evolution of millions of single- and binary-stellar systems in only a few hours using its binary_c-python Python frontend (Hendriks & Izzard, 2023a). We provide libcdict as open-source code on Gitlab subject to the GPL3. libcdict also has a comprehensive test suite run through its configuration program cdict-config.

Using libcdict

libcdict is flexible but pragmatic. Keys to cdicts can be any C scalar or pointer. Values can be scalars, pointers, arrays or other cdicts, but arrays must be of a single C type. Values can store metadata of arbitrary type. Pointer values are optionally garbage collected when a cdict is freed. A set of API macros provides simple nesting facilities so that placing a value in a nested location given a list of keys is a simple task for the C programmer. Issues such as C variable typing are automatically handled for the user.
Installation uses meson (Pakkanen, 2022) and ninja (Martin, 2022). libcdict has been tested with the GCC (10.3.0) and Clang (12.0.0) compilers.

libcdict in stellar-population statistics calculations

libcdict was developed to solve the problem of storing statistics in stellar-population calculations in binary_c. When evolving a population of millions, sometimes billions, of stars, each for thousands of time steps, enormous amounts of data are computed. It is impractical to output these data every time step, as this would typically be ~10⁶ × 10⁴ = 10¹⁰ lines, each of which can easily be ~1 KB long. The data from each star could be sent to a Perl or Python front-end which merges them into a dictionary of population statistics, but this communication between programming languages involves overhead comparable to the runtime of the stellar code itself, thus greatly increasing runtime and cost.

To overcome this problem, binary_c internally generates an associative-array cdict in native C. This cdict, and the stellar statistics it contains, is filled inside the binary_c simulation as each star is simulated. Generation of the stellar-population data in the cdict is efficient because it stays in C, and communication with the frontend (Python) code is kept to a minimum. The cdict's dataset is output only once, as human-readable JSON easily understood by Perl or Python, at the end of the simulation. Large simulations are often split across clusters of machines using binary_c-python. The data from each run are stored as JSON chunks and then merged in Python when the final run completes. The overhead involved in this joining is small compared to the effort of simulating the stars: the goal of libcdict has thus been achieved. We provide an interactive example made with binary_c and binary_c-python using libcdict in its examples directory (Izzard, 2022). The libcdict JSON output of a Hertzsprung-Russell diagram, the most important diagnostic plot in stellar astrophysics, is plotted using Bokeh (Bokeh Development Team, 2014; Bokeh GitHub, 2022) to provide immediate access to nested data sets.
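As a generic illustration of the post-processing workflow described above, the sketch below merges two nested JSON "chunks" by recursively summing their numeric leaves, the way per-run libcdict outputs can be combined in Python. This is not the binary_c-python API; the structure and key names are invented.

```python
# Merge nested histogram dictionaries, as when combining JSON chunks
# written by separate cluster runs. Keys and values are hypothetical.

import json

def merge_counts(a: dict, b: dict) -> dict:
    """Recursively merge nested dictionaries, summing numeric leaves."""
    out = dict(a)
    for key, value in b.items():
        if isinstance(out.get(key), dict) and isinstance(value, dict):
            out[key] = merge_counts(out[key], value)   # descend into nesting
        elif key in out:
            out[key] = out[key] + value                # numeric leaf: accumulate
        else:
            out[key] = value
    return out

# Two toy "chunks" standing in for JSON files written by separate runs;
# real chunks would be read with json.load(open(path)).
chunk0 = {"HRD": {"logTeff=3.7": {"logL=0.0": 12}}, "nstars": 1000}
chunk1 = {"HRD": {"logTeff=3.7": {"logL=0.0": 8, "logL=0.5": 3}}, "nstars": 900}
print(json.dumps(merge_counts(chunk0, chunk1), indent=2))
```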
DMSO‐ and Serum‐Free Cryopreservation of Wharton's Jelly Tissue Isolated From Human Umbilical Cord

ABSTRACT

The facile nature of mesenchymal stem cell (MSC) acquisition in relatively large numbers has made Wharton's jelly (WJ) tissue an alternative source of MSCs for regenerative medicine. However, freezing such tissue using dimethyl sulfoxide (DMSO) for future use impedes its clinical utility. In this study, we compared the effect of two different cryoprotectants (DMSO and a cocktail solution) on post-thaw cell behavior upon freezing of WJ tissue following two different freezing protocols (conventional [−1°C/min] and programmed). The programmed method showed a higher cell survival rate compared with the conventional method of freezing. Further, the cocktail solution showed better cryoprotection than DMSO. Post-thaw growth characteristics and stem cell behavior of Wharton's jelly mesenchymal stem cells (WJMSCs) from WJ tissue cryopreserved with the cocktail solution in conjunction with the programmed method (Prog-Cock) were comparable with WJMSCs from fresh WJ tissue. They preserved their expression of surface markers and pluripotency factors, and successfully differentiated in vitro into osteocytes, adipocytes, chondrocytes, and hepatocytes. They also produced fewer annexin-V-positive cells compared with cells from WJ tissue stored using the cocktail solution in conjunction with the conventional method (Conv-Cock). Real-time PCR and western blot analysis of post-thaw WJMSCs from the Conv-Cock group showed significantly increased expression of pro-apoptotic factors (BAX, p53, and p21) and reduced expression of the anti-apoptotic factor BCL2 compared with WJMSCs from the fresh and Prog-Cock groups. Therefore, we conclude that freezing fresh WJ tissue using the cocktail solution in conjunction with the programmed freezing method allows efficient WJ tissue banking for future MSC-based regenerative therapies. J. Cell. Biochem. 117: 2397-2412, 2016. © 2016 The Authors. Journal of Cellular Biochemistry published by Wiley Periodicals, Inc.

MSCs also exhibit immunomodulatory effects [Le Blanc, 2003; Caplan and Dennis, 2006] with minimal immunoreactivity [Tse et al., 2003]. Due to these properties, many clinical trials have been initiated for the regenerative treatment of various human disorders using MSCs [Singer and Caplan, 2011]. However, the success of regenerative treatments relies on large numbers of cells, which makes many tissue sources unsuitable because of the invasive procedures needed to acquire the cells. Though bone marrow MSCs are considered the gold standard for adult MSCs, they pose several disadvantages: the collection procedure is invasive and painful, they occur at a low frequency of 0.001-0.01% [Castro-Malaspina et al., 1980], and their quality varies with donor age. Further, because of their low frequency in bone marrow aspirates, they require extensive in vitro expansion to reach clinical doses for patients. This may further increase the risk of culture-induced epigenetic changes [Redaelli et al., 2012] as well as microbial contamination [Gong et al., 2012]. These major drawbacks have recently prompted many investigators to explore alternative sources of MSCs with less invasive collection procedures. In this context, the umbilical cord, a biomedical waste product, serves as an alternative source of MSCs.
The facile nature of MSC acquisition in relatively large numbers from different tissue compartments of the umbilical cord by a non-invasive procedure makes it a promising MSC source for regenerative medicine in clinical settings. However, MSCs present in different micro-anatomical regions of the umbilical cord differ in their phenotypic and differentiation profiles [Subramanian et al., 2015]. MSCs interspersed in the umbilical cord's Wharton's jelly (WJ) tissue are considered better candidates than those from other regions of the umbilical cord in terms of clinical utility [Subramanian et al., 2015]. WJMSCs are associated with higher proliferation rates and lower immunogenicity [Fong et al., 2011] and possess both mesenchymal and embryonic stem cell markers, with prolonged self-renewal and broader differentiation ability and a non-tumorigenic character [Fong et al., 2010].

In current biomedical research, cryopreservation of oocytes and embryos has grown so tremendously that hundreds of thousands of domestic and laboratory animals are produced from frozen embryos [Wiles and Taft, 2010]. As MSC-based regenerative therapies have become more promising, efficient cryopreservation and biobanking have also become progressively important [Mason and Manzotti, 2010]. Nevertheless, a large number of frozen MSCs stored in compliance with current good manufacturing practice (cGMP) will be required for clinical applications. Therefore, developing an effective technique for the cryopreservation of MSCs using cGMP-grade reagents free of both animal serum proteins and toxic chemicals could increase the usefulness of these cells in tissue engineering and regenerative medicine.

The use of xenogeneic animal serum in either cultivation or cryopreservation of cells impedes the clinical utility of MSCs, as it is directly linked to the detection of anti-FBS antibodies in patients receiving cell infusions or transplants [Sundin et al., 2007]. Moreover, cell preparations made in the presence of animal serum are under increased scrutiny by regulatory authorities. Therefore, complete elimination of animal serum from cultivation and cryopreservation protocols is the best approach to avoid possible post-transplantation complications. In addition, the conventional use of dimethyl sulfoxide (DMSO) in cryopreservation exerts undesirable effects on cells and tissues and can even cause post-transplantation complications [Ruiz-Delgado et al., 2009; Yong et al., 2015]. Hence, there is growing interest in developing cryoprotectants free of both animal serum and DMSO, which could probably be achieved by using different polymers either alone or in combination.

Where biobanking is concerned, isolating WJMSCs can be laborious, time-consuming, and expensive, especially when maintained in compliance with cGMP for clinical use of these cells. Therefore, an ideal option for storing WJMSCs that are not immediately needed is cryopreservation of the umbilical cord tissue as a whole, with minimal manipulation, immediately after receipt from the clinic. Indeed, the success of clinical cryopreservation of human amniotic membranes [Hennerbichler et al., 2007; Parolini et al., 2008] served as a promising step towards the development of banking strategies for umbilical cord tissue using optimal cryopreservation techniques.
Although many studies have found that freezing umbilical cord tissue fragments using optimal cryopreservation techniques enables their long-term preservation, only a few considered the revitalized capacity of frozen/thawed tissue fragments in terms of cell recovery and the differentiation potential of cells isolated from post-thaw tissue [Choudhery et al., 2013a; Badowski et al., 2014; Chatzistamatiou et al., 2014; Roy et al., 2014; Shimazu et al., 2015]. Recently, it has been reported that freezing a particular micro-anatomical region of the umbilical cord is more advantageous than freezing entire umbilical cord fragments [Fong et al., 2016]. Indeed, the latter poses several drawbacks, such as heterogeneity in the cell population and varied differentiation ability along the desired lineage, as the cells originate from different compartments of the umbilical cord. Moreover, the cryoprotectants may not penetrate and preserve the interior of the frozen tissue, resulting in genetic and behavioral changes in the cells [Fong et al., 2016]. Therefore, in the present study, we evaluated the freezing of Wharton's jelly tissue and the post-thaw behavior of WJMSCs following an optimal cryopreservation protocol using programmed slow freezing with a modified cryoprotectant, which we have earlier used successfully for cryopreserving human dental follicle tissue.

CHEMICALS AND MEDIA

Unless otherwise specified, all chemicals and media were purchased from Sigma (St. Louis, MO) and Gibco (Life Technologies, Burlington, ON, Canada), respectively.

HARVESTING AND FREEZING OF FRESH WHARTON'S JELLY (WJ) TISSUE

After obtaining informed consent under approved medical guidelines set by GNUH IRB-2012-09-004, human umbilical cords (n = 5) from both sexes were obtained from full-term births by either caesarean section or normal vaginal delivery. The umbilical cords (UC) were collected in sterile containers containing Dulbecco's phosphate-buffered saline (D-PBS) and transferred to the laboratory on ice within 2 h. The UC was cut into approximately 2 cm pieces and thoroughly washed with D-PBS containing 1% penicillin-streptomycin (10,000 IU and 10,000 μg/ml, respectively; Pen-Strep, Gibco) to remove adherent blood. The UC pieces were then cut open lengthwise with sterile forceps and curved scissors (Solco Biomedical™, Pyeongtaek, Korea). After excising both arteries and the vein, the pure gelatinous WJ tissue was separated and transferred to cryovials (Thermo Scientific, Roskilde, Denmark) for freezing. The experimental groups were: fresh (Control); the conventional method with 10% DMSO diluted in advanced Dulbecco's modified Eagle's medium (ADMEM) supplemented with 10% fetal bovine serum (FBS) (Conv-DMSO); the programmed method with the same 10% DMSO solution (Prog-DMSO); the conventional method with a cocktail solution consisting of 0.05 M glucose, 0.05 M sucrose, and 1.5 M ethylene glycol in PBS (Conv-Cock); and the programmed method with the same cocktail solution (Prog-Cock). All experimental groups contained approximately the same amount of WJ tissue in each replicate. Two different freezing methods were followed.
In the conventional method (Conv), WJ tissue in 1.8 ml cryovials containing 1 ml of the respective cryoprotectant was cooled at approximately −1°C/min from 25°C to −80°C in a freezing container (Nalgene, Rochester, NY) and then plunged directly into liquid nitrogen (−196°C) for at least 3 months. In the programmed method (Prog), WJ tissue in 1.8 ml cryovials containing 1 ml of the respective cryoprotectant was cooled at pre-set freezing rates in a programmable controlled-rate freezer (Kryo 360, Planer Ltd, Middlesex, UK). Briefly, tissues were equilibrated for 30 min at 1°C, then cooled following the programmed protocol in order: 1°C to −9°C at a rate of −2°C/min; −9°C to −9.1°C, held for 5 min; −9.1°C to −40°C at a rate of −0.3°C/min; then −40°C to −140°C at a rate of −10°C/min. The cryovials were then immediately plunged into liquid nitrogen (−196°C) and stored for at least 3 months.

THAWING, ISOLATION, AND CULTURE OF WJMSCs

After 90 days of storage in liquid nitrogen, the cryopreserved WJ tissues were thawed by immersion in a circulating water bath at 37°C and thoroughly washed twice with ADMEM supplemented with 10% FBS and 1% Pen-Strep by centrifugation at 500g for 5 min to remove the cryoprotectants. WJMSCs were isolated as previously described [Chao et al., 2008] with minor modifications. Briefly, the WJ tissues from all groups were minced and digested with DPBS containing 1 mg/ml collagenase type I at 37°C for 15 min with gentle agitation to loosen the gelatinous mesenchymal matrix and dislodge the interspersed MSCs. The digested tissue was then passed sequentially through 100 μm and 40 μm nylon cell strainers (BD Falcon, MA) to obtain a single cell suspension, after the enzyme had been inactivated by adding ADMEM containing 30% FBS. The cell suspension was centrifuged at 500g for 5 min, and the pellet was reconstituted and cultured in ADMEM supplemented with 10% FBS at 37°C in a humidified atmosphere of 5% CO2 in air, with the culture medium changed every 3 days. Once cells reached 70% confluence, they were trypsinized using 0.25% trypsin-ethylenediaminetetraacetic acid (EDTA) solution and further expanded. WJMSCs isolated from WJ tissue without undergoing cryopreservation are herein referred to as the Fresh or Control group. In the present study, passage 3 WJMSCs from each experimental group were used in all experiments unless otherwise specified.

POST-THAW MORPHOLOGY OF WJMSCs

The morphology of WJMSCs was analyzed under a light microscope in primary culture and upon passaging in all experimental groups. Images were taken at 100× magnification (Nikon DIAPHOT 300, Japan).

CELL SURVIVAL, CELL RECOVERY, AND GROWTH CHARACTERISTICS OF WJMSCs

After isolating cells from both fresh and cryopreserved WJ tissue, they were stained with propidium iodide (PI) for dead cells and Hoechst 33342 for all cells, as previously reported. The stained cells were observed using a fluorescence microscope (Nikon Eclipse Ti-U, Nikon Instruments, Tokyo, Japan), and the cell survival rate was calculated for each experimental group. To evaluate the total number of viable cells recovered from fresh and cryopreserved WJ tissue, the isolated WJMSCs were stained with 0.4% Trypan blue (Sigma-Aldrich Corp., St. Louis, MO) for 1 min at room temperature; live cells were then counted using a hemocytometer, and the result was expressed as the number of cells recovered per cm of umbilical cord.
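The programmed freezing protocol above can be summarized as a time-temperature schedule. The sketch below reconstructs it from the stated rates and set points; the helper code itself is ours and is not part of the Kryo 360 programming.

```python
# Illustrative reconstruction of the programmed freezing schedule above.
# Rates and set points come from the text; the table-builder is ours.

steps = [
    ("hold",  1.0,    30.0),   # equilibrate at 1 °C for 30 min
    ("ramp",  -9.0,   -2.0),   # 1 -> -9 °C at -2 °C/min
    ("hold",  -9.1,    5.0),   # ~-9 °C, held for 5 min
    ("ramp",  -40.0,  -0.3),   # -9.1 -> -40 °C at -0.3 °C/min
    ("ramp",  -140.0, -10.0),  # -40 -> -140 °C at -10 °C/min
]

t, temp = 0.0, 1.0
print(f"{'t (min)':>8} {'T (°C)':>8}")
for kind, target, value in steps:
    if kind == "hold":
        t += value                        # value is the hold time in min
    else:
        t += abs((target - temp) / value) # value is the rate in °C/min
    temp = target
    print(f"{t:8.1f} {temp:8.1f}")
# vials are then plunged into liquid nitrogen (-196 °C) for storage
```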
To compare the growth characteristics of WJMSCs isolated from fresh and cryopreserved tissue, the plating efficiency, population doubling time (PDT), and saturation density were measured. The colony-forming ability (plating efficiency) was evaluated as previously described [Choudhery et al., 2012]. Briefly, at passage 1, WJMSCs in each experimental group were seeded in triplicate in 25 cm² culture flasks at 20 cells per cm² and propagated in ADMEM supplemented with 10% FBS for 14 days. After 2 weeks, the resulting colonies were fixed with methanol and stained with crystal violet (0.1%). Colonies with >30 cells were counted manually under a microscope by two independent observers, and the plating efficiency was calculated as: (number of colonies counted / number of cells initially plated) × 100. The proliferation rate of WJMSCs was evaluated by the PDT. Briefly, WJMSCs from all experimental groups were seeded at 2 × 10³ cells per well in triplicate in 24-well culture plates. Cells were propagated for up to 14 days and cell numbers were recorded at 2-day intervals. PDT was calculated using the formula PDT = t × log2/(log Nt − log N0), where t is the culture time and N0 and Nt are the initial and final cell numbers before and after culture, respectively. The saturation density of WJMSCs in each experimental group was determined in triplicate as previously described [Choudhery et al., 2013b]. Briefly, at passage 1, cells were trypsinized, counted, and replated in 25 cm² culture flasks at a final concentration of 1000 cells per square centimeter. The cells were observed daily under a microscope until confluent and were counted every other day using a hemocytometer until cell numbers ceased to increase.

FLOW CYTOMETRY

WJMSCs were analyzed for the expression of surface antigens and for DNA content using a flow cytometer (BD FACS Calibur; Becton Dickinson, NJ) in triplicate from three independent experiments. For phenotyping of surface antigens, WJMSCs were harvested using 0.25% trypsin-EDTA and fixed in 3.7% formaldehyde solution. The cells were washed twice with DPBS and labeled (1 × 10⁵ cells per marker) with fluorescein isothiocyanate (FITC)-conjugated CD34 (BD Pharmingen, CA, FITC mouse anti-human CD34), CD45 (Santa Cruz Biotechnology, FITC mouse anti-human CD45), and CD90 (BD Pharmingen, FITC mouse anti-human CD90), and with unconjugated CD73 (Santa Cruz Biotechnology, mouse monoclonal) and CD105 (Santa Cruz Biotechnology, mouse monoclonal IgG2a) for 30 min. Unconjugated primary antibodies were treated with secondary FITC-conjugated goat anti-mouse IgG (BD Pharmingen) for 30 min in the dark. Mouse IgG1 (BD Pharmingen) was used as the isotype-matched negative control. A total of 10,000 labeled cells per sample were acquired, and results were analyzed using Cell Quest Pro software (Becton Dickinson). For evaluating DNA content, a total of 1 × 10⁶ cells/ml were fixed in 70% ethanol at 4°C for 4 h. After two washes with DPBS, cells were stained with 10 μg/ml PI solution for 15 min. The DNA content of each cell was measured and assigned to the G0/G1, S, or G2/M phase of the cell cycle.
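The growth metrics defined above reduce to two formulas: plating efficiency = (colonies counted / cells plated) × 100, and PDT = t × log2/(log Nt − log N0). A small sketch with invented example numbers:

```python
# The growth metrics above, expressed as code. Formulas follow the text;
# the example colony count and cell numbers are invented for illustration.

import math

def plating_efficiency(colonies: int, cells_plated: int) -> float:
    return 100.0 * colonies / cells_plated

def population_doubling_time(t_hours: float, n0: float, nt: float) -> float:
    return t_hours * math.log(2) / (math.log(nt) - math.log(n0))

print(f"PE: {plating_efficiency(38, 500):.1f} %")
# e.g. 2e3 cells growing to 1.5e5 over 14 days (336 h) gives PDT ~ 54 h,
# of the order reported for the Fresh and Prog-Cock groups.
print(f"PDT: {population_doubling_time(336, 2e3, 1.5e5):.1f} h")
```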
DETECTION OF RECURRENCE OF APOPTOSIS

To detect the possible recurrence of apoptosis due to cryoinjury in WJMSCs isolated from post-thaw WJ tissue, a well-established annexin V apoptosis assay was conducted by quantitative flow cytometry using the FITC Annexin V Apoptosis Detection Kit I (BD Pharmingen, CA). Briefly, both detached and attached cells at passage 1 were pooled, harvested by trypsinization (0.25% trypsin), washed twice with cold DPBS, resuspended in 1X binding buffer, and stained with annexin V-FITC and PI for 15 min at room temperature in the dark; an additional 400 μl of 1X binding buffer was then added, and the cells were analyzed by flow cytometry within 1 h. Cell viability and apoptosis/necrosis were assessed on a FACSCalibur flow cytometer (Becton Dickinson, NJ) using 488-nm laser excitation with fluorescence emission at 530 nm (FL1) and >575 nm (FL3). A total of 15,000 cells per sample were acquired in triplicate from three independent experiments using Cell Quest Pro software (Becton Dickinson). Linear amplification was used for forward- and side-scatter measurements, and logarithmic amplification for all fluorescence measurements. The fluorescent dot plots contain three cell populations: live (annexin V-FITC-negative/PI-negative), necrotic (annexin V-FITC-positive/PI-positive), and apoptotic (annexin V-FITC-positive/PI-negative). Quadrant analysis was performed on the gated fluorescent dot plot to quantify the percentages of live, necrotic, and apoptotic cells, with the quadrant positions set according to the non-cryopreserved sample (Control/Fresh).

IN VITRO MESENCHYMAL LINEAGE DIFFERENTIATION

WJMSCs from both fresh and cryopreserved groups were evaluated for in vitro differentiation into osteogenic, adipogenic, and chondrogenic lineages following previously published protocols [Patil et al., 2014]. Briefly, cells were cultured in ADMEM supplemented with lineage-specific constituents for 21 days, with media changed every 3 days. Osteogenic medium comprised 0.1 μM dexamethasone, 50 μM ascorbate-2-phosphate, and 10 mM glycerol-2-phosphate; osteogenesis was confirmed by alizarin red and von Kossa staining. Adipogenic medium comprised 1 μM dexamethasone, 10 μM insulin, 100 μM indomethacin, and 500 μM isobutylmethylxanthine (IBMX); adipogenesis was confirmed by the accumulation of lipid droplets stained with Oil Red O solution. Chondrogenesis was induced using a commercial chondrogenic medium (StemPro Osteocyte/Chondrocyte Differentiation Basal Medium with StemPro Chondrogenesis Supplement, Gibco by Life Technologies) and evaluated by Alcian blue and Safranin O staining.

IN VITRO HEPATOGENIC DIFFERENTIATION

The ability of WJMSCs isolated from fresh and cryopreserved tissues to differentiate towards the hepatogenic lineage was evaluated as previously described [Patil et al., 2014]. Briefly, after reaching 70% confluence in ADMEM supplemented with 10% FBS, cells were cultured in hepatocyte priming medium consisting of ADMEM supplemented with 2% FBS and 20 ng/ml recombinant human hepatocyte growth factor (HGF, R&D Systems, Inc., MN) for 7 days. Primed cells were then cultured in hepatocyte maturation medium consisting of ADMEM supplemented with 2% FBS, 10 ng/ml oncostatin M (R&D Systems, Inc.), 10 nmol/l dexamethasone, and 1% insulin-transferrin-selenium mix (ITS mix) for 15 days. Both priming and maturation media were changed every other day.
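The quadrant logic of the annexin V-FITC/PI assay described above can be expressed as a simple classifier. The thresholds and example events below are invented placeholders; in practice the quadrants were set from the non-cryopreserved control.

```python
# Sketch of the quadrant classification used above for events on the
# annexin V-FITC (FL1) / PI (FL3) dot plot. Cut-offs are hypothetical.

def classify(annexin: float, pi: float,
             annexin_cut: float = 1e2, pi_cut: float = 1e2) -> str:
    if annexin < annexin_cut and pi < pi_cut:
        return "live"        # annexin-/PI-
    if annexin >= annexin_cut and pi < pi_cut:
        return "apoptotic"   # annexin+/PI-
    if annexin >= annexin_cut and pi >= pi_cut:
        return "necrotic"    # annexin+/PI+
    return "other"           # annexin-/PI+ (damaged cells/debris)

events = [(20, 15), (500, 30), (800, 900), (10, 400)]
for a, p in events:
    print(f"FL1={a:4d} FL3={p:4d} -> {classify(a, p)}")
```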
Control cultures were maintained in parallel with the differentiation experiments.

REAL-TIME POLYMERASE CHAIN REACTION (RT-PCR)

The expression of transcription factors, apoptosis-related genes, and lineage-specific marker genes was analyzed by RT-PCR in triplicate from three independent experiments. Total RNA was isolated using the RNeasy Mini Kit (Qiagen, Valencia, CA) from control or induced WJMSCs from all experimental groups. A total of 2 μg RNA was used to synthesize complementary DNA (cDNA) using the Omniscript RT Kit (Qiagen) with an oligo-dT primer; the reaction was carried out at 37°C for 60 min. Real-time PCR was carried out on a Rotor-Gene Q (Qiagen) using the Rotor-Gene SYBR Green PCR Kit (Qiagen). A total of 50 ng cDNA was added to 12.5 μl SYBR Green mix, 5.5 μl RNase-free water, and 1 μl each of forward and reverse primers at 1 pM (final volume 25 μl). The assay was performed with initial denaturation at 95°C for 10 min, followed by 40 PCR cycles of 95°C for 10 s, 60°C for 6 s, and 72°C for 4 s, then a melting curve from 60°C to 95°C at 1°C/s and cooling at 40°C for 30 s, according to the manufacturer's protocol. CT values and melting curves of each sample were analyzed using Rotor-Gene Q Series software (Qiagen). The PCR products were evaluated by 1.5% agarose gel electrophoresis, and images were analyzed using Zoom Browser EX 5.7 software (Canon). YWHAZ (tyrosine 3-monooxygenase/tryptophan 5-monooxygenase activation protein, zeta polypeptide) was used as the housekeeping gene for normalization of the data. The relative level of target gene expression was calculated by the 2^(−ΔΔCT) method. The primers used are listed in Table I.

PERIODIC ACID-SCHIFF (PAS) STAINING

WJMSCs differentiated to hepatocytes were evaluated for their glycogen storage ability using PAS staining. Briefly, both control and differentiated cells were fixed in 3.7% formaldehyde for 30 min, treated with the oxidizing agent 1% periodic acid for 5 min at room temperature, rinsed three times with distilled water, and treated with Schiff's reagent for 15 min at room temperature. Finally, cells were rinsed with distilled water for 5-10 min, counterstained with Mayer's hematoxylin for 30 s, and washed with distilled water. Glycogen storage was observed under a light microscope.

UREA ASSAY

After incubating both control and differentiated cells in culture medium supplemented with 1 mM ammonium chloride (NH4Cl) for 24 h, culture supernatants were collected and centrifuged at 300g for 5 min, and urea levels were measured in 96-well plates at 570 nm according to the manufacturer's instructions (Abcam, Cambridge, MA). Fresh culture medium supplemented with 1 mM NH4Cl was used as a control.

LOW-DENSITY LIPOPROTEIN (LDL) UPTAKE ASSAY

The uptake of LDL by hepatocyte-differentiated WJMSCs was evaluated using DiI-AcLDL (low-density lipoprotein from human plasma, acetylated, DiI complex; Thermo Fisher Scientific, MA). Briefly, cells were incubated in serum-free DMEM-LG supplemented with 10 μg/ml DiI-AcLDL for 4 h at 37°C, then washed and visualized under a fluorescence microscope.

STATISTICAL ANALYSIS

Statistical differences between experimental groups were analyzed by one-way ANOVA using SPSS 21.0. For multiple comparisons, Tukey's test was performed, and data are presented as mean ± standard error of the mean (SEM) for each sample measured in triplicate from three independent experiments.
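The relative quantification described above uses the 2^(−ΔΔCT) method, normalizing each target to YWHAZ and expressing it relative to the Fresh group. A minimal sketch with invented Ct values:

```python
# The relative-quantification arithmetic above (2^-ddCt). All Ct values
# below are invented for illustration; the housekeeping gene is YWHAZ.

def ddct_fold_change(ct_target_sample, ct_ref_sample,
                     ct_target_control, ct_ref_control):
    dct_sample = ct_target_sample - ct_ref_sample     # normalise to YWHAZ
    dct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(dct_sample - dct_control)         # fold change vs control

# e.g. a pro-apoptotic target in Conv-Cock vs Fresh, both normalised:
fold = ddct_fold_change(ct_target_sample=24.1, ct_ref_sample=18.0,
                        ct_target_control=25.6, ct_ref_control=18.2)
print(f"fold change vs Fresh: {fold:.2f}")
```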
Results were considered significant when P < 0.05.

SURVIVAL RATE, CELL RECOVERY, POST-THAW MORPHOLOGY, AND GROWTH CHARACTERISTICS OF WJMSCs

The survival rates of WJMSCs isolated from fresh or cryopreserved WJ tissue were 93.63 ± 3.32%, 37.70 ± 4.10%, 44.60 ± 2.14%, 61.60 ± 4.57%, and 70.77 ± 2.52% in the Fresh, Conv-DMSO, Prog-DMSO, Conv-Cock, and Prog-Cock groups, respectively (Fig. 1A). Significantly (P < 0.05) higher cell survival was achieved with the programmed freezing method than with the conventional method. Further, the cocktail cryoprotectant gave better cryoprotection of WJ tissue than DMSO. The total numbers of live cells recovered per cm of umbilical cord from fresh or cryopreserved WJ tissue following WJMSC isolation, prior to culture, were 4.24 ± 5.3 × 10⁶, 1.52 ± 6.1 × 10⁶, 1.80 ± 3.8 × 10⁶, 2.42 ± 5.4 × 10⁶, and 3.12 ± 4.6 × 10⁶ in the Fresh, Conv-DMSO, Prog-DMSO, Conv-Cock, and Prog-Cock groups, respectively. Based on these results, only cryopreservation with the cocktail solution (Conv-Cock and Prog-Cock) was chosen for subsequent experiments, in order to address the feasibility of completely eliminating animal serum and DMSO from WJ tissue cryopreservation.

WJMSCs from post-thaw WJ tissue in primary culture formed colonies of adherent cells with fibroblastic spindle-like morphology by day 5, which became completely confluent by day 12. However, cells isolated from the Conv-Cock group showed retarded growth and fewer cell clumps on day 5 (Fig. 1B). Total cell numbers on day 12 of primary culture were 5.5 ± 1.2 × 10⁶, 5.3 ± 1.8 × 10⁶, and 2.7 ± 1.4 × 10⁶ in the Fresh, Prog-Cock, and Conv-Cock groups, respectively. WJMSCs from both fresh and cryopreserved WJ tissue grew into colonies when plated at low numbers in 25 cm² culture flasks to evaluate their plating efficiency. Cells from the Conv-Cock group showed reduced colony-forming ability compared with the Fresh and Prog-Cock groups (Fig. 2A). Analysis of PDT showed that the proliferative capacity was comparable between cells isolated from fresh and Prog-Cock WJ tissue, whereas cells from the Conv-Cock group showed significantly (P < 0.05) reduced proliferative capacity; doubling times were 54.0 ± 2.4 h, 54.47 ± 3.1 h, and 61.36 ± 1.9 h for the Fresh, Prog-Cock, and Conv-Cock groups, respectively (Fig. 2B and C). The saturation density of WJMSCs in all experimental groups was reached by day 12 of culture, with 812,000 ± 14.3, 794,000 ± 12.1, and 452,000 ± 18.2 cells at saturation in the Fresh, Prog-Cock, and Conv-Cock groups, respectively (Fig. 2D). By FACS analysis, the Conv-Cock group differed significantly (P < 0.05) in all phases of the cell cycle from the Fresh and Prog-Cock groups (Fig. 2E). The proportions of cells in G0/G1 phase were 66.31 ± 1.9%, 69.26 ± 2.7%, and 82.77 ± 1.4%; in S phase 23.7 ± 2.8%, 19.73 ± 4.1%, and 14.93 ± 2.2%; and in G2/M phase 9.99 ± 1.1%, 11.01 ± 2.6%, and 2.3 ± 4.6% in the Fresh, Prog-Cock, and Conv-Cock groups, respectively.

RECURRENCE OF APOPTOSIS

The possible recurrence of post-thaw apoptosis due to cryoinjury was evaluated using the annexin V-FITC assay. The results showed significantly (P < 0.05) greater apoptosis signals for cells from the post-thaw Conv-Cock group compared with cells from the Fresh and post-thaw Prog-Cock groups (Fig. 2F).
EXPRESSION OF CELL SURFACE ANTIGENS

Flow cytometric analysis of WJMSCs from both fresh and cryopreserved groups showed that they were negative for CD34 and CD45 and positive for CD73, CD90, and CD105, with no significant differences in CD marker expression between cells isolated from fresh and cryopreserved WJ tissue, except that cells from the Conv-Cock group showed significantly (P < 0.05) reduced expression of CD73 (Fig. 3).

IN VITRO MESENCHYMAL LINEAGE DIFFERENTIATION

Upon in vitro differentiation under specific conditions in lineage-specific differentiation media, WJMSCs from both fresh and cryopreserved groups were able to differentiate into mesenchymal lineages (osteogenic, adipogenic, and chondrogenic). The formation of mineralized nodules upon osteogenic induction was shown by alizarin red and von Kossa staining (Fig. 4A).

IN VITRO HEPATOGENIC DIFFERENTIATION

To determine whether cryopreservation affects the transdifferentiation ability of WJMSCs isolated from fresh and cryopreserved WJ tissue, WJMSCs were grown to 70% confluence before hepatogenic induction. Upon induction, the fibroblast-like morphology of WJMSCs gradually changed to a flattened shape during the priming stage of differentiation. We observed islands of adherent round or polygonal cells surrounded by spindle-shaped MSCs during the first week of the maturation step. The change in cell morphology became more obvious during the second week of maturation, when WJMSCs displayed hepatocyte-like morphology in all experimental groups (Fig. 4B). Differentiated cells expressed the hepatocyte-specific markers AFP, ALB, and HNF4A at the mRNA and protein levels, although at different expression levels (Figs. 5D and 8A,B). Additionally, immunocytochemical analysis revealed that the morphological changes observed during differentiation were accompanied by consistent expression of ALB and HNF-1a in all experimental groups of MSCs (Fig. 5B). The functional characteristics of the hepatocyte-like cells (HLCs) were evaluated by PAS staining, LDL uptake, and urea synthesis assays. The glycogen storage capacity of differentiated WJMSCs was analyzed by PAS staining: undifferentiated WJMSCs (day 0) in all experimental groups showed a low level of PAS staining, whereas after exposure to the hepatogenic medium, cells displayed pronounced positive staining of glycogen granules in their cytoplasm at the end of maturation (Fig. 4C). In addition, differentiated cells showed a capacity to accumulate low-density lipoprotein (LDL) (Fig. 4E), whereas undifferentiated cells (day 0) did not show this ability (data not shown). Finally, ureogenesis was analyzed to assess the metabolic capacity of differentiated WJMSCs to detoxify ammonia to less toxic urea. Differentiated cells produced 4.5- to 4.8-fold more urea than the control medium (P < 0.05). The capacity for urea synthesis of differentiated cells was comparable between the Fresh and Prog-Cock groups, whereas cells from the Conv-Cock group showed significantly (P < 0.05) reduced urea synthesis compared with both (Fig. 4D). Taken together, these data suggest that WJMSCs isolated from both fresh and cryopreserved (Conv-Cock and Prog-Cock) tissue can commit towards functional hepatocyte-like cells upon hepatogenic induction, albeit with varying capacity.
IMMUNOCYTOCHEMISTRY

Immunocytochemistry was performed to visualize the localization of transcription factors and hepatocyte lineage-specific markers at the protein level. Nuclear expression of Oct-3/4, Sox-2, and Nanog was detected in most of the cells isolated from both fresh and cryopreserved groups; however, Nanog was also occasionally observed in the cell cytoplasm (Fig. 5A). After differentiation into the hepatocyte lineage, WJMSCs displayed cytoplasmic expression of human serum albumin (ALB) and predominantly nuclear expression of hepatocyte nuclear factor 1-alpha (HNF-1α) (Fig. 5B).

RT-PCR AND WESTERN BLOTTING

Total RNA and protein were isolated from WJMSCs at passage 3 in all experimental groups. No significant change was observed in the expression of the transcription factors OCT4, SOX2, and NANOG between the Fresh and cryopreserved groups (Fig. 6A and C). The expression of the pro-apoptotic factors BAX, p53, and p21 was significantly (P < 0.05) elevated, whereas the expression of the anti-apoptotic factor BCL2 was significantly (P < 0.05) reduced, in WJMSCs isolated from the Conv-Cock group compared with WJMSCs from the Fresh and Prog-Cock groups (Fig. 6B and D). The BAX/BCL2 ratio was 1.35 ± 0.36% and 2.26 ± 0.54% in WJMSCs isolated from the Prog-Cock and Conv-Cock groups, respectively. Similar expression patterns were observed by western blot at the protein level for both transcription- and apoptosis-related factors (Fig. 5C). The mRNA levels of osteocyte, adipocyte, chondrocyte, and hepatocyte lineage-specific markers increased significantly (P < 0.05), by about 2.6- to 10.4-fold, 2.9- to 6.9-fold, 4.7- to 10.2-fold, and 1.8- to 4.4-fold, respectively, after differentiation in all experimental groups (Figs. 7 and 8). No significant differences were observed in the expression of adipocyte and chondrocyte lineage-specific markers among the experimental groups. However, WJMSCs isolated from the Conv-Cock group showed significantly (P < 0.05) reduced expression of osteocyte and hepatocyte lineage-specific markers after differentiation compared with WJMSCs from the Fresh and Prog-Cock groups (Figs. 7 and 8).

DISCUSSION

Wharton's jelly is a promising tissue source of MSCs for both autologous and allogeneic applications. Although several studies have reported the successful cryopreservation of WJMSCs, cryopreservation of Wharton's jelly tissue as a whole, instead of WJMSCs, has several advantages, as described in the Introduction. DMSO is the most commonly used cryoprotectant, usually together with FBS. However, the use of these two components in the cryosolution impedes the clinical utility of MSCs. In this study, we demonstrate that Wharton's jelly tissue can be cryopreserved using a DMSO- and serum-free cocktail cryosolution comprising 0.05 M glucose, 0.05 M sucrose, and 1.5 M ethylene glycol in PBS, in conjunction with the programmed slow-freezing method (Prog-Cock) developed in our laboratory. In this study, the lowest cell survivability was observed when 10% DMSO supplemented with 10% FBS was used as the CPA, compared with the cocktail solution (Cock). The reason for this reduction in cell survivability may be multifactorial, but the general rationale is that, during cryopreservation, structured multicellular tissues and simple cell suspensions may respond differently to cryoprotective agents (CPAs), cooling, warming, and dehydration.
Tissues are generally considered more resistant to cold shock than single cells because of their structural complexity. However, the use of DMSO as a CPA may increase the sensitivity of certain tissues to cold shock upon cooling before freezing [Morris et al., 1983; Morris, 1987].

Fig. 3. Flow cytometric analysis of the expression of surface markers by WJMSCs in the Fresh, Prog-Cock, and Conv-Cock groups. WJMSCs were negative for CD34 and CD45 and positive for CD73, CD90, and CD105. Significant differences were considered when P < 0.05.

Further, the optimal CPA concentration may also differ between simple cell suspensions and complex tissues. Therefore, the DMSO concentrations typically employed for cell preservation (8-20%) may not be sufficient for tissue preservation, since they may not penetrate deeply enough into the tissue to limit intracellular ice formation. Based on these previous observations, we speculate that the Wharton's jelly tissue might have undergone cold shock when DMSO was used as the CPA, or that it may require a higher DMSO concentration than was used in this study. Further, we cannot completely exclude the possibility that remnant DMSO in the freeze-thawed WJ tissue, even after washing, exerted toxicity resulting in poor cell viability; the washing step therefore plays an important role in removing DMSO from these complex tissues and may require additional washing procedures. The deleterious effect of DMSO due to cold shock may possibly be circumvented by using combinations of two or more cryoprotectants, resulting in an additive or synergistic enhancement of cell survival while reducing cytotoxicity. Although it has been demonstrated that increasing the DMSO concentration to 6 M can result in higher cell survivability of porcine articular cartilage [Jomha et al., 2004], such a high DMSO concentration may exert considerable cytotoxicity and may not be a better option for clinical use. Therefore, in the present study we did not attempt either to increase the DMSO concentration or to supplement DMSO with other CPAs. Instead, a cocktail solution combining 0.05 M glucose, 0.05 M sucrose, and 1.5 M ethylene glycol in PBS was used as the CPA, which we had previously used in our laboratory for the cryopreservation of human dental follicle tissue. The rate of cell survivability in WJ tissue cryopreserved with the cocktail solution was similar to our earlier report, albeit with minor variations. It has previously been reported that cryosolutions containing both non-permeating and permeating CPAs appear to be more advantageous than solutions containing only permeating CPAs [Shaw et al., 2000]. The present study likewise demonstrated that the cocktail solution greatly enhanced the protection of WJ tissue during cryopreservation, presumably through a synergistic mechanism: the permeating CPA ethylene glycol may have protected the cells against freezing injury by reducing ice formation inside and outside the cells, whereas the non-permeating CPAs glucose and sucrose may have dehydrated the cells and extracellular matrix, thereby reducing the amount of water present before freezing. In addition, glucose and sucrose may also have contributed to the stabilization of cellular membranes and proteins during freezing and drying. In general, biological systems are greatly influenced by the cooling rate during cryopreservation.
Each system tends to have its own specific optimal cooling rate, with decreased survival at cooling rates that are too low (slow-cooling damage) or too high (fast-cooling damage) [Mazur et al., 1972]. At very slow cooling rates, cryoinjury occurs through solution effects (i.e., solute concentration and severe cell dehydration), whereas at high cooling rates cryoinjury occurs through lethal intracellular ice formation. The optimal cooling rate therefore falls in a range that is neither too fast nor too slow. In this context, a programmed freezing protocol may provide improved cryoprotection for cells and tissues. We therefore compared a programmed freezing protocol, previously optimized in our laboratory for the cryopreservation of dental follicle tissue, with the conventional method (−1°C/min). The present study demonstrated improved cell survivability with the programmed freezing method compared with the conventional method for both CPAs. This improved cell survivability during programmed freezing may be due to factors such as hold-time and plunging temperature. A suitable hold-time is required for any cryopreservation protocol, as the cryoprotectant cannot permeate the cell membrane if the hold-time is too short, whereas it may exert chemical toxicity if the cells are exposed to it for too long; a suitable hold-time therefore allows the cryoprotectant to permeate the cell membrane without overexposing the cells to it. As this study focused mainly on the use of a DMSO- and serum-free cryosolution, and because of the reduced WJMSC survivability noted in WJ tissue cryopreserved with DMSO, only WJMSCs isolated from WJ tissue cryopreserved with the cocktail solution (Conv-Cock and Prog-Cock) were further characterized to evaluate the effect of cryopreservation on their basic stem cell characteristics. Most studies report maintenance of the morphological and functional characteristics of stem cells even after cryopreservation. However, the safe cryopreservation of stem cells and its efficacy for clinical use depend on several factors, such as freezing temperature, freezing rate, freezing duration, the cryoprotectant used, thawing, and removal of the cryoprotectant. Moreover, the success of any stem cell therapy often depends on repeated transplantations and therefore relies on the freezing and storage of cells. For instance, in patients with chronic heart failure or ischaemic heart disease, chronically failing hearts with no recent infarct may not respond to MSCs the first time cells are injected into the ischaemic area, and more than one injection may be necessary to obtain better results [Lee et al., 2004]. It is therefore crucial to evaluate the possible effect of any freezing protocol on changes in MSC phenotype and functional characteristics. In this study, colonies of adherent, fibroblastic spindle-like WJMSCs were observed on day 5 from both fresh and cryopreserved WJ tissue. However, the decrease in cell recovery and the altered biological characteristics of WJMSCs appeared to be directly related to the freezing rate, since WJMSCs from WJ tissue stored using the conventional method (−1°C/min) showed retarded post-thaw cell growth and reduced cell clumps on day 5, resulting in lower cell recovery in primary culture. Moreover, in these cells the effect of the freezing injury appeared to be irreversible even after in vitro propagation, as indicated by reduced colony-forming ability and a prolonged doubling time.
Although we did not find any significant post-thaw morphological changes in the cryopreserved groups compared with the fresh group, it has previously been reported that cryopreservation-induced morphological changes, such as extensive branching of cytoplasmic extensions, may affect the proportion of viable cells re-attaching to culture dishes [Heng, 2009]. Our findings suggest that the programmed freezing method better preserves the ability of post-thaw cells to maintain high colony-forming capacity. The conventional method of WJ tissue freezing with the cocktail cryosolution (Cock) may therefore require long-term post-thaw in vitro propagation to obtain adequate cell doses for clinical use, which in turn increases the risk of culture-induced epigenetic changes as well as bacterial and viral contamination. According to the guidelines of the International Society for Cellular Therapy (ISCT), mesenchymal stem cells should express CD105, CD73, and CD90; lack the expression of CD45 and CD34, CD14 or CD11b, CD79α or CD19, and HLA-DR surface molecules; adhere to a plastic surface when maintained under standard culture conditions; and differentiate into osteoblasts, adipocytes, and chondroblasts in vitro [Dominici et al., 2006]. In our study, WJMSCs retained their expression of CD markers after cryopreservation by both the conventional and the programmed freezing method, implying that the freezing rate had no effect on the integrity of the cells; however, the number of cells expressing CD73 was significantly reduced after conventional freezing. CD73 has been reported to play an important role in osteoblast differentiation [Takedachi et al., 2012]. The present study likewise indicated that the diminished expression of CD73 after conventional freezing might have reduced the propensity of WJMSCs toward osteoblast differentiation while maintaining adipogenic and chondrogenic differentiation ability, as indicated by lineage-specific mRNA marker expression. Further, conventional freezing of WJ tissue with the cocktail solution compromised the transdifferentiation ability of WJMSCs into hepatocyte-like cells compared with the programmed freezing method. In the present study, the expression of the early transcription factors OCT4, SOX2, and NANOG was not affected by cryopreservation of WJ tissue using either the conventional or the programmed freezing method. These transcription-related proteins were mainly localized to the nucleus, while NANOG was occasionally detected in the cell cytoplasm in all experimental groups; these results are in agreement with a previous report [Carlin et al., 2006]. However, it is not clear whether this subpopulation of cytoplasmic NANOG-positive cells increases the risk of in vivo tumorigenicity, since MSCs found in the cervical cancer stroma display cytoplasmic NANOG expression and can promote the progression of cervical cancer in vitro and in vivo [Gu et al., 2012]. The major limitation of using any freeze-thawed tissue for clinical purposes is attaining adequate numbers of viable cells. A significant number of cells lose their viability during freezing and thawing procedures as a result of cryopreservation-induced apoptosis [Schmidt-Mende et al., 2000]. However, immediate post-thaw cell viability is not a true measure of the efficacy of cryopreservation. Therefore, we further evaluated the possible recurrence of apoptosis due to cryoinjury in post-thaw cultured cells using the Annexin V-FITC assay.
The results showed significantly greater apoptosis signals for the conventional method than for the programmed method of freezing. However, we also found a small proportion of apoptotic cells among WJMSCs isolated from fresh WJ tissue; this could be due to the effect of passaging, since the assay was conducted on passage 1 cells. Further, the elevated expression of apoptosis-related factors at both the mRNA and protein levels in WJMSCs isolated from WJ tissue frozen by the conventional method suggests the occurrence of apoptosis due to cryoinjury, resulting in the loss of post-thaw cell survivability. The present study also raises the possibility of DNA damage in WJMSCs isolated from WJ tissue stored by the conventional method as a consequence of cryoinjury, given the higher expression of p53 and p21 at both the mRNA and protein levels together with the higher proportion of cells in the G0/G1 phase of the cell cycle. In conclusion, the cocktail solution (Cock) comprising 0.05 M glucose, 0.05 M sucrose, and 1.5 M ethylene glycol in PBS yielded higher post-thaw cell survivability in conjunction with the programmed method of freezing. This study also indicated that the typical concentration of DMSO (8-20%) used for the preservation of simple cell suspensions may not be sufficient for complex tissue preservation. Nevertheless, cocktail cryosolutions combining permeating and non-permeating CPAs could synergistically increase cryoprotection while reducing or completely eliminating the use of cytotoxic DMSO and xenogeneic serum components. The poor cell recovery, impaired growth characteristics, apoptosis, and loss of basic stem cell characteristics noted with the conventional freezing method suggest that the freezing rate also plays an important role in tissue cryopreservation, apart from the cryoprotectants used. Although the present study has demonstrated the feasibility of using a DMSO- and serum-free cryosolution for short-term WJ tissue banking with controlled-rate freezing in vitro, further studies are needed to evaluate the effect of this cryosolution on MSCs stored for longer periods of time and the in vivo efficacy of post-thaw cells.

Figure 7 (A-C). Significant differences were considered when P < 0.05.
9,858.8
2016-06-23T00:00:00.000
[ "Materials Science", "Medicine" ]
Management of reforming of housing-and-communal services

The article reviews international experience in reforming housing and communal services and analyzes the main scientific and methodological approaches to the systemic transformation of the housing sphere. The principal reform models are identified; the interaction of the participants in the structural-change process is characterized from the point of view of its commercial and social significance; advantages and shortcomings are revealed; and the elements of the reform models are assessed in terms of investment attractiveness, competitiveness, energy efficiency, and the social significance of the measures carried out.

Introduction

The essence of institutional housing-and-communal transformation consists in changing the elements of the institutional system, which can be done either by legal enforcement from the state (in a centralized way) or with the active involvement of economic agents (in a decentralized way). The main goal is to minimize transaction costs and to solve the problem of external effects. Differences in the level of economic development, industrialization, and innovativeness of economic systems have given rise to several scientific and methodological approaches to the systemic transformation of the housing sphere: the French-Scandinavian (Dutch) approach and the Anglo-American approach. Management by objectives (the French-Scandinavian/Dutch approach) is based on a combination of economic, organizational, administrative, and legal methods. This combination is aimed at introducing competition into housing and widening the circle of participants in the economic process. The basis of economic relations in this system is contractual obligations awarded on a competitive basis; state involvement becomes selective, but is not abandoned altogether [1]. The system of market regulation of housing was developed and subsequently implemented in the United States of America. At its core is a rapid transfer of the initiative for renewing and servicing the housing stock to private companies, so that the housing sphere attracts finance from the global and domestic capital markets for projects with purely market incentives. State interference in these projects is to be minimal and is justified only for low-income families (improving the quality of their housing while lowering its cost). The two options therefore differ in the institutional structures of the housing market that they create, which mediate the interaction of state and non-state (private) organizations and institutions and determine how effectively the market functions in terms of matching results (quality and quantity) with costs (financial, time, and personnel). Within the American approach, large commissions have been created at the federal, state, or local level, depending on the jurisdiction to which the branch in question belongs. For example, water supply is generally regulated at the local level, while power supply is regulated at the state level. In any case, jurisdictions usually partially overlap, and the federal government may also take part in regulation, for example when a dam built to supply water to a large city affects the development of the state in which it is located as well as neighbouring states.
These commissions are usually very large and have developed detailed rules for considering issues, including state audits with assessments by experts and counter-experts. Because they represent often conflicting interests, the decisions they reach are usually compromises and, as a result, satisfy none of the parties. The sluggishness and bureaucratic character of the process have always been criticized, and this largely explains the comparative discrediting of these bodies, which has steadily eroded their image over the last twenty years [2]. Partly in response to this lack of effectiveness, the English model of regulators was developed. The English government accompanied the main wave of privatization of municipal enterprises with the creation of regulators that were intended to be less bureaucratic and much more transparent than in the American model. To put this decision into practice, a single person was made responsible for the activity of each regulator in the telecommunications, gas supply, power supply, water supply, and rail transport sectors. The idea was to identify the responsible official clearly, grant him wide independence, but also impose on him full responsibility for the decisions taken, so that the consumer would know exactly whom to address with a complaint and whose decisions to challenge in case of disagreement [3].

Methodological approaches and analysis

The analysis of experience in carrying out reform of housing and communal services allows three models of reforming to be distinguished: 1. England and Chile followed the path of complete privatization of critical infrastructure. 2. In Germany, a scheme is applied in which the enterprises of the branch become joint-stock companies whose controlling stake is held by the municipality. 3. The "French model" combines municipal ownership of housing and communal services assets with their management by private business under long-term lease contracts and accompanying investment agreements. In the overall complex of problems of market transformation in the countries of Central and Eastern Europe, the housing sector at first found itself on the periphery of reforms. On the one hand, this is because all attention was concentrated on the priority tasks of liberalizing the economy and creating diverse forms of ownership, a labour market, a capital market, and other structures adequate to a market economy. On the other hand, in a number of states certain changes in housing had begun even before the transition to the market. The development of a market economy made a substantial revision of housing policy necessary in all countries of the region. Reforms of housing and communal services (HCS) everywhere became part of the economic transformation, but the specific ways in which they were implemented depended on financial capacity and on the general course of economic reform in each country. At the same time, there are features common to all post-socialist countries: shifts in the structure of sources of financing of housing construction in favour of the private sector; privatization and restitution of dwellings; and changes in the mechanisms of maintenance of the housing stock and of payment for housing and communal services, as well as in the mechanisms of social protection of the population with respect to the provision and maintenance of dwellings [4].
The English experience, where the reform of housing and communal services took 15 years, is very instructive and useful for Russia, which has long been undergoing reform with ineffective results. England moved gradually toward creating socially acceptable and commercially attractive conditions in the municipal sector. For example, in 1997, under the Labour government, a ban on disconnecting water supply for household customers was imposed, but a balancing decision was taken at the same time to include the amounts underpaid by customers in the operating tariff. An investment and tax model for involving households, housing complexes, and infrastructure in the reform was formulated. Another important lesson of the municipal reforms in England was the transfer and consolidation of hundreds of municipal water utilities, first into the ownership of ten regional state companies, followed by their privatization. Before privatization, the state wrote off all debts of the municipal enterprises, paid for bringing the assets into proper condition, carried out their certification, and placed them on the balance sheet. In most European countries, municipal infrastructure is not transferred to private ownership; remaining municipal, it is operated by private operators under concession contracts. After the reunification of Germany, 15 large regional energy plants were to be transferred to private power supply enterprises from West Germany in the eastern part of the country. The federal government considered that such a method of restructuring would provide stable financing of the new housing-and-communal infrastructure [5]. The approach in Germany is connected with the features of its constitution, under which considerable power belongs to the regional (Land) governments and to local authorities. The existence of this system resulted in the development of regulators at different levels of government. For water supply and wastewater enterprises, which are generally local monopolies, this does not create major problems. For more complex and larger systems, such as the electrical power supply system, there is a tangled web of instructions from different regulators, which makes the situation especially difficult for potential competitors. Under German legislation, all firm managers are obliged to undergo audit inspections, with the right to choose the auditing organization independently. The union of housing firms of the Länder Berlin and Brandenburg (Verband Berlin-Brandenburgischer Wohnungsunternehmen) is at the same time a branch association of housing firms and a consulting and auditing company for its members. The union maintains an electronic database on federal and Land legislation in the housing sphere, which is updated quarterly [6]. The French model is the most centralized. Most of the municipal enterprises were in state ownership until quite recently (the telecommunications branch opened to private investors and competition in 1998). Until the end of the 1980s these state corporations were directly subordinated to the ministry and their boards of directors were appointed largely by the government, so that political intervention was the rule.
In any case, this discretionary power was in fact strongly limited by the existence of large and competent bureaucratic departments at the level of the ministries, by the high competence of the managers of the state enterprises, and by the presence of influential interest groups (mainly trade unions) that checked their decisions. Nevertheless, political intervention slows down the normal decision-making process, and draft reforms creating regulators, in which the question of their degree of independence is discussed, have already been prepared. The situation in the water supply and wastewater branches is somewhat different, since the enterprises in these branches are local monopolies, largely controlled by local governments and operated in most cases under concession contracts (though there are state operators as well). But even in these branches, the decisions of local regulators are rigidly constrained by the rules of the game (the standardization of contracts and the rules for their acceptance are one example) set at the level of the central government. In countries with developed market economies and a long history of universal provision of utilities to consumers, including those experiencing economic difficulties, research has shown that there are opportunities for modernizing housing-and-communal infrastructure, improving the quality of services, reducing tariffs, and increasing overall consumer benefit [7]. The experience of institutional transformation in the Netherlands is of the greatest interest because, according to many authors, the conditions of development of the Russian economy correspond to the conditions under which this housing system was introduced. Three tasks formed the basis of the institutional transformation of the housing market: 1) care for, and provision of, the necessary housing; 2) protection of an acceptable quality of social housing (the concept of housing quality was changed to include the notion of a "liveable residential district"); 3) affordability of housing. Management by objectives defined the housing system whose foundation was laid in 1901 with the adoption of the Housing Act. Many legal experts call this document the most important in the process of improving the housing system, since it marked comprehensive changes: it signalled the transition to the regulation of private initiatives by the central government and by provincial and municipal authorities. The social problem was a priority component of the transformation of the housing system, and its solution therefore logically fell on the shoulders of local self-government. In this regard, the law established that care for housing for the poor was an obligation of the state: the central government took responsibility for providing subsidies for the construction of social rented housing, so that other ways of financing under the established rules were unavailable. The law introduced quality standards for such housing to ensure a long service life of the new housing stock, and obliged municipalities to monitor compliance with them on the basis of building regulations, thereby assigning municipalities one of the important functions of management.
For the first time, the rules of government intervention in the sphere of housing construction were legislatively approved, and the duties and responsibilities of all public and private parties operating in this area were defined. Since then, the central government, together with the provincial and local authorities, has played an ever-increasing role in the housing sphere [8]. Throughout the post-war period, the government gave priority to subsidizing operating costs in social rented housing, covering the difference between the actual expenses of landlords and the controlled level of rent. This made it possible to achieve the main task of that time: the volume of social housing construction became much higher than that of private construction. Centrally regulated construction thus allowed a large housing programme to be carried out to cover the housing deficit: from 1947 to 1992 the country's housing stock grew by 4 million dwellings, and the total number of houses and apartments reached 6 million, roughly equal to the number of households. The role of the state in providing housing for the whole population was also enshrined in the Constitution, paragraph 2 of Article 22 of which states that the promotion of sufficient housing for the population is the concern of the government. In general, the management of housing is a complex system in which individual decisions are made at various levels by the state and the private sector. The central government defines housing construction policy as a whole, setting out the key points by which all subordinate levels must be guided while protecting their own interests. At the level of the 12 provincial administrations, regional plans were developed that defined the location of new housing construction and infrastructure; municipalities subsequently used them when drawing up zoning plans. In most cities, municipalities own only the land under roads, squares, and parks. Any town-planning project therefore begins with the purchase of the land by the municipality from the private owner, its engineering preparation and servicing for construction, and its subsequent sale or long-term lease to a developer. The municipality disposes relatively freely of the budgetary grants received from the centre. The administrations of large cities (with a population of more than 30 thousand people) have the right to decide where to allocate the funds received: to the provision of housing for the lower-income population or to stimulating the construction of private owner-occupied housing; to covering the costs of developing unfavourable territories or to lowering rents in reconstructed central districts. Small municipalities are subject to stricter spending rules [9]. The municipality is given one of the most effective instruments of housing policy: the registration of persons in need and the allocation of social housing. Under the municipal law, the city authorities have the right to set quality standards and the quantity of social housing, and to distribute it on the basis of announcements in a special free newspaper that publishes brief information about all subsidized houses and apartments newly available on the local market. Housing corporations are a form of municipal association in housing and communal services typical of European countries; they operate according to the following rules:
1. they may use the profit earned to grant credits to less successful corporations; 2. they may spend funds on the construction of expensive owner-occupied housing; 3. the housing-and-communal complex is obliged to develop in advance an annual action plan consistent with municipal policy. The actions of housing corporations are corrected not only "from above" but also by public organizations uniting separate groups of the population according to social and demographic characteristics. In the 1980s, most of these organizations united in the Housing Confederation, which receives financial support from the government. The role of commercial landlords is small; they own less than a quarter of the housing. Large private landlords, pension funds, insurance companies, and banks own the better-quality and elite housing.

Results

In Russia, the application of the Dutch experience of housing management with the direct participation of housing companies has spread. Although the current state of the housing sphere is similar, the economic, financial, and social spheres directly connected with the housing system are at a higher level of development in the countries mentioned than in Russia, which allows the problems of housing and communal services to be solved more effectively. A large role is played here by the development of the banking sector and the financial market, the insurance market, the existence of state guarantees, the standard of living of the population, and so on [10]. Thus, the efficiency of the transformations carried out by the reform of housing and communal services can be achieved through the creation and implementation of: 1. investment-tax mechanisms aimed at making this sector of the economy attractive to business for the purposes of construction and reconstruction of housing-and-communal complexes, utility enterprises, and housing-and-communal infrastructure, with the participation of natural and/or legal persons, the establishment of public-private partnerships, or other legal forms in accordance with the law of the country of application; 2. economic incentives to improve the quality of works and services in the housing sector, in order to take the fullest possible account of the interests of the population in resolving questions of the housing-and-communal sector; 3. marketing support for the management of housing-and-communal complexes, utility enterprises, and housing and communal services; 4. effective innovative and technological management decisions; 5. affordable tariffs, with the possibility of choosing among them on the basis of the consumer's preferences and means; 6. obligations of a company carrying out activities in housing and communal services to obtain a license or to be a member of a self-regulatory organization, depending on the requirements of the legislation of the country in which the company is established and/or of the place where it operates; 7. systems of insurance and reinsurance; 8. systems of vocational training, professional development, and certification of personnel; 9. measures granting socially unprotected categories of citizens preferential terms for paying for housing and communal services; 10. measures to strengthen payment discipline; 11. protection of property rights (including when inseparable improvements are created) and guarantees of proper dispute resolution.
In addition, insufficient security of these rights increases the risks of participants in economic activity and limits their ability to respond positively to the housing reform undertaken. If the private sector is involved in providing utilities, then rights guaranteeing the possibility of receiving and freely using the income from effective activity must be assured. A high risk of expropriation of the quasi-rent by the government can make private investors unwilling to take concessions to manage the companies. This is especially important for the housing and communal services branches, since considerable long-term investment is required there. The problem also arises when "vital services" are provided by a municipal enterprise in state ownership: to create effective incentives for the manager and the worker, they must have the right to a share of the income from increases in productivity and quality of work [9]. The efficiency of the management system depends on the regulation of relations in the housing and communal services market, and not only on a single administration or ministry. Discretionary changes to agreements (for example, changes to some of the objectives specified in the contract, or to its duration) have to be approved by different institutions (for example, the legislative and executive branches, or two chambers of parliament) and can be challenged in the courts or in similar bodies, for example an independent committee responsible for the development of competition. In those branches where the concentration of rent is high, that is, where the appropriation of economic rent is controlled by a limited number of actors and its size is large, reform is more difficult to carry out. One of the main difficulties of the functioning of the housing and communal services market, and of involving private companies in supplying consumers with basic utilities, is the level of trust in institutions and the design of agreements that take into account the needs of those who cannot pay their bills (disabled people, pensioners, large families, etc.). The main complexity lies in defining the status of "economically insolvent" market participants in contractual agreements. Two approaches have emerged that tend to reduce the uncertainty created by the mutually exclusive demands of efficiency and social equity placed on the state or municipal enterprise. The first is a self-selection procedure, under which users are offered a menu of tariffs from which they can choose according to their means. Under the other approach, applied by the government of Germany (and, to a lesser extent in recent years, by the government of France), suppliers of vital services should not have to resolve issues of social equity: the vital needs of consumers who cannot provide for themselves because of social difficulties are met by special services [10]. Payment discipline, which results from tariff policy matching the threshold values of the population's ability and willingness to pay for housing and communal services, determines the financial stability of housing and communal services and their attractiveness to private business, and ultimately the reliability of all life-support systems. The level of payment discipline acts as an unrecognized integrated indicator of the success of housing reform.
If it is significantly lower than 95% and the losses from the shortfall in payments cannot be compensated either from tariffs or from the budget, then business in housing and communal services becomes unprofitable, and housing and communal services facilities degrade as a result of incomplete repair work. The econometric analysis carried out by I. Bashmakov showed that the key factor determining payment discipline is not even tariffs but the ratio of payment for housing and communal services to income. Up to the first threshold of 6-7%, the gap between them depends on how rigorously the housing and communal services enterprises work to increase collection and recover debts, on the "attractiveness of the residential real estate", and on the quality of housing and communal services. As the second threshold of 15% is approached, even very drastic measures to increase collection yield no practical results; that is, an increase in the payment burden near the second threshold leads to such a decline in payment discipline that it can no longer be restored by any measures [11]. A share of expenditure on housing and communal services in average income of 6-7% is not only a Russian but a universal international threshold of solvency that ensures a high level of collection of payments for housing and communal services. Institutions make it possible to solve, in particular, problems of coordination and cooperation.
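The threshold logic described above can be expressed as a simple classification of households by their payment-to-income ratio. The sketch below is a minimal illustration: the 6-7% and 15% cut-offs are taken from the text, while the function name and band labels are ours.

```python
def payment_burden_band(hcs_payment, income, lower=0.07, upper=0.15):
    """Classify a household's housing-and-communal services (HCS) payment
    burden against the thresholds cited in the text. Returns the ratio and
    an illustrative band label."""
    ratio = hcs_payment / income
    if ratio <= lower:
        band = "below first threshold: high collection rates achievable"
    elif ratio < upper:
        band = "between thresholds: discipline depends on enforcement and service quality"
    else:
        band = "at or above second threshold: collection measures largely ineffective"
    return ratio, band

# Example: a household paying 4,500 per month out of an income of 50,000
# carries a 9% burden, i.e., between the two thresholds.
print(payment_burden_band(4_500, 50_000))
```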
5,360.6
2017-10-01T00:00:00.000
[ "Economics", "Environmental Science", "Business" ]
Dicrocoeliosis in Cats and Dogs

Nesvadba J.: Dicrocoeliosis in Cats and Dogs. Acta Vet. Brno 2006, 75: 289-293. This paper is the first report of clinical cases of dicrocoeliosis in cats and dogs. In cats, symptoms manifested as inappetence, diarrhoea, loss of weight, changes of the hair coat and, in particular, conjunctivitis with mucoserous discharge and prolapse of the third eyelid. In dogs, clinical symptoms manifested as alterations of the digestive apparatus (anorexia, increased peristalsis, vomiting and diarrhoea), loss of weight, jaundice and skin lesions (pruritus, alopecia and dermatitis interdigitalis). The performance of all working dogs was significantly reduced. In both dogs and cats, a reliable diagnosis is possible only through repeated coprological examination and demonstration of Dicrocoelium eggs. As for therapeutic procedures, albendazole administered for four days was found fully effective in the cat as well as in the dog; in the dog, praziquantel given for 4-5 days was also sufficient.

Dicrocoelium dendriticum (lancet fluke), coprology, clinical symptoms, therapy

Dicrocoeliosis is a worldwide trematodosis that occurs on the individual continents and in individual countries and regions with very diverse prevalence and intensity. A great number of mammals, mainly herbivores, and recently also birds, have been reported as hosts (Rommel et al. 2000; Ducháček and Lamka 2003). The distribution of dicrocoeliosis in Switzerland has been described in many papers documenting, in particular, its spread among livestock, above all cattle and sheep, where it causes serious losses. These have been monitored continuously, not only by coprological examinations, but also by liver examinations of slaughtered animals (Ducommun and Pfister 1991; Braun et al. 1995; Camara et al. 1996). Dicrocoeliosis may also negatively influence the state of health and growth of young horses; during his practice, the author witnessed the death of two foals due to this disease. In his dissertation thesis, Burger (1999) investigated the occurrence of dicrocoeliosis in the Emmenthal region. In the years 1991 to 1998 this author examined a total of 2 840 animals, 1 882, 294, 253, 116, 53 and 35 of which were dogs, cattle, cats, sheep, rabbits and goats, respectively. Dicrocoelium eggs were found in 11.8% of the examined specimens, i.e., in 59.9%, 32.1%, 31.4%, 25.9%, 23.7%, 2.7% and 1.2% of cattle, rabbits, goats, sheep, horses, dogs and cats, respectively. In dogs and cats, Burger (1999) presumed that these were not cases of natural infection but a passive passage of Dicrocoelium eggs due to eating the faeces of infected animals; in cats, it was probably caused by feeding them food containing Dicrocoelium eggs. The textbook on the clinical parasitology of the dog and cat by Svobodová and Svoboda (1995) contains no chapter on dicrocoeliosis. Bowman et al. (2002) recommend praziquantel at a dose of 20 mg·kg⁻¹ of body weight to control flukes in the cat, as do other publications on diseases of the cat (Christoph 1977; Kraft and Dürr 1996). Dicrocoeliosis as a disease of the dog is presented only in the publication by Georgi and Georgi (1992). It is also the opinion of these authors that most of the positive coprological findings in the dog are of eggs present due to the consumption of faeces from Dicrocoelium-infected animals.
The above authors also mention therapy with albendazole at a dose of 15-20 mg·kg⁻¹ of body weight for dicrocoeliosis in dogs. The first description of dicrocoeliosis in the cat can be found in the work of this author (Nesvadba 2000). With regard to epidemiology (Wenker 2004), it is remarkable that llamas imported to Switzerland from South American regions, where dicrocoeliosis does not occur at all, have become infected with Dicrocoelium and have suffered very serious clinical symptoms. Also of interest is the report by Rack et al. (2004) describing dicrocoeliosis in humans, including clinical manifestations, the course of therapy, and convalescence.

Materials and Methods

The results of this work are based on data from patients examined in my veterinary practice in Switzerland in the Emmenthal region, canton Bern. The majority of patients originated from around Zäziwil as well as from some larger urban agglomerations, in particular Bern and Thun. All patients were treated on an ambulatory basis. The diagnosis was based on a thorough clinical examination and, when necessary, haematology and biochemistry. The final diagnosis was made by repeated coprological examinations. A flotation solution with a specific gravity of 1.300 (a modified method according to Breza) was used for coprology. A total of 11 730 coprological examinations were performed on faeces of different animals in the years 1971 through 2004. Samples obtained from cats and dogs amounted to 950 and 7 770, respectively. The intensity of infection was evaluated on the basis of quantitative findings of eggs in the viewing field of the microscope at 100× magnification and assigned levels 1-5 (level 1 for a sporadic finding, level 2 for 2-5 eggs, level 3 for 5-10 eggs, level 4 for 10-50 eggs and level 5 for a massive finding of eggs).

Dicrocoeliosis in the cat

The first case of dicrocoeliosis in the cat was confirmed in 1980 in a cat from the village of Obertal, 850 m above sea level in the Emmenthal region. It was a 14-year-old, short-haired, European cat. The cat was presented with a history of weight loss and of having had no kittens during the last two years, even though it had previously reared two litters every year. On physical examination, the cat was quite cachectic, showing abdominal distension, a dull hair coat with alopecia, and marked icterus. Considering the poor prognosis, the owner elected euthanasia. Autopsy confirmed the clinical findings, including advanced dropsy of the abdominal cavity and jaundice. There was also liver cirrhosis with markedly thickened bile ducts, completely filled with the fluke Dicrocoelium dendriticum, the specimens of which were noticeably smaller than those found in sheep and cattle. In the village of Obertal, on both the above farm and others, we discovered dicrocoeliosis by repeated coprological examinations in sheep, cattle, goats and horses suspected on clinical grounds because of loss of weight, a drop in milk production and sterility. In this village and also in the neighbouring ones, we did not discover any other cases of dicrocoeliosis in cats.
Other cases of manifest dicrocoeliosis could be proved beyond doubt, owing to repeated faecal examinations and specifically targeted treatment, only from July 2000 onwards, in 3 cats. All of them were brought for treatment for the same reasons as the cat described above. Clinical signs in all 3 patients included mucopurulent conjunctivitis that worsened over time and led to protrusion of the third eyelid. According to their history, there was loss of body weight despite a good appetite during the preceding 1 to 2 months. After this period there were apparent problems such as inappetence, recurrent vomiting and diarrhoea. We collected faeces for examination from all three cats; it contained Dicrocoelium eggs at an intensity of level 1-3. Control examinations within 24 and 48 hours of the first coprological examination yielded the same numbers of eggs and definitively confirmed the diagnosis of dicrocoeliosis. Prior to these coprological examinations, all three cats had been treated symptomatically, and also specifically with regard to other possible causes of the observed clinical status, but without any distinct and lasting improvement of the disease. Four days of repeated treatment with praziquantel (DRONCIT inj.), administered s.c. at a dose of 0.1 mg·kg⁻¹ of body weight, had no effect. Only the application of albendazole in a paste form containing 333 mg in 1 ml (ALBAZOL), given to the affected cats for four days at a dose of 1 ml per 5 kg of body weight, was fully effective. The application itself, as well as the tolerance of this treatment, was without any problems. On coprology on the third to fourth day of treatment, Dicrocoelium eggs at an intensity of 1-2 were still found. The eggs completely disappeared in all cats 3 to 4 days after the end of treatment, and we were unable to detect any eggs even after longer intervals of about one month to three years. Within a week of finishing the albendazole treatment, the inflammatory changes of the conjunctiva began to recede and the protrusion of the third eyelid diminished; it disappeared completely after 14 days in 2 cats and after 3 weeks in 1 cat. Shortly after the end of treatment, and again 10-14 days later, the appetite of all cats improved, and in about a month their nutritional status returned to normal. There were another 12 cats in which the clinical manifestation was similar to that in the three cats mentioned above. We were, however, unable to obtain faeces for examination from any of these patients. When symptomatic treatment, including control of common parasitic infections, remained ineffective and the described clinical status, in particular the ocular changes, persisted, we treated these cats for four days with albendazole, and in 9 cases there was a complete cure within the same period as in the three patients with confirmed dicrocoeliosis.
Dicrocoeliosis in dogs

During 34 years of practice in Zäziwil, we found Dicrocoelium eggs by coprology in 377 dogs. Different breeds were affected. Most of them came from the closest vicinity of our practice, where the infestation of the main hosts of this parasite (i.e., cattle, sheep, goats, horses, rabbits) was severe. We assumed that, given the known coprophagy of dogs, this was a secondary passage of eggs due to feeding on the faeces of infected hosts. This was fully confirmed in 294 dogs by negative results of subsequent coprological examinations. In the other dogs, from which excrements for subsequent examinations were not available, the clinical status suggested that the egg findings could have been associated with infection of the dogs. At the beginning of 2001, within a short period, we identified and observed two cases of dicrocoeliosis in dogs; another six cases were identified by the end of 2004. The infection of all 8 dogs was confirmed by a first finding of Dicrocoelium eggs at an intensity of 1-3 and by subsequent examinations with the same findings. Five dogs were born and reared in Switzerland, and three were imported as puppies from the Czech Republic. Considering that these three dogs were examined several times with negative coprological results right after their import, we may assume that they became infected with Dicrocoelium in Switzerland. The clinical manifestation of dicrocoeliosis in all dogs was characterised by alteration of the digestive apparatus. The dogs suffered from changes in peristalsis, vomiting, diarrhoea, colic, severe pain and distinct subicterus. Skin lesions were apparent in four patients (pruritus, eczema, alopecia, interdigital dermatitis). Half of the clinically sick dogs quickly lost weight, temperament and working performance. The therapy of dicrocoeliosis was based on our experience with treating cats: only repeated therapeutic doses of an effective drug can lead to permanent recovery. Albendazole was used in 6 dogs and praziquantel in 2 dogs. We administered albendazole as a paste containing 333 mg in 1 ml (as in the cats, ALBAZOL), given orally at a dose of 1 ml per 5 kg of body weight per day for 4 to 5 consecutive days. For the treatment with praziquantel we used Drontal plus tablets containing 50 mg of praziquantel + 50 mg of pyrantel + 150 mg of febantel per tablet; two tablets per 10 kg of body weight were administered for four days to the two dogs. In these dogs treatment was started soon after the appearance of clinical symptoms, and all of them made a full recovery. The improvement of the overall state of health, not only the physical condition but also the performance, was very convincing. Recovery was also observed in two dogs that had suffered from the disease for several months or even two years because of previously unsuccessful symptomatic therapy. From the fourth day after treatment they were without any findings of Dicrocoelium eggs, and their general and nutritional state and condition gradually improved. In these dogs, however, a tendency to diarrhoea remained; according to our examinations, it was due to giardiosis. Despite successful treatment of the giardiosis with drugs and dietetic and hygienic measures, attacks of diarrhoea recurred.
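The dosing regimens described above translate into per-kilogram doses that are easy to verify. The sketch below is a minimal illustration of that arithmetic; the function and variable names are ours, the figures are taken from the regimens stated in the text, and it assumes the paste and tablet regimens were given once daily over the four- to five-day courses described.

```python
def paste_dose_mg_per_kg(mg_per_ml, ml_per_dose, kg_per_dose):
    """Daily albendazole dose (mg/kg) from a paste regimen
    expressed as 'x ml per y kg of body weight'."""
    return mg_per_ml * ml_per_dose / kg_per_dose

def tablet_dose_mg_per_kg(mg_per_tablet, tablets, kg_per_dose):
    """Daily dose (mg/kg) of one active ingredient from a tablet regimen."""
    return mg_per_tablet * tablets / kg_per_dose

# ALBAZOL paste: 333 mg/ml, 1 ml per 5 kg per day -> about 66.6 mg/kg/day of albendazole.
print(round(paste_dose_mg_per_kg(333, 1, 5), 1))

# Drontal plus: 50 mg praziquantel per tablet, 2 tablets per 10 kg -> 10 mg/kg of praziquantel.
print(tablet_dose_mg_per_kg(50, 2, 10))
```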
Discussion

Dicrocoeliosis in the cat and dog has to be considered a rather rare disease (Rommel et al. 2000; Ducháček and Lamka 2003; Georgi and Georgi 1992). In areas where dicrocoeliosis is abundant in the main hosts (i.e., cattle, sheep, goats, horses, rabbits, and wild ungulates), the possibility of infection of dogs and cats must be considered. For dogs kept strictly in towns, dicrocoeliosis should be kept on the list of differential diagnoses when the history includes even a short-term stay in an infected area, for example on vacation (Burger 1999; Nesvadba 2000). Cats and dogs can contract the infection simply by eating grass carrying an infected ant, which is possible only if they have free access to the outdoors. In contrast to dogs, which usually gobble grass indiscriminately and swallow it quickly, cats chew grass carefully and only then swallow it. It may therefore be assumed that tasting an ant leads to spitting out the grass more often in the cat than in the dog, and cats are thus infected less frequently (Nesvadba 2000). Clinical symptoms of dicrocoeliosis in cats and dogs are not specific. In dogs, there is a wide range of digestive disturbances leading to recurrent diarrhoea. In all the infected dogs, dicrocoeliosis resulted in a disruption of the general state of health, loss of temperament, and reduced performance in working dogs. It is interesting that the clinical manifestation of dicrocoeliosis in dogs has many features in common with the disease in humans (Rack et al. 2004). In cats, the infection caused deterioration of the general state of health and persistent conjunctivitis accompanied by protrusion of the third eyelid. In cases of merely passive passage of Dicrocoelium eggs, coprology yields the lowest intensity level (level 1); only in exceptional cases in dogs was it level 2, and once level 3. The only way to make a reliable diagnosis is coprological examination with the finding of Dicrocoelium eggs. If it cannot be excluded that the examined animal had been feeding on the faeces of other animals, or had eaten feed possibly containing the eggs, the coprological examination must be repeated. Albendazole was mainly used as the drug of choice, with very good results in both the cat and the dog. Dogs treated with praziquantel for four days also made a full recovery. The selection of the dose and the duration of administration of both compounds was based on experience with the therapy of dicrocoeliosis in other domestic animal species: only high doses administered for at least four days can lead to full recovery, and no drug effective after a single dose is available on the market. These facts are all the more serious because the treatment of infected animals is the only possibility for eradication of dicrocoeliosis; any attempt to restrict or eliminate the intermediate hosts is out of the question, given their importance in maintaining the ecological balance of the environment.
3,500
2006-01-01T00:00:00.000
[ "Biology" ]
ANALYSIS OF THE KEY SUCCESS FACTORS FOR COMMERCIALIZING INNOVATION The development and commercialization of new technologies carry inherent uncertainties and associated risks. Much of the research conducted by Indonesian R&D institutions never reaches the diffusion stage, i.e., the commercialization process. Indonesia therefore needs a strategy to translate promising technologies into a stream of economic returns for its stakeholders. This study analyzes the key success factors (KSFs) for commercializing innovation. It used the new product development literature and the TOE (technology-organization-environment) framework and developed a research model to investigate the determinants of commercialization of innovation. To choose the most vital success factors, a hierarchy of KSFs was defined, and the Analytic Hierarchy Process (AHP) was used to help experts rank the importance of the identified KSFs. The KSF hierarchy has two levels: a criteria level comprising three dimensions and a detailed level of nine individual factors. The results show that, at the top level, the experts consider technology the most critical dimension, followed by environment and organization. Technology is the primary consideration for a company before launching its product to the market. Market demand has the highest rank within the environmental dimension. Lastly, the experts suggest that organizational networking is the most significant factor for reaching investors and potential markets for successful commercialization.

INTRODUCTION In the era of globalization, technological innovation is one of the main drivers of success in winning the global competition. Previous studies have reported that new products produced by new technologies have resulted in a 40% to 90% increase in national wealth in most countries. Indonesia is one of the developing countries that has intensively emphasized technology as part of the sustainable process of its national development. Hill (1998) noted that Indonesia needs technology to sustain its economic growth [1]. As stated in Indonesia's National Medium Term Development Plan, or Rencana Pembangunan Jangka Menengah Nasional (RPJMN) 2015-2019, one of the government's missions is to realize a competitive nation. This mission can be achieved through development and application in industry to produce competitive products. The direction of Indonesia's national development for 2015-2019 is therefore to build a competitive economic advantage based on natural resources, qualified human resources, and capability in science and technology. The Indonesian government's Ministry of Research, Technology, and Higher Education (RISTEKDIKTI) has encouraged domestic firms to build their own technology by giving them Technological Incentives in Industry since 2015. Nevertheless, Indonesia's competitiveness rating has recently declined. According to the International Institute for Management Development (IMD) in 2016, Indonesia's competitiveness ranking fell by six places, from 42nd to 48th. According to the World Economic Forum (WEF) in 2016, Indonesia's rank dropped from 34th to 37th out of 140 countries. Furthermore, in the Global Innovation Index (GII) of August 2016, Indonesia sat in 88th position out of 128 countries. This situation is undoubtedly a challenge for Indonesia, given its potential both in natural resources and in having one of the largest populations in the world.
One problem encountered is that many research activities conducted by national R&D only reach the alpha test stage (prototype development, replication and laboratory testing) and the beta stage (field testing and further development), but not yet the stages of diffusion. In the diffusion stages, the resulting technology is already implemented by the user, commercialization is initiated, the market is developed, and commercialization proceeds further. At this stage, various processes are required to meet standards in accordance with market demand. The most challenging problem facing firms is how a new product or technology can be successfully commercialized, given a high level of market uncertainty. Experience has shown that the commercialization of technology is inherently linked to levels of uncertainty and risk [2][3][4][5][6]. Most studies have emphasized the significant role of market research, which includes market, customer, and competitor analysis, in stimulating the need for a new product [7][8][9]. Despite the importance of this research subject, existing empirical results are often highly varied and fragmented [10][11][12]. Moreover, few efforts have been made to rank KSFs based on their relative importance [13]. This limited understanding is partly due to the complexity of the topic and partly due to the researchers' choice of methodology [14]. Furthermore, previous research in this domain has tended to focus on the operations of large and well-established firms based in developed countries; it is unclear whether these findings would also hold in emerging countries such as Indonesia. In this article, we aim to contribute towards simplifying the variation among the aforementioned success factors. This simplification is achieved by adopting a systematic approach using the Technology-Organization-Environment (TOE) framework and adapting it to the commercialization of innovation. In doing so, it offers an integrated view and conceptual guidelines for examining the determinants of success factors in the context of the high-tech industry [15]. To achieve this goal, this study was conducted in three phases. In the first phase, a preliminary list of factors associated with the success of the high-tech industry was identified through a comprehensive literature review. These factors were further modified and validated through interviews with academics and experts in the selected industries. However, these success factors are interconnected and cannot be treated as independent; because of these interdependencies, less critical factors may turn out to be more significant when evaluated collectively [16]. To address this issue, in the second phase we propose the use of the Analytic Hierarchy Process (AHP) to obtain the weight distribution of the success factors and identify the KSFs. AHP is a practical approach for assessing models with complex interdependent factors and provides a rigorous basis for addressing problems involving both quantitative and qualitative factors [17]. AHP is a simple yet powerful tool that was first developed within the management science field over 20 years ago [18]. The relative weights of the KSFs obtained in the second phase are used as inputs for the third phase, in which we propose the AHP results on KSFs as criteria for evaluating performance. The proposed approach is applied to two high-tech industries in Indonesia: pharmaceutical and petroleum.
RELATED WORKS In almost all cases, successful innovation commercialization requires that the know-how in question be utilized together with other capabilities or assets [19]. Key success factors are used as a planning tool. The planning school aims at developing planning instruments that help businesses find the right strategy; its main assumption is that the quality of decision-making can be improved by providing input that helps decision-makers structure their thoughts [20,21]. Successful commercialization is crucial in transforming invention into innovation [22]. Prior research shows that a wide variety of antecedent factors can influence the outcomes of innovation activity; the diversity of research in this field spans marketing, organizational behavior, engineering, and operations management [23].

Key Success Factors of Innovation Commercialization In this part, we collected previous studies that mention success factors for innovation commercialization. After summarizing the factors, we composed categories based on the TOE framework; categorizing into the TOE framework is very helpful in simplifying the many factors presented in previous studies. As foundational work, Project SAPPHO [24] was probably the first study to analyze commercially successful innovation by comparing successful and unsuccessful innovations; after that study, dyadic comparisons between project successes and failures became a popular way to discover the principal discriminating factors of innovation commercialization [17,25]. The SAPPHO study found five main factors behind commercially successful innovations: a better understanding of user needs, more marketing and publicity, efficient work, use of technology and scientific advice, and a responsible individual. Following the SAPPHO project, Cooper (1980) identified three success factors. The first is the degree of product uniqueness and superiority compared to existing alternatives; the second is market knowledge and a feel for future market development; the third is the synergy of technological and manufacturing resources. Moreover, Teece (1986) indicated that services such as marketing, competitive manufacturing, and after-sales support were almost always needed for successful innovation commercialization. Cooper and Kleinschmidt (1987) further suggested that, in the market environment, new product success was determined by the new product strategy and by development process execution. Additionally, studies on technology commercialization have found technology competitiveness to be the most important factor [26]. Balachandra and Friar (1999) pointed out that the more innovative a technology is, the more likely it is to allow customers to do something beneficial through a greater breadth of technologies embodied in new products. R&D capability factors are also considered critical for improving the likelihood of new product success [26][27][28][29][30]. R&D capability factors are generally related to firms' resource allocation and to the organizational climate surrounding the commercialization process. If the commercialization process and the R&D organization are well organized, the technologies used in developing the project will also be widely available to the organization. Souder and Song (1998) argued that organizational factors such as the experience, know-how, and professionalism of employees are related to the improvement of new product development capabilities.
R&D employees should have a broad spectrum of experience and expertise in the design and implementation of new technology products. Furthermore, management should enhance employees' R&D capabilities by creating a supportive climate that increases innovation capability through rewards and internal support. Managers set directions and develop clear commercialization processes that empower employees' productivity and competence in developing a new product [31]. Based on the previous literature, technology and R&D capabilities should be applied and adapted to improve the likelihood of success of new technology products. The two industries studied here may have different profiles of success factors; however, previous studies [23,24] identified many success factors for commercializing an innovation across multiple industries, which is plausible where different industries compete for the same market or share similar characteristics. In this study, the pharmaceutical and petroleum industries share the characteristic of contributing significantly to high-tech industry market performance in Indonesia.

FIGURE 1 The TOE framework.

TOE Framework Tornatzky and Fleischer proposed the technology-organization-environment (TOE) framework in 1990. The framework, shown in Figure 1, explains the complete process of innovation, from the development of innovations by engineers and entrepreneurs to the adoption and implementation of those innovations, from the perspectives of the technology, organization, and environment contexts [32]. The technological context refers to technological attributes relevant to innovation. The organizational context refers to the characteristics of the firm, including its size, its resources, and the complexity of its managerial structure. The environmental context refers to the arena in which a firm conducts its business, which may include the industry to which the firm belongs, its customers, its competitors, and the government [33]. The TOE framework has a solid theoretical basis, consistent empirical support, and broad potential for application, though the specific factors identified within the three contexts may vary across studies [34]. For example, Chau and Tam (1997) adopted this framework and identified three factors that affect the adoption of open systems, namely the characteristics of the innovation, organizational technology, and the external environment. Borgman et al. (2013) used the TOE framework to investigate the factors influencing cloud computing adoption, conceptualizing how IT governance processes and structures moderate those factors. Lastly, Pan and Jang (2008) applied the TOE framework to examine the decision to adopt enterprise resource planning (ERP) in Taiwan's communications industry by identifying factors that distinguish adopters from non-adopters. To construct the TOE framework here, the factors bearing on successful innovation commercialization were deliberately drawn from a set of related theories and prior research [17,23,35]. Those factors are described in the following sections.

Technological Context The technological context represents the pool of technological attributes of an innovation relevant to its adoption. These can be both the technologies available on the market and the firm's current technological assets.
In other words, the decision to adopt an innovation depends not only on what is available on the market but also on how well the innovation fits the technologies a firm already possesses [17,25]. The literature highlights the importance of technology to the success of innovation commercialization. For example, Gatignon et al. [7] pointed out that the role of technology appears unconditional; more is better in any kind of market, even a stable one. Henard and Szymanski [35] and Van der Panne et al. [17] argued that technology is the essence of product innovation; Khademi and Ismail (2013) emphasized the technology employed in the development of process innovations; and Montoya-Weiss and Calantone [23] asserted the synergy between technology implementation and the strategic positioning of an innovation. The technological context established in the current study has its origins in Innovation Diffusion Theory (IDT) [36]. The IDT proposes five perceived attributes of an innovation that influence its adoption: relative advantage, compatibility, complexity, trialability, and observability. However, empirical studies have indicated that, of these five attributes, only relative advantage, compatibility, and complexity are consistently related to adoption or utilization decisions [7,37,38]. These three common attributes are echoed by prior innovation commercialization research. How the relative advantage, compatibility, and complexity of an innovation relate to innovation commercialization is explored below.

Relative advantage Relative advantage is defined as the degree to which an innovation is perceived as better than the idea it supersedes [39,40]. This attribute is often expressed in terms of economic profitability, status conferral, or other benefits; the nature of the innovation largely determines which specific type of relative advantage (financial, social, and the like) matters. Relative advantage rests on whether individuals feel the innovation gives them benefits: the higher the perceived relative advantage of an innovation, the more rapid its rate of adoption [39]. Relative advantage appears consistently as an important characteristic for innovation success. A large-scale meta-analysis [23] showed a significant correlation between successful commercialization and the advantages of an innovation; consumers need to be convinced of the potential benefit of the innovation [39]. For more continuous (incremental) new products, product superiority remains relevant for success, although it probably needs to be less overwhelming, since customers face a much lower purchase risk [39,41]. In the services industry, product superiority or distinctiveness has also been shown to be an essential success criterion [28,42,43]. For discontinuous service innovations, interaction with clients offers an opportunity to explain the value embodied in a new and unfamiliar service and to convince customers of it; for incremental products, an emphasis on providing customers with a more satisfying experience (for example, more efficient problem solving, improved client training, or a more professional working relationship) can be an essential basis for differentiating the new service from competitors [44][45][46].

Compatibility Compatibility is the degree to which an innovation is perceived as consistent with the existing values, experience, and needs of potential adopters [39]. Liu et al.
[47] contended that an innovation diffuses more easily when it matches the adopter's existing processes. Previous studies on innovation commercialization indicated that compatibility is one of the key factors preventing innovation failure [17,39,47]. The more compatible the innovation is with existing values, experience, and needs, the faster adoption tends to be. Other authors hold that compatibility is one of the keys to market attractiveness and to a precise definition of customer needs [42]. New or emerging technologies usually take a long time to develop to the point where they solve customer problems [48,49]. Compatibility is thus one of the key points in winning the market: technologies that are incompatible with customer values, systems, and consumption patterns create purchase risks, particularly when the ultimate direction of the technology is unclear [49]. In the business-to-business sector, customer organizations that respond to the need for cutting-edge innovation before the majority of the market accepts it can play an important role in the ultimate success of these types of new products [43,50]. New products that are adaptations, refinements, and enhancements of existing products and/or delivery systems are often better performers, because they build on established product platforms and leverage the known resources, skills, and identity of the firm [51].

Complexity Complexity is the degree to which an innovation is perceived as relatively difficult to understand and use. Any innovation may be placed on a complexity-simplicity continuum. Complexity typically indicates a slower innovation adoption rate [52,53] and may also create disutility through "feature fatigue" [54]. In the process of innovation diffusion, a complex innovation often requires potential customers to invest significant effort in learning before they achieve the promised benefits. Some individuals resist an innovation because of their reluctance to undergo the necessary adjustment period. When people are confronted with a new product or service and are required to undertake learning tasks to adopt it, the stress of the new task and the perception of less control over their lives produce resistance to the innovation in question. Specifically, innovation resistance occurs when consumers perceive the expected learning task as complex and difficult [55]. Empirical research indicates that 41% of firms observe no return on their innovative products, while 48% of potential consumers delay purchases and 30% of consumers return purchased technological innovation products because they perceive them as too complex. These studies reveal that a significant proportion of innovation commercialization failures result from the complexity of the innovation. Rogers thus reiterated that an innovation's complexity can function as an inhibitor and is usually negatively related to innovation diffusion [39].

Organizational Context The organizational context represents the factors internal to an organization that influence innovation adoption and implementation [15]. It refers to the characteristics and resources of the firm, including linking structures between employees, intra-firm communication processes, firm size, and the amount of slack resources [32]. Organizations play an important role in introducing innovations to the market.
Sivadas and Dwyer [56] compared the organizational factors influencing new product success in the semiconductor and healthcare industries. Their results show that new product development success is positively related to internal organizational support among R&D, marketing, and manufacturing, and to management support. The success of the commercialization of an innovation also depends on the firm's new product strategy [28,57]. In the case of highly innovative new products, strategic fit is essential, as these ventures not only determine the firm's business success over the long run but also considerably stretch its vital and scarce resources [12,23,28,51]. Another company-related success factor concerns the type of innovation development culture that permeates the firm. Creating an entrepreneurial and team-oriented climate, with strong support and involvement from top management, is considered important for facilitating successful innovation [30,58]. For a highly innovative new product venture, top management involvement is essential [48,49]: empowered management can move things forward quickly and effectively by activating functional involvement and minimizing expendable steps [23,59]. This study chooses the following three organizational factors, since they are widely referred to in prior research [17,23,35,56,60]: organization culture of innovativeness, organization resources, and organization networking. These factors are highlighted because they are significantly related to the success of innovation commercialization.

Organizational Culture of Innovativeness Organizational culture is the pattern of underlying assumptions that a given group has invented, discovered, or developed in learning to cope with its problems of external adaptation and internal integration. It is widely recognized that an innovation culture is related to increased organizational performance [48]. An innovative firm tends to promote change, creativity, and novelty in order to develop new products and processes [61]. Trott [62] explained that a firm's long-term economic growth depends on its innovation capability. Developing innovation capability requires creativity and room to try out new ideas, which is usually accomplished in a cultural environment that explicitly recognizes the collective nature of innovation efforts. Empirical studies show that a firm culture dedicated to innovation contributes to successful commercialization [17]. From a macro perspective, a firm's innovativeness encompasses the capacity of an innovation to create a paradigm shift in an industry; from a micro view, it can be seen as the capacity to leverage the firm's existing marketing resources, technological resources, skills, knowledge, capabilities, or strategy [60]. For several reasons, a firm with strong innovativeness can drive an innovation strategy that leads to successful innovation commercialization. First, innovativeness provides a guideline for dealing with strategic issues, such as selecting the markets to enter and the skills to develop. Second, innovativeness enables a firm to take advantage of synergy between parallel innovation projects. Third, through learning-by-doing, the firm can reap the benefits of previously successful innovations and of the firm-specific skills that emanate from them [24].

Organization Resources The Resource-Based View (RBV) is a well-known theory for determining the strategic resources available to a firm.
The RBV asserts that firms sustain competitive advantage by deploying valuable resources that are superior, scarce, and inimitable [63,64]. These include capital, manufacturing facilities, and workforce [23]. Commercializing an innovation requires not only tangible organizational resources such as capital, manufacturing facilities, and financial resources, but also intangible ones, including technical competence, industry experience, market knowledge, and close relationships with customers [65]. The more resources a firm allocates to innovation activities, the more likely it is to achieve successful innovation commercialization [10,66]. For incremental new products, resource and strategic fit are also important: an excellent resource fit can lead to more efficient, error-free, and often more highly leveraged innovation, while an excellent strategic fit is essential for planning and introducing innovations with which the firm can sustain its competitive advantage [67]. Particularly where services rely on distinctive company facilities or resources, for example a major operating system or a specialized team of experts, a high degree of fit can be incredibly advantageous from a cost, profit, and new product adoption perspective [42,68].

Organization Networking Organization networking refers to an organization's pattern of relationships with other organizations in the same network [69]. The role, development, and performance of companies are shaped by their ability to develop relationships. The actors in a firm's network include distributors, buyers, suppliers, research institutes, competitors, government agencies, and industry associations. It is widely acknowledged that networking plays a crucial role on the route to innovation commercialization [70][71][72]. Diverse networking actors contribute to innovation commercialization in different ways [73]. For example, vertically related actors, such as customers, suppliers, and distributors, help a firm stimulate innovative ideas and implement them, while horizontal actors, such as research institutions or partners beyond the traditional supply chain, facilitate bringing innovations to the market [74]. Network relations can give the firm access to other firms' resources as complementary resources for innovation commercialization. Organizational networking therefore means that firms may create future demand and new markets by integrating complementary resources, products, and channel relationships through their networks [74]. Sharing resources and expertise to develop new products, achieve economies of scale, and gain access to new technology and markets is a weapon for facing fierce competition [62]. Networking activities may also serve as a basis for selling innovative products to customers with whom the company is not collaborating technologically. Network competence contributes to a company's innovation success directly, not only through increasing the degree of technological interweaving [50,65,73]. The rationale behind this positive impact is that collaboration allows more resources to be utilized in the development process, i.e., more personnel, a larger pool of technological facilities, and a larger quantity and higher quality of information and ideas. Also, more innovation projects can be carried out thanks to the additional resources, which reduces the negative impact of individual developments failing [74].
Environment Context The environmental context is the external arena in which a firm conducts its business. The literature has identified several elements that contribute to the environmental context [15,17,35]. Among all environment-related factors, the current study deliberately chooses three, i.e., market demand, competitive pressure, and government policy. These factors are chosen because of their significant results and the high number of citations in previous studies [17,23,35].

Market Demand Market demand refers to the presence of consumers with different needs and requirements bearing on firms' innovation choices. Strong market demand is therefore the prerequisite of any commercialization activity [7]. Based upon an examination of some 567 innovations in five different industries, Myers and Marquis (1969) concluded that recognition of demand is one of the most frequent considerations in recognizing innovation opportunities. Cohen and Levin (1989) argued that market demand is more fundamental than other elements, such as firm size or market concentration, for achieving successful commercialization. Table 1 shows the factors considered for innovation commercialization. Customers perceive the value of a product differently; market demand is varied and heterogeneous. Hence, firms must understand their target customers and anticipate their dynamic preferences in order to meet market demand rapidly. McGrath (1997) states that a firm can test the market demand for an innovative product by asking the following questions: Is there a need that exists but is not being satisfied by current products? Is the need prevalent? Does the innovative product my firm develops satisfy the need? If yes, the higher the market demand, the higher the value of the innovative product.

Table 1. Factors considered for innovation commercialization (dimension, factor, definition).

Technology
- Relative advantage: the degree to which an innovation is perceived as being better than its precursor.
- Compatibility: the degree to which an innovation is perceived as being consistent with existing values, past experience, and the needs of potential adopters.
- Complexity: the degree to which an innovation is perceived as being difficult to understand and use.

Organization
- Organization culture of innovativeness: a firm culture that promotes change, creativity and novelty in order to develop new products and processes.
- Organization resource: all of a firm's assets and organizational attributes, including the knowledge and processes it controls.
- Organization networking: the focal organization's pattern of relationships with other organizations in the same network.

Environment
- Market demand: the presence of consumers with different needs and requirements bearing on firms' innovation choices.
- Competitive pressure: the intensity of competition coming from the same domain as a firm, in terms of price, quality, service, the sales force or the distribution system.
- Government policy: the rules set by government that impact a firm's innovation policy.
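Table 1's two-level structure maps naturally onto a small data structure. The sketch below encodes the nine factors under their three dimensions and counts the pairwise judgments an AHP questionnaire would require at each level (n(n-1)/2 per group of n items); the dictionary layout and helper names are ours, while the dimensions and factors are the paper's.

```python
# A minimal sketch encoding the two-level KSF hierarchy of Table 1 / Figure 2.
from itertools import combinations

KSF_HIERARCHY = {
    "Technology": ["Relative advantage", "Compatibility", "Complexity"],
    "Organization": ["Organization culture of innovativeness",
                     "Organization resource", "Organization networking"],
    "Environment": ["Market demand", "Competitive pressure", "Government policy"],
}

# An AHP questionnaire needs n*(n-1)/2 pairwise judgments per group of n items:
# one set of comparisons among the three dimensions, plus one within each dimension.
top_level_pairs = list(combinations(KSF_HIERARCHY.keys(), 2))
print(f"Top level: {len(top_level_pairs)} pairwise judgments")
for dim, factors in KSF_HIERARCHY.items():
    print(f"{dim}: {len(list(combinations(factors, 2)))} pairwise judgments")
```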
Competitive Pressure Competitive pressure refers to peer pressure coming from the same domain as the firm in practice. It reflects the intensity of competition in the marketplace with respect to price, quality, service, the sales force or the distribution system [23]. Competitive pressure has long been recognized as a driving force of innovation; it presses a company to seek a competitive edge through successful innovation commercialization [7]. Firms gain competitive advantage in international competition through improvement, innovation, and upgrading. The commercialization of emerging technologies is characterized by intense competition in innovation. There is usually a substantial advantage in being the first innovator when first-mover advantages are significant and when patent preemption is likely. Competition accelerates obsolescence and the rate at which innovations are disseminated, thereby curtailing their expected commercial lives and destroying their appropriable value. Past studies have implicitly assumed that, in the absence of competition in innovation activities, today's technological state of the art would persist into the future and that proprietary rights to current technologies would be guaranteed forever.

Government Policy Government policy is a powerful force shaping a firm's innovation policy [33]. Innovation policies can emphasize basic research and technology development (e.g., public funding of basic research), exploitation of research infrastructure (e.g., university-industry collaboration), support of industrial technology development (e.g., tax subsidies for R&D), technology adoption, and technical standardization. Government authorities attempt to create an environment conducive to innovation through legal mechanisms such as tax codes, patent law, and antitrust regulations. The government, primarily at the national level, provides technology infrastructure that leverages the innovation process. Through these actions, the government supports mechanisms, institutions, and platforms that lower the innovation barriers causing market failure for investments in all stages of technology-based economic activity [75]. Technology policy is defined as policies intended to influence firms' decisions to develop, commercialize, or adopt new technologies; innovation policy refers to policies intended to influence the behavior of public and private organizations in the development and commercialization of new technologies. Governments have taken various policy measures to promote innovative activities in order to reap economic and social benefits from technological progress [76]. In many countries, government authorities seek to nourish innovation by establishing science parks and business incubators; whether such parks and incubators are "natural" outgrowths of research centers or spin-offs from incumbent enterprises, the phenomenon has become an important mode of innovation commercialization [42]. An example of government policy that successfully supports innovation commercialization is the U.S. Small Business Innovation Research (SBIR) program, in which the government provides financial capital, acts as a decision-maker, organizes and coordinates economic resources, and allocates resources among alternative uses.

FIGURE 2 The two-level hierarchy of the key success factors for innovation commercialization.

MATERIAL AND METHOD The Analytic Hierarchy Process (AHP), originally devised by Thomas L. Saaty, is a structured technique for organizing and analyzing complex decisions based on paired comparisons of both projects and criteria. These inputs are converted into scores that are used to evaluate each of the possible alternatives. The AHP is a powerful management science tool that has proven useful in structuring complex multi-person, multi-criterion decisions in business and economics.
The advantages of AHP for the user include its reliance on easily obtained managerial judgment data, its ability to reconcile differences (inconsistencies) in managerial judgments and perceptions, and the existence of easy-to-use commercial software (e.g., "Expert Choice") that implements the AHP [23]. The strength of the AHP method lies in its ability to structure complex, multi-person, multi-attribute, and multi-period problems hierarchically. It has therefore been applied in a wide variety of decision situations in fields such as business planning, resource allocation, priority setting, and selection among alternatives. The AHP method gives reasonable approximations when the decision-makers' judgments are consistent. The first stage was to group the success factors from previous studies [17,19,23,24,28,35] and place them into the TOE framework. The process of choosing the dimensions involved scholarly opinion; the following three questions were put to the experts:

Q1: Which KSFs do you think are more appropriate for the commercialization of an innovation?
Q2: Which KSFs are fractional but related enough to be combined into one?
Q3: Which KSFs are similar and can be grouped into categories in light of TOE?

Figure 2 shows the hierarchy of KSFs for innovation commercialization, designed as a two-level hierarchy. In line with the hierarchy, we designed a questionnaire complying with the AHP format. The questionnaire was distributed to 16 selected members from 2 companies: 8 marketing experts from PT. Pertamina, representing the petroleum industry in Indonesia, and 8 marketing and promotion experts from PT. Bio Farma, representing the pharmaceutical industry. All members had more than ten years of working experience in their industry. The two companies were chosen based on their performance and reputation as successful high-technology industries in Indonesia.

Research Survey The survey was conducted over approximately four months, from March to June 2018. Initially, we submitted a research proposal, sent via email, to be reviewed by both companies. The questionnaire was then given directly to both companies in Indonesia; correspondence was done face to face, by phone, and via email. Results from the questionnaire were returned gradually over two months via email. The survey allowed the selected members to rank the relative importance of the aforementioned KSFs within each dimension and among the factors.

RESULTS AND DISCUSSION After the pairwise comparison process is completed and an initial decision matrix is obtained, the initial matrix is normalized. As shown in Figure 3, the overall CR in this study is 0.07, which falls within the acceptable level of 0.10 recommended by Saaty [18]. This shows that the survey respondents assigned their weights consistently when examining the priorities of the success factors of commercialization for new technology products.
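As a minimal sketch of the weight and consistency computation just described, the following Python snippet derives a priority vector from a reciprocal pairwise-comparison matrix via its principal eigenvector and checks Saaty's consistency ratio against the 0.10 threshold. The example matrix is hypothetical, not the study's survey data.

```python
# AHP priority weights and consistency ratio (CR) from a pairwise matrix.
import numpy as np

def ahp_weights(A: np.ndarray):
    """Return the priority weights and the consistency ratio for matrix A."""
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)              # principal eigenvalue index
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                          # normalized priority vector
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)             # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}[n]
    return w, ci / ri                        # CR = CI / RI (Saaty)

# Hypothetical top-level judgments on Saaty's 1-9 scale:
# technology vs organization vs environment.
A = np.array([[1.0, 3.0, 2.0],
              [1/3, 1.0, 1.0],
              [1/2, 1.0, 1.0]])
w, cr = ahp_weights(A)
print("weights:", np.round(w, 3), "CR:", round(cr, 3))  # CR < 0.10 => consistent
```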
As shown in Table 2, the criterion group of technology has the highest rank, weighing 53.9%; more than half of the total weight falls on the technology dimension. This result is consistent with prior research: in successful new product commercialization, technology is one of the most commonly cited success factors [26]. Environment factors rank below the technology factor at 25.4%, showing that environmental factors also play a significant role in new product success; this is supported by the literature emphasizing the role of market research, which includes market, customer, and competitor analysis, in stimulating the need for a new product. Most of the literature points out the importance of product technology and market demand and places the organization factors below these two, which explains why the organization dimension placed third in the TOE framework at 20.7%. Figure 4 shows the weights of the nine sub-criteria within the three criterion groups; alongside the technology factors (relative advantage and compatibility), the environmental factor of market demand also received a higher score than the remaining factors.

FIGURE 4 Histogram of weights against 9 sub-criteria in 3 criterion groups.

The results of our analysis can provide an assessment framework for the commercialization of new technology products. We suggest a new technology product assessment model that experts can apply in practice to minimize market and technology uncertainties and to increase the effectiveness of decisions. The overall result shows that technology factors carry more weight in Indonesia's high-tech industry than environment factors such as market demand. This pattern is characteristic of technology-driven products: where innovation is induced by technological capability rather than by an expressed market need, the concept of technology-push applies [34]. For years, scholars have debated two drivers of successful innovation: technology-push and demand-pull [50]. A market-pull-driven strategy predominates where firms aim to improve existing product lines according to consumer market trends. Historically, most technical innovations were driven by science and technology, with the role of demand, and more broadly of market and social forces, being complementary. More recent studies, however, hold that many technological innovations originate in science and technology but still need a market and the related complementary assets to be successfully commercialized. It is assumed that both strategies, technology-push and market-pull, are important to all kinds of organizations, and that they are complementary rather than contradictory factors determining the success of innovation.
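For readers who want to see how the two levels of the hierarchy combine, the sketch below multiplies each factor's local weight by its dimension weight to obtain global priorities. The dimension weights (53.9%, 25.4%, 20.7%) are the paper's; the local factor weights are hypothetical placeholders, since the text reports only their ranking.

```python
# Combining two-level AHP weights: each factor's global weight is its dimension
# weight times its local weight within the dimension. Dimension weights are the
# paper's; local factor weights below are hypothetical placeholders.
dimension_w = {"Technology": 0.539, "Environment": 0.254, "Organization": 0.207}
local_w = {  # hypothetical local weights, each dimension summing to 1.0
    "Technology":   {"Relative advantage": 0.5, "Compatibility": 0.3, "Complexity": 0.2},
    "Environment":  {"Market demand": 0.5, "Competitive pressure": 0.3,
                     "Government policy": 0.2},
    "Organization": {"Organization networking": 0.5, "Organization resource": 0.3,
                     "Organization culture of innovativeness": 0.2},
}
global_w = {f: dimension_w[d] * w
            for d, factors in local_w.items() for f, w in factors.items()}
for factor, w in sorted(global_w.items(), key=lambda kv: -kv[1]):
    print(f"{factor:40s} {w:.3f}")
```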
This can be illustrated by light-emitting diode (LED) lighting in Korea, which coincides with the features of companies adopting a technology-push strategy. A similar case is laser medical devices: most consumers are ignorant of them because the specifications of the technology embodied in the products, such as principles and components, are challenging to understand. Because customer requirements are difficult to acquire, new product development is led not by customer needs but by advances in technology and by inventors. Although the resulting products are based on technology with which consumers are unfamiliar, the products remain closely tied to that technology: potential concepts of products and markets are extracted from the technological specification. Thus, firms in the solar LED lighting field readily choose a technology-driven strategy over a market-pull strategy for their R&D planning. The results of this study may give managers more insight into a better strategy for successfully launching a new product, especially in the Indonesian market. At the top level of the KSF hierarchy, within the technology dimension, the success of new product commercialization depends heavily on the relative advantage and compatibility of the product. Firms must be able to convince the market that their new product offers advantages over other available products [39]. New technologies also have to be compatible with Indonesian customers' values; compatibility is one of the key points in winning the market. Ease of use of an innovation facilitates its diffusion and speeds up its adoption, and running trials reduces customer uncertainty and reinforces positive attitudes, further easing adoption [9]. Customers evaluate the relative advantage and therefore need to be convinced of the potential benefits [39]. The more observable such benefits are, and the more compatible the innovation is with existing values, experiences, and needs, the faster adoption tends to be. Besides the relative advantage and compatibility of the product, this study shows that the market demand factor is nearly as crucial as product compatibility. Firms must understand that consumer needs change over time, as do the specific ways in which consumers seek to satisfy these needs through the purchase and consumption of particular products ('demand'); such changes may occur as a result of demographic, socio-economic, and cultural changes in society.

CONCLUSION This study analyzed the key success factors for commercializing innovations produced by R&D in Indonesia, using the TOE framework to investigate the determinants of innovation commercialization. The results of our analysis can provide an assessment framework for the commercialization of new technology products, and we suggest a new technology product assessment model that experts can apply in practice to minimize market and technology uncertainties and to increase the effectiveness of decisions. As shown in Figure 4, the criterion group of technology has the highest rank, with a weighting of 53.9%; in successful new product commercialization, technology is one of the most commonly cited success factors.
Environment factors rank below the technology factor at 25.4%, showing that environmental factors also play a significant role in new product success, in line with the literature emphasizing the role of market research, which includes market, customer, and competitor analysis, in stimulating the need for a new product. Most of the literature points out the importance of product technology and market demand and places the organization factors below these two, which explains why the organization dimension placed third in the TOE framework at 20.7%. Historically, most technical innovations were driven by science and technology, with the role of demand, and more broadly of market and social forces, being complementary; more recent studies, however, hold that many technological innovations originate in science and technology but still need a market and the related complementary assets to be successfully commercialized.
9,249.8
2020-08-27T00:00:00.000
[ "Business", "Economics" ]
Evaluation of Bronze Electrode in Electrical Discharge Coating Process for Copper Coating One of the most widely used non-traditional machines for machining hard materials into complex shapes and different sizes is the electrical discharge machine (EDM). Recently, the EDM has also been used for deposition by controlling the input parameters (current and duty cycle). This work was carried out to evaluate the readily available bronze (88% Cu + 12% Sn) electrode for deposition of copper on titanium alloy. Experiments were conducted according to a Taguchi experimental design considering the input parameters of current, Ton, Toff and preheating temperature of the substrates. The titanium alloy was further hardened by preheating at temperatures of 100 °C, 300 °C and 500 °C and quenching in brine, castor oil and vegetable oil in order to avoid workpiece erosion. After this treatment, hardness, grain area, grain diameter and number of grains were characterized for comparison with the untreated substrates. The treated substrates were then taken for copper deposition with the EDM. Output parameters such as material deposition rate (MDR), electrode wear rate (EWR), coating thickness (CT), elemental composition and surface crack density (SCD) were determined. Material characterization was carried out using a scanning electron microscope (SEM) with energy dispersive X-ray spectroscopy (EDX) and optical microscopy. The output parameters were optimized with the technique for order of preference by similarity to ideal solution (TOPSIS) to find the optimum parameter set. The sixth experiment, with parameter values of Ton of 440 µs, Toff of 200 µs, preheating temperature of 300 °C and quenching medium of castor oil, was optimum, with an MDR of 0.00506 g/min, an EWR of 0.00462 g/min, a CT of 40.2 µm and an SCD of 19.4 × 10⁷ µm².

Introduction The EDM is a non-traditional machine that supports the fabrication of complex and intricate shapes with an excellent surface finish [1]. It is a well-established technique in the biomedical, automotive, chemical, aerospace, and tool and die industries [2]. Usually, the EDM removes material by repeated sparks between the workpiece and the tool electrode immersed in a dielectric medium [3]. The thermal energy between electrode and workpiece creates a high-temperature plasma, which erodes, melts and evaporates the workpiece material [4]. The EDC process, by contrast, requires a low current and a high duty cycle, which reverses the EDM process [5]. In EDC, electrode material is deposited on the workpiece under this different parameter regime [6]. Even in EDC, high-frequency electrical discharges or sparks cause the workpiece material to melt and vaporize: extreme temperatures in the range of 8000-12,000 °C lead to erosion and vaporization of both workpiece and electrode [7]. Material transfer then occurs from the electrode to the workpiece under suitable process conditions and parameter setup. On the surface of the workpiece, which is immersed in the dielectric medium, a recast layer of redeposited melt material from the electrode is formed [8,9]. The deposited material solidifies in the dielectric medium and forms a coating. This process modifies the workpiece surface by generating new compositions, which can be further processed by quenching and hardening [10]. In this work, the superalloy Ti6Al4V is used as the workpiece because of its essential characteristics, viz., fracture toughness, biocompatibility, improved ductility, wear resistance, yield strength and corrosion resistance [11].
This alloy has proven its applicability in various fields such as medical implants, marine appliances, airframes, and the automotive industry. Some of these applications, such as medical implants and wastewater treatment plants, require an antibacterial coating [12]. Copper has proven antibacterial activity and helps support human immunity [13], so in this work copper was proposed as the coating material. In our previous research, attempts were made to coat copper on titanium alloy using copper-bearing electrodes. Firstly, an attempt was made using pure copper electrodes, and the copper was only sparsely coated on the workpiece [14]; instead, workpiece material was removed and microhole formation was observed. Secondly, brass, an alloy of copper (67%) and zinc (33%), was used to coat copper, and a regular, crack-free and stable coating with a thickness of 22 µm was obtained. In this work, one more attempt is made with a bronze electrode. Bronzes, or tin bronzes, are alloys of copper typically containing 0.5 to 11% tin and 0.01 to 0.35% phosphorus. The addition of tin increases the corrosion resistance and strength of the alloy, whereas phosphorus increases its wear resistance and stiffness [6]. Phosphor bronzes have high fatigue resistance, solderability, excellent formability and high corrosion resistance, and have established applications in sleeve bearings, cam followers, thrust washers and electrical products such as diaphragms, corrosion-resistant bellows and spring washers [14]. The material combines strength, high wear resistance and fatigue resistance with good machinability and corrosion resistance [15]. Researchers around the world are working to stabilize and standardize the procedure of electrical discharge coating. Some examples are as follows. Algodi et al. [16] examined the hardness variation of a TiC-Fe nanostructured coating by varying input parameters such as current and Ton, and concluded that the latter is the most influential factor. Mussada et al. [17] investigated the possibility of using powder metallurgy (PM) electrodes for EDM-based surface modification; the investigation was performed in a stepwise manner, and although it takes more time, a good surface finish was obtained. Hsu et al. [18] studied EDM process parameters with respect to material removal rate (MRR), surface roughness (Ra) and electrode wear rate (EWR) to improve surface finishing; oxygen plasma etching was performed to decrease the surface roughness [19], and physical vapor deposition (PVD) of TiN was then used to further improve the surface characteristics. Algodi et al. [20] investigated antibacterial coating of titanium alloy by mixing silver nanopowder into the dielectric medium and compared it with coating without the powder, concluding that electrode material deposition is comparatively lower when the dielectric medium is mixed with silver nanopowder. Tyagi et al. [21] coated a mild steel (MS) workpiece surface using WS2 and copper green compact electrodes in different composition mixing ratios; they observed that WS2 increases coating thickness, whereas current and duty factor influence wear and hardness. Murray et al. [22] varied the input parameters of EDC to coat different materials, copper, zirconium and tungsten carbide, on stainless steel. Bui et al.
[23] studied the elemental composition of the modified workpiece surface, the tool electrode and the dielectric fluid with immersed powder particles. Because of the application of titanium (Ti6Al4V) in various fields, many studies are ongoing around the world. For instance, Wuyi Ming et al. [24] studied microporosity and microtrench machining, Kahlin et al. [25] studied the fatigue behavior of materials, Zhen Zheng et al. [26] worked on laser-induced plasma micromachining, and Schnell et al. [27] studied surface topography using femtosecond laser-induced periodic surface structures (FLIPSSs) and micrometric ripples (MRs). In this work, a bronze electrode was selected to coat copper on titanium alloy (Ti6Al4V) for comparison with our previous attempts. Prior to coating, the workpiece substrates were preheated at temperatures of 100 °C, 300 °C and 500 °C and quenched in brine, castor oil and vegetable oil in order to avoid workpiece erosion [28]. After this treatment, hardness, grain area, grain diameter and number of grains were characterized for comparison with the untreated substrates. The EDC input parameters selected for optimization were current, Ton, Toff and preheating temperature. The TOPSIS technique was used to optimize the input parameters, and material characterization was conducted using SEM with EDX. The electrode and workpiece materials are described in Section 2, and the experimental procedure in Section 3 with a process flowchart. Section 4 presents the results obtained from TOPSIS and the material characterization, and Section 5 closes the paper with short conclusions.

Workpiece and Electrode Materials In this work, titanium alloy was selected as the workpiece because of its applications in various fields, mainly as medical implants, and a bronze electrode was selected in order to evaluate it for copper coating [29]. Initially, a titanium plate was obtained from Ramesh Steels Corporation Pvt. Ltd., Mumbai, India, and substrates of 20 mm × 20 mm × 8 mm were cut using a wire-cut EDM, whereas bronze electrodes of 100 mm in length and 10 mm in diameter were prepared with a power hacksaw. EDM 30 was used as the dielectric fluid in this experiment. The chemical composition, density (kg/m³), melting point (°C), specific heat capacity (J/g·°C) and hardness of the electrode and substrate are shown in Table 1 [11]. The three levels of the EDM process parameters selected are shown in Table 2. Output parameters such as surface quality, surface topography and homogeneity of the coatings rely on the input process parameters, viz., current, Ton, Toff and temperature, as shown in Table 2 [19]. A Taguchi L9 design was followed to prepare the parameter combinations [30,31], as shown in Table 2.

Output Process Parameters In this work, the output process parameters considered for optimization are material deposition rate (MDR) [32], electrode wear rate (EWR) [33,34] and surface crack density (SCD) [25]. MDR can be expressed as

MDR = (WAM − WBM) / t, (1)

where WAM is the workpiece weight after machining, WBM is the workpiece weight before machining, and t is the machining time. EWR can be expressed as

EWR = (EBM − EAM) / t, (2)

where EAM is the weight of the electrode after machining and EBM is the weight of the electrode before machining. Finally, the surface crack density is

SCD = Tl / Ai, (3)

where Tl is the total crack length in µm and Ai is the image area in µm². Every researcher is interested in this parameter for producing crack-free coatings, since it is the proper measure of cracking.
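The three measures above reduce to simple ratios; a minimal sketch follows. Treating the denominator of the two rates as the machining time in minutes is our assumption, inferred from the paper reporting MDR and EWR in g/min.

```python
# Sketch of the three output parameters defined in Equations (1)-(3). Units
# follow the paper (g/min for the rates, um/um^2 for SCD); the machining time
# denominator is our assumption.

def mdr(weight_after_g: float, weight_before_g: float, t_min: float) -> float:
    """Material deposition rate: mass gained by the workpiece per minute."""
    return (weight_after_g - weight_before_g) / t_min

def ewr(electrode_before_g: float, electrode_after_g: float, t_min: float) -> float:
    """Electrode wear rate: mass lost by the electrode per minute."""
    return (electrode_before_g - electrode_after_g) / t_min

def scd(total_crack_length_um: float, image_area_um2: float) -> float:
    """Surface crack density: total crack length per unit image area."""
    return total_crack_length_um / image_area_um2

print(mdr(10.123, 10.000, 10.0))   # 0.0123 g/min
print(ewr(55.046, 55.000, 10.0))   # 0.0046 g/min
print(scd(120.0, 1.2e6))           # 1e-4 um/um^2
```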
This parameter depends upon the coefficients of thermal expansion of the coating and the workpiece material. TOPSIS was applied as follows.

Step 1: The decision matrix X = [x_ij], with alternatives (experiments) in rows and criteria (output parameters) in columns, is constructed from the measured values.

Step 2: The decision matrix is normalized to obtain the normalized decision matrix with entries r_ij = x_ij / √(Σ_i x_ij²).

Step 3: Weights are assigned according to importance, and the weighted normalized decision matrix is calculated as V_ij = w_j · r_ij.

Step 4: The positive ideal (best) and negative ideal (worst) solutions are calculated as A⁺ = {max_i V_ij | j ∈ J; min_i V_ij | j ∈ J'} and A⁻ = {min_i V_ij | j ∈ J; max_i V_ij | j ∈ J'}, where J and J' are associated with the beneficial and non-beneficial attributes [36].

Step 5: The Euclidean distance of each alternative from the positive and negative ideal solutions is calculated as S_i⁺ = √(Σ_j (V_ij − V_j⁺)²) and S_i⁻ = √(Σ_j (V_ij − V_j⁻)²).

Step 6: The relative closeness of each alternative to the ideal solution is calculated as C_i⁺ = S_i⁻ / (S_i⁺ + S_i⁻).

Step 7: In the final step, a ranking according to the preference order is produced; the alternative with the maximum relative closeness is the best choice. C_i⁺ serves as the multi-performance characteristic index (MPCI) in TOPSIS.

Experimental Procedure The EDM at the Production Engineering Lab, Osmania University was used for coating. The machine is of CREATER make with computer numerical control (CNC), as shown in the process flow diagram. Firstly, the titanium substrates were ground and polished with emery papers of 50, 100 and 200 micrometers. The substrates were then preheated at temperatures of 100 °C, 300 °C and 500 °C and quenched in brine, castor oil and vegetable oil in order to avoid workpiece erosion; a Taguchi L9 design was followed for the heating temperatures, as shown in Table 3. The preheat treatment was performed to increase hardness and thereby prevent workpiece erosion during coating. Before and after the heat treatment, the hardness, grain size and grain area of each substrate were measured. The substrates were then taken for deposition following the input parameters shown in Table 3. Figure 1 depicts the steps followed in this work for coating copper on titanium alloy. Table 3 shows the MDR and EWR, and Tables 4 and 5 show the average hardness, average grain diameter, average grain area, average grain number and grain structure.

Results This section covers the results obtained from the experiments and discusses the analysis of the output parameters, surface integrity, surface characterization, surface crack density, coating interface analysis, elemental analysis and optimization by TOPSIS.

Analysis of Output Parameters MDR, EWR, CT, elemental analysis and SCD are the output parameters considered in this study [37]. In EDC, the output parameters need to be studied because they depend on the input parameters, and finding the input parameter values that yield the desired outputs is difficult. The conditions followed for these parameters are: the higher the better for MDR, and the lower the better for EWR and SCD [38].
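The TOPSIS procedure of steps 2-7 can be condensed into a few lines; a sketch follows, with the higher-the-better criterion (MDR) and the lower-the-better criteria (EWR, SCD) handled via a benefit mask. The decision matrix and equal weights below are hypothetical, for illustration only.

```python
# Compact sketch of TOPSIS steps 2-7 as outlined above.
import numpy as np

def topsis(X: np.ndarray, w: np.ndarray, benefit: list) -> np.ndarray:
    R = X / np.sqrt((X ** 2).sum(axis=0))                     # step 2: normalize
    V = R * w                                                 # step 3: weight
    best = np.where(benefit, V.max(axis=0), V.min(axis=0))    # step 4: ideal
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))   # and anti-ideal
    s_plus = np.sqrt(((V - best) ** 2).sum(axis=1))           # step 5: distances
    s_minus = np.sqrt(((V - worst) ** 2).sum(axis=1))
    return s_minus / (s_plus + s_minus)                       # step 6: closeness

# Hypothetical alternatives (rows) x criteria (MDR, EWR, SCD) with equal weights;
# True marks the-higher-the-better criteria, False the-lower-the-better ones.
X = np.array([[0.0051, 0.0046, 1.9e-4],
              [0.0123, 0.0090, 4.0e-4],
              [0.0004, 0.0011, 0.99e-4]])
scores = topsis(X, np.array([1/3, 1/3, 1/3]), benefit=[True, False, False])
print("MPCI:", np.round(scores, 3), "best alternative:", int(scores.argmax()) + 1)  # step 7
```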
From the literature, it was observed that lower values of EWR and SCD can be obtained with lower current, pulse-on time, pulse-off time and temperature [39]. With the design of experiments alone it is not possible to select the desired parameter values, so an optimization technique is required for this problem. Table 6 shows the MDR, EWR, CT, SCD and elemental analysis obtained for all the experiments, in which the maximum MDR and CT are 0.0122775 gram/min and 39.9 µm, respectively, whereas the minimum MDR is 0.00044249 gram/min and the minimum SCD is 0.0000992775 µm/µm². Figure 2 shows the graph of the output parameters created from the table. The variation in the output parameters with respect to the variation in the input parameters can be observed from Figure 2.

Analysis of Surface Integrity before and after Heat Treatment
For the study of surface integrity, all the substrates were measured for hardness individually and showed variation in average hardness ranging from 348 HV to 398 HV, taken at six points. For the same substrates, hardness was measured again after heat treatment. From Figure 3, it can be seen that experiments 3, 4, 6 and 9 showed an increase in hardness after heat treatment at 100 °C, 300 °C, 300 °C and 500 °C, respectively, with quenching in castor oil, vegetable oil, castor oil and castor oil, respectively. An important observation among these was that all substrates quenched in brine solution showed a decrease in hardness [40]. An increase of around 10 HV was seen after heat treatment at 100 °C with quenching in castor oil, so these parameters were selected.

Figure 4 shows the SEM images of the EDCs developed using different combinations of input parameters with the L9 orthogonal array. While designing the experiments, the heat treatment process and quenching medium were also considered. It is observed from Figure 4 that all the coatings have a cauliflower structure and uniform coverage. Figure 4b,d,g show uneven coating surfaces, and machining spatters can be observed.
Figure 7c shows uniform coating on the substrate at 100 °C with quenching in castor oil. The input parameters for experiment 3 are a current of 4 Amp, Ton of 440 µs and Toff of 400 µs.

Figure 5 shows the surfaces of the coatings captured using the scanning electron microscope. The coatings were thoroughly examined using SEM, and if a crack was found it was zoomed into at a magnification of 500X. Crack lengths were measured with SEM and SCD was calculated for all the coatings as per Equation (3) [41]. Cracks were observed in almost all the coatings, and a minimum crack density of 0.000099277 µm/µm² was obtained for Figure 5a.

Figure 6 depicts the interfacing and bonding of the coating on the base material. The investigation of the copper coatings obtained on preheated substrates showed major variations in the CT, which is also a function of the process conditions. SEM images of the cross sections of coatings deposited under different conditions are shown in Figure 6. It was observed that with an increase in current, more heat is generated and damages the base material, as shown in Figure 6e,g,i. For these experiments, the deposition rate was high due to the current and duty cycle. The highest CT of 40.2 µm can be observed in Figure 6f, but this coating appears highly discontinuous; it was obtained with the parameter combination of a current of 8 Amp, Ton of 440 µs, Toff of 200 µs, a preheating temperature of 300 °C and quenching in castor oil. Figure 7 shows the graph of the CTs along with MDR and EWR. From this figure, it can be observed that a higher CT does not necessarily mean high MDR and EWR.

Elemental Analysis
SEM with energy dispersive X-ray spectroscopy (EDX) was used to inspect the composition of the obtained coatings. It was understood from Figure 8 that a higher Ti and Cu percentage was obtained in the coating deposited with the parameters of experiment 8 (Figure 8h), being Ti 92.42%, with current 12 Amp, Ton 360 µs, Toff 200 µs, temperature 500 °C and quenching in brine solution. Meanwhile, the substrate with the highest copper percentage (7.65%) was coated with the input parameters of current 4 Amp, Ton 280 µs, Toff 200 µs, temperature 100 °C and quenching in sunflower oil.
Optimization by TOPSIS
From the above discussion, it can be understood that an optimization technique is required to select the optimum coating among all these coatings. It is difficult to select one manually because each experiment performed best on some output parameter. Therefore, the TOPSIS optimization technique was applied to select the optimum coating [42]. TOPSIS is an optimization technique involving seven steps [4]. The formula used at each step is described in the Materials and Methods section. The first step is to form a decision matrix using the output parameters, which supports simple and efficient processing, as shown in Table 7. The further steps were then followed as described above. The relative closeness is calculated using the formula shown in Equation (12). The values are tabulated in Table 8, and it can be observed that experiment 6 has the highest closeness and takes rank 1. This means that the coating obtained from experiment 6 is the optimum coating under the conditions of higher-the-better MDR and lower-the-better EWR and SCD.

Conclusions
In this work, a bronze electrode was selected to coat copper on titanium alloy (Ti6Al4V) for comparison with our previous attempts. Prior to coating, workpiece substrates were preheated at temperatures of 100 °C, 300 °C and 500 °C and quenched in brine, castor oil and vegetable oil in order to avoid workpiece erosion. After this treatment, hardness, grain area, grain diameter and number of grains were characterized and compared with their pre-treatment values. The EDC input parameters selected for optimization were current, Ton, Toff and preheating temperature. The TOPSIS technique was used to optimize the input parameters, and material characterization was conducted using SEM with EDX. Some of the conclusions from this study are as follows: Experiments were carried out according to the Taguchi L9 design of experiments. The largest increase in hardness (10 HV) was obtained for the substrate heat treated at 100 °C and quenched in castor oil. It was observed that MDR increases with a decrease in current, and EWR increases with an increase in current and Ton. The surface morphology of all coatings showed a cauliflower structure.
SEM with EDX confirmed a maximum copper percentage of 7.65% in the coating surface, whereas coatings deposited with brass electrodes in our previous study contained up to 70% copper when the experiments were performed under the same experimental conditions. The highest coating thickness of 40.2 µm was obtained for experiment 6, as observed in SEM images at a magnification of 500X. Finally, TOPSIS ranked experiment number six first, with the input process parameters of current 8
5,608.4
2023-01-01T00:00:00.000
[ "Materials Science" ]
Phosphoric acid containing proanthocyanidin enhances bond stability of resin/dentin interface Abstract Proanthocyanidin (PA) is a promising dentin biomodifier due to its ability to stabilize collagen fibrils against degradation by matrix metalloproteinases (MMPs); however, the most effective protocol to incorporate PA into bonding procedures is still unclear. This study evaluated the effect of dentin biomodification with a PA acid etchant on MMP activity, adhesive interface morphology and resin-dentin microtensile bond strength. Sound extracted human molars were flattened to expose dentin and acid-etched for 15 s according to the groups: EXP - experimental phosphoric acid; EXP+PA - experimental phosphoric acid 10% PA; TE - total-etching system; SE - self-etching system. Samples were restored with composite resin and stored in distilled water (37ºC). MMP activity and interface morphology were analyzed after 24 h by in situ zymography (n=6) and scanning electron microscopy (n=3), respectively. The resin-dentin microtensile bond strength (μTBS) was evaluated after 24 h and 6 months storage (n=6). Significantly higher MMP activity was detected in etched dentin compared with untreated dentin (p<0.05), but no difference among acid groups was found. Resin tags and microtags, indicative of proper adhesive system penetration in dentinal tubules and microtubules, were observed along the hybrid layer in all groups. There was no difference in μTBS between 24 h and 6 months for EXP+PA; moreover, it showed higher long-term μTBS compared with TE and EXP (p<0.05). The results suggest that 15 s of biomodification was not sufficient to significantly reduce MMP activity; nonetheless, EXP+PA was still able to improve resin-dentin bond stability compared with total- and self-etching commercial systems. Introduction A major challenge in restorative dentistry is to overcome deficiencies of current adhesive systems and improve the clinical longevity of resin composite restorations (1). Once exposed to the oral environment, the resin-dentin interface slowly undergoes hydrolysis of its hydrophilic resinous components by esterases, and degeneration of exposed collagen fibrils by matrix metalloproteinases (MMPs) and cathepsins, such as MMP-2, -8 and -9 (2). MMPs are enzymes present in the organic portion of dentin capable of cleaving collagen fibrils that are not protected by hydroxyapatite or resin. During secretion of dentin matrix, MMPs are produced by odontoblasts and remain trapped within the calcified matrix in an inactive form until caries, erosion and/or acid etching for adhesive restoration releases and activates them (1,2). Within time, this progressive interface degradation can lead to interfacial nanoleakage, loss of adhesive bond strength and compromised longevity of resin composite restorations (1 3). Different strategies can be used for bonding to dentin regarding the acidity of the etchants. Total-etching adhesive systems involve the application of a strong phosphoric acid (35-37%) that completely removes the smear layer and exposes the collagen network in a depth of 3-7 µm (3). The primer and adhesive applied in sequence infiltrate the interfibrillar spaces and form a hybrid layer that protects the collagen fibrils from hydrolysis (1,2). However, the demineralized dentin is not fully infiltrated by resin monomers, and fibrils at the bottom of the hybrid layer remain exposed and susceptible to degradation by MMPs released and activated by the etching procedures (4). 
On the other hand, self-etching systems rely on acidic monomers of the primer to etch the dentin surface and partially remove the smear layer, incorporating it into the hybrid layer (2,3). Since dentin demineralization and resin monomer infiltration occur at the same time, in theory this protocol does not create a region of unprotected fibrils (2). Nonetheless, it has been demonstrated that small areas of incomplete monomer infiltration can be observed even when using self-etching systems, and that further degradation of the hydrophilic resin components also leads to areas of exposed collagen susceptible to hydrolysis (2,3). In this context, several studies have investigated the use of collagen cross-linker agents (e.g. glutaraldehyde, carbodiimide, riboflavin, chlorhexidine and proanthocyanidin) as dentin biomodifiers (5,6). Cross-linkers are substances capable of forming new links between the collagen chains, known as cross-links, which enhance the mechanical properties of the collagen fibrils against proteolytic degradation (6). Other studies have also demonstrated that cross-linkers are able to minimize dentin MMP activity (6,7). Among those, proanthocyanidin (PA) is a natural substance that can be easily obtained from plant sources such as grape seeds, cocoa seeds, cinnamon and green tea extract (7). Compared with glutaraldehyde, riboflavin and chlorhexidine, PA has been demonstrated to be the most effective in reducing MMP activity (6,7). Moreover, PA does not affect cell viability and proliferation, which makes its use in dentistry safe and advantageous over certain synthetic cross-linkers that present high cytotoxicity, such as glutaraldehyde (6). Therefore, PA is a promising dentin biomodifier to increase the longevity of adhesive restorations.
A fundamental aspect of the viability of using cross-linker agents is a clinically short biomodification time (8). Several studies have used considerably long biomodification protocols (9-12) or added them as an extra step in resin composite restorations (4,13). In this context, the most effective and time-saving protocol for incorporating PA into bonding procedures is still unclear. Therefore, the aim of the present study was to evaluate the effect of dentin biomodification for 15 s with an experimental phosphoric acid containing 10% PA on MMP activity, adhesive interface morphology and resin-dentin microtensile bond strength. The null hypothesis was that the application of an experimental etchant containing PA would not influence the analyzed parameters when compared to an experimental etchant without PA and commercial total- and self-etching systems.

Materials and methods
The present study was approved by the Ethics Committee of the School of Dentistry of Ribeirão Preto at University of São Paulo (CAAE 68497217.0.0000.5419).

Experimental solutions
Phosphoric acid (85 wt.% in H2O), obtained from Sigma-Aldrich (Milwaukee, WI, USA), was diluted in a 50/50% water-propylene glycol solution to obtain a 35% experimental phosphoric acid. A thickening agent was added to increase the solution viscosity and make it more resistant to flow, avoiding the etching of undesirable spots. In order to produce an experimental phosphoric acid containing PA, grape seed extract (GSE; Caieiras, SP, Brazil) was dissolved in the 35% experimental acid solution prior to the thickening step, to a final concentration of 10% w/v of PA. The resulting mixture underwent magnetic stirring for 24 h at room temperature until complete dissolution of the GSE, followed by addition of the thickening agent. The final solutions were stored at 4 ºC.

Specimen preparation and bonding procedures
Twenty-four sound extracted human molars had their occlusal enamel and roots removed perpendicularly to their long axis using a diamond disc attached to a cutting machine (Minitrom: Struers A/S, Copenhagen, Denmark) at 350 RPM under constant water cooling. The roots were sectioned 2 mm below the cementoenamel junction and all pulp tissue remnants were removed. The pulp chamber was restored using an adhesive system (Adper Scotchbond MP, 3M ESPE, St. Paul, MN, USA) and Filtek Z350 (3M ESPE) resin composite (Box 1). Subsequently, the occlusal side was ground flat using #280-#600 grit silicon carbide papers under running water (Politriz Arotec APL-4, Arotec S/A Ind. e Comércio, São Paulo, SP, Brazil) in order to fully expose the coronal dentin and standardize the smear layer. The teeth were then randomly treated according to the following groups (n=6): EXP - 35% experimental phosphoric acid; EXP+PA - 35% experimental phosphoric acid containing 10% PA; TE - total-etching system (Ultra-Etch, Indaiatuba, SP, Brazil) and SE - self-etching system (Clearfil SE Bond, Noritake Dental Inc., Osaka, Japan). TE was included as a control group since the experimental acids are also classified as total-etching systems. SE was used as a second control because it is the current gold standard for adhesion to dentin (2). For EXP+PA, EXP and TE, the acid treatments were applied to dentin for 15 s; the samples were then rinsed and dried, and the adhesive system was applied following the manufacturer's instructions. SE samples were treated using the acid primer and bond components of the Clearfil SE Bond (Noritake Dental Inc.) self-etching system. The detailed description of materials, manufacturers and application protocols is presented in Table 1.
All groups were restored using 1 mm layers of resin composite (Filtek Z350, 3M ESPE), each light-cured for 20 s using a LED light-curing unit (DB 685, Dabi Atlante, Ribeirão Preto, São Paulo, Brazil) with an irradiance of 700 mW/cm². The restored specimens were then longitudinally sectioned in the mesial-distal and buccal-lingual directions to produce 0.8 x 0.8 mm beams using a diamond saw under constant water cooling in a cutting machine (Minitrom: Struers A/S). The beams were stored at 37 ºC in distilled water, which was replaced weekly with fresh amounts, and used to perform in situ zymography and microtensile bond tests. Box 1. Composition, manufacturer and application mode of materials used in the study.

Microtensile Bond Test
The microtensile bond test was performed after 24 h and 6 months of storage using 5 beams per tooth (n=6). The microtensile bond strength values for all beams from the same tooth were averaged and each tooth was considered as the statistical unit. The beams were fixed to a jig using cyanoacrylate glue (Superbonder, Gel-Henkel Loctite Adesivos Ltda., São Paulo, SP, Brazil), placed in a universal testing machine (DL 2000, EMIC Equipamentos e Sistemas de Ensaio Ltda., São José dos Pinhais, PR, Brazil) and subjected to tensile forces at a crosshead speed of 0.5 mm/min, with a 500 load cell, until debonding. Microtensile bond strength values (MPa) were calculated by dividing the peak force (N) by the bonded area (mm²) measured using a digital caliper. The broken beams were examined under confocal laser scanning microscopy (OLS 4000, Carl Zeiss, Oberkochen, Germany) at 10x magnification to determine the failure mode. Fractures were classified as adhesive (failure at the resin-dentin interface), cohesive (failure within the dentin or resin portion) or mixed (adhesive and cohesive). The percentages of the fracture modes were recorded for all groups at the two experimental periods.

In situ zymography
The MMP activity at the adhesive interface was determined by in situ zymography, performed after 24 h of storage. A control group consisting of beams that did not receive acid or adhesive treatment before restoration was used to measure the basal fluorescence activity of dentin. Beams (n=6) were immersed for 15 min (3x) in a 1.0 mg/mL sodium borohydride solution (Sigma Corporation, Tokyo, Japan) and rinsed with phosphate-buffered saline (PBS). Subsequently, a fluorescein-conjugated gelatin substrate was used to incubate the specimens for 3 h at 37 °C in a humidified dark chamber. In order to verify whether the observed proteolytic activity was due to MMP enzymes, additional slices were preincubated in 20 mM ethylenediaminetetraacetic acid (EDTA, Sigma Corporation), a strong MMP inhibitor, for 1 h and then immersed in the gelatinous substrate. The hydrolysis of the fluorescein-conjugated gelatin substrate, indicative of MMP activity, was evaluated under a fluorescence microscope at 100x magnification (10x objective lens) using the Alexa Fluor 43HE filter (FT 570, BP 550/25, BP 605/70, Carl Zeiss). The fluorescence emission was analyzed by densitometry using ImageJ software (National Institutes of Health, Bethesda, MD, USA) and expressed as arbitrary units of fluorescence per mm².

Scanning Electron Microscopy (SEM)
Three sound extracted human molars had their roots cut off using a low-speed saw (IsoMet, Buehler Ltd., Evanston, IL, USA). The occlusal side was ground flat using #180 grit silicon carbide paper under running water to expose the dentin surface.
On each specimen, 4 cavities (1.5 mm depth, 4 mm buccolingual length and 1.5 mm mesiodistal width) were produced in dentin using a cylindrical diamond bur (Shofu Inc., Kyoto, Japan) in a high-speed handpiece under water cooling. The samples were then washed ultrasonically in distilled water (15 min) and each cavity was treated and restored according to the 4 abovementioned groups. Using a water-cooled diamond saw in a cutting machine (Minitrom, Struers A/S, Copenhagen, Denmark), the specimens were longitudinally cut in the mesiodistal direction to produce two slices each (one buccal and one lingual) with the hybrid layer, dentin and resin composite areas exposed. The resulting slices were individually fixed inside a stainless-steel ring with the hybrid layer facing up, and self-polymerizing acrylic resin (Epofix Harden, Struers A/S) was used to embed the samples without coating the surface. After the resin polymerization, the specimens were polished under water cooling using #600-#2000 grit silicon carbide papers (1 min each) followed by felt discs with aluminum oxide pastes (1.0 and 0.5 µm) for 1 min each. In order to enable clear visualization of the hybrid layer and the resin tags, demineralization and deproteinization treatments were performed. The specimens were immersed in 85% phosphoric acid (Sigma-Aldrich, Milwaukee, WI, USA) for 3 min and then incubated for 10 min in 1% sodium hypochlorite solution (Sigma-Aldrich), rinsed with distilled water, air dried and stored at room temperature for 24 h. Afterwards, the specimens were sputter-coated with a gold-palladium alloy layer of approximately 50 nm thickness at 50 millitorr for 45 s (Desk II Cold Sputter Unit, Denton Vacuum LLC, Moorestown, NJ, USA). Images were obtained from the hybrid layer area at magnifications of 1350x, 3737x and 21600x using a high-resolution SEM (Quanta FEG 650; FEI, Hillsboro, OR, USA).

Statistical analysis
Data were submitted to a normality test (Shapiro-Wilk) and presented a normal distribution. Statistical analysis was performed using one-way ANOVA for the in situ zymography test (test power of 84.7%) and two-way mixed model ANOVA for the microtensile bond strength test (test power calculated for the interaction), with α=0.05. All analyses were performed using the SPSS Software for Windows version 21.0 (SPSS Inc., Chicago, IL, USA) and GraphPad Prism 5.0 (GraphPad Software, Inc., San Diego, CA, USA).

Microtensile Bond Test
Microtensile bond strength means and standard deviations, according to the acid treatment and storage time, are shown in Table 1. At 24 h, EXP+PA showed no statistical difference compared with the commercial groups (TE and SE). No difference between immediate (24 h) and long-term (6 months) microtensile bond strengths was found for EXP+PA and SE, while values decreased significantly (p<0.05) for EXP and TE. After 6 months of storage, EXP+PA showed a statistically higher microtensile bond strength (p<0.05) compared with TE and EXP, and no statistical difference compared with SE. The fracture analysis showed predominantly adhesive failures in all groups at 24 h (~81%) and 6 months (~86%) (Figure 1).

In situ Zymography
The in situ zymography revealed an intense activation of MMPs in all groups after acid etching. MMP activity was significantly greater in etched dentin compared with untreated dentin (p<0.05), but no difference among the acid groups was found (Figure 2; p<0.05 for control compared with the acid groups).
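As an aside before the SEM observations, the following minimal Python sketch (with hypothetical values, not the study's measurements) illustrates how the quantitative endpoints above are typically derived: the microtensile bond strength of each beam is the peak force divided by the bonded cross-sectional area, beams from the same tooth are averaged so the tooth is the statistical unit, and group-level fluorescence data of the kind reported for the zymography can be compared with a one-way ANOVA.

import numpy as np
from scipy import stats

def utbs_mpa(peak_force_n, width_mm, thickness_mm):
    # Microtensile bond strength in MPa = peak force (N) / bonded area (mm²)
    return peak_force_n / (width_mm * thickness_mm)

# Five hypothetical beams from one tooth (nominal 0.8 x 0.8 mm cross section)
beam_forces_n = [18.2, 21.5, 16.9, 19.8, 20.3]
tooth_mean_mpa = np.mean([utbs_mpa(f, 0.8, 0.8) for f in beam_forces_n])

# Hypothetical per-specimen fluorescence values (arbitrary units per mm²)
control = [1.1, 0.9, 1.0, 1.2, 0.8, 1.0]
exp_group = [2.4, 2.8, 2.1, 2.6, 2.9, 2.5]
exp_pa = [2.3, 2.7, 2.2, 2.5, 2.8, 2.4]
f_stat, p_value = stats.f_oneway(control, exp_group, exp_pa)
print(round(tooth_mean_mpa, 1), round(p_value, 4))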
EXP - experimental phosphoric acid; EXP+PA - experimental phosphoric acid with 10% PA; TE - total-etching system; SE - self-etching system; Control - untreated dentin.

Scanning electron microscopy (SEM)
SEM images revealed the presence of a thick and continuous hybrid layer in all groups (Figures 3 and 4). Resin tags with lateral branches (microtags), indicative of proper adhesive system penetration into dentinal tubules and microtubules, were also evident for all acid treatments (Figure 4). Reverse cone-shaped resin tags in close contact with the dentin walls at the opening of the dentinal tubules can be observed for EXP, EXP+PA and TE; gaps between the dentin walls and the resin tags at the tubule openings were found only for SE. At deeper portions of the dentinal tubules, EXP+PA and SE presented smaller caliber tags compared with EXP and TE (Figure 4).
SEM images of the resin-dentin interface at 3737x and 21600x magnification. White arrows point to gaps between resin tags and dentinal tubule openings. EXP - experimental phosphoric acid; EXP+PA - experimental phosphoric acid with 10% PA; TE - total-etching system and SE - self-etching system. CR: resin composite; HL: hybrid layer.

Discussion
The findings of this study indicate that 15 s of application of a PA etchant was not sufficient to significantly prevent the activation of MMPs. These results may be due to the short application time or the subsequent rinsing of the samples, which limits the time for PA to exert its MMP-inhibiting effects. Previous studies used PA biomodification times of 1 min or longer, which are considered clinically unfavorable (6,14-16). Studies that used PA for periods as short as 15 s incorporated it into the adhesive system or used it as a primer, and did not rinse the substrate (8,15). Nonetheless, recent findings indicate that PA possesses a radical scavenger activity that can impair resin monomer polymerization, resulting in lower bond strength and increased adhesive failures (5,17,18). This property makes PA unsuitable for incorporation into the bond or primer, and rinsing is necessary to remove its residues. PA incorporation into a phosphoric acid formula may be the most suitable option for dental purposes, since its collagen cross-linker property remains active even in acid environments (19), no extra step is added to the bonding protocol, and PA residues are rinsed away before the application of the primer and/or bond. The adhesive interface formed with EXP+PA was similar to the ones found in the other acid groups. SEM images showed the formation of a continuous, uniform hybrid layer with resin tags and microtags in all groups. At deeper portions of the dentinal tubules, however, the formed tags presented a smaller caliber, which can be attributed to the PA cross-linker activity, since the aggregation and overlap of collagen fibrils may result in reduced permeability to the adhesive system (20,21). In addition, PA biomodification has a hydrophobic effect that may also compromise the infiltration of the hydrophilic resin monomers of the primer (6,21). It has been demonstrated that applying PA to collagen films leads to an increase of up to 15° in the surface contact angle, and a consequent decrease in wettability compared to pure collagen films (21). Nonetheless, these characteristics did not result in impaired resin-dentin bond strength after 24 h or 6 months of storage. Studies suggest that the tag features that contribute most to bonding efficacy may be their shape and attachment to the dentin walls (22).
In fact, in the present study the SEM images revealed reverse cone-shaped resin tags in close contact with the dentin walls at the first section (opening) of the dentinal tubules for EXP+PA, EXP and TE, and this was correlated with higher microtensile bond strengths at 24 h. This close adaptation at the tubule openings may have contributed to these results, even when deeper parts of the resin tags presented a smaller caliber. The findings also indicate that EXP+PA was able to keep the bonding stable for 6 months and presented higher long-term bond strength compared with the experimental and commercial total-etching groups (EXP and TE); therefore, the null hypothesis of the study was rejected. It is interesting to note that bond strength values for SE also remained stable and were not different from EXP+PA after 6 months. MMPs are released and activated by both total- and self-etching systems and can induce progressive degradation of the resin-dentin interface over time (23). Nevertheless, since self-etching techniques produce smaller areas of exposed collagen fibrils, a longer period of analysis could be necessary to observe significant interface degradation and reduction in bond strength (2,3). All groups exhibited a predominance of adhesive failures in the fracture analysis. This is probably due to the reduced size of the beams subjected to microtensile testing, since low forces are required to fracture the adhesive interface of small samples. On the other hand, large specimens may present a higher number of intrinsic defects and, as a consequence, can exhibit premature cohesive failures in dentin or resin even under low tensile forces, which prevents the proper assessment of the adhesive bond strength (24). Regarding the mechanical enhancement of the collagen matrix, it has been demonstrated that application times as short as 10 s can improve the collagen's resistance toward enzymatic breakdown (8). This cross-linker activity probably promoted the increased bond strength and stability found for EXP+PA, even though significant MMP inhibition was not achieved. Additionally, recent findings suggest that physicochemical interactions of PA with the collagen matrix may also play a major role in increasing adhesive strength (25). PA has bioadhesive properties due to the presence of catechol moieties, which mediate the binding of collagen fibrils with the hydrophobic methacrylate adhesives. This property could contribute to creating a tight bond between the dentin matrix and the adhesive system that promotes the sealing of the resin-dentin interface (25). However, further studies are still needed to determine whether the bioadhesive properties of PA and its cross-linking effects are more relevant than MMP inhibition to the stabilization of the adhesive interface. In conclusion, the findings indicate that 15 s of application of a phosphoric acid containing 10% PA was not sufficient to inactivate the MMPs at the resin-dentin interface. Nonetheless, dentin biomodification with PA, a natural biocompatible cross-linker incorporated into an etchant formula, was able to preserve resin-dentin bond stability and enhance the long-term bond strength compared with total- and self-etching commercial systems.
5,093.2
2022-08-01T00:00:00.000
[ "Medicine", "Materials Science" ]
Evaluation of Co-administration of Roselle Water Extract (Hibiscus sabdariffa L.) and Aspirin for Antiplatelet Therapy in Male Sprague-Dawley Rats

ABSTRACT
Background: Various herbal side effects caused by interactions between herbs and drugs have been reported and reviewed. For instance, roselle water extract and aspirin have similar functions in maintaining cardiovascular function. Objective: This study aimed to investigate the effect of roselle water extract on aspirin pharmacodynamics, observed through the parameters of bleeding time, survival rate and the number of microthrombi in rats with induced thromboembolism. Materials and Methods: Male Sprague-Dawley rats were divided into two different experimental groups for the bleeding time and survival rate assays. Roselle water extract was given at three doses (12.5 mg, 25 mg and 50 mg/200 g BW) for seven days, followed by aspirin on the last day of treatment. Results: The results showed that the co-administration of roselle water extract and aspirin did not cause significant changes in bleeding time, in the number of animals that survived or in the number of microthrombi. Conclusion: Therefore, roselle water extract does not affect the pharmacodynamics of aspirin.

INTRODUCTION
The use of numerous herbs has been extensively studied with respect to their important aspects. Various herbal side effects caused by herb-drug interactions have been reported and reviewed. Interactions between herbs and drugs have a higher potential than interactions between conventional drugs because of the number of active components contained in herbs, whereas conventional or synthetic drugs generally contain only a single chemical entity. 1 Some cases of herb-drug interactions include the ginkgo-thiazide diuretic interaction, which causes an increase in blood pressure; the ginkgo-trazodone interaction, which causes coma; the ginseng-phenelzine interaction, which induces mania; and the ginkgo-aspirin interaction, which causes hyphema. 2 Based on the data shown by Tsai et al. (2012), as many as 90 incidents caused by interactions between herbs and conventional drugs in the treatment of the cardiovascular system were identified. Aspirin is one of the drugs used for the maintenance of cardiovascular function and one of the drugs often used in conjunction with herbs. In a literature study, 36 incidents caused by interactions between herbs and aspirin were identified. 3 Low-dose aspirin has antiplatelet activity. Aspirin is a relatively selective inhibitor of the constitutive cyclooxygenase isoform, cyclooxygenase-1. The mechanism of action of aspirin in inhibiting platelet function is through acetylation of the platelet cyclooxygenase enzyme at its essential amino acid serine 529. This reaction prevents the substrate (arachidonic acid) from accessing the catalytic site of the enzyme at the amino acid tyrosine 385, thereby resulting in an irreversible inhibition of thromboxane formation. 4 Aspirin is the gold-standard antiplatelet agent.
It has shown to be effective as a preventive therapy drug in patients who are at risk of cardiovascular disease (primary prevention). It is also used as a therapy drug in patients who have had one or more cardiovascular diseases (secondary prevention). 5 According to Liperoti et al. (2017), the use of herbs tends to be more dominant than the use of conventional medicines in the management of cardiovascular disease. One herb that is believed to have efficacy in maintaining cardiovascular function is roselle water extract (Hibiscus sabdariffa L.). 6 Roselle water extract has antioxidant 7,8 , antihypertensive 9 , anticoagulant and antiplatelet activities. 10 Roselle water extract can inhibit platelet aggregation in vitro, as indicated by the inhibitory activity of collagen. In addition to thrombin that possesses an inhibitory activity, adenosine diphosphate (ADP) is produced by rosella water extract in relation to anticoagulants. The inhibition of collagen, ADP and thrombin influences haemostasis. 10 Furthermore, roselle water extract has been used in the community for years and is utilised in various preparations, such as tea, flavoured drinks and also food colouring. Considering that roselle water extract has a variety of uses, researchers used it together with drugs, thereby having the potential to cause interaction. The presence of drug interactions can cause severity at varying levels, ranging from conditions that can still be tolerated to conditions that can cause death. 2 Therefore, researchers intend to investigate the effect of rosella water extract on the pharmacodynamic effects of aspirin, so that the effects of possible interactions can be identified. Materials and animals Roselle water extract (Hibiscus sabdariffa L.) was obtained from the Research Institute for Medicinal and Aromatic Plants (Bogor, Indonesia), with the test certificate number 369/T/LAB/V/18. Aspirin, 0.9% saline, epinephrine and collagen were acquired from the same company (Sigma Aldrich, China). Meanwhile, neutral buffer formalin (NBF) 10% (Indogen, Jakarta, Indonesia) and carboxymethyl cellulose (Daiichi, Japan) were also utilised in this study. Phytochemical screening of roselle water extract The presence of metabolite in the plant extracts was identified by phytochemical screening. The screening was performed using standard procedures described by Sakti, et al. (2019). 11 The Dragendorff method was used for alkaloid screening with quninine used as a positive control. Briefly, alkaloid test was performed by dissolving 100 mg extract in 9 ml of aquadest in test tube, then added 1 ml of 10% HCl solution. The mixture was heated at 70°C for 1 min. After that, to the test tube was added 1 ml of Dragendorff solution. The shinoda test was used for flavonoids with cathecin as a positive control. Briefly, 100 mg sample was dissolved in 10 ml of 96% ethanol. Then, 5 ml sample solution in the test tube was added 4 drops of concentrated HCl and 100 mg of magnesium powder. Pink color showed the presence of flavonoid. Furthermore, HCl 2N was used for saponin screening, 3% FeCl 3 for phenolic, NaCl-gelatin for tannins, Lieberman Bourchard for terpenoids, and Molisch for glycosides. The phytochemical screening showed the presence of alkaloids, flavonoids, saponins, phenolic, tannins, terpenoids, and glycosides in the extract. 
Total phenolic content (TPC) determination of roselle water extract Total Phenolic Content of roselle water extract was determined using Folin-Ciocalteu reagent following standard procedure of Sakti, et al. (2019). 11 Briefly, a volume of 25 µl extract solution was mixed with 100 µl Folin-Ciocalteu reagent (diluted 1 : 4 (v/v) with ddH2O) and shaken on 96-well microplate, then left to stand for 4 min at room temperature. Into the well added 75 µl Na2CO3 solution (w/v), then the mixture shaken for 60 s. Subsequently the reaction mixture was incubated for 2 h at room temperature. The absorbance was read at 756 nm using a microplate reader (VersaMax Microplate Reader, USA). Total phenolic content of roselle water extract was found at 1.396%. High performance liquid chromatography (HPLC) analysis of roselle water extract Roselle water extract was analysed by HPLC using the Agilent 1200 series system, that is, the HPLC-0053 system (Agilent Technologies, Santa Clara, CA), equipped with diode-array detectors (Agilent, Serial No.: DE 60555816) and C18 columns (Inertsil ODS-3, 5.0 µm). The extract contains chlorogenic acid and gallic acid, which has an antiplatelet activity. [12][13] Chlorogenic acid and gallic acid chromatograms will be published elsewhere. Moreover, the extract was analysed using a validated HPLC method described in our previous studies. 14 Animal experimental design Male Sprague-Dawley rats with a body weight (BW) of 100-200 grams were purchased from the Bogor Agricultural Institute. The use of animals in this study was ethically approved by the ethics committee of a medical faculty in Universitas Indonesia, with the approval number 0646/UN2.F1/ETIK/2018. Before experiment, the acclimatization was carried out for two weeks. All animal had a normal diet and free access to a drinking water. The animals were placed in a well-ventilated cage with 12 hours light-dark periods and with a constant physical ambient temperature (25 ± 5°C). The animals were observed for the activity and weighed every day. Only healthy animals were included in this study. A total of 36 rats were divided into six groups with six rats individually for tail bleeding assay. Each group was treated differently as follows: normal control (0.5% CMC, orally for 7 d) was labeled as vehicle, aspirin (2 mg/200 g BW, orally for 7 d), roselle (roselle water extract at 50 mg/200 g BW, orally for 7 d), RD1A (roselle water extract at 12.5 mg/200 g BW orally for 7 d + aspirin at 2 mg/200 g BW orally on 7 th day), RD2A (roselle water extract at 25 mg/200 g BW orally for 7 d + aspirin at 2 mg/200 g BW orally on 7 th day) and RD3A (roselle water extract at 50 mg/200 g BW orally for 7 d + aspirin at 2 mg/200 g BW orally on 7 th day). Another total of 42 rats were divided into seven groups with six rats individually for survival rate assay by pulmonary thromboembolism model. 
Each group was treated differently as follows: normal control (0.5% CMC orally for 7 d; normal saline injection), vehicle (0.5% CMC orally for 7 d; collagen/epinephrine injection), aspirin (2 mg/200 g BW orally for 7 d; collagen/epinephrine injection), roselle (roselle water extract at 50 mg/200 g BW orally for 7 d; collagen/epinephrine injection), RD1A (roselle water extract at 12.5 mg/200 g BW orally for 7 d + aspirin at 2 mg/200 g BW orally on 7 th day; collagen/epinephrine injection), RD2A (roselle water extract at 25 mg/200 g BW orally for 7 d + aspirin at 2 mg/200 g BW orally on 7 th day; collagen/epinephrine injection) and RD3A (roselle water extract at 50 mg/200 g BW orally for 7 d + aspirin at 2 mg/200 g BW orally on 7 th day; collagen/epinephrine injection). Pharmacodynamic interactions evaluation of roselle water extract and aspirin The interaction evaluation was conducted through bleeding time assay and survival rate assay by pulmonary thromboembolism model as described by Saputri et al (2017). 15 Such treatment was administered orally for 7 days. In the co-administration groups (RD1A, RD2A and RD3A), aspirin was administered 30 minutes after the administration of roselle water extract on day 7. Each dose of roselle water extract was suspended in 0.5% CMC to obtain the suspension of extract. While aspirin was dissolved in water. Tail bleeding time assay Bleeding time of rats was tested by injuring their tails. Prior to tail injury, each rat was anaesthetised using xylazine (10 mg/kg BW) and ketamine (100 mg/kg BW). The rat's tail was injured by incising the tip of the rat tail for 2 cm using surgical scissors. The tail was placed in a falcon tube containing 0.9% saline. The blood coming out of the rat's tail was seen in the falcon tube with a certain time interval. From the time the blood began to flow until the time the blood stopped flowing is regarded as the bleeding time. 16 Survival rate assay by pulmonary thromboembolism model The survival rate test was performed on rats by injecting an intravenous induction solution through the rat's tail. 17 The induction solution used was collagen at 0.21 mg/200 g BW and epinephrine at 0.07 mg/200 g BW in 0.9% saline solution to induce pulmonary thromboembolism. While the normal group was given an injection of normal saline. Intravenous injection was given 1 hour after the administration of last treatment in each rat. The lethal and paralysis effects are observed for 15 minutes after induction. Calculations were performed on rats that died as well as those that survived due to the occurrence of thromboembolism induced by collagen and epinephrine. Rats survived if they could resume to normal activities after 1 hour of collagen and epinephrine administration. The survival rate (%) was calculated based on the formula which described by Sakti et al. (2020). 18 Histopathology of a rat's lung The extracted part of a rat's lung was perfused with phosphate-buffered saline and 10% NBF. The parts of the lung were removed and stored in a container with 10% NBF. In the histology process, the isolated part of the lung was washed gradually with alcohol and xylen. Next, paraffinisation was performed. The part of the lung that was initialised was sliced into 5 µm using a rotary microtone, stained with haematoxylin and eosin, cleaned again with xylen and then examined using a fluorescence microscope. 19 Statistical analysis The bleeding time assay data are presented as mean ± SD (Standard Deviation). 
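The exact survival-rate formula from Sakti et al. (2020) is not reproduced in the text; assuming the conventional definition implied by the description above (surviving animals over total animals per group, an assumption on our part), it can be written as

\[ \text{Survival rate (\%)} = \frac{\text{number of rats surviving 1 h after induction}}{\text{total number of rats in the group}} \times 100 \]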
The Levene test was used to determine homogeneity, while the Shapiro-Wilk test was used to analyze the normality of the data. Differences in means were analyzed with ANOVA followed by Tukey and LSD post hoc tests using SPSS v. 22. Values of p < 0.05 were considered statistically significant.

Effects of roselle water extract, aspirin, and co-administration on the bleeding time assay
After a week of administration, bleeding time was significantly increased in all drug treatment groups (aspirin, roselle water extract, RD1A, RD2A and RD3A) compared with the normal control group (p<0.05, Figure 1). The co-administration groups showed a slight, dose-dependent increase in bleeding time, with the highest mean bleeding time shown by the RD3A group. RD1A, RD2A and RD3A did not induce a further prolongation of bleeding time compared with roselle water extract alone. Furthermore, the bleeding time in the co-administration groups did not significantly increase compared with that in the single aspirin group (p>0.05).

Effects of roselle water extract, aspirin, and co-administration on survival rate in the pulmonary thromboembolism model
Seven days of treatment with aspirin or roselle water extract increased the survival rate, with the highest survival percentage demonstrated by aspirin (66.7%). As shown in Table 1, the co-administration groups showed an increasing survival rate after thromboembolism induction as the dose of roselle water extract increased. Microthrombus formation in the lung was used to evaluate the cause of animal death. In the aspirin, roselle and three co-administration groups (RD1A, RD2A and RD3A), microthrombi were formed, but their number was significantly lower than in the vehicle group. Microscopic images of the formed microthrombi are shown in Figure 2. However, in the co-administration groups (RD1A, RD2A and RD3A), the number of microthrombi formed did not decrease significantly compared with that in the single aspirin group. The analysis of the number of microthrombi observed across the five fields of view is presented in Table 2.

DISCUSSION
When aspirin enters the body, it is rapidly hydrolysed into salicylic acid by esterase enzymes found in the intestine. In this metabolic process, species differences have a significant effect on the activity and specificity of the esterases that metabolise aspirin. By testing the bleeding time, we can investigate the ability of a compound to affect the process of haemostasis in general and platelet activation in particular. Haemostasis stops blood flow through cellular and biochemical mechanisms, with the aim of preventing massive blood loss after injury through the blood clotting mechanism. The formation of a blood clot is inseparable from the various factors involved in it, among them thromboxane A2, ADP, collagen and thrombin. 21,22 Based on the study results, aspirin has indeed been shown to act as an antiplatelet drug, exhibiting its ability to influence haemostasis, especially by inhibiting platelet activation and aggregation. This phenomenon can be observed from the significantly longer bleeding time in the group of rats that received aspirin compared with the vehicle group. The ability to increase bleeding time was also seen in the single roselle water extract group and the three co-administration groups (RD1A, RD2A and RD3A).
This activity is due to the inhibition of platelet activation by chlorogenic acid via A2A receptor/adenylate cyclase/cAMP/PKA activation, and consequently, the GPIIb/IIIa receptor activation and platelet secretion suppressed. 23 As mentioned, in the group survival test, thromboembolism was induced by an intravenous injection of collagen-epinephrine solution. However, no significant differences were found between the groups in providing protection against thromboembolism events. The possible reason for such insignificance is that the initiation of platelet activation and the occurrence of platelet aggregation through exposure to external collagen are stronger than the ability of single roselle water extract or co-administration with aspirin in inhibiting thromboxane A2, ADP and collagen adhesion to platelets. In the end, aspirin and roselle water extract failed to significantly prevent thrombus formation; the formed thrombus then circulated and clogged the arteries in the lungs, thereby causing death. The histopathological results of rat lungs in the survival rate test group showed that thromboembolism induction using the collagenepinephrine solution was successful. The successful thromboembolism was characterised by microscopic identification of microthrombus in the lungs of rats. In the aspirin, roselle and three co-administration groups (RD1A, RD2A and RD3A), a number of microthrombus were formed and showed a significant decrease than that in the vehicle group. Thus, these compounds have a tendency to prevent thrombus formation induced by collagen-epinephrine solution. On the basis of the microscopic image above, the intravascular exposure to collagen-epinephrine solution can cause microthrombus formation in the lungs of rats. The microthrombus formation can be seen in the pictures (A, B, C, D, E and F). Microthrombus was formed because of the presence of collagen and epinephrine, which initiate the activation of platelets. Activated platelets circulated in blood vessels and caused blockages in the pulmonary arteries of the lungs, ultimately causing occlusion of blood vessels. In addition to the formation of microthrombus, another condition that also caused the death of rats in this study is the occurrence of oedema in their lungs. One of the causes of pulmonary oedema is the formation of pulmonary embolism. The development of oedema secondary to pulmonary embolism is possible due to excessive perfusion. Extensive damage to blood vessels in the lungs due to embolism can cause excessive perfusion of the capillaries and generate 'dependent flow' or hydrostatic pulmonary oedema in this area. With increased flow and intracapillary pressure, the fluid will undergo extravasation. However, the rate of fluid loss is likely to increase with pathological increase in capillary permeability caused by pressure failure. Increased total lung volume causes an increase in pressure failure in the pulmonary capillaries. 24 CONCLUSION Rosella water extract co-administered with aspirin in three dose variations did not show significant changes in the increase in bleeding time, in the survival rate of rats with induced pulmonary thromboembolism and in the number of microthrombus formed. Therefore, roselle water extract does not affect the pharmacodynamics of aspirin.
4,319
2020-03-04T00:00:00.000
[ "Biology" ]
URI alleviates tyrosine kinase inhibitors-induced ferroptosis by reprogramming lipid metabolism in p53 wild-type liver cancers
The clinical benefit of tyrosine kinase inhibitors (TKIs)-based systemic therapy for advanced hepatocellular carcinoma (HCC) is limited due to drug resistance. Here, we uncover that lipid metabolism reprogramming mediated by unconventional prefoldin RPB5 interactor (URI) endows HCC with resistance to TKIs-induced ferroptosis. Mechanistically, URI directly interacts with TRIM28 and promotes p53 ubiquitination and degradation in a TRIM28-MDM2 dependent manner. Importantly, p53 binds to the promoter of stearoyl-CoA desaturase 1 (SCD1) and represses its transcription. High expression of URI is correlated with high levels of SCD1, and their synergistic expression predicts poor prognosis and TKIs resistance in HCC. The combination of the SCD1 inhibitor aramchol and the deuterated sorafenib derivative donafenib displays promising anti-tumor effects in p53 wild-type HCC patient-derived organoids and xenografted tumors. This combination therapy has potential clinical benefits for patients with advanced HCC who have wild-type p53 and high levels of URI/SCD1.
• The use of liver organoids from HCC patients could be a key experiment for reinforcing the results of the authors. Authors should collect HCC samples from patients, generate liver organoids and treat them with or without inhibitors of SCD1, shURI and sorafenib to confirm their results.
• Identification of a p53 target signature by RNA-seq data analysis from previously published work in URI overexpressing mice (Accession number GSE48654) could strengthen the results. Additionally, other previous models for HCC could be checked in this regard.
Minor points
• In general, font size should be increased in all figures
• There are some typo mistakes along the manuscript that should be corrected, as in line 78 "metabolism"
• Authors could check the expression of SCD1 in figure 5a
• Authors should include the control of flag-SCD1 alone in figure 3h
• Authors could strengthen their claims by checking the altered pathways in publicly available data from URI overexpressing mice
• p values are missing in the GO table in Extended Data Figure 3g. Authors added them.
• Number of patients with URI low SCD1 low is missing in Extended Data figure 10b, which in fact is 25.
• It is not clear if the complete list of URI binding candidate proteins identified by LC-MS is provided by the authors
• Authors should include n points in Figure 1f
• It is unclear what the ubiquitinated SCD1 state is in Extended Data Figure 5b. Could the authors have swapped the ubiquitinated SCD1?
• Legend of Figure 1a should be modified, since it says "the triangle size indicates" but there are no triangles in the figure .• Specific information should be included in Figure 7 h and g such as the time scale and months Reviewer #2: Remarks to the Author: Tyrosine kinase inhibitors (TKIs) represent a type of promising drugs in hepatocellular carcinoma (HCC) treatment, while the resistance to them is a vital bottleneck to overcome.In this paper by Ding et al, authors aimed to study the role of unconventional prefoldin RPB5 interactor (URI) in HCC.They found that URI could enhance the resistance to TKIs in HCC by reprogramming the SCD1-related lipid metabolism.This endows HCC more resistant to TKIs-induced ferroptosis.Then, authors discovered that URI-mediated SCD1 upregulation is p53 dependent.They also proved that SCD1 is a p53 repressive target gene.Next, they revealed that URI could bind TRIM28 to promote the ubiquitination and degradation of p53.Finally, they showed that combination of SCD1 inhibitor with TKI has synergic effect in HCC treatment.Although this study provides some interesting findings, several critical issues need to be addressed. Major ones: (1) SCD1 as a p53 target has been reported before by several other papers.TRIM28-MDM2-p53 axis is not new, too.In addition, combination therapy by using SCD1 inhibitor and TKI in cancer is also not novel.These facts may weaken the novelty of this research. (2) About the "URI reprograms SCD1-associated lipid metabolism" section, I'm curious why URI knockdown only increase the level of saturated fatty acids but not PUFA.The level of PL-PUFA in Fig 2a and b seem to be downregulated.How about the level of peroxidized PL-PUFA?This is the direct evidence to demonstrate the effect of URI is through ferroptosis.If PUFA level can't be changed by URI, how do the author explain the enhanced lipid peroxidation when inhibiting URI or SCD1? (3) In fig 3c-e, authors need more evidence to prove that the effects of URI and inhibitors are through affecting ferroptosis.Ferroptosis inhibitors should be used to reverse these effects.In addition, oxidized PL-PUFA level should be determined.(4) In fig 7 and related Extended figures, all the data didn't consider the p53 status (null or mutation) in the patient samples.This may undermine the conclusion of this paper.Minor ones: (1) Why did the authors choose URI to investigate?The rationale should be provided.Can sorafenib treatment induce URI expression?It has been reported that sorafenib could upregulate p53 level, which is opposite to the effect of URI.Therefore, how "high" should the level of URI be that could reverse the effect of sorafenib on p53 activation?What's the percentage of HCC patients bearing WT p53 and high URI? (2) The domains in URI and TRIM28 responsible for the binding between these two proteins need to be determined.(13) In this sentence "URI depletion significantly increased the sensitivity of JHH1 and HepG2 cells to sorafenib (Extended Data Fig. 
2c), with a decreased in IC50", "decreased" should be "decrease".Or you can delete the word "in".( 14) In this sentence "Lipid metabolic reprogram is involved in cancer drug resistance", "reprogram" should be "reprograming".(15) In this sentence "we found that URI high-expression in HCC is associated with cancer malignant and poor survival of patients", "malignant" should be "malignancy".( 16) In this sentence "This may helpful to keep p53 levels low as has been detected in cancer cells", "may" should be "may be".(17) In this sentence "URI-p53-SCD1 axis mediates resistance of TKIs and may explain why p53wild type HCC still showed intrinsic resistant to TKIs", "resistant" should be "resistance". Reviewer #3: Remarks to the Author: This is an interesting manuscript suggesting that sorafenib and other TKIs may be more effective in HCC when cells are sensitized by SCD1 inhibitors, the combination of which causes ferroptosis.One strength of the manuscript is that it spans the gamut of experiments done in cell culture, xenografts and human patient samples.The link between levels of URI and SCD1 are convincing. Specific comments: There are typos throughout the manuscript.Please fix all of these. Results section 1 The full list of genes altered in control versus shURI HepG2 cells should be provided in a table. Please state how many genes are related to ferroptosis and what percentage of these are affected by URI KD. Please reference and describe the SCD1 promoter used in the luciferase construct. Extended data 5, there are two panels labeled "b", but no panel "c". Figure 5.There is no panel labeled "n". Figure 6h.The p53 IHC is not possible to interpret and should be improved.Figure 6e.There is a large effect of Donafenib alone on these tumors.In light of this fact, please explain/justify how this could be a good model to study synergy between Donafenib and Aramchol. Figure 7a It is very difficult to see the IHC.Please make brighter.Figure 7 d,h,f and g.The color scheme shown for the interpretation of the graphs are different than the colors on the graphs and it is difficult to interpret the data because of this.Text-Please define para-tumor. Given the hypothesis that URI interacts with TRIM28 to promote p53 ubiquitination and degradation involving MDM2, it would be important to determine how the levels of URI and SCD1 correlate with levels of p53 in patient samples shown in Figure 7. 
Reviewer #1 -HCC metabolism -(Remarks to the Author): By using RNA-seq and lipidomic analysis in different cell lines, human samples and xenograft models, the manuscript by Ding et al aims to explore mechanisms behind resistance to tyrosine kinase inhibitors (TKIs) for treatment of advanced hepatocellular carcinoma (HCC).Authors identify SCD1 a key enzyme involved in lipid metabolism and which inhibition could be combined with TKIs to efficiently treat HCC patients with wild-type p53.To reach this conclusion, authors demonstrated that an increase of the prefoldin URI is linked to TKIs-induced ferroptosis.Authors demonstrate that URI interacts with TRIM28, leading to proteasome-mediated p53 degradation.The lack of p53 prevents the transcriptional repression of Scd1.Accordingly, SCD1 decreases TKIsinduced ferroptosis and enhances resistance of cancer cell to TKIs by increasing the ratio of monounsaturated fatty acids (MUFAs).Inhibition of URI-p53-SCD1 axis will enhance ferroptosis in patients provoking higher efficiency of TKIs in patients with HCC.This is a very impressive and well conducted study.The work is truly extensive and solid with clear data and strong evidences.The paper is linear in the experimental procedure.Yet, there are still some points that authors have to consider prior publication in Nature communications. Major points (1) Since the mutation of p53 can be recurrent in some HCC patients, authors should correlate the incidence of those patients in order to show resistance of TKIs.Indeed, according to their results, authors should show data with less resistance to TKIs.Furthermore, how is the p53 mutation status affecting the survival plot in URI or SCD1 patients?Can authors provide this plot? Response: We thank for the reviewer's vital advice.Similar comment was also given by reviewer#2 and the results were very important to strength our conclusion.Since p53 mutation is common in tumors including HCC and the kinds of p53 mutation are complicated, we thought that only the DNA sequencing data could demonstrate the p53 status in tumors.The cohorts we enrolled in the former manuscripts were lack of the genomic sequencing data, thus the p53 status could not be determined.To address this question, we first employed a new HCC cohort enrolled by Gao et.al (Cell, PMID: 31585088), which we named "Fudan_HCC_cohort".The results were shown in Fig. 8 and Extended Data Fig. 10.By analysis the WES and transcriptome data of this cohort, we found that in HCC patients with p53-WT status, the SCD1 expression was lower in URI low tumors than in URI high tumors, while other ferroptosis-associated molecules, such as ACSL4, ALOX12, GPX4, SLC7A11 and AIFM2, were not significantly altered.Interestingly, the SCD1 level in p53-mutation HCC patients were comparable between URI low and URI high tumors.Moreover, higher URI or SCD1 expression in p53-WT HCC patients were correlated with poorer clinical outcome.We did not observe this correlation in p53-mutation HCC patients.Taken together, these results demonstrated that the potential correlation between URI-SCD1 and the clinical outcome of HCC patients was existed in patients with wild type p53, but not in p53-mutation ones. 
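A minimal sketch of the p53-stratified survival comparison described above, assuming the Python lifelines package and a hypothetical cohort table (the file name and the columns os_months, event, p53_status and uri_group are placeholders, not the cohort's actual fields):

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical cohort table: one row per patient, with overall survival in months,
# an event indicator (1 = death observed), TP53 status and a high/low URI label.
df = pd.read_csv("hcc_cohort.csv")  # columns: os_months, event, p53_status, uri_group

wt = df[df["p53_status"] == "WT"]   # restrict to p53 wild-type tumours
high = wt[wt["uri_group"] == "high"]
low = wt[wt["uri_group"] == "low"]

kmf = KaplanMeierFitter()
ax = kmf.fit(high["os_months"], high["event"], label="URI high").plot_survival_function()
kmf.fit(low["os_months"], low["event"], label="URI low").plot_survival_function(ax=ax)

# Log-rank test comparing the two Kaplan-Meier curves
res = logrank_test(high["os_months"], low["os_months"],
                   event_observed_A=high["event"], event_observed_B=low["event"])
print(res.p_value)
```

The same pattern applies to the SCD1 grouping by swapping the grouping column.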
To explore the URI-SCD1 axis in sorafenib resistance, we employed our previous cohort which enrolled HCC patients with recurrent HCC, the patients were then received systemic therapy containing sorafenib (named Cohort C in the revised manuscripts, PMID: 32373219).The mutation landscape of this cohort was performed.Forty-five patients were p53-WT and one patient harbored p53 synonymous mutation, whom we also grouped into p53-WT.The protein levels of SCD1, URI and p53 were measured by immunohistochemistry.We found significant correlation between SCD1 and URI in p53-WT group, but not in p53-mutation group (Fig. 8 and Extended Data Fig. 11). Meanwhile, higher levels of SCD1 or URI were associated with worsen prognosis in p53-WT HCC patients receiving sorafenib treatment, while no significant correlation were found in p53-mutation patients.Thus, our results demonstrated the important role of URI-SCD1 axis in sorafenib resistance in p53-WT HCC patients. (2) The use of liver organoids from HCC patients could be a key experiment for reinforcing the results of the authors.Authors should collect HCC samples from patients, generate liver organoids and treat them with or without inhibitors of SCD1, shURI and sorafenib to confirm their results. Response: Thanks for the constructive suggestion.We had obtained four HCC patient derived organoids (HCC-PDOs), two with wild-type p53 and two with mutant p53 (Fig. 6d).Consistent with our results in cell lines and xenograft tumor models, interfering URI promoted the sensitivity of TKI drugs and elevated lipid peroxidation levels of HCC-PDOs with wild-type p53 (Fig. 6e-h).The combination of SCD1 inhibitor aramchol and deuterated sorafenib derivative donafenib also displayed promising anti-tumor effects in HCC-PDOs with wild-type p53 (Fig. 6e-g).In HCC-PDOs with mutant p53, donafenib was less effective than in HCC-PDOs with wild-type p53.The combination of donafenib and aramchol synergistically upregulated lipid peroxidation levels to a certain extent (Extended Data Fig. 8e, f).However, this combination treatment showed less effect in HCC-PDO with mutant p53 than in HCC-PDO with wild-type p53 (Extended Data Fig. 8b, c). These results imply that URI could regulate the TKI sensitivity and the combination therapy identified here may be effective in p53 wild-type HCC. (3) Identification of p53 target signature by RNA-seq data analysis from previous published work in URI overexpressing mice (Accession number GSE48654) could strengthen the results. Additionally other previous models for HCC could be checked in this regard. Response: Thank you very much for your kind advice.We have read this article (PMID: 25453901) and analyzed the data set (GSE48654).In their mouse model, overexpression of human URI (hURI) in mice could lead to spontaneous liver cancer by inducing DNA-damage and subsequent p53mediated apoptosis and liver injury.This is a tumorigenesis model and requires multistep process including acquiring mutations of the driver/suppressor genes in normal or premalignant liver cells. 
Indeed, the authors also mentioned that all tumors displayed dramatic increases in p53 abundance and phosphorylation, and the authors had pointed that the p53 was inactivated by mutation or inappropriate folded.Thus, their results showed that hURI overexpression in normal liver cells could lead to stress-induced p53 activation and finally p53 inactivation (by mutation or other mechanisms).Following the authors' opinion, these mouse tumors might be similar with the URI high p53 mut HCC in humans. As shown in Additional Figure 1, we then analyzed the transcriptome data between hURI-mice and control mice for 1 week, the nonpathological timepoint that no major mutation was accumulated and the liver cells were not transformed.We found that p53-target genes were comparable or slightly changed than control mice.However, the lipid metabolism pathways, including linoleic acid metabolism and arachidonic acid metabolism, were affected by hURI overexpression at early stage. These results suggested that hURI has a critical role in lipid metabolism reprograming.Interestingly, we found that Scd1 was also elevated in 1-week hURI mice than control mice, supporting the promoting role of Scd1 by URI in liver.Then we compared the transcriptome data hURI-mice and control mice at 8 weeks, the early premalignant state of HCC.Remarkably, p53-associated pathway was significantly changed in mouse with hURI at this timepoint.Collectively, these results strongly suggest that wild type p53 was involved in hURI-mediated liver damage, then p53 was inactivated for further tumor progression. In our study, we focused on the role of URI in inhibiting TKIs in HCC with wild-type p53.We found that URI could regulate SCD1-meidated lipid metabolism in wild type-p53 dependent manner in tumor cells, this effect was contributed to URI-mediated TKI resistance in p53-WT HCC.Response: Thank you again for your meticulous attention to detail.We apologize for this mistake. We have added the number in Extended Data Fig. 11b (original Extended Data Fig. 10b). (8) It is not clear if the complete list of URI binding candidate proteins identified by LC-MS is provided by authors. Response: Thanks for your suggestion, we have added the complete list of URI binding candidate proteins identified by LC-MS to the supplementary material (Supplementary Table 3). (9) Authors should include n points in Figure 1f Response: Thank you for your careful review, we have added n points in Fig. 1f.(12) Specific information should be included in Figure 7 h and g such as the time scale and months Response: We have added the information in Extended Data Fig. 11d, e (original Fig. 7h, g).We would like to thank you sincerely for your guidance on our article.We apologize again for the mistakes.We have proofread and corrected each one in the light of your review. 
Reviewer #2 -P53, metabolism, ferroptosis, mass-spec, RNA-seq -(Remarks to the Author): Tyrosine kinase inhibitors (TKIs) represent a type of promising drugs in hepatocellular carcinoma (HCC) treatment, while the resistance to them is a vital bottleneck to overcome.In this paper by Ding et al, authors aimed to study the role of unconventional prefoldin RPB5 interactor (URI) in HCC.They found that URI could enhance the resistance to TKIs in HCC by reprogramming the SCD1-related lipid metabolism.This endows HCC more resistant to TKIs-induced ferroptosis.Then, authors discovered that URI-mediated SCD1 upregulation is p53 dependent.They also proved that SCD1 is a p53 repressive target gene.Next, they revealed that URI could bind TRIM28 to promote the ubiquitination and degradation of p53.Finally, they showed that combination of SCD1 inhibitor with TKI has synergic effect in HCC treatment.Although this study provides some interesting findings, several critical issues need to be addressed. Major ones: (1) SCD1 as a p53 target has been reported before by several other papers.TRIM28-MDM2-p53 axis is not new, too.In addition, combination therapy by using SCD1 inhibitor and TKI in cancer is also not novel.These facts may weaken the novelty of this research. Response: We thank for the reviewer's constructive comment.In our view, the novelty of our research was based on the following findings: (1) Our previous study and other research works had revealed that URI could act as an oncogene and potential therapeutic target in liver cancers.However, whether URI could regulate sorafenibinduced cytotoxicity in HCC was unclear.Here, by employing various HCC cell lines and patientderived organoids (we had employed the PDOs from HCC patients with p53-WT or -mutation, as shown in Fig. 6 and Extended Data Fig. 8), we had showed that URI could promote the resistance to TKIs in a p53-SCD1 dependent manner.Moreover, according to the results of our clinical cohorts, we had found that although HCC patients with p53-wild type had a better clinical outcome than p53-mutation group, higher levels of URI in the p53-wild type group still indicated worsen prognosis (Fig. 8).However, in HCC cells and tissues with p53-mutation, we had not found the effect of URI in sorafenib-resistance (Extended Figure 10 and 11).Thus, our results suggested that the role of URI in sorafenib resistance was relied on the function of wild type p53. (2) By screening the ferroptosis-associated molecules in p53-WT HCC cells, we had identified SCD1 as the target molecule regulated by URI, which revealed a former unknown correlation between oncogene URI and the lipid metabolism in HCC.This correlation was further confirmed in mice once hURI were overexpressed in liver for 1 weeks (as mentioned in the response to reviewer#1).The dependence of SCD1 in sorafenib resistance in p53-WT HCC cells made it suitable as the combination target. 
(3) In this work, we had discovered the URI-p53-SCD1 axis in regulating lipid metabolism and sorafenib-resistance in HCC.As the reviewer mentioned, SCD1 has been identified as the p53repressed gene, especially in ovarian cancer (PMID: 12789273).The TRIM28-MDM2-p53 pathway was also discovered in some tumors such as lung cancer (PMID: 27834954).Here we further confirmed that the TRIM28-MDM2-p53-SCD1 pathway was also existed in HCC with p53-WT, suggesting the conserved role of this pathway in various tumors.Moreover, we had found that URI, the potential oncogene in HCC, could also employ this pathway to inhibit p53 and its-related functions, further expanded the role of URI in HCC.Although we had not tested, considering the higher expression of URI in other tumors (PMID: 30209015, 26328264, 24625985), the URI-TRIM28-p53-SCD1 axis might be general in tumors. (4) SCD1 has been found to regulate the population of liver T-ICs via modulation of ER stress, and its inhibitor could partly overcome sorafenib-resistant in HCC (PMID: 28647567).However, their work had not considered the p53 status and they performed their experiment mainly on Huh7 and PLC/PRF/5 cells, the two cells with p53 mutation.Consistent with their results, we also found that combination of donafenib with aramchol had slight reduced tumor growth of PLC/PRF/5 cells in vivo (Extended Data Fig. 9).Moreover, our experiment also showed that this combination treatment could induce lipid peroxidation in p53-mut PDOs, but had little effect on the cell cytotoxicity (Extended Data Fig. 8).Notably, we had found that the combination therapy could significantly induce cell death in HCC cells and PDOs with p53-WT than the inhibitors used alone, and URI had an important role in the TKI sensitivity in p53-WT cells.The role of URI in TKI resistance had not been observed in p53-mutation samples.These results suggested that HCC patients with p53-WT could achieve more benefit from this treatment.We also used aramchol, the clinical phase 3/4 SCD1 inhibitor, in our animal experiments.Our results might be helpful to provide some evidence for the clinical use of this combination regimen in HCC patients, especially in patients with p53-WT status. (2) About the "URI reprograms SCD1-associated lipid metabolism" section, I'm curious why URI knockdown only increases the level of saturated fatty acids but not PUFA.The level of PL-PUFA in Response: We reanalyzed our lipidomic data.The PLS-DA and OPLS-DA analysis of extracted lipid features exhibited clear separation and tight clustering among the groups (Fig. 2a).By analysis the composition of free fatty acids, we found that the monounsaturated fatty acids (MUFA) exhibited a much larger decrease than PUFA in HepG2-shURI cell than its control, while the saturated fatty acids (SFA) were slightly increased in these cells (Fig. 2b-d and Extended Data Fig. 3).These results suggested that URI could lead to enhanced conversion from SFA to MUFA.Consistent with this notion, we found that the protein level of SCD1, the enzyme that catalyzes the conversion from SFA to MUFA, was significantly inhibited by URI knockdown in p53-WT HCC cells.Interestingly, according to the GSE48654 data (as mentioned in the response to reviewer#1), we found that Scd1 transcripts were elevated in hURI-mice than control in the early-stage, together with aberrant lipid metabolism. 
PL-PUFAs are susceptible to ROS and their lipid peroxidation can fuel ferroptosis cascade.On the contrary, MUFAs could suppress this process by promoting the displacement of PUFAs from plasma membrane phospholipids (PMID: 30686757).We then analyzed the lipid species of phospholipids (such as PC, PE, PI) between HepG2-shURI and HepG2-Ctrl cells.As shown in Fig. 2, there was a decreasing tendency of MUFA in phospholipids of HepG2-shURI cells than controls under steady-state.The contents of C16:0/C20:4 PL-PUFA were increased in HepG2-shURI than control cells, while the levels of PL-MUFA C16:0/C18:1 were decreased in HepG2-shURI cells (Fig. 2f).Thus, although no significant change in PUFAs was found between HepG2-shURI and control cells, the PL-PUFA was decreased. We then measured the lipid peroxidation levels between HCC cells by C11-BODIPY, Liperfluo and MDA analysis.We found that comparable levels of lipid peroxidation between HCC tumors with shURI or control (Fig. 3e, Extended Data Fig. 4i).However, when these cells were treated with sorafenib, higher increased levels of lipid peroxidation were observed in HCC cells with shURI than controls, together with a significant reduction of cell viability (Fig. 3, Extended Data Fig. 4).SCD1 inhibitors alone also had little effect in lipid peroxidation (Fig. 3). Taken together, these results suggested that URI or SCD1 alone could affect the lipid composition of HCC cells, which made them more sensitive to ferroptosis inducer, such as TKIs. (3) In fig 3c-e, authors need more evidence to prove that the effects of URI and inhibitors are through affecting ferroptosis.Ferroptosis inhibitors should be used to reverse these effects.In addition, oxidized PL-PUFA level should be determined. Response: We thank for the reviewer's constructive comment.We had used the ferroptosis inhibitor ferrostatin-1 (Ferr-1) to investigate whether this treatment could reverse the URI and inhibitors induced cytotoxicity.As shown in Fig. 3 and Extended Data Fig. 4, Ferr-1 treatment could inhibit sorafenib-induced cell death in HepG2 and JHH1 cells by cell viability assay and colony formation assay.Meanwhile, SCD1 inhibitors A939572 and MK8245 mediated synergistic effect in sorafenibinduced cell death could also be reversed by Ferr-1 treatment.We then measured the lipid oxidization by Liperfluo staining and MDA test.When cells were treated with sorafenib, the lipid oxidization levels in all cells tested were increased, and shURI cells showed much higher levels than their controls.Combination with SCD1 inhibitors further increased the lipid oxidation contents in cells.Notably, Ferr-1 treatment could inhibit the lipid oxidation status in cells.Taken together, our results had showed that ferroptosis is the major cell death form in sorafenib-treated HCC cells and URI-SCD1 axis could regulate sorafenib-induced ferroptosis. (4) In fig 7 and related Extended figures, all the data didn't consider the p53 status (null or mutation) in the patient samples.This may undermine the conclusion of this paper. 
Response: Thanks for the constructive suggestion.This comment and the major point one of the Reviewer#1 are very important to strengthen the conclusion of our paper.Considering that p53 mutation is frequency in tumors and its mutation form is various between patients, we decided to obtain the p53 status by their WES or target DNA sequencing data.We enrolled a HCC cohort from Gao et.al (Cell, PMID: 31585088) and named it as "Fudan_HCC_cohort", which had complete data of WES, transcriptome and clinical information.A shown in Fig. 8 and Extended Data Fig. 10, we found that SCD1 expression was associated with URI levels in p53-WT group, but not in p53mutation group.Higher URI or SCD1 expression in p53-WT HCC patients were correlated with poorer clinical outcome.We did not observe this correlation in p53-mutation HCC patients.These results suggested an important role of URI/SCD1 in HCC progression in patients with wild type-p53. Then we employed our previous cohort which enrolled HCC patients with recurrent HCC, the patients were then received systemic therapy containing sorafenib (PMID: 32373219).The cohort was named as cohort C and containing 45 patients with p53-WT, 1 patient with p53-synonymous mutation and 34 patients with p53-mutation (including nonsynonymous mutation, splicing and stopgain mutation).The levels of SCD1, URI and p53 were measured by immunohistochemistry. Significant correlation between SCD1 and URI was found in p53-WT group, but not in p53mutation group (Fig. 8, Extended Data Fig. 11).Higher levels of SCD1 or URI were associated with worsen prognosis in p53-WT HCC patients receiving sorafenib treatment, while no significant correlation were found in p53-mutation patients.Thus, our results demonstrated the important role of URI-SCD1 axis in sorafenib resistance in p53-WT HCC patients Minor ones: (1) Why did the authors choose URI to investigate?The rationale should be provided.Can sorafenib treatment induce URI expression?It has been reported that sorafenib could upregulate p53 level, which is opposite to the effect of URI.Therefore, how "high" should the level of URI be that could reverse the effect of sorafenib on p53 activation?What's the percentage of HCC patients bearing WT p53 and high URI? Response: We thank for the kind comment.As mentioned in the major point 1, URI is higher expressed in most HCC tumors than their counterpart non-tumor liver tissues, and URI has been (3) In fig 5e, there are several p53 binding peaks in the gene body region of SCD1.There intensities are comparable to the peak located at the promoter region.Are these gene body sites responsible for p53-mediated SCD1 repression?(4) About the xenograft model in fig 6, authors can test the combination treatment in p53 null or mutated cells.The lipid peroxidation level in the isolated xenograft tumor need to be tested.(5) p53-mediated ferroptosis is different from what GPX4-mediated (PMID: 35087226 and 30962574).To confirm the effect of URI/SCD1 on ferroptosis is related to p53, authors can use tert-Butyl hydroperoxide (TBH) to trigger ferroptosis in ACSL4-KO cell to test their major conclusion.(6) In Extended Data Fig. 1b, why is SLC7A11 upregulated when knocking down URI? 
SLC7A11 is a suppressive target of p53.(7) In fig 4l, I suggest the authors to use dox rather than nutlin-3a to repeat this experiment.Because the authors didn't mention or use nutlin-3a in previous figures.(8) In fig 5b, "p53" but not "P53".(9) In this description "URI depletion significantly decreased ubiquitination of wild-type p53 in HepG2 cells (Fig. 5d and Extended Data Fig. 6b)", Extended Data Fig. 6b should be "Extended Data Fig. 6c".In addition, IgG and p53 antibody should be noted in Extended Data Fig. 6c.(10) In fig 6i, why did aramchol and donafenib reduce p53 level?(11) In fig 7a, e and Extended Data Fig. 10a, the intensities of the fluorescence signals of certain panels should be enhanced.It is hard to recognize the signals.(12) Delete the full stop "." in the end of the title and several captions in the results part. ( 7 ) Number of patients with URI low SCD1 low is missing in Extended Data figure 10b, which in fact is 25. ( 10 ) It is unclear the ubiquitinated SCD1 state in Extended Data Figure 5b.Could authors have swapped the ubiquitinated SCD1?Response: Thank you for your suggestion.We have increased the concentration of the external transfer plasmid and the concentration of the antibody.We replaced the western blot strips with clearer ones in Extended Data Fig. 5b.(11) Legend of Figure 1a should be modified, since it says "the triangle size indicates" but there are no triangles in the figure.Response: Thank you for pointing out our mistake, we apologize for this.We have changed the description in Figure Legend of Fig. 1a. Fig Fig 2a and b seem to be downregulated.How about the level of peroxidized PL-PUFA?This is the direct evidence to demonstrate the effect of URI is through ferroptosis.If PUFA level can't be changed by URI, how do the author explain the enhanced lipid peroxidation when inhibiting URI or SCD1?
6,743.8
2023-10-07T00:00:00.000
[ "Biology" ]
Simulation framework for connected vehicles: a scoping review Background: V2V (Vehicle-to-Vehicle) is a booming research field with a diverse set of services and applications. Most researchers rely on vehicular simulation tools to model traffic and road conditions and evaluate the performance of network protocols. We conducted a scoping review to consider simulators that have been reported in the literature based on successful implementation of V2V systems, tutorials, documentation, examples, and/or discussion groups. Methods: Simulators with limited available information were not included. The selected simulators are described individually and compared based on their requirements and features, i.e., origin, traffic model, scalability, and traffic features. This scoping review was reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR). The review considered only research published in English (in journals and conference papers) completed after 2015. Further, three reviewers carried out the data extraction phase to retrieve information from the published papers. Results: Most simulators can simulate system behaviour by modelling events according to pre-defined scenarios. However, the main challenge is integrating the three components needed to simulate a road environment in either microscopic, macroscopic or mesoscopic models. These components include mobility generators, VANET simulators and network simulators, and they require the integration and synchronisation of the transportation domain and the communication domain. Simulation modelling can be run using different types of simulators that are cost-effective and scalable for evaluating the performance of V2V systems in urban environments. In addition, we also considered the ability of the vehicular simulation tools to support wireless sensors. Conclusions: The outcome of this study may reduce the time required for other researchers working on applications involving V2V systems and may serve as a reference for the study and development of new traffic simulators.
Amendments from Version 1
The abstract and introduction were improved to highlight the focus of the study, which contributes to the domain of Vehicular Ad hoc Networks (VANET) by describing the mobility generators and network simulators suitable to be considered in the context of connected vehicles or V2V. The section organisation was added in the last paragraph of the Introduction. The Methods section has been re-written to provide better presentation and clarity. The Results section was also improved to describe the findings of the scoping review more clearly. Similarly, the Discussion section has been improved, highlighting the contribution and benefit of this study. The conclusion includes the recommendation of simulators for real-time modelling and a recommendation for extending this study. The citations and references in the reference list are unified according to the Harvard referencing style, and the paper was proofread and edited accordingly. Any further responses from the reviewers can be found at the end of the article.
Introduction
In recent decades, a significant increase in vehicle use has increased traffic congestion and fatalities 1. According to the World Health Organization, 1.25 million people are killed or severely injured in vehicle accidents 2. Hence, connected vehicle technology responds to this problem, aiming to leverage inter-vehicle communication to produce safe, user-friendly, and fuel-efficient vehicle assistive technologies 3,4. One of the main aspects of connected vehicle research is to optimise traffic flow through the exchange of information 5. This communication can be categorised as vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), vehicle-to-pedestrian (V2P), and vehicle-to-network (V2N) 6,7. The exchange of information, collectively known as V2X communications, could assist drivers in preventing accidents by providing warnings of dangers invisible to drivers and other sensors (e.g. collision avoidance, lane departure warning and speed limit alert) 8,9. Nevertheless, the adoption of connected vehicle technology poses a range of challenges, particularly in urban environments, and it is challenging to analyse the effectiveness of connected vehicle applications under real traffic conditions [10][11][12].
As such, simulations using traffic and network simulators as well as mobility generators are viable alternatives for modelling and determining the effectiveness of such deployments in the real world 13,14, as they provide an affordable and scalable method for analysing model compliance in various contexts and parameters. Traffic simulations are categorised by level of detail into three separate categories 15. First, microscopic simulations provide the most precise information on each vehicle in the system 16. Second, mesoscopic simulations exploit aggregate velocity-density functions to represent vehicle behaviour and view traffic as a continuous stream of vehicles 17. Finally, macroscopic simulation is the large-scale traffic model, which focuses on combined traffic status 18. Microscopic simulations provide the highest degree of detail for modelling, although they are the slowest to execute 19,20. In addition, mobility generators are a possible option for modelling vehicle elements such as traffic, temporal and spatial mobility, and generating mobility traces 21,22. These traces are then uploaded to a network simulator, which mimics vehicle-to-vehicle communication. Furthermore, these traces can be generated by observing real-world vehicles on the road and then used in network simulations 23,24. Evaluating the effect of network parameter modifications on traffic mobility is a strategic objective of such simulations 25. This approach is, however, restricted to the use of traces controlled by the mobility model. Another option is to use a simulator that directly integrates the mobility framework. This study focuses on Vehicular Ad hoc Networks (VANET), whose network protocols are typically assessed through simulation 22,23, given that actual experiments are not feasible. Over the last decade, efforts have been made to produce a full transport simulator for VANET solutions, including a wireless network simulator for modelling and evaluation 24,25. A wide range of simulators, both commercial and open source, can be used for VANET simulation modelling. Older simulators provide a network simulator that communicates with stationary mobility models. Many researchers have examined various mobility models with simulation tools in several contexts. Such simulator tools are not yet well explored, since many researchers configure their simulations according to their own use-case settings, and a systematic identification of the different simulators does not yet exist. Therefore, this study conducted a systematic scoping review to identify the applicability and availability of existing mobility generators, network simulators, and combination simulators. The rest of the paper is organised as follows. The following section explains the methods used for analysing existing mobility generators and network simulators. Next, the outcome of the scoping review is presented and discussed. The final section concludes by considering future directions.
Methods
This work involves identifying the research question, identifying relevant studies, selecting the studies, charting the data, and collating, summarising, and reporting results. The review was carried out in compliance with the PRISMA Extension for Scoping Reviews 26.
Inclusion criteria
Under standard procedures for performing scoping reviews 26, published primary studies on VANET were considered for inclusion. Although research may be conducted in any country, without language restrictions on the search, we only obtained data from studies published in English-language journals.
Studies had to include mobility generators, network simulators, and vehicle network simulators. We included studies related to vehicular communications that investigated V2V safety applications, vehicle network performance, driver behaviour, and vehicle simulation tools. We excluded studies published prior to the year 2015.
Databases
IEEE Xplore and Science Direct were used to perform in-depth searches of the information included in these databases.
Literature Search Strategy
The search method included controlled vocabulary and free-text word phrases generally linked to (1) network simulation, mobility generators, or network simulators, and (2) VANET, vehicle, or nodes. All searches were initiated in November 2019 and updated in December 2020. Endnote was used to import the search results 27.
Citation Screening
After removing duplicate studies, search results were exported for screening. This was done to filter references based on the above-mentioned inclusion criteria. To minimise the possibility of bias, each reference was checked twice by two team members, and the team addressed any inconsistencies. The first screening process ended in April 2020, and the outcomes were updated in January 2021.
Data Extraction
To extract data from each included study, a spreadsheet was created using Microsoft Excel. Several rounds of piloting of the data extraction spreadsheet were performed, during which all team members collected data from the same study, and the results were reviewed during team meetings to ascertain content consistency. The piloting process guaranteed that all relevant data fields were collected and that the content was uniform across the research team. After familiarising all team members with the data extraction method, studies were allocated to each member, and the relevant data were extracted separately. Year, country, mobility generators, network simulators, active development, release date, licence, predefined map, traffic model, architecture language, and simulation language were collected from each included study where available. Data extraction was finalised in March 2021.
Data Analysis
The objective outcomes of all included studies were retrieved in the order in which they were reported. The research team then classified these findings according to their similarity to the measured concepts: routing protocol, scenario, mobility generator, and network simulator.
Subjective outcomes were similarly retrieved in the way described in the publications and then classified according to their similarity to the assessed concepts: contribution. The subjective outcome criteria for each study were extracted and operationalised by consensus among our research team. Results The initial search turned up 269 matches. After removing duplicates, a total of 184 titles and abstracts were screened, from which 72 publications were subjected to full-text review. 10 studies fulfilled the criteria for inclusion and were included in the analysis (see Figure 1 for the PRISMA Flow Diagram). We found that open-source mobility and network simulators were popular among researchers. Microscopic models were preferable for research related to vehicular communications since the simulations provide the most precise information of each vehicle or mobile node and the highest degree of detail for modelling compared to macroscopic and mesoscopic models. Common network simulators were NS-2, Ns-3 and OMNeT++. However, not all mobility simulators supported active development, which is important in current active research domains such as vehicular communications. The mobility generators and simulators available after 2015 are further shown in Table 1 and Table 2, respectively. Besides, summarised previous studies are shown in Table 3 of using mobility generators or simulators. Discussion Since this area of study is considered as a relatively new but rapidly growing field, this scoping review process only considers relevant papers published from 2015 onwards, which shows that extensive research has been conducted to create security standards for communication technologies, particularly the vehicular network. Although various simulators can be enhanced with library extensions, none of the simulators is related to security and privacy. Ultimately, researchers and professionals cannot compare their security measures to a given circumstance. For instance, ensuring the privacy of a vehicular user in a fast-moving network and disseminating messages in a secure vehicular environment. However, there is no simple practice of extending existing simulators to the desired security standard, which implies that future development research will need to be done. In addition, the quality of a simulation depends largely on the precision of the models. The range of precision has increased dramatically recently, where several modules contain signal attenuation components, multiple antenna models, and environmental interferences. However, one continuous barrier to producing accurate simulations is the evolution of rapid prototyping and its increasing use in-vehicle networks. For example, vehicle nodes would depend on three-dimensional scenarios to communicate with other nodes. It would be crucial for current and future simulators to extend the current simulators to these new conditions. This paper uncovers an automatic routing protocol for the VANET scenario. The idea is to disseminate the information provided by several roadside units. There are three routing protocols evaluated using several performance metrics in terms of delay, number of hops, total service time, and number of fragments. The paper focuses on two routing protocols within the VANET scenario. The idea is to ensure an optimal path from source to destination under a few performance measures in terms of throughput and packet delivery ratio. 
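To make the coupling between the transportation and communication domains concrete, below is a minimal sketch, not drawn from any of the reviewed studies, of how a mobility generator such as SUMO can be stepped through its TraCI Python API to export a per-vehicle mobility trace that a network simulator (for example NS-3 or OMNeT++ with a VANET framework) could consume; the configuration and output file names are placeholders.

```python
# Minimal sketch: one-way coupling from SUMO (mobility) to a trace file that a
# network simulator can replay. "scenario.sumocfg" is a placeholder scenario.
import traci

traci.start(["sumo", "-c", "scenario.sumocfg"])  # launch SUMO with a config file
with open("mobility_trace.csv", "w") as trace:
    trace.write("time,vehicle_id,x,y,speed\n")
    for step in range(600):                      # simulate 600 one-second steps
        traci.simulationStep()
        t = traci.simulation.getTime()
        for vid in traci.vehicle.getIDList():
            x, y = traci.vehicle.getPosition(vid)
            v = traci.vehicle.getSpeed(vid)
            trace.write(f"{t},{vid},{x:.2f},{y:.2f},{v:.2f}\n")
traci.close()
```

Tighter, bidirectional coupling, in which packet outcomes feed back into vehicle behaviour, requires a co-simulation framework rather than one-way trace export, which is part of the synchronisation challenge noted in the Results.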
Apart from that, integration with real-time system modelling based on non-real-time events creates additional challenges. Due to resource limitations, current simulators do not correspond with the physical properties of the hardware prototype while simulating a comprehensive network with multiple vehicles. Several alternatives have been put forward to reduce the complexity and thereby speed up the simulation. However, this approach usually does not include indirect outcomes, which could seriously impact the behaviour of real-world network components. It is, therefore, necessary to examine the interconnection between simulators and hardware devices with the security standards concerned.
Conclusions
Studies have led to the discovery of comprehensive and realistic simulation tools due to the increasing popularity of, and interest in, future transportation systems. This work has examined the current availability of simulators. When testing essential VANET performance, it is necessary to deploy a mobility generator and a network simulator that accurately represent real vehicle traffic. Based on our comparative identification, NS-3 and SUMO have been the optimal choice for real-time VANET modelling. Although several simulators have many features, it is worth exploring further the improvement of the simulators for specific scenarios. In addition, this work can be further expanded in the future by investigating the suitability of particular simulators for specific V2V or V2X applications, scenarios, and protocols. We plan to study the simulators used in this context and the extent of the benefits and developments achieved with them.
Data availability
Underlying data
All data underlying the results are available as part of the article and no additional source data are required.
Open Peer Review
Reviewer expertise: Communication, sensors and systems. I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.
3,295.6
2021-12-09T00:00:00.000
[ "Computer Science" ]
Bacterial resistance trends among intraoperative bone culture of chronic osteomyelitis in an affiliated hospital of South China for twelve years Background The purpose of this study was to gather temporal trends on bacteria epidemiology and resistance of intraoperative bone culture from chronic ostemyelitis at an affiliated hospital in South China. Method Records of patients with chronic osteomyelitis from 2003 to 2014 were retrospectively reviewed. The medical data were extracted using a unified protocol. Antimicrobial susceptibility testing was carried out by means of a unified protocol using the Kirby-Bauer method, results were analyzed according to Clinical and Laboratory Standards Institute definitions. Result Four hundred eighteen cases met our inclusion criteria. For pathogen distribution, the top five strains were Staphylococcus aureus (27.9%); Pseudomonas aeruginosa (12.1%); Enterobacter cloacae (9.5%); Acinetobacter baumanii (9.0%) and Escherichia coli (7.8%). Bacterial culture positive rate was decreased significantly among different year-groups. Mutiple bacterial infection rate was 28.1%. One strain of Staphylococcus aureus was resistant to linezolid and vancomycin. Resistance of Pseudomonas aeruginosa stains to Cefazolin, Cefuroxime, Cefotaxime, and Cefoxitin were 100% nearly. Resistance of Acinetobacter baumanii stains against Cefazolin, Cefuroxime were 100%. Ciprofloxacin resistance among Escherichia coli isolates increased from 25 to 44.4%. On the contrary, resistance of Enterobacter cloacae stains to Cefotaxime and Ceftazidime were decreased from 83.3 to 36.4%. Conclusions From 2003 to 2014, positive rate of intraoperative bone culture of chronic osteomyelitis was decreased; the proportion of Staphylococcus aureus was decreased gradually, and our results indicate the importance of bacterial surveilance studies about chronic osteomyelitis. Background Osteomyelitis, as a serious deep bone infection, is caused by microorganisms [1,2], and persistence of microorganisms, low-grade inflammation are chronic osteomyelitis characteristics [3,4]. Trueta J demonstrated that hematogenous osteomyelitis was caused by a single agent, while other mechanisms of infection showed poymicrobial infection [5]. Hematogenous osteomyelitis was considered as predominantly pediatric disease with 85% of patients aged below 17 years, while about 47-50% of all osteomyelitis was post-traumatic in adult patients [6]. Staphylococcus species was the most common isolated microorganism in most types of osteomyelitis, approximately affecting 50-70% of cases [7,8], and the second and third were Enterobacteriaceae and Pseudomonas species [9]. Meanwhile, the treatment of chronic osteomyelitis remains challenge, and multidisciplinary approach including adequate surgery and antibiotics were required [10,11]. Though microbiologic testing would cause false-negative result, it's an useful mean to identifying the organism [12], and intra-operative bone culture appears to predict the complete etiologic organisms more reliably [13]. Osteomyelitis' commonly isolated microorganisms were related to age and susceptibility factors, which including injectable drug users, immunocompromised, urinary infection, orthopedic fixation devices, diabetes mellitus and so forth [14][15][16]. 
With the rapid development of antimicrobial resistance and expression of virulence factors, regardless of patient's immune status, the bacterial distribution, bacterial culture positive rate and antibiotic resistance of osteomyelitis had changed gradually [17]. There was few studies about continuous changes of bacterial culture positive rate, causative organisms and antibiotic resistance for intraoperative bone culture from chronic osteomyelitis in the same hospital over a period of time. Up to now, few study was from mainland China, a developing country. Our present study was aimed to evaluate the changing trends of positive rate, causative organisms and antibiotic resistance for intraoperative bone culture from chronic osteomyelitis over a 12-year periods in a south-central region of China. Our this study can help to see the status of bacterial culture positive rate of intraoperative bone and antimicrobial resistance of causative organisms of chronic osteomyelitis, even can help to take more effective measures for treatment. Methods We retrospectively reviewed the medical records of patients who were admitted to the orthopedics department with chronic osteomyelitis from January 12,003, to December 31,2014. The health facility is an university teaching hospital that located in the south central region of China. Medical record information included the basic information of patients, cause of osteomyelitis, the bone(s) affected, the status of bacterial culture, antimicrobial susceptibility testing, results of laboratory tests and radiography, and even pathological examination. Chronic ostemyelitis was defined clinically as bone infection with clinical signs persisting for more than 10 days or the relapse of a previously treated or untreated osteomyelitis [18], and bone infection was defined as at least two bone cultures with the same organism growth, or one positive bone culture combined with the intraoperative finding of purulence, acute inflammation on histologic examination consistent with infection, or a sinus tract communicating to the bone [19]. Patients were choose in accordance with the unified standards [20]. At first, we picked over cases with chronic osteomyelitis which were diagnosed based on above definition through the medical records; and then chronic osteomyelitis who had taken intraoperative bone culture was sorted out; Lastly, we had shut out cases who had not quit antibiotic therapy for at least 1 week within the period preceding admission for surgery. In order to avoid duplicate counts, only one isolate from the same species was included per patients [21]. Specimens from the depths of sinus tracks were taken for surgery. Marrow pus, curetting, sequestra and bone biopsy were obtained at surgery, and sent through appropriate transport medium for microbiological examination and culture. All samples were inoculated onto a pair of blood agar and one Mac Conkey agar plates. One blood agar plate was inoculated anaerobically for 48 h and the other two plates aerobically for 24 h. Special identification of the isolates was performed by standard biochemical methods, and antimicrobial susceptibility testing was carried out by means of a unified protocol using the Kirby-Bauer method, results were analyzed according to Clinical and Laboratory Standards Institute (CLSI) criteria (as applicable each year) [22]. Statistical analysis was performed with the Statistical package for social sciences (SPSS)21.0 software (SPSS Inc., Chicago, IL, USA). 
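As a point of reference for the rate comparisons described in this section, a chi-square test on a positive/negative-by-year-group contingency table can also be computed with open-source tools; the sketch below uses scipy, and the counts shown are invented placeholders rather than the study's data.

```python
# Illustrative only (invented counts, not the study's data): chi-square test of
# culture-positive vs culture-negative counts across six two-year groups,
# equivalent to the rate comparison performed in SPSS.
from scipy.stats import chi2_contingency

#            2003-04 2005-06 2007-08 2009-10 2011-12 2013-14
positive = [   40,     45,     55,     60,     50,    49]
negative = [   10,     12,     20,     25,     26,    26]

chi2, p, dof, expected = chi2_contingency([positive, negative])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```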
Patient's demographics were described as the mean and the standard deviation or as the count and percentage as appropriate. Chi-square test or Kruskal-Wallis H test was used to analysis the difference of rates about numeration data. All significance tests were two-sided, and p value of less than 0.05 was considered statistically significant for all tests. Results In order to increase the number of different group, we divided 2 years into one group. The mean age was 39.3 ± 16.5 years, and 10.8% of the patients was younger than 18 years, 50.7% was 18-45 years, and 38.5% of the cohort was 45 years or older; and 327 of patients were males (78.2%). The majority of the infections (95.7%) involved only one bone, and the most common anatomical affected sites was tibia (35.9%); followed by femur (27.5%); calcaneus (5.7%) and humerus (2.2%). We have observed an increasing proportion of culture-negative (28.5%), and positive rates were statistically different between all year-groups (χ 2 = 11.95, P = 0.036). Agespecific and Etiology-specific positive rate for intraoperative bone culture were shown in Tables 1 and 2. The year-trend of positive rate was not significantly different in the age-group of younger than 18 years(p = 0.062) and group of 45 years or older (p = 0.117), but were statistically different in the group of 18-45 years (p = 0.003). In the risk-factors of different etiology, positive rate was statistically different in traumatic-group (p = 0.00) and hematogenic-group (p = 0.01). The total number of bacterial isolated from 418 cases was 398. The percentage of the top five species was shown in Fig. 1: Staphylococcus aureus infections was responsible for 27.9%, followed by Pseudomonas aeruginosa (12.1%); Enterobacter cloacae (9.5%); Acinetobacter baumanii (9.0%) and Escherichia coli (7.8%). Mutiple bacterial infection rate was 28.1%, which included three strains infection rate (5.02%) and double bacterial infection rate (23.08%). For Staphylococcus aureus, We found one strain in group of 2009-2010, which was resistant to linezolid; and found one strain in group of 2011-2012 resistant to vancomycin and linezolid. Ciprofloxacin and Erythromycin resistance levels decreased from 41.7 to 30% and from 66.7 to 46.7% (Table 3). For Pseudomonas aeruginosa, Cefazolin, Cefuroxime, Cefotaxime, and Cefoxitin resistance levels were 100% nearly (Table 4). For Enterobacter cloacae, We found nearly all resistant to Cefazolin, Cefuroxime, Cefoperazone and Cefoxitin; Cefotaxime and Ceftazidime resistance levels decreased from 100 to 54.5% and from 83.3 to 36.4%, respectively (Table 5). For Acinetobacter baumanii, they were stable 100% nearly to Cefazolin and Cefuroxime (Table 6). For Escherichia coli, Gentamicin resistance levels decreased from 50 to 22.2%, respectively. However, a marked increase of resistance was seen for Ciprofloxacin from 25 to 44.4%, respectively (Table 7). Discussion A culture directed antibiotic therapy is proper treatment of chronic osteomyelitis [23]. The culture of material should obtained by superficial swabbing of the wound, depths of sinus track, intraoperative and even specimens by other methods. Culture of superficial material swabbing of the would was considered adequate to identify pathogens causing osteomyelitis before 1978 [24], but recent literature have suggested that bone specimen cultures were more reliable compared to sinus track culture on the complete etiologic organisms [20]. 
Therefore, we included patients who had undergone intraoperative bone culture in this study, with the aim of gathering temporal trends. Although all kinds of organisms, including bacteria, viruses, parasites, fungi, and tuberculosis, may cause osteomyelitis, bone infection is mainly caused by pyogenic bacteria and mycobacteria. Tong SY concluded that Staphylococcus aureus was responsible for 80 to 90% of the cases of pyogenic osteomyelitis, while Staphylococcus epidermidis was the most abundant skin flora [25]. We found that Staphylococcus aureus infection was responsible for 27.9% of the total number of cases, followed by Pseudomonas aeruginosa (12.1%) and Enterobacter cloacae (9.5%). Although Staphylococcus aureus was still the most common pathogen, its proportion decreased year by year, owing to prolonged, inappropriate and even abusive use of antibiotics and an increase in high-energy open fractures [26,27]. Because chronic osteomyelitis requires antibiotic therapy for months to years, it entails a major financial burden and substantially affects quality of life. This situation makes accurate identification of the pathogen an absolute cornerstone of antimicrobial therapy. We suggest that chronic osteomyelitis be treated according to the microbiological analysis of specimens obtained at surgery or by bone biopsy. Consistent with temporal trends in the distribution of chronic osteomyelitis, we observed a decline in the proportion of patients with gram-positive bacterial infections and an increase in the proportion of culture-negative cases over time. Some recent studies have described an increase in culture-negative cases because of early antibiotic administration and even rampant use of antibiotics [28]. Failure to incubate anaerobic cultures for sufficient time might also have contributed to the culture-negative rate [29]. Culture-negative specimens may be due to tuberculosis or fungal infection, which require further investigation using specialized techniques. Biofilms [30] and failure to recognize small colony variants (SCVs) may cause false-negative culture results [31]. Such bacteria are in a stationary phase of growth because oxygen and glucose are limited in biofilms [32]. Small colony variants (SCVs) were first described more than 100 years ago, and the relationship between chronic staphylococcal infection and the presence of SCVs was described 20 years ago [33]. SCVs constitute a very heterogeneous bacterial population found in different staphylococcal species. In fact, SCVs are difficult to recover, to identify and to store. Clinical studies have found that SCVs exhibit so-called phenotypic (or functional) resistance beyond the classical resistance mechanisms through their intracellular lifestyle, and SCVs can often be retrieved from therapy-refractory courses of infection [33]. Therefore, we recommend that tissue for culture of aerobic organisms, anaerobic organisms, tuberculosis and fungi be obtained intraoperatively in order to identify all the etiologic organisms.
Table 3 Resistance rates (%) of Staphylococcus aureus to antimicrobial agents
Treatment failures in chronic osteomyelitis will then be reduced to a minimum. The published literature has shown that the tibia is the most commonly affected site [28], and our findings corroborate that the tibia (36.1%) was the most common site.
Osteomyelitis encompasses a broad spectrum of disease mechanisms; the three generally accepted categories are hematogenous spread, contiguous spread from an adjacent infection focus, and direct bacterial inoculation from trauma. Our study found that direct bacterial inoculation from trauma was responsible for 54.3% of all cases, followed by adjacent infection (8.6%), hematogenous spread (8.1%), and unexplained causes (28.9%). As advances have been made in the management of chronic osteomyelitis, the epidemiology of the condition appears to have evolved over time. Chen AT concluded that the incidence of bone infection may continue to rise because of multiple factors, including improved diagnosis, increasing patient risk factors, and the increased need for arthroplasties [34]. Mader JT concluded that increased survival following traumatic injury has been accompanied by an increased occurrence of post-traumatic osteomyelitis [35]. Patients with post-traumatic osteomyelitis require repeated surgery and long-term antibiotic use, which may lead to bacterial resistance and a lower culture-positive rate. In our study, we described the trends in the positive rate of intraoperative bone culture over time and demonstrated that it changed substantially over the 12 years from 2003 to 2014. We also found that the culture-positive rate for intraoperative bone specimens changed with age over time.

Table 4. Resistance rates (%) of Pseudomonas aeruginosa to antimicrobial agents.

Table 5. Resistance rates (%) of Enterobacter cloacae to antimicrobial agents.

Age is an important factor that can determine the etiology of chronic osteomyelitis. In children, the most common etiology was hematogenous infection. Because elderly patients have a higher frequency of disorders that predispose to infection, such as diabetes mellitus, orthopaedic surgery, and vascular or neurologic insufficiency, they are more susceptible to chronic osteomyelitis. Host condition has been emphasized because it is an important factor in choosing a treatment modality for chronic osteomyelitis. Parkkinen M considered host-related risk factors for bone infection to include diabetes, arteriosclerosis, alcoholism, obesity, smoking, and aging [36]. Therefore, effort should be directed toward the effective prevention and treatment of chronic osteomyelitis in older people, since they are susceptible to infection and the prognosis is poor once chronic osteomyelitis develops. We found that the polymicrobial infection rate was 28.1%, comprising infections with three strains (5.02%) and infections with two strains (23.08%). We found one strain of Staphylococcus aureus that was resistant to linezolid and vancomycin, and we also found strains of Pseudomonas aeruginosa, Enterobacter cloacae, and Acinetobacter baumannii that were resistant to carbapenems. Carbapenems are the most potent and reliable β-lactam antibiotics for the treatment of serious infections caused by multidrug-resistant gram-negative bacteria [37]. Infections with multidrug-resistant bacteria present a serious clinical challenge for physicians in healthcare settings. Treatment options for these infections are limited, and inappropriate empirical antibiotic therapy or delayed appropriate antibiotic therapy can lead to worse outcomes. We also found large fluctuations over time in our study, so conducting antibiotic resistance surveillance studies over longer periods is important.
We recognize that our retrospective investigation may have been influenced by a number of methodological shortcomings; some degree of retrospective error is inevitable. It is important to keep in mind that our study focused on patients with chronic osteomyelitis who underwent intraoperative bone culture and had stopped antibiotic therapy for at least 1 week. For this reason, the positive rate of intraoperative bone culture may have been higher in our patients than in those described in other investigations.

Table 6. Resistance rates (%) of Acinetobacter baumannii to antimicrobial agents.

Furthermore, the number of cases was unevenly distributed across the years, although we do not believe this particular limitation greatly affected the trend in the positive rate. Finally, our study was conducted at a single center in China; future cohort studies should be multi-center.

Conclusions

Based on the results of this investigation, the proportion of Staphylococcus aureus infections is decreasing gradually, and our results underline the importance of bacterial surveillance studies in chronic osteomyelitis. Further research is warranted to replicate these findings in more centers and to gather temporal trends in the bacterial epidemiology and resistance of chronic osteomyelitis.
Molecular Morphogenesis of T-Cell Acute Leukemia

Many molecular alterations are involved in the morphogenesis of T-cell acute leukemia (T-ALL), classified as lymphoblastic leukemia/lymphoma by the World Health Organization. T-ALL is a malignant disease of the thymocytes which accounts for approximately 15% of pediatric acute lymphoblastic leukemia (ALL) and 20-25% of adult ALL. Frequently, it presents with a high tumor load accompanied by rapid disease progression. About 30% of T-ALL cases relapse within the first two years following diagnosis, with long-term remission in 70-80% of children and 40% of adults [1]-[4]. This poor prognosis is a consequence of our insufficient knowledge of the molecular mechanisms underlying abnormal T-cell pathogenesis. Understanding the abnormal molecular changes associated with T-ALL biology will provide us with the tools for better diagnosis and treatment of lymphoblastic leukemia. Recent improvements in genome-wide profiling methods have identified several genetic aberrations which are associated with T-ALL pathogenesis. For simplification, these molecular changes can be separated into 4 different groups: chromosome aberrations, gene mutations, gene expression profiles, and epigenetic alterations. This chapter will discuss these molecular changes in depth.
T-cell development

The progenitors for T lymphocytes arise in the bone marrow as long-term repopulating hematopoietic stem cells (LT-HSCs) (Figure 1). These cells then differentiate, generating short-term repopulating hematopoietic stem cells (ST-HSCs) and lymphoid-primed multipotent progenitors (LMPPs) [5]-[7]. LMPPs, which migrate via the blood and a chemotactic process to the thymus [8], phenotypically resemble early T-cell progenitors (ETPs) [9], [10]. ETP cells, also called double negative 1 (DN1) cells, are capable of differentiating into either T cells or myeloid cells and display a CD3- CD4-/low CD8- CD25- CD44+ KIT+ phenotype (Figures 1 and 2). If ETP cells commit to the T-cell lineage, they progress to the double negative 2 (DN2) stage, followed by double negative 3 (DN3) and finally double negative 4 (DN4). This process starts with the downregulation of the c-KIT receptor, giving the cell surface phenotype CD4- CD8- CD25+ CD44+ for DN2 cells; next, CD44 is lost, for a cell surface phenotype of CD4- CD8- CD25+ CD44- for DN3 cells; and finally CD25 is lost, for a cell surface phenotype of CD4- CD8- CD25- CD44- for DN4 cells (Figures 1 and 2) [9], [11]-[13]. This differentiation from ETP to DN4 cells occurs within the thymus in intimate contact with the epithelial stromal cells, which express Notch ligands, essential growth factors (interleukin-7), and morphogens (sonic hedgehog proteins) important for T-cell development. Before differentiating into double positive (DP) cells, which have the cell surface phenotype CD4+ CD8+, DN4 cells lose their dependence on Notch ligand, interleukin-7, and sonic hedgehog (Shh) [14], [15]. Once they are DP cells, they undergo positive and negative selection. Following selection, αβ T-cell receptor (TCR)+ T cells migrate from the thymus to secondary lymphoid organs to carry out their immune function. These mature cells are single positive (SP), with a cell surface phenotype of either CD4+ or CD8+ [9], [11].
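The marker progression just described is compact enough to restate as a lookup table. The sketch below simply encodes the stage phenotypes from the text in Python (with c-KIT marked low from DN2 onward, per the downregulation noted above); it is an illustrative data structure, not output from any analysis software.

```python
# Cell-surface phenotypes of early T-cell developmental stages, restated
# from the text: c-KIT is downregulated after DN1/ETP, CD44 is lost at
# DN3, and CD25 is lost at DN4.
DN_PHENOTYPES = {
    "DN1/ETP": {"CD4": "-/low", "CD8": "-", "CD25": "-", "CD44": "+", "KIT": "+"},
    "DN2":     {"CD4": "-",     "CD8": "-", "CD25": "+", "CD44": "+", "KIT": "low"},
    "DN3":     {"CD4": "-",     "CD8": "-", "CD25": "+", "CD44": "-", "KIT": "low"},
    "DN4":     {"CD4": "-",     "CD8": "-", "CD25": "-", "CD44": "-", "KIT": "low"},
}

for stage, markers in DN_PHENOTYPES.items():
    signature = " ".join(f"{m}{v}" for m, v in markers.items())
    print(f"{stage:8s} {signature}")
```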
Recurring chromosomal aberrations

Chromosomal translocations which alter gene function were among the first clues to the genes and molecular mechanisms involved in abnormal T-cell biology. In T-ALL, approximately 50% of cases have cytogenetically detectable chromosomal abnormalities. There are at least two distinct molecular mechanisms by which chromosomal translocations can lead to abnormal T-cell biology (Figure 3). In one mechanism, a strong regulatory element such as a promoter or enhancer is rearranged next to a gene, resulting in abnormal expression of this gene. The affected gene typically encodes a transcription factor or a protein involved in cell cycle regulation. In the second mechanism, the translocation results in a fusion protein. Frequently this fusion creates a novel protein that affects normal cell cycle regulation [16]. One of the hallmark features of T-ALL is translocations involving T-cell receptor genes, which are observed in the majority of T-ALL patients. The bulk of these recurring aberrations involve strong transcriptional regulatory elements from the T-cell receptor (TCR) genes being juxtaposed with genes encoding transcription factors. These alterations are frequently caused by erroneous V(D)J recombination events during T-cell development. Overall, these chromosomal abnormalities lead to aberrant gene expression and proteins that alter the normal growth, differentiation, and survival of T cells and their precursors. Approximately 35% of the observed cytogenetic abnormalities in T-ALL involve translocations that include the TCR alpha/delta chain at 14q11.2, the TCR beta chain at 7q34, and the TCR gamma chain at 7p14 (Table 1). Among this group, rearrangements with the HOX11, HOX11L2, TAL1, TAL2, LYL1, BHLHB1, LMO1, LMO2, LCK, NOTCH1, and cyclin D2 genes are most frequently observed in patients [11]. Overexpression of LMO1, LMO2, or TAL1 is caused by rearrangements to the TCR delta chain in 3-9% of patients. About 3% of pediatric T-ALL is caused by ectopic TAL1 (1p32) expression due to the t(1;14)(p32;q11) rearrangement [17]-[21]. Overexpression of HOX11 (TLX1) is observed in greater than 30% of adult T-ALL when it is rearranged to the promoters of the TCR delta or TCR beta chains [22]. About 3-5% of patients have HOXA-TCR beta rearrangements, for example the inv(7)(p15q34) and t(7;7)(p15;q34) rearrangements, which result in up-regulation of the HOXA9, HOXA10, and HOXA11 genes [23], [24]. Rare translocations juxtaposing the TCR gamma or TCR alpha/delta chains with LYL1 (19p13), TAL2 (9p32), or BHLHB1 (21q22), resulting in overexpression of these genes, are also observed [25]-[28].
Several chromosomal translocations do not involve the TCR locus (Table 1). In 10-25% of TAL1-positive T-ALL patients, TAL1 is expressed as the result of an intrachromosomal deletion between the ubiquitously expressed upstream SIL gene and TAL1 (SIL-TAL1) [29]-[31]. 20% of pediatric and 4% of adult cases of T-ALL have the HOX11L2 (TLX3)-BCL11B fusion. This fusion causes ectopic expression of the HOX11L2/TLX3 gene [32], [33]. 8% of patients have the t(10;11)(p13;q14)/PICALM-MLLT10 rearrangement; in this case leukemogenesis is mediated through HOX gene upregulation via mistargeting of hDOT1L and H3K79 methylation [34], [35]. Fusion genes involving ABL1, a cytoplasmic tyrosine kinase, have been identified in approximately 8% of T-ALL cases. The NUP214-ABL1 fusion, which results in a constitutively active tyrosine kinase with oncogenic potential, occurs in 6% of both adult and pediatric patients and is the most frequently observed ABL1 fusion gene. EML1-ABL1, BCR-ABL1, and ETV6-ABL1 gene fusions are rarely observed in T-ALL but are frequent in other hematologic malignancies [36], [37]. Fusion genes involving ETV6, an important hematopoietic regulatory factor, have been observed in both B-ALL (9.6%) and T-ALL (5%) patients [38], [39]. A significant cytogenetically visible deletion on chromosome 9p involves the CDKN2A and CDKN2B genes, the incidence of which varies from rare to 70% of T-ALL cases [40]-[42]. In 5-10% of T-ALL patients, gene rearrangements involving the MLL gene are observed; the MLL gene can fuse to at least 36 different translocation partner genes [43], [44]. Although there is a wide variety of chromosomal aberrations, the number of genes affected is relatively small, and all of these genes are important for normal T-cell development.

Recurring genetic mutations

Several genes associated with T-ALL pathogenesis have mutations which are not cytogenetically visible. Some of the most frequently mutated genes or loci are NOTCH1, FBXW7, PTEN, CDKN2A/B, CDKN1B, 6q15-16.1, PHF6, WT1, LEF1, JAK1, IL7R, FLT3, NRAS, BCL11B, and PTPN2 (Table 2). Many of these genes were identified by gene expression profiling using microarrays or by whole genome sequencing analysis. Below, some of these genes and their roles in T-ALL are described briefly.
Notch1 signaling pathway in T-ALL

Activating or loss-of-function NOTCH1 mutations are observed in ~34-71% of T-ALL cases, and NOTCH1 is one of the most significant T-ALL oncogenes [45]-[49]. NOTCH is involved in the regulation of several cellular processes including differentiation, proliferation, apoptosis, adhesion, and spatial development [50], [51]. The importance of NOTCH1 in leukemogenesis was first discovered in a rare translocation, t(7;9), that fuses the intracellular form of NOTCH1 to the TCR beta promoter and enhancer sequences. This rare fusion leads to a truncated and constitutively activated form of NOTCH1 termed TAN1 [52]. Other Notch isoforms also show oncogenic activity: Notch2 sequences were able to induce leukemogenesis in cats, and overexpression of Notch3 in mice resulted in multi-organ infiltration by T lymphoblasts [53], [54]. The majority of T-ALL cases with active Notch1 arise from mutations in Notch1's heterodimerization (HD) domain and/or its PEST domain (proline-, glutamic-acid-, serine-, and threonine-rich domain) [46]. Mutations in the HD domain appear to make the NOTCH1 receptor susceptible to ligand-independent proteolysis and activation (Figure 4b), whereas mutations in the PEST domain interfere with recognition of the intracellular form of NOTCH1 by the FBW7 ubiquitin ligase (Figure 4c) [45], [46], [55]-[62]. Notch1 is a single-pass transmembrane receptor with extracellular, transmembrane, and intracellular subunits. Initially the cell-membrane-bound Notch protein is a single polypeptide; after maturation, when the protein is cleaved into two subunits, the extracellular and intracellular subunits are linked non-covalently via the HD domains. On the extracellular domain, multiple epidermal growth factor (EGF)-like repeats bind ligands, namely the Delta-like ligands (DLL1, DLL3, DLL4) and Jagged1 and Jagged2. Ligand binding initiates two cleavage events, by the ADAM family of metalloproteinases and by the γ-secretase complex, to release the intracellular form of NOTCH from the membrane. Two nuclear localization domains in NOTCH lead to its translocation to the nucleus [62]. Once in the nucleus, NOTCH associates with CSL (CBF1/suppressor of hairless/Lag1). Transcriptional activation of NOTCH-target genes begins once the NOTCH/CSL complex recruits coactivator proteins such as mastermind-like 1 and the histone acetyltransferase p300 (Figure 4a) [63]. The C-terminal domain of NOTCH contains the PEST domain, which is targeted for ubiquitination by FBW7 and subsequent proteasome-mediated degradation. Mutations in the PEST domain can increase the half-life of the NOTCH protein, resulting in aberrant activation of NOTCH-target genes [58], [59], [61]. Together, aberrant stabilization or activation of the intracellular form of NOTCH1 links directly to T-cell leukemogenesis. Because NOTCH1 plays a significant role in T-cell leukemogenesis, its regulation has been studied extensively. Nearly 40% of Notch-responsive genes are regulators of cell metabolism and protein biosynthesis [64]. c-MYC, a master regulator of multiple biosynthetic and metabolic pathways, is a direct transcriptional target of Notch1, and Notch1 binding sites in the MYC promoter have been shown to be important for MYC expression in T-ALL [64]-[67].
Constitutively active Notch1 has been shown to activate the NF-κB pathway [68], an important regulator of cell survival, cell cycle, cell adhesion, and cell migration. This activation can occur through direct transcriptional activation of Relb and Nfkb2 as well as via an interaction between Notch1 and the IKK complex. Another Notch1 target is PTEN (phosphatase and tensin homologue). PTEN is negatively regulated by Notch1 through the activity of HES1 and MYC, resulting in deregulation of the PI3K-AKT metabolic pathway [69]. Finally, Notch1 is also involved in the regulation of the NFAT signaling pathway, which it influences by altering the expression of calcineurin, a calcium-activated phosphatase [70]. Overall, these findings emphasize the role of Notch1 in inducing T-cell leukemogenesis through multiple cell signaling pathways capable of regulating cell survival, proliferation, and metabolism.

As mentioned above, FBW7 (F-box and WD repeat domain containing 7), an E3 ubiquitin ligase located on chromosome 4q31.3, is mutated in T-ALL at a frequency ranging from 8.6% to 16% [59], [61], [71]. FBW7 is part of the SCF complex (SKP1-Cullin-1-F-box protein complex), which can target MYC, JUN, cyclin E, and Notch1 for ubiquitination-coupled proteasomal degradation [60]. The WD40 domain of FBW7 contains a degron-binding pocket, which recognizes phosphothreonine in the consensus sequence I/L/P-T-P-X-X-S/E of protein substrates. Roughly 20% of T-ALL patients have mutations in FBW7 that destroy the degron-binding pocket. Moreover, the degron sequence of Notch1 (LTPSPES), located in the distal portion of its PEST domain, is found to be mutated in T-ALL, thus extending Notch1's half-life and altering downstream signaling cascades. Interestingly, T-ALL patients frequently have mutations in both the FBW7 degron-binding pocket and the Notch1 degron sequence (Figure 4c) [58], [59], [61]. These combined mutations elevate intracellular Notch1 activity and therefore enhance leukemia manifestation. Current studies suggest that FBW7 mutations induce T-cell leukemogenesis by disrupting Notch1 regulation.

PTEN (phosphatase and tensin homolog deleted on chromosome 10) is deleted or mutated in 6-8% of T-ALL cases. The major substrate of PTEN is PIP3 (phosphatidylinositol-3,4,5-trisphosphate). PTEN activity prevents the accumulation of PIP3, thus limiting or terminating activation of a cascade of PI3K-dependent signaling molecules. The expression of PTEN has been shown to be negatively regulated by Notch1. PTEN appears to be required for optimal negative selection in the thymus. Loss of PTEN is characterized by overexpression of the c-MYC oncogene and induction of lymphomagenesis within the thymus [69], [72]. Therefore PTEN appears to be an important tumor suppressor involved in T-cell leukemogenesis.
Cell cycle, apoptosis, and transcription regulators in T-ALL

Deletions in CDKN2A and CDKN2B are significant secondary abnormalities in pediatric T-ALL. Loss of tumor suppressor CDKN2A/B expression is observed in 30-70% of T-ALL cases and can occur through chromosomal translocation, promoter hypermethylation, somatic mutation, or gene deletion [40], [42]. CDKN2A and CDKN2B are located adjacent to each other on chromosome 9p21. CDKN2A encodes p16INK4a (a cyclin-dependent kinase inhibitor) and p14ARF, while CDKN2B encodes p15INK4b. These genes block cell division during the G1/S phase of the cell cycle by inhibiting cyclin/CDK4/6 complexes [73], [74]. The principal mode of CDKN2A inactivation is genomic deletion, which can usually be detected by FISH [41]. Loss of function of the CDKN1B (cyclin-dependent kinase inhibitor 1B) gene, located on 12p13.2, has been observed in 12% of T-ALL cases [75]. Similar to CDKN2A and CDKN2B, CDKN1B acts as a tumor suppressor. Inactivation of CDKN1B leads to overexpression of D-cyclins, thereby impairing the cell's ability to maintain quiescence in G0. Therefore, CDKN2A/B and CDKN1B play an important role in abnormal T-cell biology by regulating cell cycle progression.

12% of pediatric T-ALL cases have a deletion in 6q15-16.1 [75]. The single most downregulated gene in this region is caspase 8 associated protein 2 (CASP8AP2). Deletion of CASP8AP2 probably interferes with Fas-mediated apoptosis. In a gene expression profiling study, loss of CASP8AP2 was not observed in any pre-B-ALL samples [75], indicating that deletions of 6q15-16.1 may be a hallmark of T-ALL.

The X-linked plant homeodomain (PHD) finger 6 (PHF6) gene has inactivating mutations in 16% of pediatric and 38% of adult primary T-ALL cases [76]. Mutations in PHF6 are limited to male T-ALL cases; consequently, this gene may be responsible for the increased incidence of T-ALL in males. Loss of expression of the PHF6 gene is associated with leukemia driven by abnormal expression of the homeobox transcription factor oncogenes. The PHF6 gene encodes a protein with two plant homeodomain-like zinc finger domains. A recent study demonstrated that PHF6 copurifies with the nucleosome remodeling and deacetylation (NuRD) complex, implicating it in chromatin regulation [77].

The WT1 (Wilms tumor) tumor suppressor gene is mutated in 13.2% of pediatric and 11.7% of adult T-ALL cases [78], [79]. WT1 is known to be a transcriptional activator of the erythropoietin gene. Loss of WT1 expression results in diminished erythropoietin receptor (EpoR) expression in hematopoietic progenitors, suggesting that activation of the EpoR gene by WT1 is an important mechanism in normal hematopoiesis [80]. WT1 mutations are frequently prevalent in T-ALL cases harboring chromosomal rearrangements associated with abnormal expression of the homeobox transcription oncogenes HOX11, HOX11L2, and HOXA9 [79]. This suggests that the recurrent genetic mutations in WT1 are associated with abnormal HOX gene expression in T-ALL.

The lymphoid enhancer factor 1 (LEF1) gene is mutated in 15% of pediatric T-ALL cases [81]. Inactivation of LEF1 is associated with increased expression of MYC and MYC targets and with a gene expression signature consistent with developmental arrest at a cortical stage of T-cell differentiation. Interestingly, T-ALL cases with LEF1 mutations lack overexpression of the TAL1, HOX11, HOX11L2, and HOXA genes, suggesting that LEF1 acts via different molecular pathways in T-cell leukemogenesis. The LEF family of DNA-binding transcription factors interacts with nuclear β-catenin in the WNT signaling pathway, and the loss of LEF1 may relieve transcriptional repression of MYC in T-ALL cases. It has been reported that LEF1 probably contributes to T-ALL pathogenesis by acting in concert with NOTCH1 to promote up-regulation of MYC expression; in this case LEF1 also relieves transcriptional repression of MYC to allow its maximal overexpression by Notch1 [81].
JAK/STAT signaling pathway in T-ALL

About 18% of adult and 2% of pediatric T-ALL cases have activating mutations in Janus kinase 1 (JAK1) [38]. The JAK family members (JAK1, JAK2, JAK3, and TYK2) function as signal transducers that control cell proliferation, survival, and differentiation. They are non-receptor tyrosine kinases that associate with cytokine receptors to phosphorylate tyrosine residues of target proteins. This process regulates the recruitment and activation of STAT proteins. The JAK/STAT signaling cascade is an important regulator of normal T-cell development. Each JAK family member associates with a different subset of cytokine receptors. JAK1 regulates the class II cytokine receptors as well as receptors that use the gp130 or γc receptor subunit; these classes of cytokine receptors are involved in controlling lymphoid development [82], [83]. The majority of the JAK1 kinase mutations observed in T-ALL result in unregulated tyrosine kinase activity. T-ALL cases with mutations in JAK1 appear to be associated with different T-ALL subgroups than patients harboring aberrant expression of the homeobox transcription factors HOX11 and HOX11L2 [38]. JAK1 is involved in the regulation of both the interleukin 7 receptor (IL7R) and protein tyrosine phosphatase non-receptor type 2 (PTPN2) [84], [85].

The interleukin 7 receptor (IL7R) carries a gain-of-function mutation in exon 6 in 9% of T-ALL cases [85]. Several lines of evidence suggest that IL7R plays an important role in T-cell leukemogenesis. IL-7 and IL7R signaling are essential for normal T-cell development: deficiency of IL-7 or IL7R in mice caused a reduction in the number of functional T cells and an early block in thymocyte development [86]-[89], and loss of IL7R function results in severe combined immunodeficiency in humans [90]. Increased expression of IL7R was associated with spontaneous thymic lymphomas in mice. Furthermore, Notch1 has been shown to transcriptionally upregulate the IL7R gene [91]. Mutations in exon 6 of IL7R promote de novo formation of intermolecular disulfide bonds between mutant IL7R subunits, which trigger constitutive activation of the tyrosine kinase JAK1 independent of regulation by IL-7, γc, or JAK3. Gene expression profiles of IL7R-mutant cases are generally associated with the T-ALL subgroup harboring HOX11L2 rearrangements and HOXA deregulation [85].

Inactivation of the protein tyrosine phosphatase non-receptor type 2 (PTPN2) gene is observed in ~6% of T-ALL cases [84], [92]. PTPN2 encodes a tyrosine phosphatase, located on chromosome 18p11.3-11.2, that negatively regulates the JAK/STAT pathway and NUP214-ABL1 kinase activity. Loss of PTPN2 results in activation of the JAK/STAT pathway and increased cytokine-driven T-cell proliferation. Unlike JAK1 mutations, deletions of the PTPN2 gene appear to be restricted to T-ALL cases that specifically overexpress HOX11 [84]. Therefore, mutations in PTPN2 probably play a role in T-cell leukemogenesis by deregulating tyrosine kinase signaling.

Activating mutations in the FMS-like tyrosine kinase 3 (FLT3) gene are among the most common genetic aberrations in acute myeloid leukemia [93]-[95]. In T-ALL, FLT3 mutations are relatively rare, with a frequency of approximately 4% in adult and 3% in pediatric cases
[96]-[98]. FLT3 encodes a class III membrane tyrosine kinase that is expressed in early hematopoietic stem cells. Normally, FLT3 is activated when bound by the FLT3 ligand (FL). This interaction causes receptor dimerization and kinase activity, resulting in activation of downstream signaling pathways such as Ras/MAP kinase, PI3K/AKT, and STAT5. The most frequent FLT3 mutation involves a duplication of the juxtamembrane (JM) domain. This mutation leads to dimerization of FLT3 in the absence of FL, autophosphorylation of the receptor, and constitutive activation of the tyrosine kinase domain, which triggers uncontrolled proliferation and resistance to apoptotic signaling through activation of the PI3K/AKT, Ras/MAPK, and JAK2/STAT pathways [98]-[100].

The B-cell chronic lymphocytic leukemia (CLL)/lymphoma 11B (BCL11B) gene carries mutations in 16% of T-ALL patients with HOX11 overexpression. However, in unselected patients, deletions or missense mutations of BCL11B were observed in only 9% of cases, suggesting that BCL11B mutations probably occur across all subtypes of T-ALL [101]. BCL11B is located on human chromosome 14q32 and encodes a Krüppel-like C2H2 zinc finger protein which acts as a transcriptional repressor. Loss-of-function mutations of the BCL11B gene in mice lead to developmental arrest of T cells at the DN2-DN3 stage, acquisition of NK-like features, and aberrant self-renewal activity. Transcriptional activation of IL-2 expression in activated T cells is mediated by BCL11B via its interaction with the p300 co-activator at the IL-2 promoter [102]-[106]. Because of BCL11B's role in normal T-cell development, it plays an important role in T-cell leukemogenesis.

Approximately 10% of childhood T-ALL cases have mutations in the NRAS oncogene, located on chromosome 1p13.2, which is involved in the malignant transformation of many cell types [107]. The recurrence of NRAS mutations in T-ALL cases suggests that NRAS is involved in abnormal T-cell biology.

Gene expression profiles

Whole genome sequencing and gene expression profiling provide a more comprehensive view of the genetic alterations involved in T-cell leukemia. A recent microarray-based gene expression study classified T-ALL cases into major subgroups corresponding to leukemic arrest at different stages of thymocyte differentiation. Currently three subtypes of T-ALL are recognized: the HOXA/MEIS1, TLX1/3, and TAL1-overexpressing subtype [108], the LEF1-inactivated subtype [81], and the early T-cell precursor phenotype [109] (Figure 5). Leukemic arrest at early pro-T thymocytes (DN2 cells) was characterized by high levels of expression of the LYL1 gene. Arrest in early cortical thymocytes (DN3 cells) was characterized by changes in HOX11/TLX1 expression. Arrest in late cortical thymocytes (DP cells) was characterized by changes in TAL1/LMO1 expression. Aberrant HOX11L2/TLX3 activation was also identified as being involved in T-cell leukemogenesis (Figure 4) [108]. TAL1 and LYL1 are members of the basic helix-loop-helix (bHLH) family of transcription factors, LMO1 is a member of the LIM-only domain (LMO) gene family, and HOX11 and HOX11L2 belong to the homeobox gene family. Recently, whole genome sequencing of early T-cell precursor acute lymphoblastic leukemia (ETP-ALL) identified several genes involved in abnormal T-cell biology [10]; 15% of T-ALL cases are ETP-ALL. Phenotypically, ETP-ALL is negative for the cell surface markers CD1a and CD8, has little to no expression of CD5, and aberrantly expresses myeloid and hematopoietic stem cell markers. This study performed whole genome sequencing on 12 children with ETP-ALL, and the frequency of the mutations identified in these 12 cases was then assessed in 94 cases of T-ALL (52 ETP and 42 non-ETP pediatric T-ALL). Even though an average of 1140 sequence mutations and 12 structural variations were identified per ETP case, the affected genes could be narrowed down to three groups plus three novel genes (DNM2, ECT2L, and RELN). 67% of the cases were characterized by activating mutations in genes involved in the regulation of cytokine receptor and RAS signaling; these genes included NRAS, KRAS, FLT3, IL7R, JAK3, JAK1, SH2B3, and BRAF. 58% of the cases were characterized by inactivating lesions that disrupted hematopoietic development; these genes included GATA3, ETV6, RUNX1, IKZF1, and EP300. 48% of the cases were characterized by changes in histone-modifying genes (EZH2, EED, SUZ12, SETD2, and EP300) [10]. From gene expression profiling and whole genome sequencing we are beginning to obtain a more complete picture of the genes involved in abnormal T-cell biology.
MicroRNA expression profiling found 10 detectable miRNAs in human T-ALL cells; five of these (miR-19b, miR-20a, miR-26a, miR-92, and miR-223) were predicted to target tumor suppressor genes implicated in T-ALL [110]. These five miRNAs were able to accelerate leukemia development in a mouse model. Furthermore, it was shown that these five miRNAs produced overlapping and cooperative effects on the tumor suppressor genes IKAROS, PTEN, BIM, PHF6, NF1, and FBXW7 in T-ALL pathogenesis. miR-223 appears to be the most overexpressed miRNA in leukemia. These results indicate the important role that miRNAs play in abnormal T-cell biology.

The TAL1 gene, located on chromosome 1p32, encodes a class II basic helix-loop-helix (bHLH) factor [113]. The protein binds DNA as a heterodimer with ubiquitously expressed class I bHLH proteins known as E-proteins, such as E2A or HEB. These heterodimers recognize an E-box sequence (CANNTG) [114]. TAL1 positively and negatively modulates transcription of target genes as part of a large complex consisting of an E-protein, the LIM-only proteins LMO1/2, GATA1/2, Ldb1, and other associated coregulators. This complex usually binds a composite DNA element containing an E box and a GATA-binding site separated by 9 or 10 bp (Figure 6) [115]-[117]. It was shown recently that in T-ALL cells TAL1, GATA-3, LMO1, and RUNX1 together form a core transcriptional regulatory circuit that reinforces and stabilizes the TAL1-directed leukemogenic program [118]. TAL1 expression is essential for hematopoiesis: it is required for the specification of hematopoietic stem cells during embryonic development and is necessary for erythroid maturation. Normal expression of TAL1 is restricted to the DN1-DN2 subset of immature CD4-/CD8- thymocytes, and ectopic expression results in leukemic arrest in late cortical thymocytes [108].

Two models have been proposed for TAL1-induced leukemogenesis. In the prevailing model, TAL1 acts as a transcriptional repressor by blocking the transcriptional activities of E2A, HEB, and/or E2-2 through heterodimerization with these E-proteins. TAL1 may mediate its inhibitory effect by interfering with E2A-mediated recruitment of the chromatin-remodeling complexes that activate transcription [114], [119]-[121]. It has also been shown to associate with several corepressors, including HDAC1, HDAC2, mSin3A, Brg1, LSD1, ETO-2, Mtgr1, and Gfi1-b (Figure 6) [122]. In human T-ALL, TAL1 transcriptional repression may be mediated by TAL1-E2A DNA binding and recruitment of the corepressors LSD1 and/or HP1-α [123]. In the other model, TAL1 induces leukemogenesis through inappropriate gene activation [124]. At least two genes, RALDH2 and NKX3.1, are transcriptionally activated by TAL1- and GATA-3-dependent recruitment of the TAL1-LMO-Ldb1 complex [125], [126]. As a transcriptional activator, TAL1 has been shown to associate with the coactivators p300 and P/CAF (Figure 6) [127], [128]. Both of these complexes contain HAT activities. The prevalence of histone-modifying enzymes in TAL1 complexes suggests that one function of TAL1 is to regulate the chromatin states of its target genes.
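The E-box motif (CANNTG) and the composite E-box/GATA element with a 9-10 bp spacer described above are simple enough to search for programmatically. The sketch below is a hypothetical illustration of that pattern matching: the DNA string is invented for the example and is not a real TAL1 target locus.

```python
import re

# Illustrative scan for the motifs described in the text: E boxes (CANNTG)
# and composite elements where an E box is followed, after a 9-10 bp
# spacer, by a GATA core. The sequence is a made-up example.
seq = "TTCAGCTGACGTACGTAGATAGGCACGTGTT"

# E box: CANNTG, with N matching any base.
eboxes = [m.start() for m in re.finditer(r"CA..TG", seq)]
print("E-box positions:", eboxes)  # -> [2, 23]

# Composite TAL1-complex element: E box, 9-10 bp spacer, then GATA.
composite = re.compile(r"CA..TG.{9,10}GATA")
for m in composite.finditer(seq):
    print("composite E-box/GATA element at", m.start(), ":", m.group())
```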
TAL1 and the lymphoblastic leukemia-derived sequence 1 (LYL1) share 90% sequence identity in their bHLH motifs [26]. Like TAL1, LYL1's role in leukemogenesis was discovered by studying chromosomal rearrangements. It is expressed by adult hematopoietic cells and is overexpressed in T-ALL. Gene expression profiling showed that overexpression of LYL1 results in leukemic arrest at the pro-T-cell (double negative) stage of T-cell differentiation (Figure 5) [108]. In mouse embryos, LYL1 and TAL1 expression overlaps in hematopoietic development, the developing vasculature, and the endocardium. At the molecular level, LYL1 controls the expression of several genes involved in the maturation and stabilization of newly formed blood vessels [129]. Therefore, bHLH proteins play an important role in abnormal T-cell biology.

LIM domain proteins

Aberrant expression of the LMO1 and LMO2 proteins is observed in 45% of T-ALL cases. The discovery of the LMO1 and LMO2 genes adjacent to the chromosomal translocations t(11;14)(p15;q11) and t(11;14)(p13;q11) was the first indication that these proteins were involved in T-cell leukemogenesis [130]-[132]. The LMO family (LMO1, LMO2, LMO3, and LMO4) encodes proteins that have two cysteine-rich, zinc-coordinating LIM domains. The LIM domain is found in a variety of proteins including homeodomain-containing transcription factors, kinases, and adaptors. Despite the presence of two zinc finger motifs, LMO1 and LMO2 do not appear to bind DNA. Instead, the LMO proteins probably act as scaffolding proteins, forming multiprotein complexes through their interaction with the LIM domain binding protein 1 (LDB1) (Figure 6) [116].

Leukemogenesis by aberrant expression of LMO1 or LMO2 is thought to occur via two mechanisms. In the first, aberrantly expressed or abnormal LMO proteins form dysfunctional multiprotein complexes that alter the expression of target genes by binding directly to their promoters [133]-[136]. In the second, abnormal LMO1 or LMO2 complexes displace the LMO4 complex, resulting in arrest of T-cell development at the DP stage [137].

LMO2 function is necessary for normal T-cell development, and LMO2 has been shown to interact with several factors involved in aberrant T-cell biology. As mentioned above, TAL1 may regulate its target genes through the TAL1-LMO-Ldb1 complex (Figure 6). Ectopic expression of LMO1 or LMO2 leads to accumulation of immature DN T cells in mice, with subsequent leukemia manifestation after a long latency, suggesting that LMO activity is important for tumor development but is not self-sufficient [26], [138], [139]. Ectopic expression of both TAL1 and LMO1 in mice accelerated the progression to leukemogenesis (Figure 7). In this case, thymic expression of the TAL1 and LMO1 oncogenes induced expansion of the ETP/DN1 to DN4 population and led to T-ALL in ~120 days. The acquisition of a Notch1 gain-of-function mutation was proposed as the rationale behind this increase in leukemia penetrance. In fact, thymic expression of all three oncogenes (Notch1, TAL1, and LMO1) induced T-ALL with high penetrance in 31 days, the time necessary for clonal expansion (Figure 7) [140]. These studies suggest that aberrant LMO proteins are key players in abnormal T-cell biology.
Homeobox genes

Dysregulated expression of HOX-type transcription factors occurs in 30-40% of T-ALL cases [23], [24], [32]. The HOX genes play an important role in hematopoiesis [141]. The majority of the HOXA, HOXB, and HOXC gene clusters are expressed in hematopoietic stem cells and immature progenitor compartments, and these genes are downregulated during the differentiation and maturation of hematopoietic cells [142], [143]. In T-ALL, dysregulation of the HOXA gene cluster is a frequent recurring aberration. Upregulation of HOXA9, HOXA10, and HOXA11 occurs in T-ALL cases when the TCR beta regulatory elements are juxtaposed with these genes [16].

Two orphan HOX proteins (HOX11 and HOX11L2) have been implicated in T-cell leukemogenesis [144]. Overexpression of HOX11 is observed in 30% of T-ALL cases because of two recurring translocation events; this gene is also frequently overexpressed in T-ALL cases in the absence of genetic rearrangements. Mice deficient in HOX11 fail to develop a spleen, implicating it in spleen organogenesis [145]. Normally HOX11 is not expressed in thymocytes. Ectopic expression of HOX11 in T cells caused a block at the DP stage of T-cell differentiation (Figure 5). This is consistent with genetic profiling studies showing that overexpression of HOX11 results in leukemic arrest at the early cortical thymocyte stage (Figure 5) [108]. Overexpression of HOX11 in the hematopoietic stem cells of mice led to T-cell leukemia; however, the long latency of tumorigenesis suggests that other genetic abnormalities are required [146]-[148]. It should be noted that nearly all HOX11 T-ALL cases have activating NOTCH1 mutations. It has been proposed that HOX11 binding to the Groucho-related TLE corepressor is necessary for maximal transcriptional regulation of Notch1-responsive genes, suggesting that HOX11 and Notch1 may synergistically regulate transcription in T-ALL [149].

Epigenetic modifications

Aberrant changes in DNA methylation and histone modifications occur frequently in all cancers. Estimates vary, but studies suggest that there are approximately 100 epigenetic changes for every DNA-based genetic mutation. Consequently, epigenetic modifications will almost certainly play an important role in T-cell leukemogenesis. Comparative genomic hybridization data from primary T-ALL samples have shown recurrent deletions of the EZH2 and SUZ12 genes in 25% of T-ALL cases. These genes are members of the polycomb repressive complex 2 (PRC2) and are involved in establishing the repressive H3K27me3 mark. Activation of Notch1 was shown to cause the loss of the H3K27me3 mark by antagonizing the activity of PRC2. These data implicate histone modifications and PRC2 as important regulatory factors in T-cell leukemogenesis [150].

The CpG island methylator phenotype (CIMP) has been used to characterize T-ALL patients: the CIMP+ phenotype has a large number of hypermethylated genes, while the CIMP− phenotype has a low number of hypermethylated genes. Analysis of the methylation status of 20 genes, the majority of which are implicated in abnormal T-cell biology, in 61 pediatric T-ALL patients and 11 healthy children revealed a difference in the CIMP pattern: on average, patients had 2.4 hypermethylated loci, whereas none of the loci in the healthy individuals were hypermethylated [151]. Therefore, changes in the patterns of CpG island methylation at critical genes can be associated with specific tumorigenesis and may consequently play an important role in T-cell leukemogenesis.
Summary

Although a large number of genes are involved in the molecular morphogenesis of T-cell leukemogenesis, many of them act through related pathways. This has helped clarify the different genetic subtypes of T-ALL, improving risk stratification. Furthermore, understanding the different genetic subtypes is enabling personalized chemotherapy. Powerful new tools such as next-generation sequencing aid in identifying more relevant recurring lesions in leukemogenesis, which is resulting in the development of better therapeutic agents and methods. Because of improved supportive care, better risk stratification, and personalized chemotherapy, the 5-year survival of pediatric acute lymphoblastic leukemia has increased to 85% [152]. Even though we have made significant progress in understanding the molecular morphogenesis of T-ALL, there are still significant gaps in our knowledge of the genes involved in leukemogenesis.

Figure 1. Stages in T-cell development. The different regions of the adult thymic lobule are indicated to the right. The progression of hematopoietic stem cells (HSC), multipotent progenitors (MPP), and common lymphoid progenitors (CLPs) in the bone marrow is shown to the left. Lymphoid progenitors migrate through the blood to the thymus. The migration and differentiation from immigrant precursor to early T-cell precursor (ETP), double negative (DN), double positive (DP), and single positive (SP) stages is illustrated within the distinct microenvironments of the thymus. Complete commitment to the T-cell lineage is indicated with a line between the DN2b and DN3a stages. β or γδ selection is indicated between the DN3a and DN3b stages. This figure is modified from Aifantis 2008 and Rothenberg 2008 [9], [11].

Figure 2. Regulatory factors in early T-cell development. The different stages of cell differentiation are shown in the center, starting with hematopoietic stem cells (HSC) and progressing to single positive cells. Above and below the line, regulatory factors involved in the progression from one stage to another are indicated. Red lines indicate negatively acting factors. The triangles at the top of the illustration indicate regulatory factors which are either upregulated or downregulated at the indicated stages. For example, Tal1 expression decreases from the DN2 stage to the DN3a stage, whereas Lef1 expression increases during that same transition. The solid blue line indicates the β-selection checkpoint, with the long blue arrow indicating the TCRβ-dependent stages. At the bottom of the illustration, the different cell surface phenotypes are shown below the corresponding stages of T-cell development. This figure is modified from Rothenberg 2008 [9].

Figure 3. Two mechanisms of aberrant activity caused by chromosomal translocations. A. A strong promoter or enhancer is rearranged next to a proto-oncogene, resulting in abnormal expression of the proto-oncogene. The TCR loci elements and recurring gene targets involved in T-cell leukemogenesis are indicated to the left. B. Chromosomal rearrangement between two transcription factors results in a chimeric transcription factor with oncogenic activity. Recurring gene fusions in T-cell leukemogenesis are indicated in the center below the arrow.

Figure 4. The Notch1 signaling pathway and mutations involved in aberrant Notch1 activation. A.
Depiction of normal Notch1 signaling. Binding of Notch ligand to the extracellular portion of Notch1 triggers a conformational change in the heterodimerization domain (HD). This allows cleavage first by a metalloproteinase of the ADAM family and then by γ-secretase. These cleavages release Notch1 from the membrane, allowing it to translocate into the nucleus. Once in the nucleus, Notch1 associates with a transcriptional complex composed of CSL (CBF1/suppressor of hairless/Lag1) and mastermind-like 1 (MAML1) to activate Notch1 target genes. Notch1 then becomes associated with FBW7 and is tagged for degradation following ubiquitination. B. Mutations in the HD domains (indicated by a red star) result in ligand-independent cleavage, allowing aberrant release of Notch1 from the membrane. C. Mutations in the PEST domain of Notch1 or mutations in FBW7 interfere with ubiquitination of Notch1. This allows accumulation of intracellular Notch1 by reducing its degradation. The figure is modified from Aifantis 2008 [11].

Figure 5.
Gene subtypes resulting in differentiation arrest at specific stages of T-cell development. The illustration shows the progression of T-cell development from the double negative stages to the mature single positive stage. The colored rectangles indicate stages of leukemic arrest. Overexpression of LYL1, HOX11, TAL1, and HOXA leads to differentiation arrest at the double negative stage, early cortical stage, late cortical stage, and positive selection stage, respectively. Loss of Lef1 expression results in early cortical leukemic arrest. The table below indicates the molecular subtypes leading to differentiation arrest at specific stages of T-cell development and the molecular subtypes occurring across all stages of T-cell development.

Figure 6. Model of TAL1 complexes and target sites. A. TAL1 complex binding to an E box and a GATA box. B. TAL1 complex binding to a double E box. C. TAL1 complex binding to a single E box. D. TAL1 complex binding to a single GATA site, showing activation of either the RALDH-2 or NKX3.1 genes. E. TAL1 complex binding to a GC box with activation of c-kit. The table at the lower right shows the different TAL1 regulatory partners, divided into three categories: transcription factors, co-activators, and co-repressors.

Figure 7. Model of progression to leukemia via TAL1, LMO1, and Notch1. The dashed line indicates the time of weaning. The numbers of days to differentiation arrest and finally to T-ALL are shown above the cell stages. A. The number of days to full T-ALL in mice with the TAL1 and LMO1 oncogenes; note the 70-day delay for a Notch1 gain-of-function mutation. B. The number of days to full T-ALL in mice with the TAL1, LMO1, and Notch1 oncogenes; note the delay of ~30 days, the time necessary for clonal expansion. This figure is modified from Tremblay et al. 2010 [140].

Table 1. Recurring translocations involved in T-ALL. The rearrangements are divided into those involving TCR and non-TCR loci.

Table 2. Recurring genetic alterations in T-ALL. The type of alteration and the frequency of occurrence in T-ALL cases are indicated.
Mathematical Modeling and Analysis Methodology for Opportunistic Routing in Wireless Multihop Networks

Modeling the forwarding feature and theoretically analyzing the performance of opportunistic routing in wireless multihop networks pose a great challenge. To address this issue, a generalized geometric distribution (GGD) is first proposed. Based on the GGD, the forwarding probability between any two forwarding candidates can be calculated, and it can be proved that the successful delivery rate after several transmissions by the forwarding candidates is independent of the priority rule. Then, a discrete-time queueing model is proposed to analyze the mean end-to-end delay (MED) of a regular opportunistic routing scheme, given knowledge of the forwarding probability. By deriving the steady-state joint generating function of the queue length distribution, MED can ultimately be determined for directly connected networks and for some special cases of nondirectly connected networks. In addition, an approximation approach is proposed to assess MED for the general cases in nondirectly connected networks. The rationality of the analysis is validated by comparison with a large number of simulation results. Both the analysis and the simulation results show that MED varies with the number of forwarding candidates; in directly connected networks in particular, MED increases more rapidly with the number of forwarding candidates than it does in nondirectly connected networks.

Introduction

Recently, opportunistic routing (OR) for wireless multihop networks has drawn much attention due to its robustness in practical dynamic environments with frequent transmission failures. Traditional routing protocols, such as dynamic source routing (DSR) [1], rely on a single (preselected) fixed path to deliver packets from a source to a destination, so their performance is easily affected by the quality of the wireless links. In OR, by contrast, a packet can be received independently, each with a certain success probability, by every forwarding candidate; OR thus exploits the inherent broadcast nature of wireless transmission to mitigate the impact of poor wireless links. This feature guarantees the robustness of the transmission. As a result, OR copes well with the unreliable and varying link quality that is typical of wireless networks [2].

In OR, each forwarding candidate is labeled with a priority that is set according to a certain metric, for example, the distance to the destination. Once a forwarding candidate receives a packet, it stores the packet in its local buffer and starts a timer. If the forwarding candidate receives an acknowledgement from any node with a higher priority before the timer elapses, the packet has already been forwarded by another node, and the candidate drops the packet from its buffer. Otherwise, the node transmits the packet when the timer elapses [2]. The buffering time, also called the queueing delay (the duration from the time a packet arrives at a node to the time it is ready to be transmitted), is the major component of the mean end-to-end delay (MED).
Besides robustness against communication failures, time efficiency is of primary importance in wireless multihop networks because of applications of a real-time nature, such as disaster relief and military operations [3]. The MED is the most popular criterion for time efficiency. Moreover, the MED is inversely proportional to the average throughput, and the total average throughput can also be obtained by deriving the MED [4]. For this reason, we strive in this paper to find an appropriate methodology and model for studying the generative mechanism and characterization of the MED.

Modeling the forwarding feature and theoretically analyzing the MED of OR in wireless multihop networks is a great challenge, for several reasons: the distributed architecture, the varying wireless environment, the dynamic topology, and so on. To deal with these issues, we first introduce a regular OR scheme, which avoids inter-channel interference by imposing an order on channel access. We then analyze the queueing delay of OR by dividing network topologies into two categories, as shown in Figure 1. One is the directly connected network, in which all nodes are within communication range of each other. The other is the nondirectly connected network, in which not all nodes can communicate with each other directly. This analysis may serve as a cornerstone for modeling the MED of OR strategies and allows a comprehensive understanding of queueing delay features. The main contributions of this paper are summarized as follows.

(1) A new mathematical distribution, called the generalized geometric distribution (GGD), is proposed to model the forwarding feature of OR in wireless multihop networks.

(2) A new methodology for analyzing the MED of OR is proposed. With knowledge of the priority rule and the delivery probabilities, the forwarding probabilities can be calculated based on the GGD. Afterwards, the generating function of the queue length distribution can be derived. Using the properties of the multivariate generating function, closed-form expressions for the MED are obtained. These results apply to arbitrary directly connected networks and to some special nondirectly connected networks.

(3) An approximate analysis is also developed for the general cases in nondirectly connected networks. Meanwhile, a large number of simulation studies have been performed, and we have observed that the analytical results coincide with the experimental data very well.

The organization of this paper is as follows. The next section summarizes related work. The system model is described in Section 3. Section 4 introduces the GGD and some basic definitions. Based on the system model and basic definitions, we analyze the MED for different kinds of networks in Section 5. In Section 6, numerical results and experimental data are presented and discussed. In the last section, we discuss future research directions and conclude the paper.
Reference [11] proposed a very general analytical model to describe OR and then derived a closed-form expression for the average number of transmissions required to successfully deliver a packet to the destination. In [12], the expected transmission count (ETX) of different candidate selection algorithms was thoroughly evaluated based on a very useful discrete-time Markov chain. Similarly, a mathematical model was proposed in [13] to compute the total number of transmissions in the whole network; it showed that the main reason for retransmissions is that a forwarder with lower priority may be unable to hear the transmission of a neighbor with higher priority. Reference [14] formulated the end-to-end throughput bound as a linear programming problem and then proposed a heuristic algorithm to find a feasible scheduling of opportunistic forwarding priorities that achieves the maximum capacity. Considering link-level interference among the nodes, a closed-form expression for the maximum achievable throughput in directly connected multihop wireless networks was provided in [15]. Reference [16] mainly focused on calculating the end-to-end energy consumption of each potentially available route for both traditional routing and opportunistic routing. In summary, [11-13] focused on the number of transmissions needed to successfully deliver a packet to the destination, [14, 15] provided insights into the system throughput, and [16] studied energy efficiency. These theoretical studies differ from our work.

The works most closely related to ours are [4, 17]. Reference [17] was the first paper to analyze MED by deriving the steady-state joint generating function of the queue length distribution; however, the analysis was limited to tandem queueing networks. Reference [4] analytically derived the saturation throughput and MED for an interference-aware opportunistic relay selection protocol, but it is applicable only to two- or three-hop networks. Our work models the broadcast characteristic of wireless communication in OR, and the analysis methodology can be applied to very general multihop networks.

System Model

We consider a network consisting of $n + 1$ nodes. Node 0 is the destination, and the other nodes transmit fixed-length packets to node 0. The behavior of all nodes in data forwarding is coordinated by an OR protocol. The system operates in a time-slotted and synchronized fashion. Time is divided into slots whose length corresponds to the transmission time of a packet. A packet arriving during a slot cannot be forwarded before the beginning of the next slot. Each node is regarded as a first-come-first-served (FCFS) server.

The main principles of the forwarding protocol analyzed in this paper are summarized as follows.

(P1) Forwarding candidates are coordinated based on a priority rule. In this paper, the priorities are set according to the distance to the destination: the shorter the distance to the destination, the higher the priority assigned to the forwarding candidate. In Figure 1, the priorities of the forwarding candidates increase from node $n$ to node 1. From the viewpoint of implementation, each node can easily obtain global knowledge of the priorities via the forwarding-candidate discovery process elaborated in [9].
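As a small illustration of (P1), a node's forwarding candidates can be ordered by their remaining distance to the destination. The Python sketch below is hypothetical: the node coordinates and the candidate set are invented for the example, whereas a real protocol would obtain this information from the candidate discovery process.

```python
# Hypothetical node positions on a line, with the destination (node 0)
# at x = 0; larger x means farther from the destination.
positions = {0: 0.0, 1: 1.0, 2: 2.1, 3: 2.9, 4: 4.2, 5: 5.0}

def priorities(candidates, positions):
    """(P1): the shorter the distance to the destination (node 0), the
    higher the priority, so sort candidates by that distance, ascending."""
    dest = positions[0]
    return sorted(candidates, key=lambda c: abs(positions[c] - dest))

# Node 5's forwarding candidates within a two-hop range, highest first.
print(priorities([4, 3], positions))  # -> [3, 4]
```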
(P2) Since the nodes share a common radio channel, to avoid the collision, channel access is controlled in accordance with the preassigned priorities.More specifically, a node is allowed to transmit in a given slot only if the nodes with higher priority within its communication range have empty queues. (P3) After the upstream node broadcasts a packet, each node within its communication range may hear the packet.To avoid the duplicate transmission, the packet is received and further forwarded by only one node.Current node would drop the packet which would be received by other node with higher priority.That is, a given packet should be received only once by one of the forwarding candidates according to its corresponding priority order from high to low. (P2) and (P3) could be easily implemented when designing a practical OR protocol.Take nondirectly connected network as a general example.Assume that the communication range is two hops for each node in, which means node 3 may only be influenced by nodes {5, 4, 2, 1}.When receiving a packet from node 4 or node 5, node 3 would not transmit the packet until node 2 and node 1 finish their transmission.What is noteworthy is that node 1 or node 2 may also receive the same packet from node 5 or node 4. If node 3 receives this packet from node 1 and node 2 during its waiting time, it would drop this packet because the packet has been forwarded by nodes with higher priority. In light of the property for multivariate generating function, the steady-state average queue length at node is where () is the steady-state joint generating function of the queue length distribution and its common definition is where we use the notation = ( 1 , 2 , . . ., ) and let () denote the number of packets at node at time .Here, we assume that the Markov chain { ()} =1 is ergodic; namely, (0) > 0. The normalization condition is (1, . . ., 1) = 1. Let denote the arrival rate of packets at node .The MED in the system is obtained by applying Little's law to the whole system and it is given by For convenience, we employ the following notations: where 1 ≤ ≤ + 1 and +1 () = (0, . . ., 0) = (0). Basic Definitions In this section, the GGD is presented.Based on the GGD, the forwarding probability of OR is calculated.Finally, some other basic definitions for the analysis are presented. Let denote the probability of event ; we have where = (1 − ) and it could be proved that The traditional geometric distribution is a specific case of GGD with 1 = 2 = ⋅ ⋅ ⋅ = and = −1 . GGD can be widely used to reveal the radio characteristics of wireless transmission.Specifically, could be the event of the th transmission between any two nodes and could be the event that these two nodes have transmitted for times before being successful.Applying to OR, could be the event that the th forwarding candidate forwards the packet successfully.If it failed, the ( + 1)th forwarding candidate would forward the packet and so forth. is the delivery ratio decided by the underline propagation model and is the actual forwarding probability. Other Basic Definitions. 
Let () be the number of packets generated at node in the interval (, + 1] and its steady-state joint generating function of the input process is expressed as On the basis of the property for multivariate generating function, we have Let () be the number of packets sent out by node at the beginning of the slot .Based on (P2), we get where () is a indicator function denoted as And ℎ is the number of nodes with higher priorities within node 's communication range.When ℎ = 1, this network becomes a -node tandem system where the packets are forwarded hop-by-hop.Generally, the number of forwarding candidates are more than two nodes in OR (ℎ ≥ 2). The explanation for (11) is that node could transmit a packet when its own buffer is not empty while the buffers of its neighbors with higher priorities ( − ℎ ≤ ≤ − 1) are empty. The third term of ( 15) represents the number of packets received from the neighbor nodes of node . Analysis If () is determined, MED could be calculated from ( 3) and (1).In this section we first focus on () for directly connected networks.Then we analyze that in nondirectly connected networks. Directly Connected Networks. Based on ( 2), ( 4)-( 15), and using a standard technique proved in Appendix A, we obtain The term () − +1 () represents the event that the buffer of node is not empty while the buffers of nodes − 1, − 2, . . ., 1 are empty and in such a case a packet is transmitted from node to one of the forwarding candidates or retransmitted by node as shown in the term . In general, delivery probabilities between any nodes and the generating processes of the packets are known; namely, () and (calculated based on Definition 1) in ( 16) are known.Thus, in the following part, we would derive () through determining (0) and − 1 boundary terms () (2 ≤ ≤ ). Given all the boundary terms, the joint generating function () is uniquely determined.Recalling the derivation, it is observed that () is mainly determined by the priority rule and delivery probability. Nondirectly Connected Networks. In this kind of networks, the analysis becomes quite complex because node and node + ℎ may succeed in their transmissions simultaneously.To simplify the analysis, we assume that it is a linear network, the packets are only generated at node , and all the nodes have the same communication range denoted as ℎ hops. In the following part, we first study MED for the case = ℎ + 2 shown in Figure 3.Then, we propose an approximate analysis for MED in the general scene with > ℎ + 2. The motivation for considering such a special case is threefold.Firstly, for the cases ≤ ℎ + 1, nodes , − 1, . . ., 2, 1, which are already analyzed in Section 5.1, are in the communication range of each other.Secondly, it is the simplest nondirectly connected networks in which channel could be reused (node and node 1 may succeed in their transmissions simultaneously if the other nodes are silent).Thirdly, it will serve us as a crucial building stone in developing our approximate analysis of general nondirectly connected networks. It is that the buffers of node and node 1 are not empty while the buffers of other nodes are empty.In such case node and node 1 transmit simultaneously.Since node is the sole source, it is easy to see that only node could have more than one packet at a time instant.Other nodes can have at most one packet at a time.Considering (2), for = 1, 2, . . ., − 1, we can define () = + where , are two polynomials consisting of 1 , . . ., −1 , +1 , . . ., −1 , 1 , 2 , . . 
., +∞ .By setting = 0, 1 in () = +, two equations could be established over and .By solving the equations, we have Through substituting = 0 for = 1, . . ., − 1 in (27), we get By substituting (28) into (26), we obtain (25).The proof of theorem is completed. General Networks with 𝑁 > ℎ+2. As mentioned above, it is very difficult to calculate the MED in the scene with > ℎ + 2 since many nodes may succeed in their transmissions simultaneously.To circumvent this difficulty, we propose here an approximate analysis method. The rationale behind the proposed approximation is that for approximating the behavior of a node, it might suffice to consider the behavior of a substitute node which has a similar communication environment like that of the analyzed node.The behavior of a node in the general scene ( > ℎ+2) can be approximated as the behavior of substitute node in the special scene ( = ℎ + 2) in Figure 2. If nodes − ℎ − 2, − ℎ − 3, . . ., 0 are approximated as a destination node, nodes , − 1, . . ., − ℎ − 1 in the general case have the similar behaviors of the substitute nodes ℎ + 2, ℎ + 1, . . ., 1 in the special case.Assume that the corresponding transmission probabilities between the substitute nodes in the special scene is the same as that in the general scene; we get where , denote queueing delay of node in the general scene and special scene, respectively. Similarly, the upstream nodes , . . ., − ℎ − 1 could be approximated as a source node and the downstream nodes − 2ℎ − 3, . . ., 0 could be approximated as a destination node.Then, by substituting the corresponding transmission probabilities, we get Thus, the queueing delay of each node in general scene could be obtained by using our approach iteratively. Results and Discussion We validate the correctness of the theoretical derivation and approximation approach by comparing numerical results with simulation using MATLAB in this section.Beside the MED, an extra performance metric called saturation throughput is studied in our simulation.It is defined as the minimum value of arriving rate for which the MED becomes infinite. + 1 nodes (including the destination) with infinite buffers are used.Node is the source node where node 0 is the destination node.The external arriving process is the Bernoulli process with parameter .When considering directly connected networks, all the nodes are randomly distributed in a circular area.The diameter is 100 m to ensure that all the nodes in the network are directly connected to each other.When considering nondirectly connected networks, nodes are preassigned in a line.The distance between two adjacent nodes is the same and is set to 30 m.Consider Delivery probability based on the shadow propagation model in (40) is assumed.In the equation, () denotes the delivery The approximation source node The approximation destination node The approximation destination node Special scene probability for distance , is the transmission power, and are the transmission and reception antenna gain, respectively, is the signal wavelength (/, with the speed of light, = 3 × 10 8 m/s), is the path loss exponent, and is the system loss.Packets are correctly delivered if the received power is greater than or equal to ℎℎ.The delivery probability with varying distance is depicted in Figure 4.The corresponding simulation parameters are listed in Table 1.We implement the OR described in Section 3. 
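The simulation study described here was carried out in MATLAB. As a rough, self-contained illustration of the protocol being modeled (not the authors' code), the following sketch simulates the priority-based OR of Section 3 for a directly connected network and estimates the MED empirically. The Bernoulli arrivals at every non-destination node, the single uniform per-link delivery probability, and all function and parameter names are simplifying assumptions of ours; in the paper the delivery probabilities are distance dependent, and the GGD captures exactly this priority-ordered reception.

```python
import random
from collections import deque

def simulate_med(n_relays=3, arrival_rate=0.2, delivery_prob=0.8,
                 slots=200_000, seed=1):
    """Slot-level simulation of priority-based OR in a directly connected network.

    Nodes are numbered 0..n_relays, with node 0 the destination; a smaller
    index means a higher priority.  In each slot the non-empty node with the
    highest priority transmits (P2); the packet is kept by the highest-priority
    candidate that decodes it (P3), or is retransmitted later if none does.
    """
    rng = random.Random(seed)
    queues = {i: deque() for i in range(1, n_relays + 1)}   # FCFS queue per node
    delays = []
    for t in range(slots):
        # Channel access: lowest-index (highest-priority) non-empty node sends.
        sender = next((i for i in range(1, n_relays + 1) if queues[i]), None)
        if sender is not None:
            # Candidates closer to the destination are tried in priority order;
            # each decodes the broadcast independently with delivery_prob.
            for candidate in range(0, sender):
                if rng.random() < delivery_prob:
                    birth = queues[sender].popleft()
                    if candidate == 0:
                        delays.append(t - birth + 1)        # delivered to node 0
                    else:
                        queues[candidate].append(birth)     # forwarded onward
                    break                                   # only one receiver keeps it (P3)
        # Bernoulli arrivals; a packet arriving in slot t is eligible from slot t+1.
        for q in queues.values():
            if rng.random() < arrival_rate:
                q.append(t)
    return sum(delays) / len(delays) if delays else float("inf")

if __name__ == "__main__":
    for rate in (0.05, 0.10, 0.20):
        print(f"arrival rate {rate}: MED ~ {simulate_med(arrival_rate=rate):.2f} slots")
```

Running the sketch for increasing arrival rates reproduces the qualitative behavior discussed below: the MED stays close to the pure transmission time under light traffic and grows quickly as the network approaches saturation.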
The MED for the experimental data is the average time taken by packets to travel from the source node to the destination node. After each simulation, we record N, h, and the delivery probabilities among the nodes and then feed them into the mathematical model to obtain the numerical results. Each data sample in the following figures is averaged over 100 runs.

The MED for directly connected networks with two, three, and four nodes versus different arrival rates is plotted in Figure 5. We obtain the following observations. Firstly, the theoretical calculation agrees closely with the computer simulation. Secondly, under light traffic (arrival rate ≤ 0.24 packet/slot), the MED is small in all three networks. The MED in the four-node network is about one slot higher than that in the three-node network and two slots higher than that in the two-node network. Since network congestion does not exist under light traffic, the MED is mainly caused by the transmission time between nodes: the more nodes in the network, the higher the MED. Thirdly, as the arrival rate increases, the MED becomes large. When the traffic is heavy, the MED rises sharply. In OR, a low-priority node is not allowed to access the channel until all the packets buffered in the nodes of higher priority have been transmitted successfully. Thus, network congestion is the main reason for the sharp rise of the MED. Finally, the saturation throughput for the two-, three-, and four-node networks is 0.72, 0.47, and 0.36, respectively. Obviously, more forwarding candidates introduce more coordination time overhead. On the other hand, in order to supply spatial diversity and improve transmission reliability, enough forwarding candidates are required in OR. Therefore, it is very important to discover a suitable forwarding list when designing an OR protocol.

In Figures 6-10, the MED versus the arrival rate for nondirectly connected networks with varying numbers of nodes is plotted. The results in Figure 6 show that the theoretical analysis for the case N = h + 2 is in good agreement with the experimental data. The saturation throughput in the network with N = 4 is about 0.45. More notably, the saturation throughput does not significantly change with the increasing number of nodes. The reason is that, under heavy traffic, the MED is mainly caused by network congestion at the source node. However, in nondirectly connected networks, the source node is only directly affected by nearby nodes. Thus, adding nodes at the end of the linear network does not change the behavior of the source node.

Note that the approximate value coincides closely with the experimental data shown in Figures 7-10. In particular, when the arrival rate is at most 0.40 packet/slot, the mean absolute error between the approximate value and the experimental results stays within 1.78916 slots regardless of the increasing number of nodes (varying from 5 to 8). These results verify the rationality of the approximation method. Obviously, when the arrival rate becomes larger than 0.45 packet/slot, all the MEDs in Figures 7-10 rise sharply. In addition, we would like to point out that the differences become wider when the arrival rate becomes greater than the saturation throughput. A possible explanation is that once network congestion occurs, the behavior of a node can no longer be approximated by that of a node in the special case. In such cases, we are unable to determine the exact value of the MED but can still capture the general trend.
Conclusion

This paper investigates the MED of a regular OR scheme under different scenarios. We first propose a new mathematical distribution, the generalized geometric distribution, which generalizes the traditional geometric distribution and captures the forwarding behavior of OR. Then, we develop an MED calculation methodology for any directly connected network and for some special cases of nondirectly connected networks. We also propose an approximate analysis method for the general cases so that the analytical framework can be applied to more general scenarios. By applying the MED calculation to actual networks, the relationship between the MED and the number of forwarding candidates is revealed: in directly connected networks, the MED is quite sensitive to the number of forwarding candidates, while in nondirectly connected networks this phenomenon is not apparent.

This MED calculation methodology can be applied to an arbitrary OR protocol given knowledge of the priority rule and the delivery rates between the nodes. In that case, the relationship between the MED and the related parameters can be indicated clearly and specifically, which provides guidance for OR design, evaluation, and optimization.

Currently, our approach for nondirectly connected networks is limited to linear networks; future work will extend it to two- and three-dimensional networks.
5,462.8
2015-03-08T00:00:00.000
[ "Computer Science", "Mathematics" ]
Metformin Modifies the Gut Microbiota of Mice Infected with Helicobacter pylori Metformin is widely prescribed to treat type 2 diabetes. Diabetes patients treated with metformin have a decreased risk of cancers, including gastric cancer. Among the factors influencing digestive carcinogenesis, gut microbiota interactions have been intensively studied. Metformin exhibits direct antimicrobial activity toward Helicobacter pylori, which plays a crucial role in gastric carcinogenesis. Mice were infected with H. pylori and treated for 12 days with either metformin or phosphate-buffered saline (PBS) as a control. At the end of the treatment period, the mice were euthanized and cecal and intestinal contents and stool were collected. The gut microbiota of the three different digestive sites (stool, cecal, and intestinal contents) were characterized through 16S RNA gene sequencing. In mice infected with H. pylori, metformin significantly decreased alpha diversity indices and led to significant variation in the relative abundance of some bacterial taxa including Clostridium and Lactobacillus, which were directly inhibited by metformin in vitro. PICRUSt analysis suggested that metformin modifies functional pathway expression, including a decrease in nitrate reducing bacteria in the intestine. Metformin significantly changed the composition and predicted function of the gut microbiota of mice infected with H. pylori; these modifications could be implicated in digestive cancer prevention. Introduction Metformin, also known as 1,1-dimethylbiguanide, is the most widely prescribed glucose metabolism regulator for the treatment of type 2 diabetes mellitus globally [1]. The pharmaceutical effect of metformin is partially determined by AMP-activated protein kinase (AMPK) activation [2]. In response, the digestive system modifies glucose absorption and enhances anaerobic glucose metabolism [3]. In animals, metformin is accumulated at very high concentrations in the wall of the intestine [4]. For several years metformin has also been studied intensively for its antitumor properties in different types of cancer [5]. In 2005, a Scottish study hypothesized that metformin treatment may reduce cancer risk in diabetic patients [6]. A meta-analysis of seven cohort studies of gastric cancer, which has the third greatest mortality among cancers worldwide [7], showed that gastric cancer risk decreased in diabetic patients treated with metformin [8]. Helicobacter pylori plays a crucial role in gastric carcinogenesis by promoting inflammation and degradation of the gastric epithelium [9]. This Gram-negative bacterium colonizes the stomach mucosa in more than 90% of all gastric cancer patients [10]. H. pylori infection is among the most prevalent infections worldwide [11]; however, only 1% of infected patients develop gastric adenocarcinoma [12]. Factors influencing gastric cancer occurrence in infected patients include genetic host factors and environmental factors including the host microbiota [13]. Alpha Diversity Indices Are Reduced in the Gut Microbiota of Mice Infected with H. pylori and Treated with Metformin The Chao1, Shannon, and phylogenetic diversity (PD) whole tree alpha diversity indices were used to characterize the richness and diversity of the bacterial community within each sample. At the end of the treatment period, alpha diversity indices were compared between the metformin and control groups for three different sample types: stool, cecal content, and intestinal content. 
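Before turning to the results, it may help to recall how two of these indices are computed. The sketch below shows a typical calculation of Chao1 richness and Shannon diversity from a per-sample vector of OTU counts; the toy count vectors are invented for illustration, the logarithm base is left as a parameter because conventions differ between tools, and the PD whole-tree metric is omitted because it additionally requires the phylogenetic tree.

```python
import math

def chao1(counts):
    """Bias-corrected Chao1 richness estimate from a vector of OTU counts."""
    observed = sum(1 for c in counts if c > 0)
    singletons = sum(1 for c in counts if c == 1)
    doubletons = sum(1 for c in counts if c == 2)
    return observed + singletons * (singletons - 1) / (2.0 * (doubletons + 1))

def shannon(counts, base=2):
    """Shannon diversity index; base 2 and natural-log conventions both exist."""
    total = sum(counts)
    proportions = (c / total for c in counts if c > 0)
    return -sum(p * math.log(p, base) for p in proportions)

# Toy OTU count vectors for one control and one treated sample (invented data).
control_sample   = [120, 80, 45, 30, 12, 7, 3, 2, 1, 1, 1]
metformin_sample = [300, 150, 40, 5, 2, 1]

for name, sample in (("control", control_sample), ("metformin", metformin_sample)):
    print(name, "Chao1:", round(chao1(sample), 1), "Shannon:", round(shannon(sample), 2))
```

In this invented example the less even, less rich vector yields lower values for both indices, which is the pattern reported below for the metformin-treated samples.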
In stool, metformin treatment induced a significant reduction in alpha diversity indices (p ≤ 0.002; Figure 1). These significant decreases were also observed in the other anatomical digestive sites studied (cecum and intestine). Thus, metformin treatment led to reductions in all alpha diversity indices studied in a homogenous manner throughout the mouse digestive system.

Beta Diversity Analysis Shows Changes in the Gut Microbiota of Mice Infected with H. pylori and Treated with Metformin

Beta diversity analysis of the three types of digestive samples clearly showed that the metformin and control treatment groups clustered separately at each digestive site (Figures 2 and S2). Weighted UniFrac distances showed that the total diversity values of the two principal coordinates were 74.91%, 75.05%, and 85.53% for stool, cecal, and intestinal content, respectively.

Figure 2. 2D principal coordinate analysis (PCoA) plots created using weighted UniFrac distances (Adonis test: R² = 0.224, 0.388, and 0.377 for stool, cecal, and intestinal content, respectively; p = 0.001).

A comparison of bacterial profiles based on weighted and unweighted UniFrac and Bray-Curtis distances showed significant differences between the metformin and control groups (Figures 2 and S2). Adonis statistical tests showed that these observed differences were statistically significant (p = 0.001 for weighted UniFrac, unweighted UniFrac, and Bray-Curtis distances). Thus, metformin induced significant modification of the gut microbiota composition in H. pylori-infected mice.

Metformin Treatment Changes Taxonomic Repartition in the Gut Microbiota of Mice Infected with H. pylori

The microbial compositions of the three digestive sites at the phylum, class, order, and family levels are shown in Table 1. Only taxa with a relative abundance of ≥0.1% were computed. Bacterial taxa with the most significantly different microbial abundance between treatment groups (p < 1 × 10⁻⁴) are highlighted. At the phylum level, fecal microbiota was dominated by Firmicutes in both groups, followed by Actinobacteria in the metformin group (8.52% control vs. 28.76% metformin, p = 5.6 × 10⁻⁴) and Bacteroidetes in the control group (17.94% control vs. 6.50% metformin, p = 6.8 × 10⁻⁵).
The same trend occurred in the cecal microbiota composition. In the intestinal microbiota, the control group was dominated by Firmicutes (54.14% control vs. 22.63% metformin, p = 4.2 × 10⁻⁵), whereas the metformin group was dominated by Actinobacteria (37.08% control vs. 70.97% metformin, p = 1.5 × 10⁻⁵). Metformin treatment decreased the abundance of Firmicutes and Bacteroidetes in favor of an increased abundance of Actinobacteria at the three digestive sites.

Linear discriminant analysis (LDA) effect size (LEfSe) analysis was conducted to identify differentially abundant taxa in the three sample types. LDA scores were determined, and the specific taxa associated with metformin treatment were identified (Figure 3b). Among the fecal samples, 16 bacterial taxa were identified, including 12 genera that were differentially abundant between treatment groups (Figure 3b). Compared to the metformin group, 13 taxa were more abundant in the fecal microbiota of control mice. In the metformin group, a higher abundance of the Akkermansia, Anaerotruncus, and Bifidobacterium genera was observed in the fecal microbiota and also in the cecal content. Among them, Bifidobacterium was also more abundant in the intestinal microbiota (Figure 3b). Within the genera Akkermansia and Bifidobacterium, the species Akkermansia muciniphila and Bifidobacterium pseudolongum were identified.

Table S2 shows the LEfSe analysis results for the different digestive sites. Only bacterial taxa with LDA scores > 2 in at least two of the three digestive sites were included; this criterion was met by 12 bacterial taxa in the control group and three bacterial genera in the metformin group (Bifidobacterium, Anaerotruncus, and Akkermansia). The operational taxonomic unit (OTU) numbers associated with these taxa are listed in Table S3.

Metformin Directly Inhibits the Lactobacillus and Clostridium Gut Bacterial Strains In Vitro

To determine which bacterial strains are directly affected by metformin in the gut, we further examined strains that had a significantly different relative abundance between the metformin and control groups and were easily cultivable. Metformin treatment decreased the abundance of Lactobacillus, Aerococcus, and Clostridiales strains, and increased that of Bifidobacterium strains (Table S2). We observed no significant growth differences in Aerococcus sanguinicola, Bifidobacterium breve, or Bifidobacterium longum at different metformin concentrations (p > 0.01, Figure 4).
Significant decreases in growth were observed in Lactobacillus harbinensis and Clostridium difficile strains incubated with metformin concentrations of 20 and 50 mM (p < 0.01) and in Clostridium perfringens strains treated with higher metformin concentrations (50 and 100 mM, p < 0.01, Figure 4).

Potential Functional Pathways of Gut Microbiota Are Modified by Metformin Treatment in Mice Infected with H. pylori

Functional analysis of the gut microbiota was performed using the Phylogenetic Investigation of Communities by Reconstruction of Unobserved States (PICRUSt) software based on closed-reference selection of operational taxonomic units (OTUs). We examined 159 pathways based on the Kyoto Encyclopedia of Genes and Genomes (KEGG) reference database. Only KEGG pathways with a relative abundance > 0.001% were considered; these represented 124, 119, and 132 KEGG pathways in stool, cecal, and intestinal samples, respectively. Pathways with a significantly different abundance between the metformin and control groups were identified for all three digestive sites (p < 0.05, after Bonferroni correction). These differentially enriched KEGG pathways are shown in Figures 5 and S3.

Beta diversity analysis according to the predicted functional pathways for each group was performed. A comparison of the microbiota of the metformin and control groups based on metabolic function showed differential bacterial profiles on 2D PCoA plots (Figure 6a). Adonis statistical tests performed on these data showed significant differences among Bray-Curtis distances in all sample types (p = 0.001). In stool samples, five predicted KEGG pathways were significantly more abundant in the metformin group, compared with eight KEGG pathways in the control group. The differential KEGG pathways between the metformin and control groups represented 10.5% and 26.9% of all pathways examined for stool and cecal content, respectively. The highest number of differential KEGG pathways was found in intestinal content (53.8%, Figure 6b).
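As a rough sketch of how such differentially abundant pathways can be flagged from the PICRUSt output, the snippet below applies the 0.001% relative-abundance floor, tests each remaining pathway between groups, and applies a Bonferroni correction. The Mann-Whitney U test is used here merely as an easily available stand-in for the White's nonparametric t-test run in STAMP, and the pathway identifiers and abundances are invented.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def differential_pathways(control, metformin, min_abundance=1e-5, alpha=0.05):
    """Flag KEGG pathways whose predicted relative abundance differs between groups.

    `control` and `metformin` map pathway id -> list of per-sample relative
    abundances (fractions, so 1e-5 corresponds to the 0.001% floor).
    """
    shared = control.keys() & metformin.keys()
    # Keep only pathways above the relative-abundance floor in either group.
    kept = [k for k in shared
            if np.mean(control[k]) > min_abundance or np.mean(metformin[k]) > min_abundance]
    corrected = {}
    for k in kept:
        _, p = mannwhitneyu(control[k], metformin[k], alternative="two-sided")
        corrected[k] = min(1.0, p * len(kept))   # Bonferroni correction
    return {k: p for k, p in corrected.items() if p < alpha}

# Toy example with invented abundances for two pathways and three samples per group.
control   = {"ko00760": [0.004, 0.005, 0.004], "ko00620": [0.010, 0.012, 0.011]}
metformin = {"ko00760": [0.009, 0.010, 0.011], "ko00620": [0.010, 0.011, 0.012]}
print(differential_pathways(control, metformin))
```

With realistic sample sizes, the same loop simply runs over the full table of predicted pathway abundances exported from PICRUSt.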
In intestinal samples, the most significantly enriched KEGG pathways in the metformin group were nicotinate and nicotinamide metabolism, peptidoglycan biosynthesis, secondary bile acid biosynthesis, and streptomycin biosynthesis (Figure 5). Carbohydrate metabolism was predicted to be higher in the microbiota of control group mice than in metformin-treated mice. Pathways implicated in carbohydrate metabolism (green text, Figure 5), including pyruvate, propanoate, ascorbate, butanoate, and glyoxylate metabolism, were predicted to be overexpressed in the control group.

Finally, nitrate- and nitrite-reducing bacterial species were specifically studied in intestinal content by focusing on the KEGG gene expression of k02575, k00370, k00363, and k03385, which code for nitrate or nitrite reductase enzymes. Intestinal bacteria in the microbiota of metformin-treated mice showed significantly decreased nitrate and nitrite reductase functions (p < 0.05, Mann-Whitney U test) (Figure 6c). PICRUSt analyses showed that metformin treatment led to significant changes in the metabolic functions predicted for gut bacteria in infected mice, specifically at intestinal sites.

Discussion

In the present study, we examined changes in the gut microbiota at three different digestive sites, represented by stool, cecal, and intestinal samples, induced by oral metformin treatment of mice infected with H. pylori. We performed 16S rRNA gene sequencing and characterized the gut microbial profiles of mice infected with H. pylori and treated with metformin. Our results showed that metformin decreased the richness and diversity of the microbiota of mice. High microbiota diversity and richness are usually considered to be markers of a healthy microbiota; however, decreases in the abundance of any bacterial taxa may lead to the relative emergence of metabolically beneficial microorganisms such as Akkermansia muciniphila, which is associated with metabolic improvement [20]. In this study, Akkermansia muciniphila was found to be more abundant in metformin-treated mice than in control mice, which is consistent with the results of a previous study of diabetic patients that suggested that these effects could contribute to the therapeutic effect of metformin in diabetes treatment [21]. Beta diversity analyses showed that microbial features depended on the treatment received. Other studies have reported significant changes in the microbiota of obese mice, healthy mice, and diabetic humans following metformin treatment [21][22][23][24]. The direct effect of metformin on bacterial growth was measured on six bacterial strains.
Based on the taxonomic composition analysis, we selected six bacterial species that were available in the laboratory, easily cultivable, and whose relative abundance was either positively or negatively influenced by metformin treatment. This experiment showed that metformin directly inhibited the growth of Lactobacillus and Clostridium gut bacteria. Metformin can also indirectly modify the microbiota by acting on host physiology; for example, metformin increases the bile acid pool within the intestine [25], which may affect stool consistency and the microbiome [26]. More recently, metformin treatment was shown to enhance the release of glucose into the intraluminal space of the intestine in humans [27]; therefore, a high glucose concentration in the intestinal lumen may impact bacterial development. Thus, metformin-induced microbiota changes are probably the result of both direct and indirect effects.

The effects of metformin on human health have been intensively studied in recent years. Beyond its implication in diabetes treatment, metformin represents a promising anticancer drug in combination with conventional chemotherapies for different types of cancer [28]. Recent studies have also demonstrated the antiaging effects of metformin [29] and its direct antimicrobial effect against H. pylori, which has opened new avenues of research [19]. In cancer prevention, metformin has been shown to reduce cancer incidence in diabetic patients [6]. Microbiota composition and function are well recognized as influencing carcinogenesis through different mechanisms [30]. H. pylori is the best example of a specific bacterial pathogen that can trigger carcinogenesis by promoting inflammation and degradation of the gastric epithelium [9]. The bacterial microbiota may also influence intestinal barrier preservation, inflammation modulation, and the production of cancer-promoting metabolites [31]. In this context, we investigated the influence of metformin treatment on microbiota mechanisms potentially implicated in gut carcinogenesis.

Concerning specific bacterial taxa, despite decreasing global richness, metformin treatment led to increases in Bifidobacterium abundance. Bifidobacterium species have demonstrated anti-colorectal cancer activity by producing metabolites that directly inhibit the growth of colon cancer cells in vitro [32]. Bifidobacterium species are often integrated into probiotic products for health treatments, including cancer prevention. It has been suggested that probiotics containing Bifidobacterium species can contribute to colorectal cancer prevention and to improving the safety and effectiveness of colorectal cancer therapy [33]. A recent study of diabetic and non-diabetic mice with induced colorectal cancer showed that metformin treatment in association with probiotics containing Bifidobacterium species actively prevented inflammatory and carcinogenic processes [34]. Furthermore, a study of H. pylori-related gastric lesions showed a higher relative abundance of Firmicutes in gastritis and gastric metaplasia patients [35]. Interestingly, our results showed a decreased relative abundance of Firmicutes bacteria in H. pylori-infected mice in response to metformin treatment.

Functional features of the microbiota of mice in this study were examined using the PICRUSt software [36]. The resulting predicted bacterial profiles showed that the metformin and control groups had distinct metabolic functional signatures.
The intestinal microbiota showed the largest number of differential metabolic KEGG pathways between groups, indicating that metformin treatment leads to significant modification of the functional properties of the digestive microbiota, particularly at intestinal sites. Specifically, metformin treatment decreased nitrate and nitrite reductase functions in intestinal bacteria. The nitrate-reducing bacterial pathway was analyzed because it has been suggested to participate in the increase of intragastric concentrations of nitrite and N-nitroso compounds [37]. N-nitroso compounds promote mutagenesis and proto-oncogene expression, and inhibit apoptosis; they can also contribute to gastric carcinogenesis [38,39]. Increased functional activity of nitrate reductase has been observed in the gastric microbiota of gastric cancer patients in comparison with chronic gastritis patients [40]. In the present study, KEGG pathways involved in carbohydrate metabolism were enriched in the control group, which comprised mice infected with H. pylori but not treated with metformin. These pathways are predictive of bacterial production of short-chain fatty acids [41], which have been linked to cell hyperproliferation in colorectal and esophageal cancer [42,43]. These pathways have also been found to be enriched in the gastric microbiota of gastric cancer patients [18]. Together, these findings point to a potential contribution of short-chain fatty acid-producing bacteria to digestive tumorigenesis. These first results suggest that metformin, by modulating microbiota function, could be considered a potentially interesting agent for digestive cancer prevention.

Molecular mechanisms that sustain the anticancer effect of metformin through the regulation of glucose metabolism have been reported [44]. Thus, host physiology and the microbiota constitute different potential targets for metformin action in preventing cancer occurrence. In diabetic patients, metformin reduced the incidence of adenomas that could transform into colorectal cancer; therefore, metformin may be useful for the prevention of colorectal cancer in patients with type 2 diabetes [45]. In mice, metformin used in association with probiotics reinforces the beneficial effect on colorectal cancer prevention [34].

The limitations of this study were the lack of information about the gastric microbiota; more experiments should be performed to understand the metabolic modifications induced by metformin-related microbiota changes. We used female mice, which are less aggressive than males and easier to handle in animal facilities. Consequently, the results obtained are only valid in females and cannot be completely extrapolated to males, as there are a few differences in the composition of the gut microbiota between sexes and between females of different hormonal status [46,47]. However, females from the control and metformin groups were the same age at the beginning of and throughout the experiment; therefore, mice from the two groups were exposed to the same sex-hormone variations, allowing comparison of the groups.

In conclusion, the results of this study show that metformin significantly alters the composition and predicted function of the gut microbiota of mice infected with H. pylori. These modifications could be implicated in gut cancer prevention.

Animal Protocol and Sample Collection

The animal protocol used in this study was previously described [19].
Five-week-old Specific Pathogen-Free C57Bl6J female mice were chosen for their better ability to live with partners. Mice were infected intragastrically on 3 consecutive days with 0.1 mL of a highly concentrated suspension of mouse-adapted H. pylori strains SS1 and B47 (Mc Farland 7 opacity standard) [48,49]. Three days after the last infection, the mice were divided randomly into two groups: an infected group treated with PBS as a control (n = 18, two mice died before the beginning of treatment) and an infected group treated with metformin (Sigma Aldrich, St. Louis, MO, USA) at 10 mg/mouse (n = 20). This dosage was determined using the method of dose conversion between human and animal studies [50]. With this method, 10 mg/mouse/day corresponds to 2.4 g of metformin/day for a human adult. The maximal dosage of metformin used to treat Type 2 diabetes patients is 3 g/day. Each group received a daily treatment (0.1 mL) for 12 days by gavage. During this period all mice had access to water ad libitum and a normal diet. Stool samples were collected before infection with H. pylori and after 12 days of treatment. Weight was controlled during the study. After the treatment, no differences were observed in mice weight between the two groups (data not shown). At the end of the treatment, mice were euthanized by cervical dislocation. Cecum and intestine were aseptically taken to collect cecal and intestinal content separately. Intestinal content corresponds to the entire content of mice intestine, no specific region in intestine was selected. Gastric samples were not available for use in this study. All collected samples were immediately stored in a sterile tube at −80 • C. Mouse experiments were performed in level 2 animal facilities at Bordeaux University with the approval of the local Ethical Committee, and in conformity with the French Ministry of Agriculture (approval no. 4608). Initially, alpha and beta diversity were analyzed using mouse stool samples collected from both groups prior to infection and treatment. This analysis confirmed that the groups were comparable and presented no differences in alpha or beta diversity ( Figure S1). Same analysis was also performed on stools collected after infection with H. pylori and before any treatment with the same results (data not shown). DNA Extraction and rRNA Gene Sequencing DNA was extracted from samples using the QIAamp PowerFecal Pro DNA kit with a PowerLyzer (Qiagen, Hilden, Germany) according to the manufacturer's protocol. Quantification, sequencing of the V3-V4 region of 16S rRNA, and assembly were performed by Genoscreen (Lille, France; further details provided in Supplementary material S1). The 16S rRNA sequencing datasets generated in this study can be found in the SRA database (http://www.ncbi.nlm.nih.gov/bioproject/701274, accessed on 3 April 2021). Functional Metagenome Predictions Phylogenetic Investigation of Communities by Reconstruction of Unobserved States (PICRUSt) 1.1.0 software was used to predict virtual metagenomes for each sample using the 16S rRNA gene sequencing results [36]. The Kyoto Encyclopedia of Genes and Genomes (KEGG) was used as a reference database. Based on the predicted metagenomes, the relative abundance of KEGG genes or KEGG pathways (ko) within each sample was determined. Bioinformatics Analysis Alpha and beta diversity were computed using the QIIME v1.9.1 software. The samples have been rarefied to 32,606 sequences for these analyses. 
Alpha diversity was calculated in terms of the Chao1, Shannon, and phylogenetic diversity (PD) whole-tree metrics. Beta diversity was calculated using weighted and unweighted UniFrac or Bray-Curtis distances. Ordination was performed using principal coordinate analysis (PCoA). The strength and statistical significance of beta diversity were computed using the Adonis method with QIIME. Statistically significant differences in the relative abundance of taxa associated with the treatment groups were detected using linear discriminant analysis (LDA) effect size (LEfSe) [51]. Only taxa with LDA > 2 and p < 0.05 were considered significantly enriched. Predicted functional genes were compared between groups and the results visualized using the STAMP v2.1.3 software [52]. Statistical differences in KEGG pathway frequencies were determined using White's nonparametric t-test, followed by Bonferroni correction to adjust p values. In Vitro Bacterial Growth Experiments Strains of Aerococcus sanguinicola, Lactobacillus harbinensis, Bifidobacterium longum, Bifidobacterium breve, Clostridium difficile, and Clostridium perfringens were obtained from a collection at the University Hospital of Bordeaux. Strains were identified using matrix-assisted laser desorption ionization-time-of-flight (MALDI-TOF) mass spectrometry. All strains were pre-cultured for 24 h by inoculation on Columbia blood agar (Thermo Scientific, Waltham, MA, USA) under anaerobic conditions (5% H 2 , 10% CO 2 , and 85% N 2 ) at 35 • C. Pre-cultures were resuspended in sterile water at a concentration equivalent to the MC Farland 4 opacity standard for each strain. These solutions were mixed at 1:4 dilution with BH broth (Thermo Scientific) containing various concentrations of metformin (0, 5, 10, 20, 50, and 100 mM). Solutions were then inoculated into a 96-well microplate and incubated under anaerobic conditions. The effect of metformin on bacterial growth was analyzed in terms of the optical density at a wavelength of 600 nm (OD 600 ) using a Nanodrop microplate reader (BMG Labtech, Champigny-sur-Marne, France) after 24 h of incubation. Supplementary Materials: The following are available online at https://www.mdpi.com/article/ 10.3390/ph14040329/s1. Figure S1: Alpha and beta diversity comparison of fecal microbiota of the metformin and control treatment groups before the beginning of treatment. Figure S2: Principal coordinate analysis (PCoA) plots created using unweighted UniFrac and Bray distances. Figure S3: Differentially enriched Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways in fecal and cecal microbiota. Figure S4: Rarefaction curves showing the number of observed OTU as a function of the number of sequences per samples. Table S1: Comparison of the relative abundance of bacteria between the metformin and control treatment groups at species level. Table S2: Bacterial taxa with LDA scores > 2 in at least two of the three digestive sites in the metformin and control groups. Table S3: OTUs number of bacterial taxa with LDA scores > 2 in at least two of the three digestive sites in the metformin and control groups Table S4: Number of reads obtained in each sample after preprocessing. Supplementary material S1: rRNA gene sequencing details.
6,213.2
2021-04-01T00:00:00.000
[ "Medicine", "Biology", "Environmental Science" ]
Evaluation of Wirelessly Transmitted Video Quality Using a Modular Fuzzy Logic System Video transmission over wireless computer networks is increasingly popular as new applications emerge and wireless networks become more widespread and reliable. An ability to quantify the quality of a video transmitted using a wireless computer network is important for determining network performance and its improvement. The process requires analysing the images making up the video from the point of view of noise and associated distortion as well as traffic parameters represented by packet delay, jitter and loss. In this study a modular fuzzy logic based system was developed to quantify the quality of video transmission over a wireless computer network. Peak signal to noise ratio, structural similarity index and image difference were used to represent the user’s quality of experience (QoE) while packet delay, jitter and percentage packet loss ratio were used to represent traffic related quality of service (QoS). An overall measure of the video quality was obtained by combining QoE and QoS values. Systematic sampling was used to reduce the number of images processed and a novel scheme was devised whereby the images were partitioned to more sensitively localize distortions. To further validate the developed system, a subjective test involving 25 participants graded the quality of the received video. The image partitioning significantly improved the video quality evaluation. The subjective test results correlated with the developed fuzzy logic approach. The video quality assessment developed in this study was compared against a method that uses spatial efficient entropic differencing and consistent results were observed. The study indicated that the developed fuzzy logic approaches could accurately determine the quality of a wirelessly transmitted video. Introduction An ability to quantify the quality of a video transmitted over wireless computer networks is important in evaluating the networks' operation and performance. Two interrelated parameter groupings characterize wirelessly transmitted video quality. A grouping is related to the traffic and can be characterized by packet transmission delay, jitter and loss. These traffic measures can be accommodated as part of a quality of service (QoS) assessment [1,2]. The second grouping indicates the effects of noise and associated distortion on the images making up a video. These directly influence the user's perception of the video and can be accommodated as part of the quality of experience (QoE) assessment [3,4]. The overall quality of the received video can be quantified by combining the QoS and QoE measures [5][6][7]. QoS parameters (delay, jitter and packet loss) can be measured using traffic monitoring tools. Subjective QoE measures are acquired by allocating scores provided by the users under controlled laboratory conditions. Users are provided with the test video and are asked to assess and score it by considering a set of predefined indicators of quality [8,9]. The Video Quality Expert Group (VQEG) [7] has recommendations for conducting subjective video quality tests. These are categorized into a single video stimulus that the viewers are shown either a single video at a time or double stimuli that the viewers are shown two videos, i.e., the original and test videos, simultaneously on a split-screen environment. 
This approach, however, has some shortcomings, as it needs specialized software for the video player and interpretation of the scores, as well as careful selection of the reviewers to provide a representative evaluation. The limitations of subjective video evaluation tests could be mitigated by objective QoE tests. These tests can be full-reference, no-reference or reduced-reference, depending on the approach [8,10]. In a full-reference test, a frame-by-frame comparison of the reference (original or transmitted) and test (received or distorted) videos is performed. In a no-reference video quality evaluation, quality is assessed based on extracted features that characterize the image quality. The selection of appropriate test features can greatly affect the reliability of the assessment. A reduced-reference video quality evaluation is a hybrid between the full- and no-reference methods whereby representative features from the reference video are extracted and compared to the corresponding features in the test video. These features can be spatial and motion information.

Prior to reviewing the related studies in the next section, a brief description of the main QoS and QoE measures used in this study is provided to make the article more complete. QoS was determined by measuring packet delay, jitter and percentage packet loss ratio. Delay is the time a packet takes to reach its destination from its source. Jitter is the magnitude of the variations in the delay. Percentage packet loss ratio (%PLR) is the ratio of the number of packets lost during transmission (due to, for example, noise) over the total number of packets transmitted, multiplied by 100.

In this study a full-reference QoE measurement was adopted. The QoE measures were peak signal-to-noise ratio (PSNR), structural similarity index measurement (SSIM) and image difference (ID). PSNR in dB is determined by [11,12]:

PSNR(X, Y) = 10 log10( MPP^2 / MSE(X, Y) )

where MPP = 2^n − 1 is the maximum possible pixel value of the image and n is the number of bits used to represent each sample, e.g., when n is 8 bits per sample, MPP = 255. Larger values of PSNR signify a smaller distortion and thus a higher video quality. MSE is the mean square error between images X and Y, determined by:

MSE(X, Y) = (1 / (m n)) Σ_{i=1..m} Σ_{j=1..n} ( X(i, j) − Y(i, j) )^2

where m and n represent the image dimensions and i, j represent a pixel's location on the image. The structural similarity index (SSIM) for measuring image quality for image windows x and y of the same dimension from an image is determined by [11,13]:

SSIM(x, y) = ( (2 μx μy + c1)(2 σxy + c2) ) / ( (μx^2 + μy^2 + c1)(σx^2 + σy^2 + c2) )

where μx and μy are the means of the pixel values within the measurement windows x and y respectively, σx^2 and σy^2 are the pixels' variances, and σxy is the covariance of the pixels within the x and y windows. The variables c1 and c2 stabilize the division with a weak denominator. They are defined as c1 = (k1 L)^2 and c2 = (k2 L)^2, where the dynamic range L = 2^n − 1 and n is the number of bits per pixel. The factors k1 and k2 are by default 0.01 and 0.03. SSIM takes a value of 0 to 1, with values closer to 1 indicating a higher similarity.
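As an illustration of the two formulas above, the following sketch computes PSNR and a single-window SSIM for a pair of grey-level images using NumPy. It is not the code used in the study; the function names are ours, and evaluating SSIM over one window covering the whole image (rather than averaging over many small sliding windows, as standard SSIM implementations do) is a simplification.

```python
import numpy as np

def mse(x, y):
    """Mean square error between two equally sized grey-level images."""
    x, y = x.astype(np.float64), y.astype(np.float64)
    return np.mean((x - y) ** 2)

def psnr(x, y, bits=8):
    """PSNR in dB with MPP = 2**bits - 1; returns inf for identical images."""
    mpp = 2 ** bits - 1
    err = mse(x, y)
    return float("inf") if err == 0 else 10.0 * np.log10(mpp ** 2 / err)

def ssim_single_window(x, y, bits=8, k1=0.01, k2=0.03):
    """SSIM evaluated over a single window (here, the whole image)."""
    L = 2 ** bits - 1
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    x, y = x.astype(np.float64), y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

# Toy example: a reference frame and a noisy copy of it.
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(720, 1280), dtype=np.uint8)
noisy = np.clip(reference.astype(np.int16) + rng.normal(0, 10, reference.shape), 0, 255)

print("PSNR:", round(psnr(reference, noisy), 2), "dB")
print("SSIM:", round(ssim_single_window(reference, noisy), 3))
```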
Image difference (ID) is a measure of the overall pixel-to-pixel differences between two images. It can be determined in different ways, but in this study it was obtained from the histograms of pixel values of the images being compared, using the Euclidean distance to determine the histograms' overall difference. As the processing in this study was on grey images, the pixel values ranged from 0 to 255. The difference between the two histograms was determined using the Euclidean distance as:

ID(X, Y) = sqrt( Σ_{i=0..255} ( FXi − FYi )^2 )

where FXi and FYi are the occurrence frequencies of a pixel with value i associated with images X and Y respectively. An ID value of zero indicates identical images.

Video quality evaluation using MSE, PSNR or SSIM has been reported previously, but combining them with the traffic measures (delay, jitter and percentage packet loss ratio (%PLR)) needed further development [14][15][16][17]. This study achieved this goal by devising a modular fuzzy logic system consisting of three fuzzy inference systems (FIS). These systems processed and combined the values of delay, jitter and %PLR with PSNR, SSIM and ID and indicated the overall received video quality. A FIS uses fuzzy logic concepts to map its numerical input values through a reasoning process to its output. A FIS is an effective means of analysing computer network data [18,19]. The structure of a FIS is shown in Figure 1. The numerical inputs to the FIS are fuzzified by a set of membership functions that indicate the degree (a value between 0 and 1) to which an input belongs to a predefined fuzzy set. The knowledgebase contains the information about the process (in this study, the relationships between the inputs delay, jitter, %PLR, PSNR, SSIM and ID and the output that indicates the quality of the video). The inference engine performs reasoning by comparing the input values with the domain knowledge coded in the knowledgebase by a series of IF-THEN rules to indicate the output. De-fuzzification is a process whereby the outcomes of the rules are combined to produce an aggregated membership function from which the FIS output is determined [20].

Related Studies

QoE has been previously measured using PSNR [21] and SSIM [22]. A study showed that the PSNR is more sensitive to additive Gaussian noise than the SSIM [23]. SSIM and PSNR mainly differ in their degree of sensitivity to image degradations. PSNR is one of the most commonly used objective measures, but it has often been critiqued for providing results that are not fully consistent with subjective quality assessments [1]. However, its ease of implementation and interpretation makes it valuable [24]. Objective video quality assessment methods can reduce the cost and time of video evaluation [25]. A video assessment method based on delay, jitter, PLR and bandwidth found that QoS/QoE was closely related to video quality degradation [26]. A method that processed delay, jitter, packet loss rate and bandwidth to determine four types of video quality degradation was reported [1]. They analysed the impact of different levels of video degradation. A model based on a random neural network (RNN) was proposed to assess the impact of different media access control (MAC)-level parameters on video QoE in IEEE 802.11n wireless networks [25]. In their study, subjective tests were performed to correlate MAC-level parameters such as queue size, aggregation, traffic load and bit error rate with the user's perception of video. The proposed RNN-based approach estimated the impact of these parameters on the video QoE. The RNN was trained with a subjective dataset to estimate QoE. Their results showed that objective and subjective QoE were related. However, their study did not investigate the influence of traffic parameters such as delay, jitter and packet loss ratio. A study proposed a QoE prediction mechanism for streaming videos [8].
It evaluated quality degradation due to perceptual video presentation impairment, playback stalling events and instantaneous interactions between them. Their experimental results were close to a subjective QoE test method. A video QoE evaluation method that synchronized the reference and distorted videos to avoid an erroneous match was reported and was validated with a subjective video database [24]. Video streaming services in radio-over-fibre (RoF) networks were studied [27]. The sensitivity of the QoE measure was investigated and their results indicated that packet delay affected video quality less than jitter. When frames are lost during transmission, the order of frames sent with those received no longer matches. The resulting mismatch in the frame sequence numbers results in inaccuracies when comparing the original and received frames to establish video quality [24,28]. Another limitation of current objective QoE methods is that they typically rely on peak signal to noise ratio (PSNR), structural similarity index (SSIM) or video quality metric (VQM), which do not always provide consistent assessment [1,9,26,29]. Therefore, in our study, a further video quality measurement parameter called image distance (ID) was included. When dealing with wireless computer networks where interference and other contextual factors affect network services, QoS assessment on its own may be insufficient [27]. Thus, the performance evaluation of lossy wireless networks needs to take into account not only the physical network characteristics (QoS) but also how these affect the end-user application (QoE). Thus integrating QoS and QoE as is achieved in our study is valuable. A number of image and video quality assessment methods were reported in [30]. In our study we have compared the developed method against [31] as it was closest to the approach we followed. The features of this study are (details of each part are explained in the following sections): • Frame losses meant that frame arrival and transmission would not match correctly. Labelling each frame dealt with the issue of correct pairing of transmitted (original) and received (distorted) frames and thus improved the QoS/QoE evaluations. • Computational demand on processing was reduced through inclusion of systematic sampling of the images. This resulted in a subset of received that represented the overall video to be processed. • The sensitivity of measuring image distortion was improved through a new method called image partitioning. This enabled localized distortions to be more precisely represented. • Traffic parameters (delay, jitter and %PLR) that quantified QoS and were successfully combined with image distortion measures (PSNR, SSIM and ID) that quantified QoE to produce a signal measure of received video quality. • These evaluations were performed using a modularized structure that consisted of three separate fuzzy inference systems. This modularization improved transparency in operation and made future modifications easier. • Subjective video quality evaluations tests to determine mean opinion score (MOS) were performed by enrolling 25 participants. The results were compared with the fuzzy logic approach. • The devised methods were compared video quality assessment reported in [31] that uses an approach referred to as the spatial efficient entropic differencing for quality assessment (SpEED-QA) model. 
It computes perceptually relevant image/video quality features by relying on local spatial operations on image frames and frame differences. Wireless Network Set Up The wireless network set-up used in the study is shown in Figure 2. It incorporated two wireless Cisco© Access Points (APs) AIR-AP1852 (Cisco, place of origin: China) that have four external dual-band antennae. A Cisco© Catalyst 3560 switch (Cisco, place of origin: China) connected the APs and the network emulator (NetEm), installed on a laptop computer acting as a server, via 1 Gigabit Ethernet (GE). The arrangement established a point-to-point protocol (PPP) link between PC-1 and PC-2. NetEm was situated in between the PPP connections to allow the traffic parameters, i.e., delay, jitter and %PLR, to be controlled and thus drive the network toward different transmission qualities. This control provided a means of creating transmission environments for good, medium and poor video quality. The video was sent over the PPP link such that the traffic from PC-1 was transmitted to PC-2 through the NetEm server [32]. The selected video was a Big Buck Bunny [33] clip with a duration of 90 seconds, consisting of 1350 frames. The video container format was MPEG-2, with the content encoded using H.264. The frame resolution was 1280 pixels × 720 pixels. The video was streamed from PC-1 to PC-2 using the VideoLAN Client (VLC) media player over UDP/RTP (user datagram protocol/real-time transport protocol). This allowed the time-stamp and sequence-number features of RTP to be used for actual end-to-end delay and %PLR measurements. Through the NetEm software, traffic delay, jitter and %PLR were increased in three stages during the transmission. In the first stage, these parameters had a lower range of values, increasing to larger values in the third stage. The QoS, QoE and overall quality of video were measured during each stage. Wireshark was used to capture the video streaming traffic packets between PC-1 and PC-2, which allowed delay, jitter and %PLR to be determined. Mechanism Determining Video Quality The stages in determining the received video quality are shown in Figure 3. Three similarly structured fuzzy inference systems (FIS) performed the required evaluation processes as described in this section. Although the complete video quality evaluation could have been achieved using a single FIS, three separate FIS models were adopted to allow for a modular structure, thus making the design and implementation easier and the operation more transparent. The first FIS (FIS1) processed the traffic parameters (delay, jitter and %PLR) to indicate QoS. The second FIS (FIS2) processed PSNR, SSIM and ID to indicate QoE. The third FIS (FIS3) combined the outputs from FIS1 and FIS2 to provide the overall received video quality. The details of the tasks to develop these FIS structures are explained next. (a) Each transmitted image was labelled by software with an image serial number, starting with 1 and sequentially increasing to the last image. Two very small identical labels were used. The labels were inserted on the top left and right corners of each transmitted image. The repetition on two corners provided an alternative label in case one became unreadable due to distortion by noise. The labelling was required to allow the received images to be compared with the corresponding transmitted images.
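Step (a) can be sketched as follows. This is a minimal illustration assuming OpenCV; the label position, font and size are arbitrary choices rather than those used in the study, and reading the label back on the receiver side (e.g., by template matching or digit recognition) is not shown.

import cv2
import numpy as np

def label_frame(frame: np.ndarray, serial: int) -> np.ndarray:
    """Stamp the frame's serial number on the top-left and top-right
    corners so transmitted and received frames can be paired even if
    one corner is corrupted in transit."""
    labelled = frame.copy()
    text = str(serial)
    font, scale, thickness = cv2.FONT_HERSHEY_SIMPLEX, 0.5, 1
    (w, _), _ = cv2.getTextSize(text, font, scale, thickness)
    # Same label on both corners: one serves as a backup for the other.
    cv2.putText(labelled, text, (5, 15), font, scale, (255, 255, 255), thickness)
    cv2.putText(labelled, text, (labelled.shape[1] - w - 5, 15), font, scale,
                (255, 255, 255), thickness)
    return labelled

# Example: label every decoded frame before streaming.
# labelled = [label_frame(f, i + 1) for i, f in enumerate(frames)]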
Systematic sampling (a process whereby an image is chosen from the video at a predefined interval) was applied to the received images to reduce their number and thus the processing requirement. The time interval between the selected images was 1 second, i.e., an image was selected every second. This reduced the number of images from 1350 (the original number) to 90 (i.e., 1 image from every 15 images was selected). The selected images were compared with the corresponding transmitted images by using the inserted labels. (b-i) Traffic parameters (packet delay, jitter and %PLR) were measured for the received packets and were processed by FIS1 to determine the network QoS (the development of the FIS is explained in a later part). (b-ii) PSNR, SSIM and ID were determined from the paired transmitted and received images and were processed by FIS2 to determine the QoE. (c) The QoS and QoE values determined from steps (b-i) and (b-ii) were combined in FIS3 to obtain the overall video quality. Implementation of FIS1 QoS was determined using FIS1, which received delay, jitter and %PLR. Three membership functions were used to represent each of the three traffic inputs and three membership functions represented the FIS1 output. Nine If-Then rules were coded into the FIS1 knowledgebase. The rules are outlined in Table 1. The type of membership function for the inputs and output of FIS1 was Gaussian, as it provided flexibility to represent the measurements. The membership functions' ranges were chosen based on the International Telecommunication Union (ITU) recommendations for the video transmission parameters delay, jitter and %PLR, as shown in Figure 4 [18]. The membership functions shown in blue, red and green represent fuzzy sets of low, medium and high QoS respectively for the parameters considered. Each fuzzy rule was applied to the associated membership functions and the rules' consequences were mapped to the associated output membership functions. The output membership functions were aggregated and the centroid approach was used to perform de-fuzzification, which in turn provided the output of FIS1. Implementation of FIS2 FIS2 processed the values for PSNR, SSIM and ID obtained from the transmitted and received images and provided a value between 0 and 1 for QoE. QoE measurements were performed using two approaches. In the first approach, PSNR, SSIM and ID were determined from the whole image. In the second approach, each image was partitioned into four equal parts (top-left, top-right, bottom-left and bottom-right), the values of PSNR, SSIM and ID for each part were separately determined, and the smallest PSNR and SSIM values and the largest ID value amongst the four partitions were selected. Four partitions were chosen as a compromise between a higher sensitivity in localizing distortions (which requires a larger number of partitions) and a reduction in overall image distortion estimation (which requires a smaller number of partitions). The approach was aimed at providing greater sensitivity compared with processing the image intact. The effectiveness of these two approaches in determining QoE was compared. The justification for partitioning the images was to explore whether localized distortions could be better identified and represented. The inputs to FIS2, i.e., the PSNR, SSIM and ID, were fuzzified using three Gaussian membership functions referred to as low, medium and high. These are shown in blue, red and green in Figure 5. The output was defuzzified by three membership functions that represented low, medium and high QoE. These membership functions are shown in Figure 5.
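To make the FIS construction concrete, the sketch below builds a reduced FIS1-style system with the scikit-fuzzy control API. It is an illustration under assumptions: jitter is omitted, two membership functions per variable are used instead of the paper's three, two rules stand in for the nine in Table 1, and the membership-function ranges are illustrative rather than the ITU-based ranges of Figure 4.

import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Universes of discourse (ranges are illustrative only).
delay = ctrl.Antecedent(np.arange(0, 301, 1), 'delay')    # ms
loss = ctrl.Antecedent(np.arange(0, 10.1, 0.1), 'loss')   # %PLR
qos = ctrl.Consequent(np.arange(0, 101, 1), 'qos')        # %

# Gaussian membership functions (mean, sigma), as in the paper.
for var, spread in ((delay, 60), (loss, 2)):
    var['low'] = fuzz.gaussmf(var.universe, var.universe.min(), spread)
    var['high'] = fuzz.gaussmf(var.universe, var.universe.max(), spread)
qos['low'] = fuzz.gaussmf(qos.universe, 0, 20)
qos['high'] = fuzz.gaussmf(qos.universe, 100, 20)

# Two representative If-Then rules.
rules = [
    ctrl.Rule(delay['low'] & loss['low'], qos['high']),
    ctrl.Rule(delay['high'] | loss['high'], qos['low']),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input['delay'] = 120.0
sim.input['loss'] = 1.5
sim.compute()                 # centroid de-fuzzification (scikit-fuzzy default)
print(sim.output['qos'])

As in the paper, Gaussian membership functions are used throughout and the consequent is de-fuzzified by the centroid method; FIS2 and FIS3 follow the same construction pattern with their own inputs and rule tables.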
The ID results were normalized between 0 and 1 by identifying the highest and lowest values. The knowledgebase for FIS2 had eleven rules, as shown in Table 2. They mapped the inputs of FIS2 to its output and indicated QoE in the form of degrees of membership belonging to high, medium and low. The rules conformed to previous related studies [13,17,21,34,35]. Implementation of FIS3 FIS3 combined the QoS and QoE values determined from FIS1 and FIS2 to indicate the overall quality of the received video. The output was in the range of 0 (lowest quality) to 5 (highest quality). The QoS and QoE values were fuzzified using three Gaussian membership functions referred to as low, medium and high. These are shown in blue, red and green plots in Figure 6. Five rules were coded in the FIS3 knowledgebase. These are shown in Table 3. They mapped the two inputs to the overall video quality in the form of high, medium and low. The PSNR, SSIM and ID were measured for the transmitted and received video images identified through systematic sampling. In some images the distortion was localized to a specific part of the image. As PSNR, SSIM and ID consider the overall effect of the distortion, localized distortions can become less precisely represented. Figures 8a and 8b show the transmitted and received (distorted) images at time 65 seconds. The distortion is visible at the bottom edge of the received image. The PSNR and SSIM from the complete (intact) image were 36.08 dB and 0.999 respectively, and the ID was 0.48. The corresponding values for the partitioned image are given in Table 4, where partitions 1 to 4 represent the top-left, top-right, bottom-left and bottom-right parts of the image respectively. For Figure 9, the selected PSNR, SSIM and ID were 28.13 dB, 0.977 and 0.60 respectively. (Table 4: peak signal-to-noise ratio (PSNR), structural similarity index measurement (SSIM) and image distance (ID) for the full and partitioned images, obtained from the image shown in Figure 8a and its partitions in Figure 9.) Figure 10d shows the QoS determined by FIS1 for the video UDP/RTP traffic. The increase in delay, jitter and %PLR was produced by the NetEm software, which drove the degradation in QoS. At the beginning of the transmission, QoS was high at 90% until time 7 seconds, when QoS decreased to 62% due to an increase in %PLR (Figure 10c). Curve fitting (4th-degree polynomial) was used to indicate the trends for the measures. QoS changed based on the changes in the delay, jitter and %PLR, as defined by the membership functions in Figure 4. The values correspond to 0%-34% for low QoS, 35%-65% for medium QoS and 66%-100% for high QoS. The PSNR and SSIM were at their peaks at the start of the transmission and they reduced as the magnitudes of the traffic parameters (delay, jitter and %PLR) were increased by the NetEm software. The ID was close to zero at the beginning of the transmission, as the received images were very similar to those transmitted, but as the network parameters delay, jitter and %PLR increased, the ID increased correspondingly. For evaluations that used complete (not partitioned) images, the results for PSNR and SSIM were partially related to QoS, but at time 80 seconds PSNR and SSIM were high while QoS at that time was low. However, for the evaluations that used partitioned images, the PSNR and SSIM values were related to the determined QoS. Even toward the end of the transmission, between times 80-90 seconds, PSNR and SSIM for the partitioned image approach behaved similarly to the QoS values.
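For concreteness, the partition-wise computation behind these comparisons (and behind values such as those in Table 4) can be sketched as follows. This is a minimal illustration assuming 8-bit greyscale frames and scikit-image; the ID is computed from normalised occurrence-frequency histograms following the earlier definition, and the subsequent 0-1 normalisation of ID is omitted.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_distance(x: np.ndarray, y: np.ndarray, bins: int = 256) -> float:
    """Euclidean distance between the occurrence-frequency histograms
    of two 8-bit greyscale images (the ID measure)."""
    fx, _ = np.histogram(x, bins=bins, range=(0, 256), density=True)
    fy, _ = np.histogram(y, bins=bins, range=(0, 256), density=True)
    return float(np.sqrt(np.sum((fx - fy) ** 2)))

def quadrants(img: np.ndarray):
    """Split an image into top-left, top-right, bottom-left, bottom-right."""
    h, w = img.shape[0] // 2, img.shape[1] // 2
    return [img[:h, :w], img[:h, w:], img[h:, :w], img[h:, w:]]

def partitioned_metrics(ref: np.ndarray, dist: np.ndarray):
    """Worst-case PSNR and SSIM (smallest) and ID (largest) over the
    four partitions; these three values are the inputs to FIS2."""
    psnr, ssim, ids = [], [], []
    for r, d in zip(quadrants(ref), quadrants(dist)):
        psnr.append(peak_signal_noise_ratio(r, d, data_range=255))
        ssim.append(structural_similarity(r, d, data_range=255))
        ids.append(image_distance(r, d))
    return min(psnr), min(ssim), max(ids)

A full pipeline would run this over the 90 sampled frame pairs and pass the three returned values per pair to FIS2.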
The ID in both cases (i.e., the full image and partitioned image evaluation approaches) was similar. In addition, the behaviour of the ID was related to the determined QoS. According to the results, the partitioned image approach was more effective in representing the quality of images than the approach that used whole images. Figures 14a and 14b show plots of the QoE determined by FIS2 for the full and partitioned images respectively. The partitioned image method represented the video quality more precisely, as Figure 14b relates to the QoS plots in Figure 10 more closely. In order to demonstrate the manner in which the QoS and QoE values related to typical images with various levels of distortion, a number of images and their measurements are provided in Figure 15. The values are provided for the evaluations based on both full and partitioned images. Both QoS and QoE range from 0 (lowest quality) to 1 (highest quality). The partitioning of images into four parts improved the quantification of image distortion and thus represented the QoE more precisely. To illustrate this point, Figure 15a has a small distortion: the QoE values obtained from the full and partitioned images were 0.78 and 0.77 respectively. In Figure 15d, the image was severely distorted: the QoE obtained from the full image was 0.70, while the QoE obtained from the partitioned image was 0.17, signifying a higher sensitivity for the partitioned method. Figure 16 shows a plot of the overall video quality assessed by FIS3 using the image partitioning method. Scores 1 to 5 represent lowest to highest quality of the received video respectively. The quality of the received video was highest during the first 5 seconds and reduced to its lowest toward the end of the transmission. This trend correlated well with QoS, QoE and their associated parameters, thus indicating that the approach had correctly performed the evaluation. In order to further demonstrate the effectiveness of the devised FIS-based video quality evaluation system, a subjective test involving 25 participants was organized, whereby the participants, after watching the transmitted (original) video, scored each received image from 1 (lowest quality) to 5 (highest quality). The duration of the video was 90 seconds, corresponding to 90 images. The distorted video was initially shown to each participant. As scoring the individual images while the video was being played was not practical, the individual images were shown sequentially using the Windows Photo Viewer tool, and once an image had been scored, the next image was displayed. The scores were averaged and the resulting mean opinion score (MOS) is shown in Figure 17. The scoring was based on ITU-T Recommendation P.800 [25]. This opinion score allocates values from bad to excellent by mapping the quantitative MOS as excellent (5), good (4), fair (3), poor (2) and bad (1). A comparison of Figures 16 (from FIS3, objective test) and 17 (MOS, subjective test) shows that they had a similar quality trend over time. There were, however, a few differences; for example, from 80 to 90 seconds the subjective test indicated video quality from 1.4 to 2.2, which was higher than the objective test, which indicated video quality close to 1. These differences were related to variations in the participants' perceptions of the quality of the individual images. In order to have an independent comparison of the results from this study, a recently reported image and video quality assessment method [31] was chosen.
The authors of that study used a video quality assessment termed the spatial efficient entropic differencing quality assessment (SpEED-QA). This assessment method is an efficient natural-scene-statistics-based model that applies local entropic differencing between the test and reference data in the spatial domain [31]. They reported that SpEED-QA had a highly competitive performance against other objective image and video quality assessment methods. SpEED-QA was calculated by first determining the conditional block entropies of the reference and distorted images. The differences between the entropies of the corresponding blocks were then obtained and averaged over all blocks [31]. A single-scale (SPDss) and a multiscale version of SpEED were reported in [31]. Figure 18 shows the plot of the SPDss measure for the video used in our study. The multiscale plot looked similar to the single-scale plot and thus is not shown. The red graph through the plot shows its trend, obtained by a 4th-order polynomial. A relationship can be observed when comparing the trends in Figure 18 with those in Figure 14b (i.e., the FIS2 output generated from PSNR, SSIM and ID) and with the overall video quality obtained by FIS3 (which integrated PSNR, SSIM and ID with delay, jitter and %PLR) shown in Figure 16. In Figures 14b and 16, larger values represent higher quality, but in Figure 18 the lowest values represent higher quality; thus, the trends are inverted. In Figures 14b and 16, the images corresponding to times 30 and 59 seconds had very low quality, as indicated by a drop in the plot. The corresponding images in Figure 18 also had low quality, as indicated by a large increase in the plot. The images corresponding to the time between 85 and 90 seconds had the lowest quality in Figure 16. These images coincided with the lowest QoS, as indicated in Figure 10d. However, the corresponding images when assessed using SPDss did not have the lowest quality. This signifies an advantage of the FIS method reported in this paper, which integrates QoE and QoS to provide an overall measure. In order to further compare the SPDss and FIS methods, the values of PSNR, SSIM, ID, SPDss, FIS2 output and FIS3 output are tabulated in Table 5 for images corresponding to 1 second and then every 10 seconds. Figures 19a and 19b show plots of the FIS3 output (overall video quality) and SPDss against PSNR respectively. PSNR was used as it was a more sensitive measure for quantifying video quality compared with SSIM and ID. FIS3 shows a closer correlation to PSNR than SPDss. The correlation is indicated in the figures by the coefficient of determination (R²) obtained from the best fit through the data points. The values of R² were 0.945 and 0.623 for Figures 19a and 19b respectively. R² indicates the proportionate amount of variation in the FIS3 output and SPDss in response to PSNR; larger values of R² mean that the linear regression model explains a greater share of the variability. Figure 19c shows a plot of the FIS3 output against SPDss. The two were closely related for high quality images. For very low quality images, FIS3 graded them as 1 but SPDss assigned them different values (i.e., SPDss values of 30, 40 and 50); therefore, the relationship between the two methods for these images was not as obvious as for the higher quality images. The described results were for a typical video chosen carefully to be a good representative in terms of the richness of its content and the variability of information in successive frames.
This video had also been used in a number of other related studies due to its suitability. The evaluations could, however, be further extended to consider multiple videos. In this study systematic sampling was used; however, a sampling method that is aware of the traffic and video content might further reduce the computational load and improve the evaluation accuracy. In this study a video was wirelessly transmitted; however, different multimedia applications have their own specific QoS and QoE requirements. These requirements need to be adapted into the knowledgebase of the FIS for the quality of their transmission to be determined. This study devised and evaluated a modular fuzzy logic system to assess the quality of video transmitted over a wireless network. Developments in this area can help both network users and network service providers and assist in improving multimedia communication. The merits of the proposed approach were: • A modular design in determining QoS and QoE and in combining the two measures into a single video quality value. This modular approach made the evaluations more transparent in operation and possible future updates easier to realize. • The use of FIS enabled the mapping of the traffic parameters (delay, jitter and packet loss ratio) to QoS and, similarly, the mapping of the user's perception (based on peak signal to noise ratio, structural similarity index and image distance) to QoE to be carried out in an effective and flexible manner. • The adoption of image partitioning proved valuable in determining QoE and made its calculation more accurate. • The inclusion of a subjective test to obtain the MOS provided a further demonstration of the method's efficacy. Conclusions A modular fuzzy logic system to objectively evaluate the quality of video transmitted over a wireless computer network was devised and its performance was evaluated. The system consisted of three fuzzy inference systems that quantified the quality of service (QoS) from the packet delay, jitter and percentage packet loss ratio, quantified the quality of experience (QoE) from the peak signal-to-noise ratio (PSNR), structural similarity index measurement (SSIM) and image distance (ID), and combined these values into a single video quality measure. The modularity of the system ensured ease of implementation and transparency in its operation. It was demonstrated that by partitioning the images a more precise means of assessing their quality could be achieved. The determined QoS, QoE and overall received video quality related well to one another for the approach that used partitioned images. They also related well to the traffic measures delay, jitter and packet loss ratio. The efficacy of the developed video quality evaluation was further demonstrated by carrying out a subjective test based on 25 participants scoring the video and observing the correlation between the subjective and objective methods. An independent comparison of the video quality assessment method developed in this study was carried out against a method that used spatial efficient entropic differencing, and comparable results were obtained. The developed method is valuable for evaluating the quality of videos in multimedia computer networks.
7,651
2019-09-14T00:00:00.000
[ "Computer Science" ]
ALIGNING VIDEO-AND STRUCTURED DATA FOR IMAGING OPTIMISATION Abstract Imaging optimisation can benefit from combining structured data with qualitative data in the form of audio and video recordings. Since video is complex to work with, there is a need to find a workable solution that minimises the additional time investment. The purpose of the paper is to outline a general workflow that can begin to address this issue. What is described is a data management process comprising the three steps of collection, mining and contextualisation. This process offers a way to work systematically and at a large scale without succumbing to the context loss of statistical methods. The proposed workflow effectively combines the video and structured data to enable a new level of insights in the optimisation process. Angiographic equipment for interventional procedures is used for a range of different treatments. However, the utilisation of these technologies is complex, and methods for optimisation must be considered for each procedure separately. Along with the emitted radiation, modern imaging equipment will also generate massive data streams about their settings and functioning. The collection, monitoring and assessment of this information are central to the optimisation process (1)(2)(3)(4) . One challenge is that when the observed interventions are different from one another, variations in the structured data reports may be hard to interpret. Whereas one can discern the general trends, outliers and individual data points become impossible to fully understand because the unique conditions of these events are lost in the collection process. There is thus a need to develop a generally applicable strategy to overcome this issue. The purpose of the paper is to outline a general workflow that can begin to address the difficulties with imaging optimisation by augmenting the stream of structured data with an additional source, that is, with qualitative data in the form of audio and video recordings. If these alternative sources can be combined intelligently and efficiently, it might offer a new prospect for the large-scale optimisation process of image-guided treatments. The separate steps of a workflow are described and exemplified with data from a research project. The focus for these materials will be on data processing matters, where the subsequent optimisation decisions and implementations are reported elsewhere (5) . VIDEO IN THE OPERATING ROOM Many modern operating rooms are now equipped with recording instruments and ceiling-mounted cameras. The reasons for these installations may vary, but the sharing of knowledge is a recurrent argument. The very notions of the operating theatre, or the surgical amphitheatre, speak of a history where surgical suites were built with a dual purpose: the surgery itself and the teaching or performance in front of peers and students. With a developing understanding of asepsis, large live audiences not wearing scrubs were no longer invited to observe the events (6) . In this regard, video can nowadays displace the observer of a surgical procedure both in time and in space (7) . By recording surgeries, detailed analyses of the use of medical technologies (8,9) and specific work practices are enabled (10,11) . If such recordings are indeed collected, they may themselves serve multiple purposes. Technically, video uptakes should allow for the evaluation of imaging information in image-guided treatments to be also based on the physical and communicative work conducted. 
If analysed appropriately, these records may offer invaluable insights for imaging optimisation (12) . However, in practice, any systematic analysis of video-based materials is severely challenging and very time-consuming. A WORKFLOW FOR ALIGNMENT The following section outlines a possible workflow for allowing the incorporation of video materials in the optimisation process. The proposed procedure follows the three steps of collection, mining and contextualisation. Step 1: collection The first step leading up to the analysis is the collection and preparation of two separate datasets: structured data reports and video recordings, respectively. What is critical at this stage is to establish matching timelines for the separate datasets. In the simplest case, this only means that the video timestamps should be correctly set to enable synchronisation later. However, if multiple video-feeds are being collected, it might be necessary to carry out some more elaborate video-editing work to harmonise the separate streams. Structured data reports (e.g. the Digital Imaging and Communications in Medicine (DICOM) standard) are standardised but may contain different parameters. Typically, they hold detailed information about the technique and dose parameters along with the precise time for each event. The suggestion is to convert this information into Time Series Data Frames in pandas, the Python Data Analysis Library (pandas.pydata.org), or something similar. Such a move enables subsequent manipulation and analysis. Step 2: mining The second step constitutes the first part of the analysis. Here, only the structured data are examined and mined for potential insights. At this stage, a wide range of methods can be deployed. It is possible to work with everything from various machine learning techniques (13) and statistical methods to mere visual inspections of different plots. The overarching purpose should be to identify regions of interest and to search for anomalies or systematic differences. Such analyses may be sufficient in and of themselves. However, in a traditional approach, relying solely on structured data, the researcher would be barred from conducting in-depth analyses of many of the issues identified. The structured quantitative data tend to provide evidence that something occurred but not necessarily why it happened on a specific occasion. The purpose of the proposed workflow is to break this deadlock. Given the additional video-based dataset and the aligned timelines, additional analytic avenues are opened. With this set-up, any identified occurrence in the structured dataset can be quickly located in the video materials and subjected to further scrutiny. Step 3: contextualisation The final step is to make focused analyses of the brief instances leading up to the identified occurrences of interest. Once such unique instances have been singled out from the structured data material, it is feasible to contextualise them with the help of the video. The circumstances surrounding the use of a specific protocol or the reasoning accompanying the retaking of an imaging sequence are now made available for analysis. By reviewing the choices made in situ, one can assess their significance concerning the larger picture. This root cause analysis provides valuable input to optimisation and can significantly advance our understanding of the situated use of different protocols and settings.
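A minimal sketch of the alignment underlying steps 1-3 is given below. It assumes, purely for illustration, that the imaging system's dose events have already been exported to a CSV file with timestamp, modality and kerma columns (the file and column names are hypothetical) and that the wall-clock start time of the video recording is known.

import pandas as pd

# Step 1 (collection): hypothetical export of the DICOM dose report,
# one row per exposure event, indexed by its precise timestamp.
events = pd.read_csv("dose_events.csv", parse_dates=["timestamp"])
events = events.set_index("timestamp").sort_index()

# Step 2 (mining): cumulative dose over time; flag unusually high-dose
# DSA acquisitions as regions of interest.
events["cumulative_kerma"] = events["kerma_mGy"].cumsum()
high_dose = events[(events["modality"] == "DSA") &
                   (events["kerma_mGy"] > events["kerma_mGy"].quantile(0.9))]

# Step 3 (contextualisation): map each flagged event onto the video
# timeline so the surrounding circumstances can be reviewed.
video_start = pd.Timestamp("2020-03-12 08:30:00")
for ts in high_dose.index:
    print(f"Review recording at offset {ts - video_start} (event at {ts})")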
ILLUSTRATIONS The data for the reported case Over 18 months, 70 procedures of endovascular aortic repair (EVAR) were carried out and documented. Out of these, 12 procedures were randomly selected to be additionally recorded on video. In total, 58 h were recorded with the aid of a ceiling-mounted camera in the operating room, a microphone near the operator and by capturing the operator's screen. These separate video streams were combined into a single timeline using Final Cut Pro X (Apple Incorporated) software. An Artis Zeego angiographic system (Siemens Healthineers) provided the structured data, such as dose rates, times and the current settings. Digital subtraction angiography protocols The analysis started with a summary overview, where it became evident that there were substantial differences in the levels of radiation used. The medical procedures varied in complexity, and the procedure time ranged between 1 and 12 h (see Figure 1). However, these variations could not fully account for the differences in radiation levels between the procedures (see Table 1). The treatments that had also been recorded were plotted as a function of the timewise accumulation of radiation. For each treatment, this yields a unique pattern or roadmap (see Figure 2). The stepwise increases, evidenced most clearly in the procedures displaying the highest values, were the results of digital subtraction angiographies (DSAs) and cone-beam computed tomographies. While it is already known that DSAs generally come with a higher dose than fluoroscopy, the former were examined further. The entire material contained 667 DSA acquisition sequences, which relied on one specific protocol in 47% of the cases. This subset, however, accounted for 79% of the total incident air kerma (at the reference point) produced during all these acquisitions. It thus became relevant to identify where and when this protocol could be replaced by one with lower settings. Thus far, the examination had mainly worked from the structured data, but at this point the analysis would need additional information. If different protocols and sensor settings were to be recommended, those recommendations would also need to consider the resulting differences in image quality. In addition, the assessment of whether a single dose should be deemed too high or appropriate must build on an understanding of what problem the operator was trying to solve with that specific image acquisition. For each case, a higher dose may indeed have been needed to ground a diagnostic decision there and then; alternatively, the higher dose may have been avoidable. These are precisely the kinds of questions that we can now begin to address. To advance to this next level of analysis, it becomes necessary to situate specific image acquisition sequences in the unique circumstances of their use. What were the problems encountered there, and what did it take to solve them? Based on this understanding, it then becomes possible to make recommendations for optimised use. Locating points of interest A related example of how to work with contextualisation can be given by studying the roadmaps of individual treatments. The features of the plots can help guide the search for critical moments in the data. As with the example of Figure 3, through visual inspection it can be seen that the slope of the graph signifies the radiation dose rate.
Three-dimensional CT and DSA appear as vertical lines, while the remaining sloping lines indicate the different settings for fluoroscopy used by the operator. By studying this graph's profile, it becomes possible to identify different phases of the procedure and, within those phases, to locate moments where the operator changes between settings. Guided by this search, points of interest can be identified - for instance, moments where there are significant changes in the profile of the graph. Having first identified these points, it becomes possible to interrogate the video materials and analyse the exact circumstances in which individual decisions were made. With this method, it is suddenly possible to examine singular events that significantly affect the outcome in terms of radiation. These short episodes can be examined with video-based methods focussing on task-specific communication (14)(15)(16) . Concerning the example, the prevalence or absence of accounts motivating the change in dose would be relevant study objects. It also becomes possible to identify instances where some change in image acquisition settings was made but where the increase in image quality could not solve the actual problem encountered in that situation. By enabling the identification of such occurrences, the proposed workflow opens up new areas of improvement. In this way, the outlined approach can be considered a form of situational optimisation. Prerequisites and limitations The proposed workflow comes with its prerequisites and limitations. First, the imaging equipment's structured data reports should be systematically collected, which entails that relevant information is stored and made retrievable. Second, additional obstacles are found with the production and management of video recordings. At hospitals where cameras are present, it can be difficult to save those recordings to disc. Furthermore, even with such systems in place, there may be a lack of infrastructural support for the long-term storage of large video materials. CONCLUSION The described situational optimisation can draw on the rich and contextual information that video records afford while simultaneously avoiding some of the considerable drawbacks of this method. Video materials are complex, and any work on them is time-consuming. The outlined approach, however, can significantly reduce the time spent on convoluted materials. It offers a way to work systematically and at a large scale without succumbing to the context loss of statistical methods. The proposed workflow could thereby enable a new level of insights to inform and guide the optimisation process.
2,780.4
2021-05-25T00:00:00.000
[ "Computer Science", "Engineering" ]
Surgical navigation system for temporomandibular joint ankylosis in a child: a case report Background Computer-assisted surgical navigation systems were initially introduced for use in neurosurgery and have been applied in craniomaxillofacial surgery for 20 years. The anatomy of the oral and maxillofacial region is relatively complicated and includes critical contiguous organs. A surgical navigation system makes it possible to achieve real-time positioning during surgery and to transfer the preoperative design to the actual operation. Temporomandibular joint ankylosis limits mouth opening, deforms the face, and causes an increase in dental caries. Although early surgical treatment is recommended, there is controversy regarding the optimal surgical technique. In addition, pediatric treatment is difficult because in children the skull is not as wide as it is in adults. There are few reports of pediatric temporomandibular joint ankylosis surgery performed with a navigation system. Case presentation A 7-year-old Japanese girl presented with severe restriction of the opening and lateral movement of her mouth due to a temporomandibular joint bruise experienced 2 years earlier. Computed tomography and magnetic resonance imaging demonstrated left condyle deformation, disappearance of the joint cavity, and a skull width of 0.7 mm. We diagnosed left temporomandibular joint ankylosis and performed a temporomandibular joint ankylosis arthroplasty using a surgical navigation system in order to avoid damage to the patient's brain. A preauricular incision was made, and interpositional gap arthroplasty with temporal muscle was performed. After the surgery, the maximum aperture was 38 mm, and the limitation of lateral movement was eliminated. Conclusions A navigation system is helpful for confirming exact target locations and ensuring safe surgery. In our patient's case, pediatric temporomandibular joint ankylosis surgery was performed using a navigation system without complications. Background Temporomandibular joint ankylosis (TMJa) is characterized by immobility of the temporomandibular joint together with the formation of an osseous, fibrous, or fibro-osseous mass fused to the base of the skull. TMJa is commonly caused by trauma, local or systemic infection, or systemic disease such as ankylosing spondylitis, rheumatoid arthritis, or psoriasis, and it may also arise after TMJ surgery [1]. TMJa may induce oral dysfunction and, especially during the growth stage, may cause deformities of the mandible and maxilla. TMJa in children is uncommon, and it is especially challenging for oral surgeons not only because of the technical aspects of the surgery but also because of the difficulty of predicting any impact of the surgery on the patient's growth [2]. Regardless of the type of surgery selected for TMJa, the first step in the surgical treatment of TMJa is an extended resection of the ankylosed bone. However, the removal of a sufficient amount of ankylosed bone is extremely difficult and highly risky [3]. In addition, bone adhesion further complicates the anatomical structure of the TMJ. Recent technological advances have contributed significantly to surgical outcomes. For example, improved navigation systems can accurately indicate critical anatomical structures and identify the safest way to approach the target and the best orientation for safely performing surgery [4].
TMJ surgery carries a risk of brain damage, but computer-assisted navigation systems have recently been reported to be useful guides for complicated oral and maxillofacial surgeries [5]. Real-time intraoperative positioning can be tracked with such a navigation system, and therefore correlations between the preoperative design and the intraoperatively encountered anatomy can be assessed. We here report our use of a computer-assisted navigation system in TMJa surgery in a 7-year-old girl. Case presentation In 2012, a 7-year-old Japanese girl was referred to our hospital owing to difficulty opening her mouth following a facial bruise caused by a fall from a pull-up bar that occurred in August 2010. Initially, she underwent observation, and gradually her mouth-opening became more restricted. At her first visit, her maximum aperture was 13 mm and the movement of the right mandible was severely restricted (Fig. 1). The opening mainly involved a rolling movement, and no gliding was observed. There were no special medical, family, or psychosocial histories related to this patient. Her panoramic radiography and computed tomography (CT) scans demonstrated left condyle deformation caused by bone addition as well as a severe loss of joint space (Fig. 2). On magnetic resonance imaging, the joint cavity and articular disk were not visible. Based on these findings, we diagnosed left TMJa. Taking the risk of jaw undergrowth into consideration while also taking care to avoid the risk of brain damage, we performed TMJ arthroplasty using a surgical navigation system (Figs. 3, 4). First, a preauricular incision was made to reveal the TMJa region. Preoperative CT indicated that the skull width was only 0.7 mm at the thinnest point, and the error of the navigation system was confirmed to be 0.3 mm. Therefore, after confirming the position of the medial cranial fossa and the distance from the glenoid fossa to the skull base with the assistance of the Medtronic StealthStation S7 workstation with Synergy Fusion Cranial software (Medtronic Navigation, Louisville, CO), we performed a 10-mm-wide osteotomy and TMJ release. At that stage, the maximum aperture was 32 mm. Finally, the temporal muscle and fascia were inserted into the glenoid fossa created by the surgery. No complications occurred. Mouth-opening training was initiated the day after surgery. Six months after surgery, the maximum aperture was 38 mm and there were no longer any impediments to lateral mandibular movement (Fig. 5). In addition, CT demonstrated a loss of the bony adhesions in the condyle and glenoid, improvement of the condylar deformity, and interposition within the joint cavity (Fig. 6). Discussion The goal of treatment and early surgical intervention for TMJa is to restore joint function, improve the patient's aesthetic appearance and quality of life, and prevent any recurrence and growth disturbance [6]. Over the last few decades, a number of surgical methods for treating TMJa have been developed, including gap arthroplasty (GA), interpositional gap arthroplasty (IGA), reconstruction arthroplasty, and distraction osteogenesis [7,8]. However, controversy remains regarding the ideal treatment choices and materials. A recent study found that there was no significant difference in the 24-month recurrence rate between GA and IGA, but that after 24 months significantly fewer recurrence events were seen in patients who underwent IGA compared with those who underwent GA [9].
Although various autogenous and alloplastic materials are used in an IGA, alloplastic materials may induce heterogeneity. In addition, the recurrence rate is significantly higher in patients who underwent IGA using alloplastic materials than in those who underwent IGA using autogenous materials [9]. The temporal muscle is the most commonly used interpositional material because the procedure is convenient and there is little or no risk of heterogeneity. Computer-assisted surgical navigation systems were initially introduced for use in neurosurgery and have been applied in craniomaxillofacial surgery for 20 years [5,10]. The first use of a navigation system for TMJa was reported in 2002; the system was found to improve the safety of the operation and to reduce the incidence of complications [11]. Compared with non-navigation surgery, navigation-assisted surgery has shown a significant difference in the lowest thickness of the postoperative skull base, demonstrating that a more extensive removal of ankylosed bone can be achieved with navigation surgery [3]. Navigation systems also help the surgeon to control the amount of bone removed [3]. In addition, navigation systems are reported to improve quality and reduce risk in skull base surgeries [12]. The skull base is stable, and higher precision can be obtained compared with other neurosurgeries [13]. Therefore, navigation systems can contribute to the accuracy of TMJ surgeries and can minimize their invasiveness. There are two types of navigation systems: optical and electromagnetic. In the present patient's case, we used an electromagnetic system because no head fixation was necessary. First, a reference point was set on the patient's forehead, and a magnetic field generator was placed on the side of her head. Registration was then performed with the tracer probe, marker-free. During the operation, the error was confirmed to be 0.3 mm. During the osteotomy, the position of the bone removal site was confirmed by the navigation system, and the surrounding tissue was preserved. The navigation system used to treat the TMJa in the present pediatric patient helped confirm the exact location of the surgery and helped us perform a safe operation without complications. There have been few reports to date of pediatric TMJa surgery performed using a navigation system. The method has certain disadvantages, such as the registration error and the set-up time; in our case, the set-up had to be carried out after general anesthesia had been started. However, our present case indicates that, if the surgery is performed with these points in mind, this method can be beneficial in the treatment of pediatric TMJa. Conclusion We report a case of pediatric TMJa that was treated surgically using a navigation system. The surgery was performed safely, and there was no damage to nearby vital structures. The use of navigation systems in pediatric TMJ surgeries is beneficial.
2,101.4
2021-09-10T00:00:00.000
[ "Medicine", "Engineering" ]
A Study on the Expert System of Internal Medicine Diagnosis In recent years, medical technologies have developed rapidly and various kinds of expert diagnostic systems have been proposed. The expert system of internal medicine diagnosis is one of the important systems. In this paper, the design principle and functional modules of the expert system of internal medicine diagnosis are introduced. The calculation steps and the implementation procedures are also provided. With the BP-neural-network-based expert system as the major research object, this paper briefly explains the concept of the BP neural network, provides a practical example with iron-deficiency anemia as the diagnostic object, and verifies the validity of this expert system's diagnoses. INTRODUCTION With the increasing progress of science and technology, advanced technologies such as computers are becoming more and more widely used in daily life. In the medical field, diagnostic methods are no longer merely looking, listening, questioning and feeling the pulse; nuclear magnetic resonance monitoring and CT technology have become commonly used means in the medical field. In 2014, Shenglan Chen et al discussed the diagnosis of pulmonary embolism and pointed out its clinical features: problems such as expiratory dyspnea, palpitation and a fast heartbeat are the primary diagnostic characteristics. The authors chose Shaoxing Center Hospital and Taizhou Hospital as experimental sites so as to minimize chance effects in the data as far as possible. Patients were taken as the research subjects, with their clinical characteristics, therapeutic methods and diagnostic essentials as the research directions. The experimental results show that the death rate of patients treated with thrombolytic therapy was 6.25%, the death rate of patients treated with anticoagulant therapy was 12.5%, and the death rate of patients treated with combined thrombolytic and anticoagulant therapy was 2.5%, indicating that combined diagnosis and treatment should be an important means of managing this disease. In 2014, Yu Zhang et al analyzed the diagnosis of Alzheimer's disease and discussed the diagnostic method in neurology. The Second Affiliated Hospital of Mudanjiang Medical College was taken as the site for sample data extraction. Diagnosis and treatment data of 40 patients were collected from Sept 2011 to Sept 2013. The average length of stay was 21 days. There were 26 cases with a healing effect, 13 cases with a non-healing effect and one death; the cause of death was systemic failure. This disease is frequently seen in elderly people, with manifestations including obvious hypophrenia, significant memory deterioration and clear aggressive behaviors. Treatment means mainly include drug treatment, daily care and non-drug treatment. The purpose of drug treatment is to improve memory and psychological status. Daily care mainly includes psychological guidance, such as conversation, and positive mental stimulation of nerve cells. The cause of the disease is relatively complex and there are at present no good means for a radical cure. Comprehensive treatments are usually adopted so as to improve the cognitive ability of patients, reduce the burden the disease places on them and improve their quality of life.
Guoyan Luo et al studied the internal medicine diagnosis problems of ulcerative colitis and analyzed the therapeutic schemes. The authors chose a municipal hospital of Yunnan Province as the research site and conducted a case analysis of 72 patients with ulcerative colitis. Among them, 55 patients were cured and 13 patients improved in their physical condition after being treated with internal western medicines. No patient had an untoward effect in the process of treatment. Thus, it suggests that internal medicine diagnosis and treatment are suitable for ulcerative colitis and that the rate of untoward effects in patients is relatively low when this method is applied. This paper verifies the validity of the system from such aspects as the algorithm design and algorithm implementation of the internal medicine diagnosis system. INTERNAL MEDICINE DIAGNOSIS The key to internal medicine diagnosis is to determine the disease type of a patient by observing the patient's characteristics. As for digestive system diseases, symptoms such as stomachache, vomiting and nausea are characteristics for diagnosis. As for diseases of the respiratory system, pharyngitis, cough and fever are characteristics for diagnosis. As for circulation system diseases, substernal squeezing pain is a diagnostic characteristic of the disease. In addition, systemic diseases such as urinary system diseases and hematological system diseases have certain diagnostic characteristics. It is necessary to determine a specific disease after determining a certain systemic disease. For example, urinary system diseases include hyperthyroidism, diabetes and thyroid tumor. Further examination of the urinary system is shown in Figure 1. EXPERT SYSTEM It is a system design concept to complete the system design with existing resources so as to accomplish the diagnostic tasks of the expert system. Any kind of system development is not simply the application of a certain model; it is the comprehensive application of several kinds of models so that the system can function effectively. It can be seen from Figure 2 that the system includes models in three layers, and the purposes of the models differ so that the diagnostic tasks of the expert system can be completed.
It can be concluded from Figure 3 that the expert diagnostic system of internal medicine includes an administrator operating system and an attending physician system. Each system includes different functional modules. The administrator operating system includes information on administrators, information on physicians, parameters of the mathematical model and parameters of the system. The assistant administrator module includes detailed information on physicians, parameters of the mathematical model and parameters of the system. Figure 4 shows the system's algorithmic process. Initial data, such as the calculation error and the number of neural network layers, should be set before the execution of the algorithm. BP NEURAL NETWORK 4.1 Concept of the neural network model The neural network originates from neurobiology; its computational process is similar to the reaction process of biological nerve cells. In the neural network, axon terminals of many different neurons are able to reach the dendrites of the same neuron to form a large number of synapses. Neurotransmitters released by all synapses of different sources are able to affect membrane potential changes of the same nerve cell. This reflects the spatial information integration ability of nerve cells, which means that neurons can integrate input information from different sources on their dendrites. Based on this ability, the artificial neuron model is created by simulating the reaction process of neurons, as shown in Figure 6. Symbol descriptions are provided in Table 1. A single neuron forms the weighted sum of its inputs, subtracts a threshold $\theta$ and passes the result through an excitation function $f$: $u = \sum_{j} w_j x_j - \theta$. Images of two excitation functions are shown in Figure 7. The model adopted in this paper uses the second excitation function, the sigmoid $f(u) = 1/(1+e^{-u})$, so that $y = f\left(\sum_{j} w_j x_j - \theta\right)$ (2) is the complete mathematical model expression of a single neuron. Calculation steps of the BP neural network model The BP neural network is a feed-forward multilayer network trained to minimize the mean square error. Sigmoid is adopted as the excitation function when the back-propagation algorithm is applied to the feed-forward multilayer network. The recursion for the network weight coefficients $w_{ij}$ is calculated in the following steps: (1) Set the initial weight coefficients $w_{ij}$ and thresholds $\theta_i$ to small random values. This model is operated through Matlab, so the assignment is random in the computer; thus, the same program code might produce different results in different runs. (2) Input the sample values $x = (x_1, x_2, \ldots, x_n)$ and the corresponding expected outputs $y = (y_1, y_2, \ldots, y_n)$. (3) Calculate the output of each layer. For the output $x_{ik}$ of the $i$-th neuron in the $k$-th layer, $x_{ik} = f(u_{ik})$ with $u_{ik} = \sum_{j} w_{ij}\, x_{j,k-1} - \theta_i$, where $x_{j,k-1}$ are the outputs of the previous layer; for the first layer, $x_{j,0} = x_j$. (4) Calculate the learning error $d_{ik}$ of each layer. For the output layer (the $m$-th layer), $d_{im} = x_{im}(1 - x_{im})(x_{im} - y_i)$. As for other layers, $d_{ik} = x_{ik}(1 - x_{ik}) \sum_{j} w_{ji}\, d_{j,k+1}$. (5) Correct $w_{ij}$ and $\theta_i$: $w_{ij}(t+1) = w_{ij}(t) - \eta\, d_{ik}\, x_{j,k-1}$ and $\theta_i(t+1) = \theta_i(t) - \eta\, d_{ik}$, where $\eta$ is the learning rate. (6) After the weight coefficients of each layer have been calculated, determine whether the requirements are satisfied in accordance with the established criteria. If not, go back to step (3); otherwise, end the calculation. ALGORITHMIC EXAMPLES The design principle of the expert system of internal medicine diagnosis was briefly introduced earlier. Several simple practical algorithmic examples are listed as follows. The operating process of the expert system is introduced through the example of iron-deficiency anemia. There are two reference indexes for iron-deficiency anemia: one is serum ferritin and the other is the sideroblast count. The two characteristic values are first input, as shown in Table 2. The data in Table 2 are taken as the configuration parameters of the system. Figure 8 is obtained through the algorithm simulation of the expert system with Matlab software.
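Before turning to the results in Figure 8, the training recursion in steps (1)-(6) can be sketched in a few lines. The sketch below uses synthetic two-feature samples in the spirit of Table 2 and an arbitrary network size, so the values are illustrative rather than the paper's; the bias b plays the role of -θ in the formulas above.

import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

# Synthetic (serum ferritin, sideroblast) samples; label 1 = anemic.
healthy = rng.normal([13.0, 0.25], [1.0, 0.05], size=(50, 2))
anemic = rng.normal([9.5, 0.10], [1.0, 0.04], size=(50, 2))
X = np.vstack([healthy, anemic])
y = np.vstack([np.zeros((50, 1)), np.ones((50, 1))])
X = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize features

# Step (1): random initial weights and thresholds (b = -theta).
W1, b1 = rng.normal(0, 0.5, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 0.5, (4, 1)), np.zeros(1)
eta = 0.5

for epoch in range(3000):
    # Step (3): forward pass through each layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Step (4): learning errors for the output and hidden layers.
    d_out = (out - y) * out * (1 - out)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    # Step (5): correct the weight coefficients and thresholds.
    W2 -= eta * h.T @ d_out / len(X); b2 -= eta * d_out.mean(axis=0)
    W1 -= eta * X.T @ d_hid / len(X); b1 -= eta * d_hid.mean(axis=0)
    # Step (6): stop when the mean squared error is small enough.
    if np.mean((out - y) ** 2) < 1e-3:
        break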
In Figure 8, "O" is the parameter of a normal person and "*" is the parameter of an abnormal person.It can be seen from the figure that there is an obvious boundary between parameters of normal and abnormal people.It can be determined that whether a subject has anemia by inputting parameters in the program.Parameters of patient A are (9.70,0.09) and parameters of patient B are (12.68,0.24).Results are shown in Figure 9. It can be seen from Figure 9 that A is a healthy person while B is a patient with iron-deficiency anemia.The figure is in accordance with the actual results, reflecting the validity of this system. CONCLUSION This paper applies BP neural network into the research on the expert system of internal medicine diagnosis and indicates the rationality and the validity of the expert system of internal medicine diagnosis from the angle of computer based on a brief introduction of the internal medicine diagnosis principle.However, applications of BP neural network have certain drawbacks.For example, a great deal of data needs to be analyzed if there are numerous factors for the determination of a certain disease.Practical situations cannot be reflected authentically and objectively due to the complex calculation process of the neural network model.Besides, the neural network needs to reasonably estimate the training error.Once the error is not reasonable, there might be incorrect calculation results.Therefore, initial values of parameters should be properly set for the expert diagnostic system with BP neural network as the application principle. Keywords: internal medicine diagnosis; expert system; BP neural network; authenticity DOI: 10.1051/ C Owned by the authors, published by EDP Sciences, 2015 This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Figure 1 . Figure 1.Further examination of the urinary system Table 1 . Symbol descriptions of the mathematical model Symbols DescriptionsInput section of neurons, namely information released by the upper level Table 2 . Parameters of characteristic values
2,531.2
2015-01-01T00:00:00.000
[ "Computer Science" ]
A User Centred Approach for Bringing BCI Controlled Applications to End-Users In the past 20 years research on BCI has been increasing almost exponentially. While a great deal of experimentation was dedicated to offline analysis for improving signal detection and translation, online studies with the target population are less common. Although BCIs are also developed for entertainment, and thus potentially for healthy users, the main focus of BCI applications aiming at communication and control is on people with severe motor impairment. There is a great need for translational studies that test BCI at home with the target population. Further, long-term studies with users in the field are required to improve the reliability of BCI control. The user-centred approach appears suitable to foster such studies. Introduction In this chapter we will first define the needs and the gaps for bringing BCI to end-users and explain the model of BCI control which guides our interventions. Then we will describe the user-centred design and report first results of studies that adopted this approach for evaluating BCI applications. Those results led us to develop novel BCI components which we then tested with healthy and severely ill end-users. More specifically, we will introduce the optimized communication interface, the face speller, and remotely supervised BCI-controlled brain painting with a locked-in patient in the field. We will end the chapter by summarizing the requirements for improvement and the reasons for cautious optimism that the BCI community will be successful in providing end-users in need with reliable and independent BCI-controlled applications. The needs and the gaps In 1973 J.J.
In 1973, J. J. Vidal posed the question whether "electrical brain signals" can "be put to work as carriers of information in man-computer communication or for the purpose of controlling such external apparatus as prosthetic devices…?" (p. 157 [1]). Already in those days Vidal answered the question with a clear yes, and time has proved him right. Since the early nineties, when only a few articles on brain-computer interfacing were available, publication activity has increased almost exponentially [2]. We performed a coarse search in PubMed and PsycINFO with the terms BCI OR brain computer interface for 2011 through September 12 and received 461 hits. Thus, we may expect at least 700 publications by the end of 2013, indicating unbowed research activity, and thus funding. However, only 39 of these studies included the major target population, namely severely motor impaired individuals. Less than 10 percent of the published papers that refer to BCI in one way or another deal with motor impaired individuals, although many authors mention them as the target of their research [3,4]. This illustrates quite strikingly the gap between prosperous and active research in BCI laboratories with healthy participants and the transfer of the gained knowledge to the main target population of BCI, namely patients with severe motor impairment.

We are thus facing a translational gap, i.e., a lack of translational studies that investigate the problems and obstacles that emerge when BCIs are applied to severely ill patients in their home environment. Such studies would include a thorough quantitative and qualitative evaluation of BCI. We argue, and will describe, that a user-centred design may be suitable to bridge this gap.

Further, we are confronted with a reliability gap: intra- and inter-individual performance varies tremendously when controlling an application with a BCI in the short term, and even more so in long-term use. Many studies introduce one or another more or less small improvement in accuracy, bit rate or error rate (the main outcome measures of performance in BCI research). However, only few of them deal with targeted end-users in the field, where multiple sources of artefacts exist, including changes in the health status of the user, such as altered brain responses due to neuronal degeneration. Thus, the reliability gap can only be bridged with longitudinal studies that include end-users in the field. Such studies need to take into account the several aspects that may contribute to successful BCI control. An integration of these aspects leads to a neuro-bio-psychological, data-analytical, and ergonomic model of BCI control (Fig. 1) [5], which is defined in the next section.
A model of BCI control

A BCI acquires input from the human brain, mostly its electrical activity recorded with electroencephalography (EEG), which is filtered, classified and transferred to an output signal. This output signal relates to the brain response or pattern of the BCI user and conveys the respective intention of the user. Importantly, the user receives feedback of his or her action; thus, BCIs imply a closed loop between the system and the user. The output signal can be used to control an application, ideally one that meets the desire of the user. Four aspects can be identified that contribute to BCI control: (1) individual characteristics of the BCI user, (2) characteristics of the BCI, (3) type of feedback and instruction, and (4) the BCI-controlled application [5]. The individual characteristics of the user include psychological, physiological and neurobiological factors. For example, visuo-motor coordination and motivation have been identified as predictors of performance with BCIs controlled by sensorimotor rhythms [6] and event-related potentials [7]. Better inhibitory control, i.e., the ability to allocate attention and inhibit distracting stimuli, measured as heart rate variability, was related to better ERP-BCI performance [8]. The amplitude of the SMR peak at rest and the P300 amplitude evoked in an auditory oddball paradigm were also related to performance with the respective BCI [9,10]. Further, the location and quantity of neuronal loss due to accident or disease may deteriorate performance. Besides the hardware used, the software components, namely the classifier of the input signal, further determine BCI control (for review, see [11]). The common spatial pattern technique and stepwise linear discriminant analysis have proved to perform well in SMR- and ERP-BCIs [12,13] (a minimal sketch of such a classifier follows at the end of this section).

Little research is available on how the type of feedback and instruction provided in a BCI setting may influence performance. From early neurofeedback studies it is known that immediate feedback is superior to delayed feedback, which also held true in a BCI context [14]. It may also be the case that a more ecologically valid feedback in a virtual environment outperforms traditional two-dimensional feedback on a computer screen [15][16][17]. A quite robust finding across BCI types is that visual feedback is superior to auditory feedback [18][19][20]. In SMR-based BCI, the instruction to imagine movement kinaesthetically leads to increased performance compared to visual motor imagery [21].

Finally, the complexity of the application influences performance. Usually, simple spelling tasks are mastered more accurately and faster than environmental control or control of information technology such as the internet [22,23].

As can be seen, the model offers multiple toeholds for improvement and user feedback. In the following sections we will introduce novel achievements for BCI that improve and facilitate BCI use and are based on feedback provided by end-users within the user-centred approach. Before we detail the novel approaches, the user-centred design and its application to BCI will be outlined.
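As referenced above, the following is a minimal sketch of a linear discriminant analysis (LDA) based target/non-target discrimination of the kind used in ERP-BCIs. The epoch counts, feature dimensions and the synthetic "P300-like" offset are illustrative assumptions, not data from the cited studies; the sketch only shows the general shape of the classification step.

```python
# Minimal sketch of LDA-based ERP classification (synthetic, hypothetical data).
# Each feature vector stands for a downsampled post-stimulus EEG epoch
# (channels x time bins, flattened); labels mark target vs. non-target flashes.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_epochs, n_features = 400, 48          # e.g. 8 channels x 6 time bins (assumed)
X = rng.normal(size=(n_epochs, n_features))
y = rng.integers(0, 2, size=n_epochs)   # 1 = attended (target) flash
X[y == 1, :8] += 0.8                    # toy P300-like amplitude increase

lda = LinearDiscriminantAnalysis()
lda.fit(X[:300], y[:300])               # calibration runs
print("held-out accuracy:", lda.score(X[300:], y[300:]))
# In an online speller, lda.decision_function(epoch) would be summed over
# repetitions for each row/column before selecting the maximum-scoring one.
```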
The user-centred design and its application to BCI

BCI development demands close investigation of the end-users' needs and requirements and of the restrictions that come along with their diseases. These restrictions may range from small artefact contamination of the recorded brain signal up to the loss of perception modalities, e.g., loss of ocular control, as is often the case with progression of neurodegenerative diseases. Furthermore, attention allocation may be limited, and long training sessions may be too demanding. BCIs are required to accommodate such restrictions and to offer appropriate solutions, such as switching to auditory or tactile modalities when vision is impaired. Many of these restrictions are not evident when testing systems with healthy users. Furthermore, a system in daily use has to meet other requirements than a system developed for research purposes only, e.g., with regard to hardware setup, software handling and technical support. Bringing BCI technology to end-users' homes thus inevitably requires involving them in the developmental process.

More recently, the potential user of a BCI has come more into the focus of BCI development, and user-centred approaches have been adopted [22,24,25]. A user-centred approach implies an early focus on users, tasks and environment; the active involvement of users; an appropriate allocation of function between user and system; the incorporation of user-derived feedback into system design; and an iterative process whereby a prototype is designed, tested and modified [26]. The user-centred approach was standardized with the International Organization for Standardization (ISO) 9241-210 (Ergonomics of human-system interaction - Part 210: Human-centred design for interactive systems). According to this approach, three kinds of requirements have to be taken into account. (1) Business requirements: here, typically, a specific number is set in terms of how many systems should be sold in a defined time frame. Although our face speller and brain painting (see below) have already been adopted by a company (http://www.intendix.com/) and are thus available on the market, these products are not yet suitable for daily use in the field. (2) User requirements and functional specification: BCI requirements need to be specified from a user's point of view, including the functions required to support a user's tasks and the user-system interfaces. Usability goals that must be achieved and the approach for system maintenance at the user's home need to be defined. (3) Technical requirements: it has to be specified how the system will achieve the required functions and what data structure must be available for internal processing for the approach to be successful. Technical constraints need to be defined, such as the maximum data communication speed over a network or the trade-off between good EEG measurement and comfort with regard to the EEG cap. On the basis of these requirements, Zickler and colleagues asked experts in using assistive technology (AT), i.e., people with severe motor impairment, what they would consider the most important requirements for BCI [25]. Those requirements were functionality, independent use, and easiness of use (see the section "User-centred improvements of BCI controlled applications").
Two different approaches to BCI control were evaluated following these standards: BCIs that depend on the modulation of sensorimotor rhythms, referred to as SMR-BCI, and BCIs based on the detection of event-related potentials, referred to as ERP-BCI. To better understand the applications and their evaluation, we provide a condensed description of the SMR- and ERP-BCI as implemented for control of the specific applications described below.

SMR-BCI

BCIs can be established by detecting an active modulation of sensorimotor rhythms (SMR) over sensorimotor areas of the brain. In a resting state, these rhythms are highly synchronised in the alpha (10-12 Hz) and beta (12-30 Hz) bands. When moving or imagining a movement, these rhythms desynchronise, i.e., the power in these frequency bands can be actively modulated by the user. Thus, SMR modulation constitutes a signal for BCI control [27,28]; a minimal sketch of this band-power computation follows at the end of this subsection. Different classes of motor imagery can be selected depending on a user's individual brain signals and the degrees of freedom required to control an application. In a typical SMR-BCI, users trigger control signals for two classes by imagining movement of either the right or the left hand. Feedback is provided during the imagery tasks to enhance participants' performance, thereby reinforcing correct behaviour. As the hand areas are largely separated in the sensorimotor cortex, the evoked patterns are usually well distinguishable. Importantly, it has been shown that people with amyotrophic lateral sclerosis can utilize such modulations of the SMR to operate a BCI [29]. One of the remaining issues, however, is that a large number of participants are not able to achieve sufficient SMR-BCI performance [7,9,30,31]. BCI systems that do not rely on such active modulations of brain signals are available. The most frequently used system is described in the next section.
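As referenced above, the sketch below estimates band power with Welch's method and expresses event-related desynchronisation (ERD) as a percentage change relative to rest. The sampling rate, band edges and the synthetic signals are illustrative assumptions, not parameters from the studies discussed here.

```python
# Sketch: band power and event-related desynchronisation (ERD), assumed parameters.
import numpy as np
from scipy.signal import welch

FS = 250  # assumed EEG sampling rate in Hz

def band_power(x, lo, hi, fs=FS):
    """Mean power spectral density of x within [lo, hi] Hz (Welch estimate)."""
    f, pxx = welch(x, fs=fs, nperseg=fs)
    return pxx[(f >= lo) & (f <= hi)].mean()

rng = np.random.default_rng(1)
t = np.arange(0, 4, 1 / FS)
rest = np.sin(2 * np.pi * 11 * t) + 0.5 * rng.normal(size=t.size)     # strong 11 Hz SMR
imagery = 0.3 * np.sin(2 * np.pi * 11 * t) + 0.5 * rng.normal(size=t.size)

p_rest = band_power(rest, 10, 12)
p_task = band_power(imagery, 10, 12)
erd = 100 * (p_task - p_rest) / p_rest   # negative values indicate desynchronisation
print(f"alpha-band ERD during imagery: {erd:.0f}%")
```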
Event-related potential (P300) BCI

A typical BCI based on event-related potentials is the so-called P300 Speller, providing muscle-independent communication on a character-by-character basis [32]; for recent reviews see [33] and [34]. A character matrix is displayed on a computer screen, and groups of characters (usually rows and columns of the matrix) are highlighted (flashed) in random order. Users focus their attention on the desired field of the matrix (the target) by counting the number of its flashes whilst ignoring all other characters (non-targets). This pattern constitutes an oddball paradigm, as target flashes are rare (odd) compared to the large number of non-target flashes. For example, in a 6x6 matrix, one row and one column contain the target character, whereas 5 rows and 5 columns are to be ignored. Each stimulus triggers distinct event-related potentials, among which the P300 is usually the most prominent. It is a positive deflection in the EEG which occurs roughly 300 ms post stimulus. Its latency may vary strongly with paradigms and across individuals (for review, see [35]). Yet other ERPs are also elicited; therefore, a time window of up to 1000 ms post stimulus (typically 800 ms) is recommended to investigate users' individual ERPs (i.e., negative and positive deflections at distinct latencies). The characteristic sequence of event-related potentials is identified for each row and each column. The row and column with the most prominent ERPs are selected, and the respective letter appears on the screen. It has been shown that 72.8% of N=81 healthy BCI users were able to communicate with 100% accuracy by means of such an ERP-BCI and that less than 3% could not achieve any control [30]. Importantly, these results transfer to individuals with severe motor impairment, e.g., due to neurodegenerative disease, in that the speller can be utilized as a muscle-independent tool for communication (e.g., [22,[36][37][38][39]]; for review, see [40]). Since its first description in 1988, the P300 Speller has been used intensively, further investigated and modified in a plethora of research publications, leading to new applications for communication and device control (for review, see, e.g., [34]).

Evaluation of BCI controlled applications

The ISO 9241-210 (2010) defines usability as the "extent to which a … product … can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" (ISO 9241-210, 2010, p. 3). Effectiveness refers to how accurately and completely the users accomplish the task. Efficiency relates the invested costs, i.e., the users' effort and time, to effectiveness. User satisfaction refers to the perceived comfort and acceptability of using the product. Context of use refers to users, tasks, equipment (hardware, software and materials) and the physical and social environments in which a product is used (ISO 9241-210, 2010, p. 2) [22].
To accommodate these aspects when evaluating newly developed BCI-driven applications, a set of measures has been compiled to assess effectiveness, efficiency and satisfaction [22]. Effectiveness refers to how accurately end-users can communicate with the BCI and is operationalized as the number of intended, and thus correct, selections relative to the total number of selections. This measure is also often referred to as accuracy. Efficiency comprises the amount of information transferred (bit rate), which expresses speed and accuracy in one value, and the workload experienced by the end-user. A measure to assess subjective workload is the NASA task load index (TLX), which quantifies the workload for each task and identifies its sources [41]. Workload is defined in terms of physical, mental and temporal demands, and performance, effort and frustration. User satisfaction can be addressed with the Quebec User Evaluation of Satisfaction with assistive technology (QUEST 2.0), which is the only standardized satisfaction assessment tool designed specifically for AT devices [42]. It explicitly allows deleting inadequate and adding informative questions with respect to a specific AT, so that BCI-specific items could be integrated. Reliability, speed, learnability and aesthetic design were added to accommodate specific aspects of BCI, and the resulting questionnaire is referred to as the Extended QUEST [22]. Possible ratings range from 1 to 5, with 5 indicating the best possible satisfaction.

As another measure of device satisfaction, the ATD PA Device Form was used. The Assistive Technology Device Predisposition Assessment (ATD PA) is a set of questionnaires based on the Matching Person and Technology Model (MPT) of Scherer (2007) [43]. It addresses characteristics of an AT device and asks respondents to rate their predisposition for using the AT under consideration. The questionnaire rates the AT-person match and the expected support in using the device, in other words the expected technology benefit [44].

As a coarse measure of overall satisfaction with the device, a visual analogue scale (VAS) ranging from 0 to 10 (not at all - absolutely satisfied) was included in the evaluation procedure. An open interview allowed participants to state their opinions about the BCI and its application, and their recommendations for further development.

To date, three studies with this instrumentation have been performed with severely impaired end-users [22,45], which we describe in the following subsections.

Extended communication

Zickler and colleagues investigated the first prototype in which BCI was integrated into commercially available AT software [22]. Control of the AT was realized by means of the ERP-BCI described above. Participants tested the text entry, emailing and internet surfing options (Fig. 2). The oddball paradigm had to be implemented such that these applications, provided by the standard software, could be controlled. Instead of flashing rows and columns, red dots were assigned to each selectable item; the red dots then flashed in random order. Participants were able to write a text, send an email and surf the internet for a specific website.
Selection accuracy (effectiveness) ranged between 70 and 100% correct responses, and for all participants internet surfing was the most difficult task. The information transfer rate (efficiency) was between 4.5 and 8 bits per minute. Experienced workload (efficiency) differed considerably among users. While one user rated workload on all dimensions between 9 and 12 (out of a possible 100, with 100 being the maximum possible workload), two participants were always between 34 and 46, indicating moderate workload for all tasks. In one user, who was confronted with BCI for the first time, workload decreased with every session from 49 to 15, which was encouraging as it demonstrated that workload can be reduced with practice.

Satisfaction was high for the safety of the device and the professional services, and low for adjustment. With regard to the BCI-specific items, reliability and learnability were rated high, while speed and aesthetic design were rated only moderate. Obstacles to use in daily life were (1) low speed, (2) the time needed to set up the system, (3) the handling of the complicated software, and (4) the strain that accompanies EEG recordings (washing hair, etc.). Overall satisfaction ranged from 4 to 9, indicating substantial variance and considerable room for improvement. In the interview, participants stated that the greatest obstacle to use in daily life would be the EEG cap; there should be no cables, no gel, and it should look less eye-catching. The hardware should be contained in one device (instead of an amplifier, a laptop and a screen), and wheelchair control should be integrated. None of the participants could imagine using the BCI in daily life unless it were substantially improved.

The BCI controlled application described above already goes beyond simple verbal communication and may constitute a step toward inclusion via the world wide web. Some of our patients have been participating in BCI studies for a long time [46] and stated that they would also like to control other, more entertaining applications, such as playing games or painting.
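The bit rates just quoted (4.5 to 8 bits per minute) can be related to accuracy and selection pacing through the commonly used Wolpaw information transfer rate; the sketch below implements that formula. The matrix size and selection duration are illustrative assumptions, not the exact parameters of the study.

```python
# Wolpaw information transfer rate (ITR) for an N-choice BCI selection.
from math import log2

def itr_bits_per_selection(n_choices: int, accuracy: float) -> float:
    """Bits conveyed per selection given accuracy p over n equiprobable choices."""
    p = accuracy
    if p >= 1:
        return log2(n_choices)
    if p <= 0:
        return 0.0  # pragmatic guard; the formula is used for 0 < p < 1
    return (log2(n_choices)
            + p * log2(p)
            + (1 - p) * log2((1 - p) / (n_choices - 1)))

# Hypothetical example: a 6x6 speller (36 choices) at 90% accuracy,
# one selection every 30 seconds -> bits per minute.
bits = itr_bits_per_selection(36, 0.90)
print(f"{bits:.2f} bits/selection, {bits * 2:.1f} bits/min")  # ~4.2, ~8.4
```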
Brain painting

Together with an artist (Adi Hösle, www.retrogradist.com), the letter matrix controlled by the ERP-BCI was transformed into a painting matrix which allows the user to select shapes, sizes, colours and contours and to move a brush on a virtual canvas (Fig. 3). One participant stated: "Everyone talks about freedom, but the worst oppression is to be locked into my own body. This art form allows me to break from the prison…". With his painting (see Fig. 4) he wanted to illustrate that there is a light at the end of a tunnel.

Four severely motor impaired potential end-users participated in the evaluation study, which comprised seven daily sessions. In five of those sessions, participants could freely paint pictures of their choice. Effectiveness ranged between 80 and 90%, i.e., in 80 to 90% of cases participants selected the item they intended to. With an average of around five bits per minute, the information transfer rate (efficiency) was relatively low. This was due to an extended break between selections, introduced to give the user sufficient time to think about what to select next (a "creative pause"), and users explicitly appreciated this adaptation of the selection speed. Workload varied considerably between 20 and 50 and was sometimes due to disease-related physical problems experienced by the users, and thus independent of the specific BCI application. As in the communication application described above, reliability and learnability were rated high (4.2 and 5.0), whereas users were less satisfied with speed, adjustment and dimensions [44]. For two users, the ATD PA Device Form indicated a good match between the system and the user (4.3 and 4.2 out of a possible 5), but for the other two only 3.4 and 3.8, indicating that the match could be improved [44]. Overall satisfaction ranged between 5 and 8, also leaving room for improvement.

Taken together, users enjoyed painting and painted up to one picture per session. Three users would have liked to use Brain Painting in daily life once or twice a week. They reported high satisfaction with the learnability, ease of use and reliability of the device. The EEG cap and system operability clearly required improvement if the BCI application was to be used in daily life [44].
Gaming

Four severely disabled end-users, two of them in the locked-in state, evaluated the gaming application Connect Four (http://en.wikipedia.org/wiki/Connect_Four) [45]. The prototype was based on an SMR-BCI, enabling end-users to select a row or column and set a coin by regulating their brain activity. In six BCI sessions, end-users were trained to regulate their brain activity in copy tasks (the locations of coins were pre-defined by the experimenter), which were followed by free-mode game playing. Effectiveness in the copy task was low to medium in three of the four end-users, with accuracies varying between 47% and 73%; only one end-user, in the locked-in state, achieved high BCI control with up to 80% accuracy. With an ITR ranging between 0.05 and 1.44 bits/min, efficiency was low. The end-users rated their subjective workload as moderate (on average between 28 and 52 out of 100), with mental and temporal demand contributing most to their workload (efficiency). Two end-users reported high frustration, which first increased and then decreased again over the sessions. Nevertheless, the BCI game was well accepted by the end-users. On average, end-users were moderately to highly satisfied with the BCI (3.8 for the total QUEST score and 3.9 for the added BCI items total score; ratings range between 1 and 5, with 1 indicating "not satisfied at all" and 5 "very satisfied"). End-users were highly satisfied with weight, safety and learnability (4.3, 4.5 and 4.8). Reliability and speed were rated moderately (3.5). The main obstacles were the EEG cap and electrodes, the time-consuming and complex adjustment, the difficulty of handling the BCI equipment, and low effectiveness. As with the other two BCI controlled applications, the evaluation by the end-users implied that there is need for improvement. It seems to be more challenging to implement an SMR-BCI in the activities of daily living of end-users than an ERP-BCI controlled application [22,47]. Two end-users (one of them locked-in), however, stated that they could imagine using Connect Four in their daily life. The other end-user in the locked-in state could imagine using the BCI in his daily life provided it were substantially improved. The fact that both locked-in end-users were highly motivated throughout the BCI sessions and did not report any frustration, even when BCI control was low, reflects the need and hope of these patients that BCI may provide better communication and control opportunities.

Taken together, such evaluation studies are first steps toward bridging the translational gap experienced in BCI research and development. Based on these evaluation results, we state that, to date, ERP-BCIs are more effective and efficient for communication and interaction than SMR-BCIs (Table 1). End-users indicated that the speed of the BCI controlled applications was too low. Users would have liked to use the Brain Painting application several times a week, but none could imagine using the BCI for emailing and internet surfing unless substantially improved. Somewhat surprisingly, two end-users could imagine playing Connect Four in daily life despite low control.

Table 1. Summarized evaluation results for the three applications (Communication, Painting, Gaming). Clearly, all of them leave room for improvement. However, end-users would have liked to use the Painting and Gaming applications in their daily life.
User-centred improvements of BCI controlled applications

As outlined above, functionality, independent use and easiness of use were rated by expert users of assistive technology (AT) as most important for BCI use in daily life. In the next sections we describe how we addressed and improved these three aspects.

Functionality

In an effort to bridge the reliability gap and to address the speed of the BCI, we changed the stimulation mode of the widely used P300 spelling matrix. In the commonly used ERP-BCI, characters are flashed, and attention to one of the characters will usually elicit a distinct P300 [32] and sometimes other ERP components such as the N100 or N200 (e.g., [48][49][50]). One option to increase the reliability of the system is to enhance the signal-to-noise ratio of the recorded ERPs. It is well known that familiar faces elicit characteristic ERPs, among which the N170 and N400f ("f" for faces; Figure 5) are very reliable. Thus, instead of flashing the letters of the matrix, we overlaid rows and columns with a famous face (the face of Albert Einstein or Ernesto Che Guevara [51]). Figure 5 provides a screenshot of such a modified BCI matrix and illustrates the grand-average event-related potentials across N=20 healthy participants.

Increasing the signal-to-noise ratio by eliciting more target-specific ERPs significantly boosted offline BCI performance. Importantly, these findings were replicated online in a group of potential BCI end-users with severe motor impairment, e.g., users with amyotrophic lateral sclerosis or spinal muscular atrophy [38]. They benefited to such an extent that even some users who were unable to operate the traditional ERP-BCI reached an online accuracy of 100% with the face stimulation. As such, it was possible to decrease the number of stimulation cycles without negatively affecting performance,
i.e., the bit rate was strongly increased. In six online runs, the number of stimulation cycles was decreased from 10 to 6, 3, 2 and 1 (i.e., single-trial) stimulation sequences. Performance in N=9 users with neurodegenerative disease was significantly increased in all runs when exposed to the face speller compared to the classic ERP-BCI. Furthermore, we compared their single-trial performance to the online performance of N=16 healthy participants. As usual, performance was significantly worse in the classic ERP-BCI; however, no difference was found for the face speller. These results clearly underline how modifications to the system can diminish performance drops in end-user samples. Zhang and colleagues (2012) reported that inversion of faces may further increase the N170 component and thus performance in the BCI task. Face motion, face emotion and face familiarity, however, did not affect BCI performance [38,52]. We conclude that investigating stimulus material other than the classical character highlighting is a very promising direction for addressing the speed and reliability of the system.

Easiness of use

We developed a so-called optimized communication interface which allows for auto-calibration and word completion and is controlled via a user-friendly graphical interface [47]. After the subject has been set up with the electrode cap and connected to the BCI by an expert, the calibration process for parameterizing the classifier can be started by pressing a single button on the screen. No familiarity with the technical or scientific details of the BCI is required. Data from the calibration are automatically analysed in the background, invisible to the user, who only receives feedback on the successful or unsuccessful outcome of the calibration. In the latter case, calibration can be performed again with one click. If calibration is successful, communication with the P300-BCI can be initiated with another button press. We tested whether such a user-friendly BCI implementation can be handled independently by naïve users. All healthy subjects (N=19) handled the BCI software completely on their own and stated that the procedure was easy to understand and that they could explain it to a third person. A text completion option significantly decreased communication speed. We conclude that, from a software perspective, a BCI system can easily be integrated into an automated application that allows caregivers, friends or relatives to control such complex systems without prior knowledge at the end-user's home or bedside.
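The one-button calibration flow described above can be summarized in a short sketch. All function names, the stand-in data, and the accuracy threshold below are hypothetical, since the actual system is a GUI-driven BCI application; only the control flow (collect, train in the background, report success or failure, allow one-click retry) mirrors the description.

```python
# Minimal sketch of the one-button auto-calibration loop (hypothetical names;
# a real system would record EEG and train, e.g., an LDA classifier).
import random

def acquire_calibration_data():
    """Stand-in for recording a calibration run with known target letters."""
    return [random.random() for _ in range(100)]

def train_classifier(data):
    """Stand-in returning a trained model and its cross-validated accuracy."""
    return "model", random.uniform(0.5, 1.0)

def run_autocalibration(min_accuracy=0.7, max_attempts=3):
    for attempt in range(1, max_attempts + 1):
        model, acc = train_classifier(acquire_calibration_data())
        print(f"attempt {attempt}: estimated accuracy {acc:.2f}")
        if acc >= min_accuracy:
            print("Calibration successful - spelling can start.")
            return model
        print("Calibration failed - one click restarts it.")
    return None

run_autocalibration()
```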
Independent use

Finally, to bridge the reliability gap, we implemented BCI controlled brain painting for long-term use at the home of a 72-year-old locked-in patient diagnosed with amyotrophic lateral sclerosis (ALS) who used to be a painter [53]. The brain painting application, which had been successfully tested and evaluated by healthy subjects [23] as well as patients ([44] and see above), was embedded into an easy-to-use interface enabling use of the application after only a few steps. The family was trained to set up the 8-channel EEG cap and amplifier and to control the brain painting interface. The brain painting software automatically saves the duration of painting time, the number of runs, and the paintings, and transfers them to our lab for remote supervision. After every session, satisfaction is rated, and in a separate window family and caregivers can comment on the session. In doing so, occurring problems can be noticed and solved remotely by our experts via remote internet access. Figure 6 shows the end-user in a brain painting session at her home.

After each session, the end-user is asked to rate her satisfaction on the visual analogue scale (VAS) (Figure 7), and after approximately every 10 sessions, workload and device satisfaction are assessed with the NASA TLX [41] and the Extended QUEST 2.0 [22,42]. Her responses as well as her data can be monitored remotely by our experts to allow for system modifications or other interventions if necessary (e.g., advice to recalibrate the system).

In more than 8 months, the end-user has painted in more than 86 BCI sessions with an average painting duration of 66.2 minutes. Satisfaction with the device strongly depended on the functioning of the BCI (Figure 7). When implementing a remote-controlled BCI application, problems of malfunctioning arise which are immediately visible in the satisfaction rating (e.g., sessions 9 and 17 in Figure 7). Three sources of dissatisfaction could be identified: in most cases, dissatisfaction was due to technical problems (software/hardware, especially in the first sessions after set-up of the BCI system at the end-user's home); second, to problems on the end-user's side, e.g., low concentration or exhaustion, or not being able to realize the desired painting; and third, to poor control (e.g., due to incorrect cap placement or insufficient electrode gel) or loss of control over time (e.g., due to the electrode gel drying).
Also for this locked-in BCI end-user, effectiveness, reliability and easiness of use were the most important aspects of device satisfaction. Additionally, she mentioned professional support, specifically during times in which the system was not running properly. With a mean VAS satisfaction score of 6.2, her overall satisfaction is moderate to high. However, there is high variability, with the lowest satisfaction when the system was not working (early sessions) and when the painting was not as she expected it to be (later sessions). The highest ratings indicate that the system worked properly and that she was satisfied with her painting. Despite initial problems with the BCI, her motivation to continue brain painting has remained high even after more than 80 sessions. The end-user currently paints 2-3 times a week but stated that she would like to paint every day if she could. The limiting factor is the available time of the family setting up the BCI, but caregivers and friends are now also willing to learn the set-up and control of the application to enable her to paint more often. In conclusion, our results demonstrate that expert-independent BCI use by end-users in the field is possible and illustrate the important role of family and caregivers when transferring BCI technology from the research environment to the end-user's daily life. Figure 8 depicts some of her brain paintings.

Conclusions

Taking these results together, we can state that milestones have been achieved in bringing BCIs to end-users. BCIs were combined with standard assistive technology, the set-up of the system including the handling of the software was facilitated tremendously, and spelling speed was increased whilst maintaining high accuracy levels by altering the stimulation mode. For one exemplary end-user with severe motor impairment, an application was installed at home such that family and caregivers can set up the system, with maintenance and support provided remotely. With innovative applications set up at end-users' homes and with long-term studies, first steps have been taken to bridge the translational and reliability gaps encountered when bringing BCIs to end-users. The user-centred iterative process between developers and end-users has proved successful, and the results are powerful demonstrations that BCIs are coming of age and can face the transfer out of the lab to the end-users' home.

Figure 1. A model of BCI control comprising 4 aspects: individual characteristics, BCI characteristics, feedback and instruction, and the BCI-controlled application. Colours serve only to distinguish categories. The boldness of the black arrows indicates the possible strength of influence on BCI control [5].

Figure 2. Emailing and internet surfing with the Qualilife software. Possible items to select are indicated with a red frame. The red dots appear randomly at every selectable item. Thus, the to-be-selected item again constitutes a rare target within frequently appearing irrelevant items, and hence the oddball paradigm is realized (Figure 1 from [22], with permission).
Figure 3. Brain Painting matrix. To paint an object, its shape, location and transparency have to be defined. Only after the selection of "color" is the object transferred to the canvas. The toolbox at the top of the screen shows the latest selections (from left to right in this figure): grid size (3), brush size (1), transparency of color (100%), object shape (rectangle), color (black). The last square of the toolbox shows the latest selection, which in this example is "black".

Figure 4. Painting "Who" by a brain painter with locked-in syndrome.

Figure 5. Left: instead of flashing letters in the rows and columns, rows and columns are overlaid with the face (Einstein is not shown due to copyright). Right: averaged evoked potentials in response to targets and non-targets. In the face condition, prominent N170 and N400f components appear in addition to the P300. The ERP amplitude is depicted as a function of time [51].

Figure 6. ALS patient at her home, after finishing her brain painting. While painting, the brain painting matrix appears on one screen, while on an additional monitor, placed on the table in the background, she can follow the progress of her painting. The brain painting software is operated by the family or caregivers and requires only a few steps to set up.

Figure 7. Ratings of satisfaction (VAS = visual analogue scale) after each of 86 sessions with the brain painting application, with 0 indicating "not satisfied at all" and 10 indicating "very satisfied". Satisfaction ratings vary strongly between very low (0 to 3) and very high (7 to 10). The low ratings in the first 20 sessions were always due to malfunction of the BCI, which was still in the set-up phase. Continuous remote access to these data allowed in-time modifications to the system by our experts (Holz et al., in preparation).

Figure 8. Example brain paintings of the BCI user with locked-in syndrome. All paintings were painted with the BCI in her daily life, independent of BCI experts' control (with friendly permission from the brain painting artist).
The deep space quantum link: prospective fundamental physics experiments using long-baseline quantum optics

The National Aeronautics and Space Administration's Deep Space Quantum Link mission concept enables a unique set of science experiments by establishing robust quantum optical links across extremely long baselines. Potential mission configurations include establishing a quantum link between the Lunar Gateway moon-orbiting space station and nodes on or near the Earth. This publication summarizes the principal experimental goals of the Deep Space Quantum Link. These goals, identified through a multi-year design study conducted by the authors, include long-range teleportation, tests of gravitational coupling to quantum states, and advanced tests of quantum nonlocality.

Introduction: the case for deep space quantum optics

Space-based quantum optical links support future networking applications for quantum sensing, quantum communications, and quantum information science [1][2][3][4]. In addition, such links enable new scientific experiments impossible to realize in terrestrial settings [5]. The Deep Space Quantum Link (DSQL) is a spacecraft mission concept that aims to use extremely long-baseline quantum optical links to test fundamental quantum physics in novel special and general relativistic regimes [6][7][8]. The authors of this manuscript engaged in a two-year study of how quantum optics in space could be used to conduct new tests of fundamental physics, in complement to proposed tests utilizing matter or clocks. This manuscript describes the findings of the NASA-funded study and some of the technology requirements and outstanding mission design studies necessary to move the mission forward. DSQL is currently in the pre-project development phase, with mission integration expected to begin in the late 2020s. One or more nodes of a DSQL network could deploy on deep space platforms, such as the Lunar Gateway (LG) [9] or Orion modules.

A key challenge of contemporary physics is the reconciliation of gravity and quantum mechanics. Quantum Field Theory in Curved Spacetime (QFTCST), established in the 1970s, is the most reliable theory combining two well-established theories: quantum field theory for matter and general relativity for spacetime dynamics. It predicts effects like Hawking radiation from black holes [10] and, with the extension to semi-classical gravity, provides the theoretical framework for inflationary cosmology, which foretells a spatially flat universe [11]. However, most tests of QFTCST, whether in the laboratory setting of analog gravity or in strong-field astrophysical processes, are indirect. DSQL aims to conduct a series of direct tests of QFTCST in the weak-field regime, accessible through the deployment of long-baseline quantum optical links in the Earth-Moon system.

Our experimental concept is based on QFTCST. This theoretical basis is preferred since it is the most tested theory for quantum effects, just as general relativity is for gravitational and curved-spacetime effects. QFTCST is essentially the only existing theory for the full range of phenomena that it describes, i.e., quantum processes in both weak and strong gravitational fields, ranging from the Solar System to black holes and cosmology. For weak gravity, the regime relevant to our proposed experiments, there are alternative models to QFTCST. Their details can be found in reviews [12][13][14].
Defining experimental opportunities for QFTCST and its alternatives is an active field of contemporary research [5,[15][16][17][18][19]]. The essence of the planned experiments with DSQL is to transmit photons between inertial reference frames and across gravitational gradients to implement long-baseline teleportation, test Bell's inequality, conduct photonic tests of the weak equivalence principle, and validate relativistic predictions through single- and two-photon interference. In this context, the fundamental question is how to describe the evolution of the photonic state between creation and measurement. The simple notion of a trajectory, which makes sense in classical physics, is not suitable for describing the motion of quantum particles. In quantum physics, the fundamental concepts are time-evolving wave functions or, more generally, consistent histories. Quantum systems can be prepared in highly delocalized states, such as entangled states. When discussing particle propagation in curved spacetime, it is necessary to express particles in terms of quantum fields. This is because QFTCST is currently the only theory that provides a consistent and general way of coupling quantum matter to background gravitational fields. Furthermore, the quantum field description is essential for describing higher-order coherences of the electromagnetic field and for the formulation of a photodetection theory.

The weak equivalence principle states that all objects "fall" with the same acceleration regardless of their mass and composition. As embodied in general relativity, it says that all neutral objects, massive and massless, follow geodesic trajectories of the metric, dependent only on initial conditions of position and velocity, and not on mass or composition. The extension of the equivalence principle from classical mechanics to quantum physics is an important challenge for both theory and experiment [20,21]. Trajectories emerge from quantum histories as an approximation that is valid only for specific quantum states (quasi-localized), under specific conditions (sufficiently decohered), and in specific experimental setups with suitable measurement protocols. Therefore, it is necessary to phrase the equivalence principle in terms of quantum mechanical notions, namely preparations of the quantum state and statistics of measurement outcomes. For photons, the equivalence principle must be expressed in terms of photodetection probabilities. Any Colella-Overhauser-Werner (COW) [22] type test of the equivalence principle that touches upon quantum properties of photons or massive particles will have the significance of being the first direct test of QFTCST. This benchmark experiment is proposed to be conducted with photons using the DSQL.

The first photonic COW test planned for DSQL will use weak coherent pulses. Single-photon superposition state tests will follow. Subsequently, increasingly complex tests using entangled and hyper-entangled states will be conducted. This sequence of experiments concludes with measuring the gravitationally induced phase shift of frequency-entangled photons using intrinsically nonclassical interference in a Hong-Ou-Mandel (HOM) interferometer. Furthermore, the DSQL could test for new physics of gravitational origin manifested as non-unitary channels in the time evolution of photons and/or matter waves.
Such channels have been proposed in different contexts, including Ghirardi-Rimini-Weber-type collapse models [24], the Diosi-Penrose gravity-induced collapse model [25,26], and gravity-induced decoherence mechanisms [27,28]. These processes typically result in a loss of visibility in interference experiments. The predicted effects are too weak for Earth-based or space-station experiments, but deep-space experiments with long unbalanced interferometers could provide noticeable constraints that could be used to test the viability of alternative quantum theories.

Long-baseline links enabled by deep-space platforms create a high-latency quantum communication network, sufficient to incorporate human decision-making outside the light cone of synthesis events. DSQL could use this to perform a long-range Bell test with and without human involvement. Such an experiment would test for local hidden variables across long distances and address the "free will" loophole of Bell tests.

The DSQL could also perform tests of quantum optical teleportation and entanglement swapping between inertial frames. The inertial frames are defined by the relative velocities of the nodes in the communication network and by their positions with respect to the central mass. Descriptions of quantum states depend on the choices of reference frame and gauge condition, and these are not physical entities; only the measurement outcomes are physical, but these outcomes depend on the reference frame in which they are interpreted. Consider the scenario where an entangled-photon source is centered between two receivers, roughly equidistant on either side, arranged to travel either towards each other or away from each other. In the rest frame of the source, the two subsequent detection events are simultaneous and therefore space-like separated. These scenarios are interesting because it is possible to construct models of physics where the detection event at one measurement apparatus seemingly instantaneously prepares the measurement outcome of the other subsystem, as initially discussed by Aharonov and Albert in 1981 [29] and by Moffat in 1997 [30]. However, models of this type would be falsified if quantum correlations satisfy the standard Bell tests. Beyond their intrinsic scientific interest, demonstrations of long-baseline Bell tests and photonic teleportation are key technological achievements with direct relevance to the future deployment of global-scale quantum networks and sensor systems.

The large relative velocities and the network latency due to long baselines could degrade network performance if left uncorrected. Effects of gravity on timing synchronization that impact classical communications also affect quantum communications. In analogy to classical systems, these effects are more pronounced at higher communication rates. The proposed DSQL tests of long-baseline quantum teleportation between different inertial frames, long-baseline Bell tests, and tests of gravitational phase shifts will thus inform future network architectures.

Accomplishing these experiments requires access to different mission configurations. Some of the experiments can be accomplished with a single spacecraft and a ground station, while others require multiple spacecraft or a pair of ground stations. The orbital configurations optimized against the experimental goals include LEO, MEO, lunar orbits, and exotic orbits about the Earth-Moon Lagrange points.
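Returning to the long-baseline Bell tests discussed above, the sketch below evaluates the CHSH statistic that such a test estimates from coincidence counts. The analyzer angles are the textbook optimal settings, and the correlation function assumes ideal polarization-entangled (singlet) pairs; neither is a DSQL design parameter.

```python
# Sketch: CHSH statistic S for an ideal polarization singlet state.
# Local hidden variables require |S| <= 2; quantum mechanics reaches 2*sqrt(2).
import math

def E(a, b):
    """Polarization correlation for a singlet at analyzer angles a, b (radians)."""
    return -math.cos(2 * (a - b))

a1, a2 = 0.0, math.pi / 4               # Alice's two analyzer settings
b1, b2 = math.pi / 8, 3 * math.pi / 8   # Bob's two analyzer settings
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(f"|S| = {abs(S):.3f} (local bound 2, Tsirelson bound {2 * math.sqrt(2):.3f})")
```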
The flight- and ground-system technology required to achieve the experimental goals is also of direct relevance to terrestrial quantum networking: high-rate, high-fidelity entangled photon pair sources; high count-rate, low-jitter, low dark count-rate single-photon counters; narrow-band optical filters; temporal synchronization and time transfer for the quantum channels; and, in some cases, quantum memory development. In the following sections we highlight potential science experiments that could be conducted by the DSQL mission and the relevant technology development required. We also present a link model to estimate the efficacy of deep space optical links under different experiment configurations. The article concludes with a survey of space- and ground-based opportunities, in graduating complexity, to advance the science objectives of the DSQL mission.

Deep space quantum link science experiments

This section describes the experiments proposed for the DSQL mission, which, among other outcomes, would enable testing theories that predict novel coupling mechanisms between a photonic quantum state and gravity, with the goal of bounding these predictions. The experiments are categorized by the similarity of their scientific objectives: GR effects on light and tests of the equivalence principle, long-baseline Bell tests, long-baseline quantum teleportation, and applications of squeezed light. Individual experiments are achievable using a diverse set of mission architectures. These architectures could involve multiple ground and/or space network nodes, each with different instrumentation packages. Subsequent sections summarize these implementation options.

In conventional gravitational redshift measurements in space, local frequency references in a spacecraft and on the ground are compared through an optical (or microwave) link employing classical light. The proposed experiments are not meant to compete with such gravitational redshift tests in terms of precision, especially given that the photon flux is lower in our case. Instead, the key difference is that the frequency references are compared by means of optical links involving quantum states of light, such as entangled pairs, relying on phenomena with no classical analog, such as two-photon interference. This will enable DSQL to explore the direct interplay of general relativistic effects and genuinely quantum phenomena.

GR effects on light and tests of the equivalence principle

Light propagating across a weak gravity field is subject to phase delays caused by the characteristics of the local spacetime [31]. The DSQL mission will characterize these delays in the quantum optical regime in order to test the underlying predictions of general relativity. Another interpretation is that the proposed experiments will test quantum optical interference phenomena in the regime where gravitational effects are resolvable. These tests embody the notion of testing predictions of the weak-field regime of QFTCST. Essential background on Einstein's equivalence principle (EEP), and calculations of the magnitude of the predicted general relativistic effects, are summarized before the specific DSQL experiments are described.

Einstein's equivalence principle states that the outcome of any local experiment in a freely falling frame will be the same as in Minkowski spacetime, i.e., in the absence of gravitational fields, provided that all relevant length scales are much smaller than the characteristic curvature radius of spacetime.
This guarantees that gravitational tidal effects will not affect the outcome of the local experiment. The equivalence principle, which applies to situations where self-gravitation effects are negligible, comprises three different (but intertwined) aspects [32]:

• universality of free fall (UFF),
• local Lorentz invariance (LLI),
• local position invariance, also referred to as universality of gravitational redshift (UGR).

These have implications for the propagation of electromagnetic wave packets in curved spacetime, as well as for the comparison of clocks and frequency references at different locations. In particular, since the relevant modes of the electromagnetic field will have a transverse size far smaller than the spacetime curvature radius, their propagation is well described by the eikonal approximation, with light rays corresponding to null geodesics. While this approach has been thoroughly tested for classical electromagnetic waves, DSQL would offer a unique opportunity to test it for various quantum states of light, including true single-photon states, quantum superposition states, and entangled and hyper-entangled states. For optical frequencies, quantum mechanics and general relativity predict the same effect on single photons as on classical light, i.e., the effect is on the mode of the field, independent of the precise quantum mechanical excitation of that mode. The proposed DSQL experiments test different aspects of these predictions using long-range quantum optical channels between inertial frames.

The superposition of quantum states of a single photon was experimentally tested along space channels by observing the single-photon interference at a ground station due to the coherent superposition of two temporal modes [33]. The measurements were carried out using an unbalanced interferometer to create the two temporal modes and a matching one at the receiver to recombine them, thereby recovering the fringe visibility. The space channel was realized by exploiting retroreflectors mounted on several spacecraft, as originally exploited for the first single-photon exchange in space [34,35] and recently extended to 20,000 km with MEO orbits [36]. In contrast to the experiments conducted to date, which have classical analogs, the tests proposed for the DSQL mission have the advantage of controlled and verifiable quantum statistics and entanglement, leading to measurements that are intrinsically nonclassical.

In addition to verifying that frequency, superposition, and entanglement do not affect photon propagation, DSQL could measure independent relativistic effects in controlled experiments. Experiments could be designed to independently examine the following effects:

• "moving clocks run slow," from special relativity;
• "clocks located down a gravitational well run slow," from general relativity;
• the Doppler effect, from differences in path length between successive clock ticks;
• experimental offsets due to drifting equipment.

These effects are important to the operation of global navigation satellite systems and were calculated in the design phase of GPS [37]. The effects are large enough to matter but small enough to be treated as first- and second-order corrections. For example, a clock near the surface of Earth will measure an extra 10 ns/day for every kilometer of altitude [38].
More generally, when we compare the rate of a clock near the rotating surface of Earth to one on an orbiting satellite, the fractional change is given by simple dimensionless correction factors involving the fraction of the speed of light, v/c, and the Schwarzschild radius of the gravitational potential well (in this case that of the Earth), given by R_Schwarzschild = 2 G_N M_Earth / c^2 ≈ 1 cm. The ratio of the clock-tick interval on the satellite to one moving with the surface of Earth is [39]

\frac{\Delta\tau_{\mathrm{satellite}}}{\Delta\tau_{\mathrm{Earth}}} \approx 1 + \Delta_{\mathrm{observatory}} - \Delta_{\mathrm{satellite}}, \qquad \Delta \equiv \frac{v^2}{2c^2} + \frac{R_{\mathrm{Schwarzschild}}}{2r}. \tag{1}

The first correction is constant at 7 × 10^-10, with 10^-13 variation for different elevations. (This is for an observatory at the equator, but for observatories elsewhere the reduction in velocity is compensated exactly by accounting for the Earth geoid.) The second correction depends on the satellite orbit. For a satellite in a circular orbit, a good approximation to this order is to balance the centripetal force against Newton's gravitational force, in which case

\frac{v_{\mathrm{satellite}}^2}{c^2} = \frac{G_N M_{\mathrm{Earth}}}{r_{\mathrm{satellite}}\, c^2}.

Writing this in terms of R_Schwarzschild, so that v_satellite^2 / (2c^2) = R_Schwarzschild / (4 r_satellite), we recover the somewhat well-known result for GPS that the time dilation from special relativity is half of that from general relativity's gravitational effect. The overall satellite correction is then simply

\Delta_{\mathrm{satellite}} = \frac{3}{4}\,\frac{R_{\mathrm{Schwarzschild}}}{r_{\mathrm{satellite}}},

and is a small effect given that R_Schwarzschild ≈ 1 cm. This term is 10^-9 for a LEO orbit like the ISS and falls to 1.6 × 10^-10 for a GEO orbit. The observatory and satellite terms completely cancel at r_satellite ≈ 10^7 m, which is in the inner Van Allen belt. For a spacecraft in an elliptic orbit with semi-major axis a, the relation between velocity v and distance r is the vis-viva equation

v^2 = G_N M_{\mathrm{Earth}} \left( \frac{2}{r} - \frac{1}{a} \right),

or, in terms of dimensionless ratios and R_Schwarzschild,

\frac{v^2}{c^2} = \frac{R_{\mathrm{Schwarzschild}}}{2} \left( \frac{2}{r} - \frac{1}{a} \right).

The overall satellite correction for a clock in an elliptic orbit (with semi-major axis a), including both special and general relativistic effects, is then

\Delta_{\mathrm{satellite}} = \frac{R_{\mathrm{Schwarzschild}}}{r_{\mathrm{satellite}}} - \frac{R_{\mathrm{Schwarzschild}}}{4a},

which reduces to the circular-orbit case when a = r_satellite but varies throughout an elliptic orbit as the satellite approaches and recedes from Earth. Thus, a highly elliptical orbit breaks the degeneracy of constant offsets due to experimental drift. Even the sign of the total relativistic correction may change throughout the orbit. An example for the "typical Molniya orbit" from Wikipedia is shown in Fig. 1. If the atmosphere is deemed to be a problem, the effect can be enhanced by comparing two Earth-orbiting satellites, one near apogee and one near perigee.

Figure 1. Total relativistic time dilation (from both gravitational and velocity effects) as a function of distance (log scale) along a highly elliptical satellite orbit (blue) and at a fixed observatory on the surface of Earth (green). These effects subtract (Equation (1)) to give the relative rate of clocks at the two locations. Note that the sign of the relative rate can be positive or negative, depending on the satellite altitude.

Note that for the Lunar Gateway's near-rectilinear halo orbit, the values of Δ_satellite are more than 50 times smaller due to the 80-times-smaller mass of the Moon, while the proposed low lunar orbit of the Lunar Gateway gives a Δ_satellite about twice as small. This order-of-magnitude analysis shows that an Earth-Moon link would not be an ideal starting point for measuring gravitationally induced phase shifts in quantum light. The two-body gravitational field of the Earth-Moon system reaches an inflection point near the first Lagrange point. Future experiments may leverage Earth-Moon links to test for modulation, or even full cancellation, of the phase shift of a photon propagating across this region [6].
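The order-of-magnitude statements above can be reproduced with a few lines of code. The sketch below evaluates the observatory term and the circular-orbit satellite term of Equation (1); the orbital radii are round-number assumptions rather than mission parameters.

```python
# Sketch: relativistic clock-rate corrections from Equation (1), circular orbits.
G  = 6.674e-11         # m^3 kg^-1 s^-2
M  = 5.972e24          # kg, Earth mass
c  = 2.998e8           # m/s
Rs = 2 * G * M / c**2  # Schwarzschild radius of Earth, ~9 mm

def delta_circular(r):
    """v^2/(2c^2) + Rs/(2r) = (3/4) Rs / r for a circular orbit of radius r."""
    return 0.75 * Rs / r

R_earth = 6.371e6
v_equator = 465.0      # m/s, Earth's rotational speed at the equator
delta_obs = v_equator**2 / (2 * c**2) + Rs / (2 * R_earth)

print(f"observatory term  : {delta_obs:.2e}")              # ~7e-10
print(f"LEO  (r=6.78e6 m) : {delta_circular(6.78e6):.2e}") # ~1e-9
print(f"GEO  (r=4.22e7 m) : {delta_circular(4.22e7):.2e}") # ~1.6e-10
# Radius where observatory and satellite terms cancel: (3/4) Rs / r = delta_obs.
print(f"cancellation at r = {0.75 * Rs / delta_obs:.2e} m")  # ~1e7 m
# Altitude dependence near the surface, g/c^2 per metre, in ns/day per km:
g = 9.81
print(f"{g / c**2 * 1e3 * 86400 * 1e9:.0f} ns/day per km of altitude")  # ~10
```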
In contrast, other quantum optical experiments, such as tests of Bell's inequality and tests of quantum teleportation, described in detail below, are enhanced through use of an Earth-Moon baseline. These calculations capture the difference in the rates at which clocks run on the satellite as compared to on Earth. As we will show, these "clocks" can be the difference in time of arrival of pulses, or the optical-frequency oscillations of the light itself. The gravitational blueshift (redshift) experienced by the photons as they fall toward (climb away from) Earth is already included in these expressions and should not be double counted. The same applies to length contraction in any fiber delay line or interferometer: Einstein's light-clock thought experiment shows that these effects are already included in the slow-down of the clock ticks that an external observer sees. In addition to these relativistic effects, there is a large Doppler shift caused by successive pulses or successive wave crests being sent and received from different places and therefore taking different amounts of time to arrive at the observer. In practice, this will be the dominant effect, but it can be separated from the intrinsic time dilation by its different orbit dependence, i.e., the sign of the effect depends on whether the satellite is approaching or moving away from the ground terminal. This shift is first order in the relative velocity v/c, so any series expansion must be kept to second order to match the size of the leading-order v_satellite²/c² terms above from special relativity. Note that naively applying the formula for the relativistic Doppler effect double-counts the source's time dilation. To avoid double-counting, it is best to use global coordinates like the Schwarzschild coordinates that asymptote to Minkowski spacetime infinitely far away:
• Fix an infinitesimal time difference between two pulse transmissions.
• Calculate the global time difference of reception, which will be longer or shorter by an amount on the order of v/c, depending on the orbit and orientation. This is 10⁻⁵ for LEO satellites.
• The curved path of light through the Schwarzschild metric can be computed numerically, but its effect is 10⁻¹⁵, which is small compared to the special and general relativistic time dilation.
• Each of these time differences can then be translated from global time to the time experienced by clocks on the satellite and on Earth. This ≈ v²/c² effect is captured by Equation (1) and is of order 10⁻¹⁰ for LEO satellites.

Polarization rotation of photons in general relativity

Polarization rotation also occurs when light traverses the warped space around a rotating body. This is the "frame dragging" or "Lense-Thirring" effect. An introduction to the effect can be found in Schleich and Scully's Les Houches lectures [31]. Recent calculations provide numbers relevant for satellite experiments. Brodutch and Terno [40] quote a rotation of 55 milliarcseconds (3 × 10⁻⁷ rad) when sending a photon from a satellite out to infinity, but this calculation is "gauge dependent" and therefore unmeasurable: the fixed reference frame of stars against which the satellite measures is itself also frame dragged. Brodutch et al. [41] consider a closed-loop interferometer, where the polarization is compared at the same point (as differential geometry requires) to make comparisons unambiguous.
For an interferometer 100 km by 10 km they find a polarization rotation of only 4 pico-arcseconds (2 × 10⁻¹⁷ rad), concluding correctly that the "minuscule scale of the effect puts it beyond the current experimental reach." (Footnote 4)

Footnote 4: For any photon-counting experiment, the statistical uncertainty of a photon polarization angle measurement scales inversely as the square root of the number of photons detected: Δθ = 1/(2√N) radians for large N. Completely disregarding all systematic effects of trying to establish a common reference frame, reaching the required sensitivity requires at least 4 million detection events for the observable "out to infinity" case, but 10³³ (a megawatt of 1550 nm photons continuously for 30 years) for the observable "compare at the same place" setup.

COW test with classical light, superposition states, entangled and hyper-entangled photons

Quantum optics experiments enable a unique window to explore the interplay between quantum mechanics and relativity, both general and special. The Colella, Overhauser, and Werner (COW) experiment used a neutron interferometer to test the influence of the gravitational field on a quantum wave function [22]. Since then, many experiments using matter-wave interferometry with atoms and molecules have been performed. However, all such experiments to date can be interpreted within a Newtonian gravity framework. In contrast, tests with massless particles (photons) would require a general relativistic description. There have been several proposals to detect general relativistic effects on single photons using Mach-Zehnder interferometers (MZIs) whose arms are located at different altitudes, i.e., at different gravitational potentials. In this section, an overview of the background of photonic COW tests is presented, followed by a detailed discussion of specific experimental concepts. Both in the original COW experiment and in light-pulse atom interferometers [42,43], the matter-wave packets are diffracted by periodic gratings (a crystal in neutron interferometers and one or more optical lattices in the atomic case) but are freely falling the rest of the time. This implies that the proper-time difference between the interferometer arms is insensitive to gravitational time dilation effects in a uniform field, as can be argued by considering a freely falling frame [21]. On the other hand, every time a matter-wave packet is diffracted by a grating, it acquires a phase that depends on the central position of the wave packet with respect to the grating, and the total phase shift between the two interfering wave packets depends on the relative acceleration between the wave packets and the diffraction gratings. This can be exploited in high-precision gravimetry measurements [44] and tests of the universality of free fall (UFF) [45]. In contrast, atom interferometers where the wave packets in the two arms are held at two different constant heights through matter-wave guides can be sensitive to gravitational time dilation in uniform fields [21]. These kinds of interferometers are, in fact, closer analogs of the optical interferometers considered below, where photons in the two interferometer arms propagate along optical-fiber delay lines located at two different heights in a gravitational field. Indeed, Hilweg et al. have proposed a ground experiment that uses an actively switched, triple-arm MZI with long fiber loops to extend the interaction time, and thus the size of the effect [46]. For a vertical separation of 3 m and 100-km fiber delays, their model predicts that an effect might be observed after 2-4 days of integration time. Obviously, maintaining stability over that length of time could be quite difficult, particularly with long pieces of optical fiber that are subject to thermal expansion.
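To make that stability challenge concrete, consider the thermal sensitivity of a long fiber delay. The following estimate is ours and uses a typical thermo-optic coefficient for silica fiber; the exact coefficient depends on the fiber and cabling:

```python
# Rough estimate (ours) of thermally induced optical-path drift in a long fiber
# delay line, using a typical thermo-optic coefficient for silica fiber.
L = 100e3         # m, fiber length (the 100-km delay considered above)
dn_dT = 1.1e-5    # 1/K, typical silica thermo-optic coefficient (fiber dependent)
lam = 1550e-9     # m, operating wavelength

for dT in (1e-3, 1e-6):          # 1 mK and 1 uK temperature excursions
    drift = L * dn_dT * dT       # m, optical path-length change
    print(f"dT = {dT:.0e} K: drift {drift*1e6:,.2f} um (~{drift/lam:,.1f} wavelengths)")
# Even micro-kelvin stability leaves a ~1 um (near-wavelength) drift over 100 km,
# and milli-kelvin excursions exceed a thousand micrometers, hence the difficulty
# of multi-day integration times.
```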
Additionally, to minimize the effects of dispersion, narrow-bandwidth photons (<100 MHz) are needed, which can also be challenging. To increase the magnitude of the gravitational effect, greater variation of the gravitational potential can be employed [47]. The idea proposed in [5] is to use two identical unbalanced MZIs, one located on the ground and the other on a satellite (see Fig. 2). Single-photon wavepackets enter the ground MZI, and at the output a coherent superposition of two wavepackets |t₀⟩ and |t₁⟩, known as time-bin encoding [48], is generated. The generated state can be written as (1/√2)(|t₀⟩ + e^{iφ}|t₁⟩), where the relative phase is φ = ω₀τ, with τ the interferometer imbalance corresponding to the delay between the two pulses and ω₀ the central frequency of the pulses.

Figure 2 Simplified scheme of the optical COW experiment in space. A time-bin superposition is generated by injecting a single-photon wavepacket into an unbalanced MZI. The photon is sent towards a satellite, where an identical MZI is located. Interference detected at the satellite reveals the gravitationally induced phase shift.

The single photons are then sent toward the spacecraft MZI, and at the exit of the second MZI an interference effect can be observed that corresponds to photons that took the short path in one unbalanced interferometer and the long path in the other. Since time dilation effects change both the frequency and the pulse delay in opposite ways, the relative phase is constant during propagation: ω₀′τ′ = ω₀τ = φ. However, if the sender and receiver interferometers have the same delay τ (measured locally), the receiver interferometer applies the phase φ′ = ω₀′τ, since the frequency of the photon has changed. Therefore, the interference effect is able to measure the phase difference φ − φ′. This effect can be described as an effective phase transformation from the original state to the state (1/√2)(|t₀⟩ + e^{i(φ−φ′)}|t₁⟩). The general formalism relating the different optical paths involved in COW tests in space by means of optical interferometry requires a careful treatment of the frequency transformation in each path as well as the relation of time with distance. According to a general analysis by Terno et al. [49], the key element in obtaining a closed form is the phase difference due to the difference in the emission times. When the two MZIs are properly calibrated, a phase shift due to the gravitational redshift will be observed. In this case, if the satellite is not geostationary, the main challenge is the necessity of compensating for the first-order Doppler effect. To solve this issue, an improvement of the above scheme was recently proposed to measure the Doppler shift introduced by the relative motion of the satellite with respect to the ground, and to remove it from the final result [50,51]: in addition to the MZIs on the spacecraft and the ground station, retroreflectors located in space are used to send a portion of the upgoing light back to the ground. The latter, detected on the ground, can be used to measure the first-order Doppler effect, since gravitational effects compensate in the two-way propagation.
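To set the scale of the signal in this time-bin scheme, the measurable phase difference is φ − φ′ = ω₀τ multiplied by the fractional frequency shift. A rough numerical sketch of ours follows; all parameter values are illustrative, not mission requirements:

```python
# Order-of-magnitude sketch (ours) of the measurable phase phi - phi' for the
# two-MZI time-bin scheme; all parameter values below are illustrative only.
import math

c = 2.998e8
lam = 1550e-9                     # m, photon wavelength
omega0 = 2 * math.pi * c / lam    # rad/s, central frequency
n_g = 1.468                       # typical group index of silica fiber
L_delay = 100.0                   # m, delay-line length
tau = n_g * L_delay / c           # s, interferometer imbalance (~0.5 us)
z = 3e-10                         # fractional frequency shift (LEO-scale redshift)

print(f"phi - phi' ~ {omega0 * tau * z:.2f} rad")   # ~0.2 rad for these values
```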
The Doppler shift was assessed in previous experiments [33] and was exploited as a modulator of the time-bin qubit phase as a function of the instantaneous velocity of the satellite. This modulation, though passive, may also be seen as a resource when ascertaining the visibility of the interference phenomena. The photon temporal superposition state along the space channel may be kept in a single polarization by exploiting suitable corner-cube technology for the retroreflectors, as demonstrated for space quantum communications [52], thus allowing a large parameter space for the observables under test; the latter use was pivotal in the test in space of Wheeler's "delayed-choice Gedankenexperiment", addressing the well-known wave-particle duality of quantum physics [53,54]. We note that the scheme proposed in [50,51] exploits classical light in order to test the Einstein equivalence principle in the optical domain. However, extending the scheme by using single-photon wavepackets will allow the measurement of a gravitationally induced phase shift on a quantum state. Finally, the temporal resolution for the discrimination of the interference is a sensitive parameter for the experimental design. Present limits in the case of a link to a MEO satellite are on the order of a quarter of a nanosecond [54]. The gravitational phase shift measured by the experiment depicted in Fig. 2 is connected with a small shift, caused by the gravitational field, of the time difference between the modes encoding the time bins. This shift is of the order of 10⁻¹⁰ times the temporal separation between the two time bins, which implies that the two delay lines (on the ground and in space) need to be matched at that level. For example, 100-m delay lines would need to be matched with a precision better than 10 nm. One way of achieving this precision is to independently calibrate the length of each delay line with a local frequency reference, such as an atomic clock, and convert to distance using the universal speed of light. The experiment can then be interpreted as a test of UGR in which the two frequency references at different heights are compared through quantum states of light. If time-bin entangled photons [48] are used, the gravitational shift can also be measured by using the relative phase of an entangled state: for instance, by generating the entangled pair |ψ⟩ = (|t₀⟩_A|t₀⟩_B + |t₁⟩_A|t₁⟩_B)/√2 on the ground, and sending photon A to the ground MZI and photon B to the satellite MZI, the wavelength shift of the transmitted photon can be interpreted as an effective transformation of the state into |ψ′⟩ = (|t₀⟩_A|t₀⟩_B + e^{i(δ_g+δ_D)}|t₁⟩_A|t₁⟩_B)/√2, where δ_g is the gravitationally induced phase shift and δ_D = kℓv/c, for wave number k and delay length ℓ, is the Doppler correction; here v is the projection of the relative velocity along the optical link, which varies from approximately −v_spacecraft to +v_spacecraft as the spacecraft flies overhead. For a 10-km delay line and a satellite velocity of 10 km/s, ℓv_spacecraft/c = 0.33 m, corresponding to δ_D = 1.3 · 10⁶ rad for a laser wavelength of 1500 nm; as discussed above, any attempt to precisely determine δ_g will need to compensate for this much larger value of δ_D.
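The size of the Doppler term quoted above is easy to verify with a quick check of ours, using the stated parameters:

```python
# Quick check (ours) of the Doppler phase quoted above: delta_D = k * l * v / c.
import math

lam = 1500e-9        # m, laser wavelength
k = 2 * math.pi / lam
l = 10e3             # m, delay-line length
v = 10e3             # m/s, satellite velocity
c = 2.998e8

print(f"l*v/c   = {l * v / c:.2f} m")            # ~0.33 m
print(f"delta_D = {k * l * v / c:.2e} rad")      # ~1.4e6 rad, consistent with the
                                                 # quoted 1.3e6 rad up to rounding
```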
For example, one could send a classical beacon along with the signal (also in a superposition of t₀ and t₁), and use it to stabilize the remote MZIs by means of a fiber stretcher or a piezo-electrically controlled optical "trombone", as was done in [55]; comparing the error signal to the expected one would allow one to look for deviations. Because the spatio-temporal mode of the classical beam is expected to undergo the same relativistic corrections as the quantum signal, any deviation would be significant. Alternatively, as discussed above, corner cubes could be used to directly measure the satellite's relative motion, and a local feedback system (with a stabilized laser) could be used to set the unbalanced MZI path length. Fluctuations in the atmosphere [56] cause a slight difference between the propagation times of the locally-delayed and non-delayed photons. This difference is on the order of the ratio of the local delay time to half the atmospheric fluctuation period; the fluctuation rate is approximately 3 kHz in many instances. This factor, of roughly 5%, would need to be reduced to successfully conduct the experiment, potentially by using adaptive optics techniques. More advanced schemes can also be conceived with time-bin entanglement: for instance, if the entangled source is located on a satellite, one photon can be directed towards a ground station and the other photon towards a different satellite, enhancing the gravitational effect and at the same time allowing Bell tests between frames with large relative velocities (see the discussion of long-baseline Bell tests below). Finally, one can also conceive experiments that utilize hyper-entanglement: photons that are simultaneously entangled in multiple degrees of freedom [57]. As discussed above, special and general relativity result in predictable time dilation that would affect time-bin entanglement in a measurable way, but would have very little effect on polarization entanglement, with only unmeasurably small frame-dragging effects. Extending the prototypes in [55], one could repeat the proposed optical COW experiments with a source of photons entangled both in time bin and in polarization, again with one photon measured on the ground and the other on the space platform. One can thus directly compare the effect on the entanglement in the two degrees of freedom, one of which is sensitive to the relativistic effects and one of which is not. As with the previous suggestions, if precise enough measurements can be made, the Doppler and relativistic effects can be distinguished from each other and from constant offsets by putting the satellite in an elliptical orbit, as discussed in Sect. 2.1.

Mission design trades for optical COW tests

For the quantum photonic COW tests that involve classical light, the measurement requirements follow the procedure of [35]. In Appendix A, the requirements derived from the parametric model to achieve a given statistical confidence level are evaluated using a quantum-channel model. Here we use these to estimate the high-level system performance requirements needed to achieve the DSQL science objectives. The proposed optical COW experiments are carried out by using quantum optical interferometry, for example, by using an MZI with a single photon input into one port (Fig. 3). In this example, the photon is in a path superposition. The paths are characterized by different values of the gravitational potential; the time dilation between these two paths causes a relative phase shift along the two arms.
A number of specific implementations of this concept, describing tests using coherent superposition states, single photons, and different forms of entangled and hyperentangled photon pairs, were described in the previous section. In a uniform gravitational field, this phase shift is linked to the interferometer dimensions and gravity via the formula

φ_GR = (1 + α) ω₀ τ_GR = (1 + α) 2π g h ℓ / (λ c²),

where ω₀ and λ are, respectively, the central frequency and vacuum wavelength of the photon emitted by the source in its reference frame, τ_GR = g h ℓ/c³ is the gravitationally induced time delay between the two interferometric paths, g is the acceleration due to gravity, h is the orbital altitude difference between source and receiver, and ℓ is the horizontal length of the optical delay line in both the source and receiver interferometers. The parameter α, which is equal to zero for general relativity, parametrizes violations of UGR.

Figure 3 A simplified MZI for quantum optical COW tests. The source node and receiver node are at different heights h measured along the direction of the gravity vector g. Each node has a long optical delay line of length ℓ. Note that this is equivalent to the arrangement with two unbalanced MZIs shown in Fig. 2, which allows photons emitted at different times to propagate along essentially the same spatial mode (modulo whatever lateral shift has occurred due to the motion of the source in the time between the early and late emission times).

The primary goal of this set of experiments is to validate the predicted gravitational phase shift in quantum light; the expected magnitude of the phase shift for the spacecraft implementation of this experiment is on the order of a few to tens of radians. For example, φ_GR ∼ 1 rad for λ = 1550 nm, h = 400 km, and ℓ = 6 km (a numerical check is sketched below). The other, equally important, objective of this test is to bound the magnitude of the parameter α, which should be zero according to general relativity, in different regimes of quantum light. Both φ and α are measured using quantum interferometry. Each interferometer measurement has an associated integration time, during which N signal measurement events are captured. Qualitatively, the instrument and mission design requirements should enable a sufficiently high rate of signal events, relative to noise events, and a net interferometer stability that yields a phase error smaller than φ. Note that there is a tradeoff, due to exponential Beer's-law decay in the fiber-optic delay ℓ and diffractive propagation loss associated with the free-space portion h of the interferometer (assuming the send and receive telescopes are sufficiently small that the received flux falls as 1/R²; see Appendix A for details); increasing h and ℓ both increases the magnitude of φ_GR but decreases the number of photons with which to measure it. The development of a quantitative model to express these requirements is described in Appendix C. Our approach treats N, the total signal flux collected over some integration period, and the likelihood p that a given measurement is a legitimate signal, as the principal analytical parameters characterizing the link; the key mission instrumentation requirements may then be derived from N and p. As shown in Appendix A, N scales directly with the photon production rate and transmission efficiency, and also with the integration time. Representative results are shown in Fig. 4.
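As a quick check of the phase-shift formula above (our sketch, with α = 0 and the example parameters quoted earlier):

```python
# Numerical check (ours) of the reconstructed formula
# phi_GR = 2*pi*g*h*l/(lambda*c^2), with alpha = 0 and the quoted example values.
import math

g, c = 9.81, 2.998e8
lam = 1550e-9     # m
h = 400e3         # m, source-receiver altitude difference (LEO)
l = 6e3           # m, delay-line length

phi_GR = 2 * math.pi * g * h * l / (lam * c**2)
print(f"phi_GR ~ {phi_GR:.2f} rad")   # ~1 rad, as stated in the text
```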
We see that there is an optimal altitude, located at about 1200 km, that maximizes the number of received photons, leading to a predicted error on α of 3 × 10⁻⁴; such a configuration is a reasonable starting point for designing a full flight mission to conduct the quantum optical COW test, validate the predicted general relativistic phase shift on various quantum states of light, and bound the parameter α in the quantum optical regime. Similar analyses considering more realistic orbits, spacecraft-to-spacecraft links, and entangled and hyper-entangled photon pairs are the subject of a future research article under development. The tests of the Einstein equivalence principle described in this paper all involve long-baseline optical interferometry. Since many of them use a fiber-optic delay line, the system wavelength should be within the low-fiber-loss optical C, L, or O bands. The overall link efficiency follows the description in Appendix A, using the single-channel expression (Equation (41)) for single photons and the double-channel expression (Equation (43)) for entangled and hyperentangled photons. Measuring the gravitationally induced phase shift also requires that other sources of phase shift be accounted for, either through direct measurement or through mitigation strategies. For example, as discussed in [33,52,58], active measurement of the spacecraft position and velocity can compensate the error terms associated with these factors. Active stabilization of the fiber coil length, relative to a local stabilized laser system on both the ground system and the flight system, is also required. This laser reference should propagate through the fiber delay lines in the direction opposite to the signal photons to facilitate filtering. In realistic orbit configurations, the time-of-flight between the ground station and the receiver is constantly changing. A residual error is introduced by the change in time-of-flight along the long arm of the interferometer over the transit time through the optical fiber. In this sense, there are some fixed noise sources beyond the standard quantum limit that will bound the measurement precision. A detailed description of these noise factors in the context of a space-mission scenario is the subject of a future publication.

Gravitational redshift in a HOM interferometer with frequency-entangled photons

In this subsection we turn our attention to an experiment using an interference effect with no classical analog, i.e., two-photon Hong-Ou-Mandel (HOM) interference [59]. As shown in Fig. 5, a photon-pair source directs (indistinguishable) photons onto a 50-50 beamsplitter. There is then a quantum mechanical interference effect, causing a complete cancellation of the two processes leading to a coincidence between detectors b and c: both photons being transmitted, or both being reflected. To describe a balanced Hong-Ou-Mandel interferometer, consider a two-photon state |1⟩_l|1⟩_u, with one photon in each mode (see Fig. 5). These two photons are now incident on a beamsplitter:

|1⟩_l |1⟩_u → (|2⟩_b |0⟩_c − |0⟩_b |2⟩_c)/√2,

and photon bunching is seen to occur: both photons go either into Detector b or into Detector c, and the probability of a coincident detection at Detectors b and c is therefore

P_b,c = 0.    (10)

The above assumes the two photons are indistinguishable, as otherwise the destructive interference between the terms |1⟩_b|1⟩_c and |1⟩_c|1⟩_b would not occur. However, for a photon with non-zero bandwidth, temporal delays between the photons will introduce distinguishability.
In this case, the probability of coincidences becomes

P_b,c(τ) = ½ (1 − e^{−σ²τ²}),

where τ is the relative temporal delay and σ is the half-bandwidth of the photons. In the limit of zero bandwidth, this reduces to (10) for any relative delay. Note that the HOM dip does not depend on the relative phase of the incident photons, only on their relative arrival time. Thus, HOM interferometry has lower resolution when compared with a Mach-Zehnder interferometer, but is robust against certain types of group-velocity dispersion. Specifically, if there is dispersion in one arm of a standard MZI, the visibility will be negatively affected, whereas the HOM interference effect is known to be immune to group-velocity dispersion (more precisely, to the odd orders of it) [62]. Another key difference between a standard MZI and a HOM interferometer is that the former suffers reduced visibility if there is a relative loss in either of the arms, while the latter does not (in the absence of noise), as such loss equally affects both of the underlying interfering physical processes. We now consider a modified HOM interferometer, in which the input state is entangled in frequency [63]:

|ψ⟩ = (|ω₁⟩_l |ω₂⟩_u + |ω₂⟩_l |ω₁⟩_u)/√2.    (13)

Adding a temporal delay τ to the upper path adds an energy-dependent phase ω_i τ, producing the state

|ψ(τ)⟩ = (e^{iω₂τ} |ω₁⟩_l |ω₂⟩_u + e^{iω₁τ} |ω₂⟩_l |ω₁⟩_u)/√2.

Impinging this state upon a beamsplitter as before, the interfering amplitudes in the final state at Detectors b and c no longer cancel completely, yielding a coincidence probability

P_b,c(τ) = ½ [1 − cos((ω₁ − ω₂)τ)].

Accounting for photon bandwidth (and assuming identical bandwidths for simplicity), we have

P_b,c(τ) = ½ [1 − e^{−σ²τ²} cos((ω₁ − ω₂)τ)].    (15)

This interferometer therefore combines the sensitivity of Mach-Zehnder interferometry with the dispersion cancellation and loss-resilience of degenerate HOM interferometry; as such, it can be quite useful for precision measurements of phase differences and temporal delays. For example, a frequency-entangled HOM interferometer with wavelengths 800 nm and 1590 nm should be able to resolve temporal differences of a few attoseconds with only tens of thousands of detected photons [64] (see the numerical sketch below). These techniques may be useful in probing the intersection of quantum mechanics and general relativity, since the HOM effect is truly nonclassical [65].

Figure 6 Frequency-entangled HOM interferometer sensitive to the gravitational redshift and consisting of a ground station and a spacecraft. (a) Simplified schematic. Photons experience different gravitational potentials, depending on which path they take, before recombination on a non-polarizing beamsplitter (blue). (b) Overlapping-path architecture, with a single uplink channel for both photons; polarization entanglement ensures that only interfering processes are present, i.e., due to the polarizing beamsplitters (black squares, PBS), each photon takes one long and one short path. While the diagrams suggest that imperfect extinction ratios of the PBSs may lead to measurement errors, in fact these are strongly suppressed, since "wrong" coincidences can only occur if photons take the incorrect output of all three PBSs.

Suppose that the two paths of the interferometer are held at different gravitational potentials. This will create a frequency- and path-dependent phase difference which can then potentially be resolved by the interferometer, allowing us to study the effects of a curved spacetime on an entangled quantum system. An explicit implementation consisting of a ground station (G), a spacecraft (S), and an optical link is shown in Fig. 6. A pair of frequency-entangled photons is generated in the ground station and sent through the optical link to the spacecraft, where they are detected.
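Before detailing the interferometer arms, here is a numerical sketch of the attosecond sensitivity claim quoted above. It is ours, and it assumes the reconstructed coincidence probability of Eq. (15), an ideal noise-free link, and a simple binomial counting error near the point of steepest slope:

```python
# Sketch (ours) of the attosecond sensitivity estimate, assuming the coincidence
# probability of Eq. (15), an ideal noise-free link, and binomial counting error.
import math

c = 2.998e8
w1 = 2 * math.pi * c / 800e-9     # rad/s
w2 = 2 * math.pi * c / 1590e-9    # rad/s
dw = w1 - w2                      # beat (difference) frequency
N = 3e4                           # detected photon pairs

# Near the steepest slope of the beat, |dP/dtau| ~ dw/2, and the error on the
# measured probability is ~1/(2*sqrt(N)), so:
dtau = (1 / (2 * math.sqrt(N))) / (dw / 2)
print(f"timing resolution ~ {dtau * 1e18:.1f} as")   # a few attoseconds
```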
One interferometer arm involves a delay line in the ground station, whereas the other arm involves an analogous delay line in the spacecraft, and the two arms are then recombined on a beamsplitter with single-photon detectors at the two exit ports. Whereas delay lines with equal physical length would lead to a balanced interferometer for a traditional experiment on Earth, in this proposed setup relativistic effects and the changing distance between the ground station and the spacecraft give rise to the following time shift between the two interferometer arms, obtained up to order 1/c³ within a post-Newtonian expansion in powers of v/c and U/c²:

τ̄ ≈ (ℓ/c) { [(n·v_G)(t_e) − (n·v_S)(t_r)]/c + (dt/dτ_S)/(dt/dτ_G) − 1 },

where (n·v_G)(t_e) and (n·v_S)(t_r) are, respectively, the velocity of the ground station along the direction of the optical link at the time of emission t_e, and similarly for the spacecraft at the time of reception t_r. The factor containing these velocities corresponds to the Doppler effect associated with the motion of the two stations (ground and spacecraft). The ratio between (dt/dτ_S) and (dt/dτ_G), which respectively account for the special relativistic time dilation and the gravitational redshift between the two stations, is given by

(dt/dτ_S)/(dt/dτ_G) ≈ 1 + [U(x_G) − U(x_S)]/c² + (v_S² − v_G²)/(2c²),

where U(x) is the gravitational potential and we neglect higher-order terms in the post-Newtonian expansion [66], as they are suppressed by higher powers of v/c. This result, which is also applicable to more general situations such as the Moon-Earth system, reduces to Eq. (1) for a Schwarzschild metric. The probability P_b,c(τ) of two-photon detection in either of the two ports is therefore determined by Eq. (15), with

τ = Δℓ/c + (ℓ/c) [(n·v_G)(t_e) − (n·v_S)(t_r)]/c + (ℓ/c) [(dt/dτ_S)/(dt/dτ_G) − 1],    (18)

where a possible difference Δℓ between the proper lengths of the two delay lines (ideally Δℓ = 0) has been included and we have assumed that Δℓ ≪ ℓ. The third term on the right-hand side of Eq. (18) corresponds to the relativistic effects that we are interested in and will typically be of order 10⁻¹⁰. In contrast, the "classical" Doppler effect encoded by the second term can be of order 10⁻⁵. Accurately tracking the trajectory of the spacecraft by means of satellite laser ranging is thus necessary, so that the comparatively large contribution of the Doppler effect can be suppressed below the 10⁻¹⁰ level through post-correction, as discussed in Sect. 2.1.2. The length of the two delay lines also needs to be stabilized below the 10⁻¹⁰ level; moreover, one should guarantee that they are equal (i.e., Δℓ = 0) at that level, which can be achieved by simultaneously calibrating (and stabilizing) them with identical frequency references on the ground and in the spacecraft. Note that using an elliptical orbit would enable one to distinguish the two different relativistic contributions, namely special relativistic time dilation and gravitational redshift, and would also help to separate the small signal of interest from noise sources and systematic effects, as explained below. A substantial simplification of this complex stabilization and calibration procedure can be achieved by employing classical light at an intermediate frequency between ω₁ and ω₂ for that purpose, which can be combined with an active compensation method involving a tunable delay, e.g., consisting of a movable right-angle prism with a piezo-actuated translation stage, or a piezoelectrically controlled fiber stretcher.
It should be noted that since the classical light acting as a reference is equally affected by the relativistic effects, a HOM interferometer stabilized in this manner will not be able to independently measure such effects. Nevertheless, it could still be regarded as the first experimental confirmation that quantum states of light (frequency-entangled photons) experience the same gravitational redshift as classical light, and moreover that purely quantum mechanical interference is similarly affected. While the implementation displayed in Fig. 6a is conceptually simpler, a scheme involving a single uplink channel, depicted in Fig. 6b, is preferable in practice. With this goal in mind, it is useful to consider a source of co-propagating frequency-nondegenerate photons in the polarization-entangled state

|ψ⟩ = (|H, ω₁⟩|V, ω₂⟩ + |V, ω₁⟩|H, ω₂⟩)/√2.

Such a state has recently been demonstrated in the laboratory [55]. By using polarizing beamsplitters (PBS) and a half-wave plate (HWP), as shown in Fig. 6b, one can ensure that H-polarized photons will propagate along the short path (S) in the ground station and the delay line in the spacecraft (U′), while V-polarized photons will propagate along the delay line (U) in the ground station and the short path in the spacecraft (S′). The resulting state (after the satellite PBS) is then

|ψ⟩ = (|H, ω₁⟩_{SU′} |V, ω₂⟩_{US′} + |V, ω₁⟩_{US′} |H, ω₂⟩_{SU′})/√2.

Finally, by inserting an additional HWP in one arm in the spacecraft, the polarization state in that arm is rotated, |V⟩ → |H⟩, so that the polarizations are no longer correlated with the interferometer arms and the quantum states from the two different arms can interfere when recombined at the final beamsplitter. Indeed, the polarization state of the two photons, |H⟩_{SU′}|H⟩_{US′}, can then be factored out, and one is left with the desired state, cf. Eq. (13). We conclude this subsection with an estimate of the actual sensitivity of the proposed HOM interferometry experiment to relativistic effects. As explained in the discussion after Eq. (1), the net contribution of these effects is of order −3 × 10⁻¹⁰ for a circular LEO, but only a smaller fraction, of order 4 × 10⁻¹¹, corresponds to the gravitational redshift. In contrast, for a GEO the relativistic effects are dominated by the gravitational redshift, of order 6 × 10⁻¹⁰ in that case. As with the optical COW experiments discussed above, for a highly elliptical orbit, such as that considered in Fig. 1, these effects are modulated by the orbital period and range from −5 × 10⁻¹⁰ at the perigee, where special relativistic time dilation dominates, to 6 × 10⁻¹⁰ at the apogee, where the main contribution comes from the gravitational redshift. Thus, orbital modulation can be very useful to extract the small signal and separate it from noise sources and systematic effects. Similarly to GEOs, for the Lunar Gateway the effect would be of order 7 × 10⁻¹⁰ and dominated by the redshift associated with Earth's gravitational field, because the Moon's mass is 80 times smaller than Earth's, so that contributions from the lunar gravitational field and time dilation due to the orbital velocity would be much smaller, also implying that there are no significant orbital modulation effects. Hence, if we consider the case in which the platform in Fig. 6 is the Gateway spacecraft, the time shift due to relativistic effects (i.e., excluding the classical Doppler contribution) is given by τ_rel = 2.3 × 10⁻¹⁵ s (ℓ/1 km) and leads to an interferometer phase shift ω τ_rel ≈ 9 rad × (ℓ/1 km), where ω is the frequency difference introduced below. Quantitatively comparable results hold for GEOs, and also at the apogee of a highly elliptical orbit.
In contrast, for LEOs the result for the total relativistic effect is reduced by about a half and is dominated by the special relativistic contribution, whereas the gravitational redshift is nearly 10 times smaller. Nevertheless, because the transmission rate for pairs of entangled photons scales inversely with the fourth power of the optical link baseline (see Appendix A), it should be possible to resolve this smaller effect too, potentially with even milder requirements on the telescope size. Similar conclusions apply at the perigee of the highly elliptical orbit.

Mission design trades for optical COW tests using HOM interference

In this section we present a brief summary of the mission design trade-space; a detailed, rigorous summary of the underlying mathematics is the subject of a future publication. Tests of gravitational effects on HOM interference are governed by processes similar to those described in Sect. 2.1.3; the key difference is that a pair of photons must be transmitted, which reduces the overall link efficiency, per Appendix A. Furthermore, as described in Sect. 2.1.4, the photons comprising the pair may be non-degenerate in frequency. All these factors couple with the available spacecraft trajectories to result in a range of possible mission configurations. The system diagram is shown in Fig. 6. The net timing delay τ between the upper and lower paths of the interferometer is

τ = δℓ/c + τ_c + τ_GR,

where δℓ is the geometric length mismatch between the two paths, due to error terms in the engineering and control of the interferometer, τ_c is any control signal applied to the interferometer, and τ_GR is the relativistic shift we seek to measure:

τ_GR = (ℓ/c) [Δ(v²)/(2c²) + ΔU/c²].

Here Δ(v²) is the difference in the squares of the velocities of the two interferometer nodes, ΔU is their difference in gravitational potential, c is the speed of light, and ℓ is the interferometer path length depicted in Fig. 6, equivalent to the arm length of the HOM interferometer. Assume now that with probability p the interferometer contains the input photons, and that with probability (1 − p) the interferometer is injected with uncorrelated and distinguishable photons, or the detectors register background or noise counts, leading to noise events that can cause an "accidental" coincidence count with probability 1/2. The flux of noise photons N_noise can be linked to system parameters such as receiver aperture, spectral filtering, and detector dark counts; see Equation (45). The parameter p can be interpreted as the experiment quality factor (see Sect. D); it is determined by (N_noise t_R), the probability of recording a count due to noise falling within the detector timing window t_R (assumed to be 10⁻⁹ s), and by the source fidelity F (assumed to be 0.95 in our simulations). Here, for simplicity, we assume that all the system parameters in N_noise are fixed, apart from the spectral filtering bandwidth σ, which we set to match the signal photon bandwidth; therefore, p = p(σ). The coincidence count probability thus becomes

P_c(τ) = p P_b,c(τ) + (1 − p)/2,

with P_b,c(τ) given by Eq. (15). A more complete analysis should also take into account error sources such as path-length mismatch and attitude determination error; these and other sources of imperfection will be explored in detail elsewhere. The error Δτ in a measurement of τ is computed from the statistics of the correlated counts at the two detectors shown in Fig. 6,

Δτ = sqrt(P_c(1 − P_c)/N_c) / |∂P_c/∂τ|,

where N_c is the number of coincidence events and, via Eq. (15), the slope ∂P_c/∂τ involves ω ≡ ω₁ − ω₂, the frequency difference of the photons, here assumed to have (vacuum) wavelengths λ₁ = 780 nm and λ₂ = 1550 nm, corresponding to ω = 4 · 10¹⁵ Hz.
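A minimal numerical version of this error-propagation step follows; it is our sketch, it uses the reconstructed noise-degraded coincidence probability above, and it anticipates the delay optimization discussed next. All parameter values are illustrative:

```python
# Minimal numerical version (ours) of the error propagation just described, for
# the reconstructed P_c(tau) = p/2*(1 - exp(-(sigma*tau)^2)*cos(w*tau)) + (1-p)/2.
import numpy as np

def timing_error(tau, w, sigma, p, N):
    """One-sigma error on tau from N binomially distributed coincidence events."""
    env = np.exp(-(sigma * tau) ** 2)
    P = 0.5 * p * (1 - env * np.cos(w * tau)) + 0.5 * (1 - p)
    dP = 0.5 * p * env * (2 * sigma**2 * tau * np.cos(w * tau) + w * np.sin(w * tau))
    return np.sqrt(P * (1 - P) / N) / np.abs(dP)

w, sigma, p, N = 1.2e15, 1e13, 0.9, 1e4      # illustrative parameters only
taus = np.linspace(1e-16, 5e-14, 2000)
errs = timing_error(taus, w, sigma, p, N)
i = int(np.argmin(errs))
print(f"optimal tau ~ {taus[i]:.2e} s, minimum timing error ~ {errs[i]:.2e} s")
```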
In order to avoid excessive broadening of the pulse due to propagation in the atmosphere and to simplify the telescope optics, we assume a maximum bandwidth σ ≤ 4.7 · 10¹³ Hz (δλ₁ ≤ 100 nm), i.e., the individual spectral components are still relatively narrow for the non-degenerate HOM implementation. Given an experiment with a certain quality factor, a natural question to ask is which choice of the overall time delay τ minimizes the timing error. More formally, we want to solve the optimization problem

τ_opt = argmin_τ Δτ(τ; σ, p(σ)).

The result of the optimization is shown in Figs. 7 and 8, which show the expected result that the timing error is minimized for the largest bandwidth σ (in the degenerate source case ω₁ = ω₂), but is essentially constant for ω ≫ σ. The minimum timing error for the non-degenerate case is at least an order of magnitude smaller than for the degenerate case in the considered photon bandwidth interval; even when we let σ ≈ ω, we still see a ∼40% improvement for the non-degenerate case. Assuming that the main source of error is the general relativistic time delay measurement, we can propagate it to obtain

Δα = Δτ / τ_GR,

where N_c is the number of coincidence counts used for the measurement, a function of the optics aperture and of the photon wavelength (which affects diffraction and thereby photon loss; see Equation (38)). Figures 9 and 10 show the contour plots for the minimized error on α during a satellite passage, assuming a ground (satellite) aperture of 1 m (0.3 m). For the degenerate case, the error decreases as the photon bandwidth increases, reaching a minimum of Δα = 0.01 for an altitude around 1300 km and σ = 4 · 10¹³ Hz. The non-degenerate case also shows a slight reduction in Δα as the photon bandwidth σ increases, but the effect is much smaller than in the degenerate case, as expected, since the timing resolution is dominated by ω ≫ σ. Here the peak performance of Δα = 0.001 is reached at an altitude close to 1000 km. Figure 11 shows the benefit of using photons with different wavelengths: the error on α for the non-degenerate case is always at least one order of magnitude smaller than the error for the degenerate case. An optimal altitude minimizing the error on α was also present for the single-photon interference case (Sect. 2.1.3), in that case at 1200 km for the same 1-m (0.3-m) telescope apertures. The estimation of α using single-photon interference seems to be advantageous (Δα = 3 · 10⁻⁴, i.e., three times smaller than in the two-photon case); the lower optimal altitude and higher value of Δα in the two-photon case arise from the additional signal loss when both photons have to be transmitted successfully (if one had sufficiently large telescopes that diffractive losses could be ignored, this disadvantage would largely disappear). Regardless, the HOM-based experiments bring the desirable feature of using purely nonclassical interference.

Figure 11 Contour plot of the ratio Δα_non-degen/Δα_degen between the error on α of the non-degenerate and the degenerate case, versus photon bandwidth and satellite altitude.

Gravitational dephasing, decorrelation, and decoherence

The preceding sections describe a set of experiments to test the gravitationally induced phase shift on quantum photonic modes. In this section, the predictions of QFTCST on the gravitational decoherence of quantum states are reviewed. The estimated magnitude of these effects suggests that meaningful tests of this sort are beyond the scope of the proposed DSQL experiments.
However, success of DSQL and related missions will improve the prospects for considering such tests of gravitational decoherence in the future. Gravitational decoherence first appeared as an attempt to explain the very different behaviors of the micro and macro worlds, i.e., the fact that we seem not to observe quantum superpositions in the latter. The simplest way is to assume the existence of a length scale which demarcates the micro from the macro: superpositions can then only persist below this scale; above it, states naturally decohere in some basis. If the underlying theory attributes the decoherence to new physics (e.g., continuous collapse models such as the Ghirardi-Rimini-Weber-Pearle (GRW-P) theories [24] or the Diosi-Penrose (DP) model [25,26]), then decoherence may happen in the position basis, and it can explain the micro-macro separation in quantum theory (a.k.a. the quantum measurement problem). In contrast, gravitational decoherence that originates from known physics, i.e., general relativity (GR) and quantum field theory, as in the Anastopoulos-Blencowe-Hu (ABH) theory [27,28], occurs in the energy basis. Here, we shall focus only on genuine gravitational decoherence, which involves new physics arising from either A) new phenomena deduced from established GR theoretical foundations (e.g., the ABH master equation) or B) modifications of either a) quantum mechanics (e.g., the DP or GRW-P theories), which we refer to collectively as alternative quantum theories (AQT); or b) the structure of space or time, known as intrinsic, fundamental, or quantum gravity decoherence. In what follows we shall briefly highlight the salient features of these alternatives and estimate the magnitude of their effects, as they are directly relevant to DSQL experiments. Models of gravitational decoherence typically involve one or more free parameters. Requirements of theoretical consistency and past experiments on gravitational quantum physics have already excluded regions of the parameter space; see, for example, Refs. [67,68]. Deep space experiments can improve such constraints by many orders of magnitude, even if some regions of the parameter space may be beyond current measurement capabilities. In the ABH model [27,28], decoherence arises from fluctuations of gravitational waves (classical perturbations) or gravitons (quantized linear perturbations); the source of these fluctuations may be cosmological [28] (stochastic gravitons produced in the early universe near the big bang or from inflation), astrophysical [69], or structural, when GR is viewed as an emergent theory (e.g., [70]). The corresponding master equation depends on the noise temperature Θ, which coincides with the graviton temperature if the origin of the perturbations is cosmological, but is unconstrained if gravity is emergent; in the latter case Θ is determined by the deeper layers in the structure of spacetime at the Planck scale. The ABH master equation is of the energy-basis (double-commutator) form

∂ρ/∂t = −(i/ħ)[Ĥ, ρ] − (τ/ħ²)[Ĥ, [Ĥ, ρ]],    (28)

where τ is a constant of dimension time and Ĥ = p̂²/2m. In the ABH model, τ = (32π/9) τ_P (Θ/T_P), where T_P = 1.4 × 10³² K is the Planck temperature and τ_P = 5.4 × 10⁻⁴⁴ s is the Planck time. If Θ is regarded as a noise temperature, it need not be related to the Planck scale, and Θ > T_P is perfectly acceptable. For motion in one dimension, the ABH master equation simplifies to the same double-commutator form restricted to the one-dimensional Hamiltonian (Equation (29)). Equation (29) also appears in models by Milburn [72], Adler [73], Diosi [74], and Breuer et al. [75], derived from different physical considerations.
In these models, τ is also a free parameter, but the natural candidate is the Planck time τ_P. Diosi [25] postulates a collapse term with a noise correlator proportional to the gravitational potential, which leads to a master equation of the form

∂ρ/∂t = −(i/ħ)[Ĥ, ρ] − (G/(2ħ)) ∫ d³r d³r′ [f̂(r), [f̂(r′), ρ]] / |r − r′|,

where f̂(r) is the mass density operator. Penrose's [26,76] idea is not model specific, but leads to similar predictions for the decoherence time. A key point of the DP model is that its predictions do not involve any free parameters (at least in the experimentally relevant regime). The decoherence rate is typically of the order of ΔE/ħ, where ΔE is the gravitational self-energy difference associated with a macroscopic superposition of mass densities. Other decoherence models that lead to position decoherence have properties similar to DP, but more free parameters, some of which have no intuitive physical interpretation. This includes, for example, continuous collapse models like GRW-P and the Power-Percival [77] decoherence model based on fluctuations of the conformal factor. In any event, most AQTs have to tolerate a small degree of energy-conservation violation. Experimental tests for such violations lead to significant constraints on the free parameters of some models [68]. Another distinct class of models is based on the Newton-Schrödinger equation (NSE) [78]. One postulates a non-linear equation for the single-particle wave function,

iħ ∂ψ/∂t = [−(ħ²/2m)∇² + m V_N(r)] ψ,

where V_N(r) is the (normalized) gravitational (Newtonian) potential given by

V_N(r) = −G m ∫ |ψ(r′)|² / |r − r′| d³r′.

Note that the NSE for a single particle is not derivable from GR and quantum theory [79]. There are several advantages to carrying out tests of these theories in space experiments. They include the high quality of microgravity (∼10⁻⁹ g), very long free-fall times (>10⁴ s), and the combination of low pressure (∼10⁻¹³ Pa) and low temperature (∼10 K) with full optical access. Here we outline the necessary experimental parameters for tests using optomechanical systems, atom interferometry, atomic spatial wavefunction spreading, and photon decoherence.

Optomechanical experiments

Consider a body brought into a superposition of a zero-momentum and a finite-momentum state, corresponding to an energy difference ΔE. For the ABH model, the decoherence rate for the center of mass is then

Γ_ABH = τ (ΔE)²/ħ²,

where τ is the free parameter in the master equation (28). A value of Γ_ABH of the order of 10⁻³ s⁻¹ may be observable in optomechanical systems, as it is competitive with current environment-induced-decoherence timescales. Hence, to exclude values of τ > τ_P, we must prepare a quantum state with ΔE ∼ 10⁻¹⁴ J. In the Diosi-Penrose model, the decoherence rate for a sphere of mass M and radius R in a quantum superposition of states with different center-of-mass positions (though the predicted decoherence rate is largely independent of the details of the prepared state) is of the order of

Γ_DP ≈ G M²/(ħ R),

where the model's cut-off length Λ was originally postulated to be of the order of the size of a nucleus, but has recently been constrained to Λ > 0.5 · 10⁻¹⁰ m [68]. Alternative models postulate Λ up to a scale of 10⁻⁷ m. For an optomechanical nanosphere with M ∼ 10¹⁰ amu and R ∼ 100 nm, Γ_DP ∼ 10⁻³ s⁻¹, a value that is in principle measurable in optomechanical experiments.

Matter wave interferometry

The ABH model (but not the 1-d master equation (29)) leads to a loss of phase coherence of the order of (ΔΦ)² = m²v³τL/ħ² (see Footnote 11 below), where L is the propagation distance inside the interferometer.
Setting an upper limit of L = 100 km and v = 10⁴ m/s, decoherence due to cosmological gravitons requires particles with masses of the order of 10¹⁶ amu. If Θ is a free parameter, experiments with particles at 10¹⁰ amu will test up to Θ ∼ 10⁻⁵ T_P. For comparison, the heaviest molecules used to date in quantum mechanical interference experiments are oligoporphyrins with a mass of "only" 2.6 · 10⁴ amu [80]. The Diosi-Penrose model and other models that lead to decoherence in the position basis can also be tested by near-field [81] and far-field [82] matter-wave interferometry. A rough estimate of the loss of phase coherence is (ΔΦ)² ≈ G M² L/(ħ R v), where R is the radius of the particles. In contrast to the ABH model, this loss of coherence is enhanced at low velocities. Assuming L = 100 km, v = 10 m/s, and R = 100 nm, an experiment would require a mass M ∼ 10⁹-10¹⁰ amu to observe decoherence according to the DP model.

Footnote 11: While the exact derivation of (ΔΦ)² requires a dynamical analysis, its magnitude is of the order of Γ_ABH t_int, where t_int = L/v is the average time the particle spends in the interferometer.

Wave-packet spread

The intrinsic spreading of a matter wave-packet in free space is a hallmark of Schrödinger evolution. ABH-type models predict negligible deviations of the wave-packet spread from that of unitary evolution. The DP model, and all other models that involve decoherence in the position basis, predict a wave-packet spread of the form

(Δx)²(t) = (Δx)²_S(t) + Λ_diff t³,

where (Δx)²_S(t) is the usual Schrödinger spreading and the diffusion coefficient Λ_diff depends on the model. The changes from free Schrödinger evolution become significant at later times. An exact estimation of this effect depends on properties of the initially prepared state and is rather involved. The MAQRO proposal [81] estimates that for a free-propagation time equal to 100 s (accessible in their setup) it is possible to constrain GRW-type models and some models of quantum gravity decoherence, but not decoherence of the DP type. In contrast, the Newton-Schrödinger equation predicts a retraction of the wave-packet spread for masses around 10¹⁰ amu [83]. An osmium nanosphere of radius R ≈ 100 nm would require a couple of hours of free propagation in order to show a significant deviation from Schrödinger spreading [84]. This effect provides the only realistic prospect of directly testing the NSE, and it requires a space environment.

Decoherence of photons

Only the ABH model has been generalized to photons [71]. For interferometer experiments with arm length L, the model predicts a loss of visibility of order (ΔΦ)² = 8GΘE²L²/c⁶. For L = 10⁵ km, Θ ∼ T_P, and photon energies E of the order of 1 eV, this implies a loss of coherence of the order of (ΔΦ)² = 10⁻⁸. In principle, this would be discernible with EM-field coherent states with mean photon number N̄ > 10¹⁶, though it would be very challenging to suppress all other systematic errors to this degree.

GR effects: summary

The untested prediction from QFTCST that propagation across a gravitational potential induces a phase shift on a single photon, a photon superposition state, and (hyper)entangled photon pairs was reviewed in the preceding sections. A set of experiments involving interferometers distributed between spacecraft and ground nodes, designed to test this prediction, was outlined. The order of magnitude of the phase shift on the photon states caused by gravity was determined to be compatible with experimental capabilities.
Preliminary mission systems analysis suggests an optimal regime for a spacecraft mission to achieve these objectives. As such, tests of the equivalence principle using photons are plausible with a future DSQL mission. In contrast, the magnitude of gravitationally induced decoherence, based on QFTCST models, is likely too small to measure without significant breakthroughs in multiple instrumentation capabilities.

Long-baseline Bell tests

The long baseline of DSQL could enable tests of Bell's inequality [85] up to the lunar orbital radius and between inertial frames with large relative velocity, well beyond what is possible on Earth or in Earth orbit. As described in the sections below, conducting Bell tests at extremely long baselines between inertial frames opens experimental possibilities and addresses fundamental questions about quantum theory in the regime of general relativity. Such tests also serve as an important validation benchmark for the implementation of future quantum technologies.

Verification of long-baseline quantum entanglement

A future global-scale quantum network must be capable of maintaining the fidelity of distributed photonic states in various degrees of freedom, and of interacting with quantum memory devices, effectively establishing a quantum internet [86-88]. This network could be useful for fundamental tests of quantum physics, distributed quantum computing, or distributed quantum sensors (e.g., [89]). Based on current technologies, relatively high-rate entanglement distribution and quantum communication across baselines of more than a few hundred kilometers are only possible using spacecraft links [1]. Furthermore, all quantum network applications rely on the validity of quantum mechanics and on a complete understanding of long-baseline quantum link behavior, which the DSQL tests could provide. Entangled quantum systems shielded from the environment exhibit correlations expected to persist no matter how far apart the systems travel; e.g., the amount of entanglement between two entangled photons propagating through optical fibers should remain constant even though the local topology of the individual fibers will induce a specific rotation on the polarization of each photon, changing the specific form of the entangled state. Such transformations can be reversed so that Bell-inequality-violating measurements are still possible (though in practice, if the photon pairs have a large bandwidth, wavelength-dependent polarization transformations within the fibers can be difficult to correct, resulting in an effective depolarization). In contrast to propagation through optical fiber, photons propagating across the vacuum of deep space will encounter very few effects known from conventional physics (Footnote 12) that could change the entanglement correlations in the various degrees of freedom. Kinematic effects may cause small shifts in the polarization state, but these require the detectors (or observers) to be accelerating (e.g., the non-inertial frame experienced when orbiting a massive body).

Footnote 12: Alternative physical theories predict some polarization rotation, and other rotations, through proposed coupling mechanisms as photons propagate across a changing gravitational potential [90,91]. These predicted effects were not observed by the first battery of on-ground [92] and Micius spacecraft experiments [93-95]. In the Micius experiments, entanglement was distributed between a ground station and the spacecraft, and also between two ground-based observatories approximately 1100 km apart.
In these experiments, the entanglement fidelity was 0.907 ± 0.007, sufficient to determine the Clauser-Horne-Shimony-Holt (CHSH) parameter to within 2.4σ. The theories listed above could thus be much better bounded by a test with higher statistical significance on the CHSH parameter, achievable using brighter, higher-fidelity sources.

The evidence and the prevailing theories to date suggest that polarization entanglement correlations should persist in most scenarios involving a deep-space communications link. Given the lack of coupling between light and the environment, we also expect that entanglement in other degrees of freedom (such as spatial mode, time-energy, time-bin, or even simultaneous hyperentanglement across these degrees of freedom) should also be preserved over long-distance propagation. Consequently, it is expected that a robust quantum communications link utilizing entanglement correlations should not face fundamental obstacles to realization based on known, conventional physics. DSQL will help validate the assumption that deep space holds no further surprises in the form of new physics that might invalidate our assumptions about the characteristics of a space-based quantum communications link. In the following sections, we provide a checklist of fundamental experiments that could be performed in order to gain confidence that engineering an entanglement-based quantum communications system is a worthwhile endeavor, while enhancing the distance limits of fundamental tests of quantum physics. At the same time, the implementation of these experiments will build up significant know-how and capability that will aid future quantum network engineering efforts. For DSQL to characterize and validate ultra-long-range Bell tests, the photon-correlation and counting-statistics methodologies described in Appendix D would apply. A violation of Bell's inequality by at least five standard deviations would be considered a viable test. The required number of successfully detected photon-pair counts depends on the correlation visibility, but typically around 1000 detected pairs should allow the test to be conclusive. Furthermore, a detailed signal-noise analysis involving quantum optical models of the photon-pair source, channel properties, noise sources, and photon detectors would be used to further compare the results with known (conventional) physics, similar to the study in [96].

Current status of state-of-the-art Bell tests

The most sophisticated "loophole-free" Bell tests thus far with entangled photons [97,98] close the detector-efficiency loophole, the locality loophole, and versions of the freedom-of-choice loophole. These experiments use local sources of quantum randomness, where each random bit comes into being at a point in spacetime that is space-like separated from the measurement on the other side. In [98], three different sources of random bits were XORed together: bits based on the optical phase of a gain-switched laser, bits from sampling the amplitude of an optical pulse at the single-photon level, and a predetermined pseudorandom source comprising popular movies and digits of π. Any local-realist explanation for the observed Bell-violating correlations would be required to predict the outcomes of all of these processes well in advance of the beginning of each trial.
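As an aside on statistics: the claim above that around 1000 detected pairs suffice for a five-standard-deviation violation can be checked with a crude model of ours. It assumes a visibility-limited CHSH test, events split evenly among the four measurement settings, and simple binomial errors:

```python
# Crude significance model (ours) for the claim that ~1000 detected pairs
# suffice: assumes a visibility-limited CHSH test, events split evenly among
# the four settings, and simple binomial errors on each correlator.
import math

def chsh_sigmas(visibility, n_pairs):
    """Standard deviations separating S = 2*sqrt(2)*V from the local bound 2."""
    S = 2 * math.sqrt(2) * visibility
    n = n_pairs / 4                              # events per correlator
    E = visibility / math.sqrt(2)                # |correlator| at CHSH angles
    sigma_E = math.sqrt((1 - E**2) / n)
    sigma_S = 2 * sigma_E                        # 4 correlators in quadrature
    return (S - 2) / sigma_S

print(f"V = 0.95, N = 1000 pairs: {chsh_sigmas(0.95, 1000):.1f} sigma")  # ~7 sigma
```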
Current status of state-of-the-art Bell tests

The most sophisticated "loophole-free" Bell tests thus far with entangled photons [97,98] close the detector-efficiency loophole, the locality loophole, and versions of the freedom-of-choice loophole. These experiments use local sources of quantum randomness, where each random bit comes into being at a point in spacetime that is space-like separated from the measurement on the other side. In [98], three different sources of random bits were XORed together: one based on the optical phase of a gain-switched laser, one based on sampling the amplitude of an optical pulse at the single-photon level, and a predetermined pseudorandom source comprising popular movies and digits of π. Any local-realist explanation for the observed Bell-violating correlations would be required to predict the outcomes of all of these processes well in advance of the beginning of each trial.

The assumption that the random bit comes into being when the phase or amplitude is measured is critically important in Bell tests. If the bit on either side is determined (or even influenced) in some way by something in its past that the other side's measurement also has access to, the Bell violation can be explained by a local theory. To address this, [99-101] used the unpredictable color of incoming astronomical photons from opposite sides of the sky, e.g., two quasar photons emitted when the universe was a half and a tenth as old as it is today [101]. This forces any local explanation that exploits the freedom-of-choice loophole to have access to the past light cones of these quasar emissions, so that one side's measurement could predict the other side's next photon color. Note that there is no device-independent way to verify that the clicks registered by the quasar photons were really mostly determined by the distant cosmological past rather than by a local conspiratorial random number generator that coordinated the specific results with the other side of the experiment.

Recently, the BIG Bell Test [102] created a web-browser game in which people around the world were rewarded for acting as unpredictably as possible. For a 24-hour period, their inputs were used to choose the measurement bases for 13 simultaneous Bell-type tests around the world. This was a heroic effort to close a loophole in previous experiments, where something in each experiment's past could have influenced the settings. However, given the constraints of Earth-bound participants, their choices were electronically recorded and used in such a way that (purely in terms of past light cones) the sources of entanglement and all measurements had access to the choices in advance, i.e., a substantial loophole remains.

As mentioned above, the measurement of entanglement over long distances is expected to follow "conventional physics," no matter what distances are traversed or whether the measurement apparatus (the observers) are at rest relative to each other. Measurement devices in relative motion can lead to a reference-frame-dependent event sequence, where the expectation of entanglement preservation becomes less obvious. This is especially striking if the reference frames are physical, if the wavefunction is an element of reality as the Pusey-Barrett-Rudolph (PBR) theorem favors [103], or if wavefunction collapse is a physical phenomenon caused by interaction with a measurement device in different reference frames. These alternative viewpoints are fundamentally different from general relativity, in which both super-observers and preferred reference frames are impossibilities; of these, only physical collapse is a non-standard viewpoint. In that sense, conducting experiments along these lines would test the predictions of QFTCST against these alternative "strawman" theories.

Bell tests between frames with large relative velocities

Consider a Bell test scenario involving three inertial frames. The entangled photon source is in the center while the two receivers, roughly equidistant on either side, are arranged to travel either towards each other or away from each other. In the rest frame of the source, the two detection events are simultaneous and therefore space-like separated. In the case where the detectors are traveling away from each other, each detector would "consider" itself to be the first to receive the incoming photon and to generate a signal in its own reference frame.
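A short numerical sketch illustrates the receding case: detection events that are simultaneous in the source frame acquire opposite orderings in the two detectors' rest frames under the Lorentz transformation t' = γ(t − vx/c²). The speed and separation below are arbitrary illustrative values, not mission parameters.

```python
# Sign of the time ordering of the two detection events under a Lorentz boost.
# Detections are simultaneous (t = T, x = +/-L) in the source frame; a detector
# receding in the +x direction at speed v assigns times t' = gamma*(t - v*x/c**2).
c = 299_792_458.0          # m/s
v = 7_500.0                # m/s, a typical LEO orbital speed
L = 1.0e6                  # m, detector distance from the midpoint source
T = L / c                  # photon flight time in the source frame
gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5

t_own = gamma * (T - v * (+L) / c**2)   # the receding detector's own detection
t_far = gamma * (T - v * (-L) / c**2)   # the distant detection, in the same frame
print(f"own: {t_own*1e3:.9f} ms, far: {t_far*1e3:.9f} ms, "
      f"own event first by {(t_far - t_own)*1e9:.1f} ns")
```

Each receding detector assigns its own detection the earlier time, by 2γvL/c² (about 170 ns for these values), which is the "before-before" configuration discussed next.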
When the detectors are moving towards each other, the opposite case occurs, and each detector would "consider" itself to have received the photon after its distant counterpart. In special relativity, a reference frame for a local observer is operationally determined by radar coordinates, which respect Lorentz symmetry [104,105]. With similar operations, the above "considerations" by each detector can only emerge after the detector receives an ideal radar signal, emitted earlier by itself and echoed back from the measurement event at the other detector. This is long after both photon-reception events at the two detectors have occurred in all of the reference frames of the detectors and the source. Note, however, that such radar signals could have been exchanged prior to the measurements described, so that the reference frames are established before the experiment.

Suarez and Scarani [106] refer to the above as "before-before" and "after-after" scenarios, respectively; this nomenclature assumes that, while each observer may not know whether it measures before or after its distant counterpart at the moment of its local measurement, as each observer completes their measurement, the collapse of the wavefunction propagates instantly in their respective reference frame. But, since the two observers are in relative motion, it appears that both reference frames yield contradictory wavefunction collapses.^13 Suarez and Scarani identified the polarization-analysis-determining beamsplitter in each measurement station as the necessary device that must be moving, and proposed to place them on rotary mounts that spin to achieve relative speeds of 100 m/s. Experimental studies with such dynamics, as first demonstrated by Zbinden et al. [108], utilized rotating absorbers as "detectors" that monitored part of the entangled photon signal; the rest of the signal was routed to conventional, stationary detectors. The results showed no statistically significant deviation from a standard test using only conventional detectors at rest,^14 i.e., no dependence on whether the rotators moved towards or away from each other. Another experimental test was reported by Stefanov et al. in 2002 [109] using time-bin entangled photons separated over 10 km via optical fiber, with "moving" beamsplitters implemented using acousto-optical modulators. Each incoming photon effectively saw a beamsplitter moving at about 2500 m/s; as in the previous experiment, the actual single-photon detectors themselves were not moving.^15 Again, the experiment found that the entanglement correlations were perfectly preserved without any need to consider the time-sequencing of events, in agreement with the standard quantum mechanics formalism. An experimental scenario where the detection events by two moving observers of entangled photons are in each other's respective future, or past, has yet to be tested, though it should be noted that Scarani et al. [110] have shown that the multisimultaneity model could lead to superluminal communications, in violation of relativity. It is understood that satellites are the best approach to obtain the required relative speeds and distances for such a measurement, and these tests could be considered for the DSQL platform [5].

13. In the theoretical description of a quantum system, a reference frame to coordinatize physical events (spacetime points) is chosen by a specific observer, and wavefunctions to relate the outcomes of physical measurement events are specific to each observer [29].
In fact, the description of quantum states requires an explicit reference-frame choice by an observer, because in the canonical quantization scheme a time-foliation (1+3) scheme needs to be specified before a Hamiltonian can be written down [27,107]. Making a statement that two wavefunctions supported by different time-slices in two reference frames for two different local observers have contradictory collapses does not make sense, because it presumes that there exists a super-observer who can see the co-existence of the two time-slices that support those wavefunctions.

14. In these experiments the signal was routed by fibers, and the detectors were only separated over 10 km, so it was very difficult to obtain the space-like separation needed for the test. Typically, the experiment would begin with slightly asymmetrical distances and then run over several hours, so that the diurnal effect on the fiber would sweep through the circumstance of simultaneous measurements at some point. Furthermore, the absorbers used as "detectors" gave no active signal (such as a heralding flash upon successful completion of an absorption event) that could directly contribute to the statistics; instead, they served as "null-result" measurements: the lack of an absorption event projected the photon wavepacket into the arm with the photon detector.

15. This is potentially problematic in an experiment whose underlying hypothesis relies explicitly on a measurement-induced wavefunction "collapse." Specifically, the unitary transformations of the beamsplitter could be undone by other (local) optical elements. The measurement basis settings and the results are not truly fixed until the actual detection of the photons, which involves the amplification of a detection signal to the macroscopic scale, involving millions of electrons. Therefore, any convincing test of these simultaneity arguments should have the collapse-inducing detectors themselves in relative motion.

The Many-Worlds and de Broglie-Bohm interpretations make the same predictions as the default textbook interpretation of quantum mechanics: Bell's inequality should be violated in the same way regardless of reference frame. Dynamical collapse theories can give predictions different from the textbook interpretation if the collapse happens at a finite speed (already constrained to be faster than the speed of light). If we measure the standard quantum mechanical prediction, any interpretation compatible with that result would be equally plausible after our measurement as before; the measurement would set a lower limit on the collapse speed and constrain the causal story that collapse theories tell about what initiates the collapse. In the extremely unexpected scenario that the violation of Bell's inequality depends on the reference frame, quantum mechanics and all compatible interpretations would need to be modified. As stated above, the actual photon detection systems were stationary in the previous tests, leaving open the question of whether a moving measurement device is required to represent a moving "observer," since the beamsplitter operation, even while moving, remains coherent (and reversible). Ultimately, the entire observer system, including the detection process, must be in motion [5,110]. Here we consider space-based experiments, where satellites can be distant enough from each other and moving away from each other fast enough that each detector's measurement can be considered complete in its own reference frame before the other detector even begins its measurement.
Similarly, the satellites can move toward each other fast enough that, in each local detector's reference frame, the distant detector's measurement is completed before the local detector begins its measurement. Creating a high relative velocity between two moving platforms that remain near enough to each other to maintain high link efficiency over a long enough integration time is the key requirement for such tests of relativistic simultaneity. To satisfy the conditions of this scenario, at least two platforms should be satellites in space.

The constraints on timing are stricter in these before-before or after-after experiments than in a typical Bell test, where locality dictates only that the two measurements be space-like separated. Space-like separation means that there exists some frame where one measurement is first and some other frame where the other is first. In the before-before experiment, these frames cannot merely exist, but must include the actual rest frame of each detector (see Fig. 12). For satellite speeds much slower than the speed of light, the measurements must occur within Δt < vD/c² of each other, where v is the relative velocity of the detectors and D is the instantaneous separation between the measurements. For reasonable LEO parameters, this is around 10 m of light travel time, translating into a fractional accuracy requirement of approximately 10⁻⁵ on the receiver positions. Note that it is not sufficient to merely know the orbits to within 10 m and the detection times to within 30 ns: the orbits and timing must be controlled to this accuracy for a sufficient duration that a statistically significant number of entangled pairs arrive while the condition holds. Furthermore, in a LEO constellation, these scenarios only exist for brief periods of time and drive strict orbit-determination and station-keeping requirements on the flight platforms. With receivers on two independent spacecraft, the link budgets are likely to improve. In Fig. 13, three polar-orbiting satellites would meet regularly over the poles to perform the experiment repeatedly, potentially both while approaching and receding. Some of the challenges in this scenario, however, are the slew rates as the platforms converge, collision avoidance, and the additional challenge of operating in high-radiation zones.

Figure 12 After-After Spacetime Diagram: Spacetime diagram in Alice's rest frame. Alice's measurement happens at (t = 0, x = 0). The tight constraint on Bob's allowed measurement window Δt is also shown.

Figure 13 After-After Polar Orbits: One source and two "After-After" satellites in polar orbit over Antarctica. An alternative would be to site the source on the ground for double-uplink transmission of entangled photons to the orbiters.

Figure 14 The asymmetric Bell test using only one space-based observer and a ground-based observer with a suitable delay (either a fixed path or a quantum memory). A ground-based source (e.g., located at the Canary Islands) could transmit one of the entangled photons to another terrestrial receiver, and the other to a receiver located in orbit. The relatively short ranges on Earth require a substantial delay (∼1 to 3 ms) at the terrestrial receiver to achieve nearly similar optical path lengths. The 144-km free-space link would only be sufficient (ca. 0.5-ms delay) for very low altitude satellites. Longer delays could be implemented in fiber optics, but would entail about 100 dB of loss or greater. Ideally, a low-loss quantum memory with finely adjustable readout times would be used on the ground.
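As a rough numerical check of the Δt < vD/c² window, the relative speed and separation below are illustrative guesses for two LEO platforms near closest approach, chosen to reproduce the ~10 m figure quoted above.

```python
# Rough check of the before-before timing window dt < v*D/c**2
# (illustrative LEO parameters, not a mission baseline).
c = 299_792_458.0
v = 15_000.0        # m/s, relative speed of two counter-propagating receivers
D = 200_000.0       # m, instantaneous separation of the two measurements
dt = v * D / c**2   # allowed offset between the two measurement events
print(f"window: {dt*1e9:.0f} ns  (= {dt*c:.1f} m of light travel time)")
# -> window: 33 ns (= 10.0 m), consistent with the ~10 m / ~30 ns figures above
```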
An alternative might be to place the source on the equator and beam photons to two counter-propagating receiver satellites in an equatorial orbit. To mitigate uplink losses, the source would have to be placed at a relatively high altitude, perhaps even on a balloon.

For comparison, consider the scenario where an entangled pair source transmits one photon to an orbiting platform and the other to a ground receiver (see Fig. 14). The asymmetrical free-space path lengths necessitate a delay line or buffer at the terrestrial receiver that can change quickly by the equivalent of several hundred kilometers, resulting in several technical challenges. Because the detectors are not moving symmetrically, there is only a ~1 ms (10 m/v) time window in each orbit where the alignment can be such that each detector measures first (or second) in its own reference frame. A major possible improvement to such an asymmetric scheme would be the use of a quantum memory to achieve variable read-out times that correspond to the time of flight for the ground-to-space link. If the moving observer were located on a LEO platform, this quantum memory would need to store the ground photon for around 1-3 ms and release it at the correct moment, within about 1 ns, to ensure the relativistic separation of the two measurements of the entangled photons. The low efficiencies of both the quantum uplink and the quantum memory would certainly lead to challengingly low count rates. It is nevertheless encouraging that a first proof-of-concept test of such an asymmetric setup may be possible with a single receiver, such as the Canadian QEYSSat mission [111].

Human-decision Bell tests

Quantum experiments that directly involve human participants are both scientifically interesting and socially relevant, igniting public interest in fundamental science. In a space-based test of nonlocality, astronaut participants can meaningfully address the free-will loophole of Bell tests in a new regime. The motivation behind this body of testing is largely philosophical in nature, dealing with the epistemological underpinnings of quantum theory. In fact, John Bell himself suggested letting humans choose the basis in a test of his famous inequality [112]: "It has been assumed that the settings of instruments are in some sense free variables-say at the whim of experimenters" [113] and "Roughly speaking it is supposed that an experimenter is quite free to choose among the various possibilities offered by his equipment" [113]. Furthermore, Leggett points out [114] that any exploitation of a loophole that relies on a complicit role of the process that chooses the random settings (it is either not random, or it is somehow influenced by the entangled photons) can perhaps best be settled by having two human observers operate the measurement devices. Lucien Hardy, in "Proposal to use Humans to switch settings in a Bell experiment" [115], sums up this experiment well: "The radical possibility we wish to investigate is that when humans are used to decide the settings (rather than various types of random number generators) we might then expect to see a violation of Quantum Theory in agreement with the relevant Bell inequality.
Such a result, while very unlikely, would be tremendously significant for our understanding of the world." In this Section we explore both the feasibility and the desirability of doing such an experiment, given that all previous Bell tests have vindicated the quantum prediction.

Allowing humans to choose the settings while simultaneously closing the locality loophole requires that the experiment be large enough that no information about the measurement choice on one side can be accessible to the measurement on the other side. Libet famously used EEG to measure the readiness potential 0.3 s before a person consciously decided to move and 0.5 s before they pressed a button [116]. Perhaps specialized training can improve the reaction times, but it takes somewhat less than 1 s for a person to be presented with a choice that they did not know about in advance, make a decision using what they perceive to be their free will, and reliably register this choice in a way that can quickly (electronically) change the analysis basis of a polarizer. This timing makes the 1.2-1.4-s light travel time between the Earth and the Moon the right scale. The spacetime diagrams shown in Fig. 15 lead to the conclusion that with a source halfway between experimenters on the Earth and Moon, participants would have the full 1.2-1.4 s to carry out each round of the experiment. Similar timings apply for a source at an Earth-Moon stable Lagrange point. Although the link losses over such distances are daunting, laboratory analogs are being used to prepare for quantum communication experiments over such high-loss channels [117].

Another scenario is an asymmetric configuration of quantum entanglement, as shown in Fig. 16. This involves creating an entangled pair between Earth and Moon, sending one photon to Earth, and storing the other photon until a classical signal arrives from the Moon with the astronaut's basis choice. The random choices implemented by the humans are still separated by the Earth-Moon distance, but this configuration has the benefit of requiring only one long-distance link, at the cost of half-second quantum storage. The figure shows a source halfway between Earth and Moon, but the optical link can be shortened at the cost of longer storage time.

Figure 15 Spacetime diagrams for a Bell test where the source is halfway between Earth and Moon. In each round, humans on each side have 1.3 seconds to be presented with a choice, make a decision, and register their decision with something like a button, whose activation turns that decision into a polarizer setting.

Figure 16 Spacetime diagram for a Bell test involving human observers. In this asymmetric configuration, while the space-like separation of the random choices made by humans is maintained by the Earth-Moon distance, only one of the entangled photons is sent over a long-distance link to Earth (blue arrow); the other entangled half is stored in a quantum memory at the source (red arrow), awaiting the random-choice classical signal transmitted from the Moon (black double-line arrow). When the source is located on the direct line to the Moon, but not necessarily halfway, each participant still has a time to make their choice equal to twice their light travel time to the source.
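A back-of-the-envelope sketch of the decision-time budget for a source on the direct Earth-Moon line follows; the rule that each participant gets twice their light travel time to the source is taken from the Fig. 16 discussion above, and the distances are nominal values.

```python
# Decision-time windows for a human-decision Bell test with the source on the
# Earth-Moon line (sketch of the timing argument in the text).
c = 299_792_458.0
d_em = 3.844e8                        # m, mean Earth-Moon distance
for frac in (0.5, 0.25):              # source midway, then shifted toward one side
    t_near = 2 * frac * d_em / c      # window on the side nearer the source
    t_far = 2 * (1 - frac) * d_em / c # window on the farther side
    print(f"source at {frac:.2f} d_EM: {t_near:.2f} s (near) / {t_far:.2f} s (far)")
```

With the source midway, both sides get about 1.3 s per round; moving the source off-center shortens one side's window, which in the asymmetric scheme is traded against quantum storage time.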
A smaller version of this scheme was implemented with a fiber delay on the Canary Islands in 2010 [118] and used to study the freedom-of-choice loophole, albeit with measurement settings chosen based on the random behavior of LED photons at a beamsplitter; no deviations from the predictions of standard quantum mechanics were found.

Finally, the human-decision Bell test architecture shown in Fig. 18 does not require deployment of quantum optical hardware in deep space, but ensures there is no chance that either the emitted entangled photons or the distant measurement could have been influenced by the local measurement choice. The key requirement is that the decision events of the astronaut participants and the emission events of entangled photons comply with specific timing requirements. Both astronauts are queried at time t_question, synchronized using GPS, well before the time t_entangled when the source is activated to (possibly) emit a photon pair. One astronaut here is presumed to be located on the Moon, while the other is space-like separated from the Earth by at least the same distance as the Moon, but in the opposite direction (the third Lagrange point (L3) of the Earth-Moon system provides such a benchmark, but in principle the second astronaut could be elsewhere in the solar system). As described above, each astronaut has some reaction interval t_choice during which they can consider and respond to the prompt. There is no reason to expect that both astronauts would submit their answers at precisely the same time, so their decisions are cached locally up to time t_transmit (> t_question + t_choice), at which point they are transmitted classically to the unmanned spacecraft carrying the measurement stations "Alice" and "Bob", and received at time t_basis. Completing the Bell test requires entangled photon reception and measurement at the Alice and Bob stations, at t_Bell, before any possible signal emitted at t_question from the other side's basis-choice astronaut station could reach the analysis satellites. To ensure this, the time of flight from the entangled photon source to either Alice or Bob (labeled in Fig. 18 as t) must exceed t_transmit − t_question. Assuming a human reaction time t_choice = 0.25 s, the cache time t_transmit should be greater than this; here we assume 0.4 s. The corresponding distance between the source and Alice (Bob) is about 1/3.25 of the Earth-Moon distance. Because the associated link efficiency of the whole experiment (see Appendix A) goes as 1/R⁴ (R being the source-to-measurement-station separation), this is ∼100 times more efficient than the Earth-Moon human Bell test configuration (Fig. 17a), and 7 times more efficient than the "midway" Earth-Moon Bell test variant (Fig. 17).

Figure 17 In this asymmetric configuration, while the space-like separation of the random choices made by humans is maintained by the Earth-Moon distance, only one of the entangled photons is sent over a long-distance link to Earth; the other entangled half is stored in a quantum memory at the source, awaiting the random-choice classical signal transmitted from the Moon. By the geometry of the causal influences, the maximum time window t_M in which the astronauts on the Moon need to make their basis choice is equal to the storage time t_S. A longer storage time comes at the expense of the Earth decision time t_E. The best situation is to divide the decision time equally (t_E = t_M = t_S). In this case, the source near the Moon transmits one entangled photon all the way to Earth and stores the other entangled photon for the full 1.3 seconds.
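The efficiency ratios quoted above follow directly from the 1/R⁴ scaling; the quick check below models the baseline Earth-Moon configuration as a single full-distance link, which is an assumption on our part.

```python
# Link-rate scaling check for the Fig. 18 scheme: the two-channel efficiency
# goes as 1/R**4, with R the source-to-measurement-station separation
# (in units of the Earth-Moon distance d_EM).
R_direct = 1.0        # Earth-Moon configuration: one full-distance link
R_midway = 0.5        # source halfway between Earth and Moon
R_fig18 = 1 / 3.25    # Fig. 18 variant
gain = lambda R_ref, R: (R_ref / R) ** 4
print(f"vs Earth-Moon: {gain(R_direct, R_fig18):5.0f}x")   # ~100x, as quoted
print(f"vs midway:     {gain(R_midway, R_fig18):5.1f}x")   # ~7x, as quoted
```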
This margin may be traded for making the question-and-answer interface of the astronauts more relaxed by, e.g., querying them once every two seconds instead of once every 0.4 s. Even then, the rate improvement is 22 times over the Earth-Moon test and 1.5 times over the midway test. Furthermore, the rate could be further enhanced by using multiple astronauts at each station; e.g., a typical capsule crew of three could supply measurement decisions 3 times as often.

Figure 18 Alternative scheme for a human-decision Bell test. Artemis astronauts on the Moon, and astronauts at the Earth-Moon L3 point or beyond, are queried at time t_question. Both astronauts make a decision within the human reaction time t_choice. This answer is locally cached until time t_transmit, when a classical signal is sent to the unmanned measurement spacecraft "Alice" and "Bob". A light source, located on or near the Earth, emits entangled photons toward Alice and Bob before the intersection of the world line of the source with the light cones of the query events, i.e., with no chance that the emitted photons could have been influenced by the measurement basis choices of the astronauts.

Quantum mechanics itself makes no distinction between basis choices determined by classical randomness, quantum randomness, or human choices: all of these should violate Bell's inequality equally. If this experiment is performed with all of these sources of randomness and the Bell violation in each case is indistinguishable, we would confirm that quantum mechanics holds, but we would conclude nothing about free will. If we get the unexpected result, in which the runs using human choices do not match the quantum prediction, what would we conclude? First, we would be forced to concede that quantum mechanics is incorrect. Any local-realist explanation for all of the observations must include a mechanism for predicting or influencing all non-human random number generators used in previous experiments that confirmed the quantum predictions in Bell tests. Second, we would be able to say that human choices are at least sufficiently complicated so as not to be predicted or influenced, though exactly how one relates this back to concepts of free will would be up for debate. In the closing paragraph of [119], Brans writes: "It is sometimes said that quantum theory saves free will. In the context of this paper, this might be reversed, so that free will saves quantum theory, at least in the sense of eliminating hidden variable alternatives. In other words, if there are any truly 'free' events in the experiment, then there can be no classical determinism and hence no classical hidden variables." We wish only to caveat this inspirational quote by noting that, for an experiment to rule out a local hidden variable theory, a single 'free' event would not make a significant difference. Instead, a sufficient majority of the basis choices for each measurement must be 'free' in the sense that they cannot be predicted or influenced by anything happening on the other side of the experiment.

Mission design trades for Bell tests

All of the proposed Bell tests are characterized by the statistical significance of the measured violation of Bell's inequality. This is described mathematically and conceptually in Appendix D.
Increasing the number of successful, high-fidelity photon pairs simultaneously detected by Alice and Bob will improve the statistical significance of the test. Practically, this could be realized using a very broadband entangled photon pair source with dense wavelength-division multiplexing, thereby creating many simultaneous channels, each approaching the saturation capacity of the detectors. Leveraging this source architecture requires exceptionally low timing jitter. Reducing the probability of a noise event also improves the Bell test statistics. Thus, the number of successful photon pairs measured during a measurement campaign, N, and the purity factor, p, are the parameters used to parametrically describe the Bell test mission design. Equation (56) of Appendix D relates the statistical significance of the violation of Bell's inequality, σ, to the parameters N and p.^17 The parameters N and p can be expanded into instrument performance parameters, as discussed in Appendix A, with p defined in Appendix D. Figure 19 represents the σ violation significance achievable with a source of clock rate f_clock, pair production probability p(1), and photon pair fidelity F, a total link efficiency (the product of the efficiencies to Alice and Bob) η_2e, a total integration time T, a receiver with temporal resolution t_R, and a total background noise flux N_noise.

17. The purity factor p used here is not to be confused with the photon pair production probability p(1), or with the "p-value" used in the previous loophole-free Bell tests [97,120] to characterize the significance of the violation; loosely speaking, the p-value is the likelihood that the observed results are compatible with the null hypothesis, i.e., that a local hidden variable model could explain the results. We acknowledge that a more thorough mission design should use this more sophisticated metric in the analysis, but do not believe the conclusions would be substantially different from our simpler analysis using σ.

Figure 20 System performance to achieve an nσ violation (color coded) of Bell's inequality across the orbital altitudes represented on the y-axis. An Earth-spacecraft link is assumed up to geostationary orbit. The three dotted lines at the top of the chart represent 1.0, 0.5, and 0.33 times the Earth-Moon mean orbital radius. For simplicity, the source-to-Alice and source-to-Bob channels are assumed identical. The color scale represents the achievable statistical significance of the experiment, in nσ. Note: for y-axis values greater than geostationary, the integration time was clamped to 1 hour.

Detailed space-mission design would optimize orbital parameters and optical link efficiency against the science measurement objectives. Different wavelengths are considered in the following examples, and throughout the paper, to better describe the trade space. As a starting point, we consider the design example from Sect. 2.1.2, which for the COW tests predicted optimum performance around 1500-km-altitude orbits, using 0.3-m and 1.0-m diffraction-limited telescopes at 1550 nm. This flight system, upgraded with a suitable entangled photon pair source operating at 1% pair production probability with a 1 GHz clock rate, and with a second 0.3-m telescope pointed to a second 1.0-m aperture, would perform the Bell tests indicated in Fig. 20. It is evident that this system would be insufficient to support the human-decision Bell tests as described.
An upgraded system, with improved pointing and larger receiver apertures, as well as a higher-rate photon source, is required. For example, Fig. 21 shows the predicted performance assuming 2.0-m aperture transmitters, with a 2.0-m lunar-vicinity receiver and a 10-m aperture on Earth. In both of the above examples, the x-axis of Figs. 20 and 21 captures both the source fidelity and the receiver parameters.

We can now combine Equation (56) with the contours of Figs. 20 and 21 to derive system performance parameters. The Bell test requires transmission of entangled photons to two receivers, characterized by Eq. (43). For a source-to-receiver separation corresponding to the diameter of the Earth, applying Equation (38) with λ = 810 nm, D_Tx = 0.5 m, D_Rx = 3.5 m, M² = 1.05, and η_x = 0.1 leads to a one-channel link efficiency of 0.003. The two-channel efficiency, characterizing the likelihood that both photons from an entangled pair are recorded at their respective (equidistant) receivers, is 0.003² ≈ 10⁻⁵ for the stated assumptions. Assuming a source clock rate of 1 GHz and a corresponding pair production probability of 1%, the rate of success is about 100 transmitted photon pairs per second. If the source telescope diameter is reduced to 22 cm and the receiver telescope diameters are reduced to 1.0 m, the success rate drops to 0.03 transmitted photon pairs per second; the time to measure 500 events would then increase from about 5 seconds to 4.5 hours, the latter of which might require integration over multiple orbital passes. Per Fig. 19, with 500 successful measurement events a statistical confidence of 3σ is achieved for a purity factor p ≥ 0.85. Assuming a source fidelity of 0.90, Eq. (64) then constrains the noise probability, t_R · N_noise, to be less than 0.06 over the measurement interval. The critical point in evaluating this trade study is that t_R is much less than the integration period. Using the upper limit t_R ≈ 1/(0.003 · 1 GHz) = 333 ns, the requirement of purity factor p > 0.85 is satisfied for N_noise < 170 kcps. A high σ value provides confidence in the result of the test against local hidden variables. Measuring a higher-magnitude S parameter is valuable to allow direct comparisons with quantum predictions, possibly excluding alternative models [121,122].

It is worth noting that, e.g., through the 5-s interval of the experiment, roughly 500 signal counts are resolved against 800k noise events through the application of temporal filtering: count events occurring outside the expected time-of-arrival bins of signal events are excluded from further analysis. This technique requires time synchronization between source and receiver, conveniently expressed in units of picoseconds of drift per second of integration. In the case requiring 5 s of integration, the requirement is to maintain temporal synchronization between source and receiver such that the net drift is less than 100 ps/5 s = 20 ps/s. In the link scenario requiring 4.5 hours of total integration, the residual drift is required to be less than 100 ps/4.5 hr ≈ 6 fs/s; although this sounds very low indeed, one would incorporate active locking to stay synchronized, e.g., by distributing a classical clock signal. These figures can be further partitioned into phase noise, long-term stability, and time-transfer requirements for the local clock systems on each network node of the Bell test.
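The chain of numbers in this trade study can be reproduced from the stated one-channel efficiency; the sketch below takes the quoted 0.003 as given rather than re-deriving Eq. (38), and small differences from the quoted figures reflect rounding in the text.

```python
# Numerical walk-through of the Bell-test link budget described above.
f_clock, p_pair = 1e9, 0.01          # source clock rate (Hz) and pair probability
eta1 = 0.003                         # quoted one-channel link efficiency
eta2 = eta1 ** 2                     # two-channel efficiency, ~1e-5
rate = f_clock * p_pair * eta2       # detected pair rate, ~100 /s
t_500 = 500 / rate                   # time to accumulate 500 events, ~5 s
t_R = 1 / (eta1 * f_clock)           # upper limit on coincidence window, 333 ns
n_noise = 0.06 / t_R                 # purity p > 0.85 needs t_R * N_noise < 0.06
drift = 100e-12 / t_500              # allowed residual clock drift
print(f"pair rate {rate:.0f}/s, 500 events in {t_500:.1f} s")
print(f"t_R = {t_R*1e9:.0f} ns -> N_noise < {n_noise/1e3:.0f} kcps")   # ~170-180 kcps
print(f"sync drift < {drift*1e12:.0f} ps per second of integration")   # ~20 ps/s
```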
We consider a 1/2 Earth-Moon baseline for the human-decision Bell tests. In this scenario, λ = 780 nm, D_Tx = 1.0 m, D_Rx = 1.0 m, M² = 1.05, and η_x = 0.1. The low efficiency (10-100 photons per hour, depending on other link assumptions) drives a further requirement of multiplexing the source to improve the transmitted signal photon count rate to 2-5 photon counts per second (about the maximum rate that astronauts involved in a human Bell test could reasonably accommodate using an appropriate interface). While multiplexed entangled photon pair sources are an active area of research [123,124], the high-order multiplexing required to perform the human-decision Bell tests should be considered a new area of research. The overall efficiency of the human-decision Bell test could be increased by using an even larger aperture receiver on the Earth side. If an 8.3-m effective aperture is used (such as NASA's RF-Optical Hybrid telescope [125,126]), the net photon-pair rate improves to 0.2 counts per second. Without requiring source multiplexing, the detector dark noise requirement to achieve a 3σ result is 0.095 noise counts per second. Such detector performance was realized in [127], with a corresponding detection efficiency of 0.75.

Bell tests: summary

The proposed Bell test measurement scenarios will test the assumption that there are no local hidden variable processes at work between inertial frames or across long baselines. Executing highly statistically significant tests across planetary baselines is possible but challenging using existing technologies. The most ambitious of the human-decision Bell tests require the development of multiplexed source technology and a commitment to deploying large, diffraction-limited telescopes with exceptionally low-noise detection systems in high-Earth and lunar orbits; they would further benefit from ground-based, large-diameter telescope infrastructure. Performing Bell tests with high statistical confidence in the low-to-mid orbital regimes would be possible using a 10⁸ pair-per-second source of high-fidelity entangled photon pairs.

Long-baseline quantum teleportation

The third pillar of DSQL science is to perform quantum teleportation, which has no classical counterpart, over long distances in space, thereby acting as a pathfinder for future quantum communication networks as well as a testbed for studying the interplay between quantum entanglement and gravity. Planned and operational space-based quantum optics experiments, most notably Micius [128], use long-baseline quantum teleportation to test a basic assumption of quantum mechanics: that the quantum correlations of entangled photon pairs, shielded from the environment, are maintained across any baseline. The overarching goal of the proposed DSQL quantum teleportation experiments is to test this assumption across ever-longer baselines and between inertial frames. Micius demonstrated distribution of entangled photon pairs across a 1200-km baseline and successful teleportation of a photon on an uplink from Earth to space over a baseline of up to 1400 km [128].

How long should a new quantum teleportation baseline be to advance the art? We consider four phenomenological benchmark distances. First, in the context of a future network of quantum sensors coupled together using a teleportation swapping system (as described in Refs. [89,129,130]), a global-scale network will require teleportation to function across a global baseline. The first baseline benchmark is thus the Earth's diameter.
The second benchmark range corresponds to the distance between a geostationary spacecraft and the Earth's surface. This range is of practical importance: if future technology development improves the rate of usable quantum entanglement distribution, geostationary spacecraft could serve a valuable role in tomorrow's quantum networks [1]. High-clock-rate entangled photon pair generation, coupled with high-timing-resolution photon time-of-arrival detection, are necessary tools to uncover relativistic effects in quantum measurement. As these timing parameters improve, the departure of the gravitational model of the long-baseline link from Newtonian to Schwarzschild, and from Schwarzschild to multi-body, becomes measurable. Hence, the third benchmark baseline for quantum teleportation occurs where the experimental timing resolution provides sensitivity to multi-body gravitational effects. Roughly speaking, this threshold is reached at the first Lagrange point of the Earth-Moon system. The final benchmark for teleportation corresponds to the maximum range available given state-of-the-art technology, using the simple link expression described in Appendix A. Currently, this range is on the order of the Earth-Moon mean orbital distance.

Testing teleportation between inertial frames is the other, equally important motivation for this proposed set of DSQL experiments. Generally, the available timing performance of the system must be high enough to allow sensitivity to these inertial effects. In analogy to benchmarking quantum teleportation baselines against phenomenological thresholds, the sensitivity to time dilation between frames is benchmarked against timing resolution. The first benchmark is reached when the total predicted time dilation of a given experiment exceeds the available timing resolution of the measurement apparatus. The second benchmark is reached when the contributions to time dilation from relative velocity (special relativistic effects) and from the gravitational-potential difference (general relativistic effects) are each greater than the system timing precision. This presents a logical path towards driving future experiments with ever-improving system timing performance, enabling sensitivity to higher-order general relativistic effects such as frame dragging, the Shapiro time delay, and gravitational deflection.
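As an illustration of the first timing benchmark, the weak-field estimate below compares clock rates on circular orbits against a ground clock; the formula dτ/dt ≈ 1 − GM/(rc²) − v²/(2c²) and the parameter values are textbook inputs, not DSQL design numbers.

```python
import math

# Illustrative estimate: fractional time dilation of a clock on a circular
# orbit relative to a ground clock, in the weak-field approximation
# d(tau)/dt ~ 1 - GM/(r c^2) - v^2/(2 c^2).
G, M, R_E, c = 6.674e-11, 5.972e24, 6.371e6, 299_792_458.0

def rate(r, v):
    return 1 - G * M / (r * c**2) - v**2 / (2 * c**2)

def offset(alt):
    r = R_E + alt
    v = math.sqrt(G * M / r)                # circular orbital speed
    return rate(r, v) - rate(R_E, 465.0)    # vs an equatorial ground clock

for name, alt in [("LEO 500 km", 5e5), ("GEO", 3.5786e7)]:
    d = offset(alt)
    print(f"{name}: {d:+.2e} fractional -> {d*86400*1e6:+.1f} us/day")
```

Fractional offsets of order 10⁻¹⁰ accumulate to tens of microseconds per day (slow for LEO, fast for GEO), far above modern photon-arrival timing resolution; resolving the velocity and potential terms separately sets the second benchmark.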
The BSM will project Bob's part of the state onto one of four different Figure 22 Scheme of quantum teleportation [131]. An unknown input state is transferred with perfect fidelity using a combination of distributed entanglement created in an entangled photon source (EPS) and a classical signal conveying the outcome of the Bell-state measurement (BSM). The final step is a unitary operation (U), which is contingent upon the BSM result, and is applied to the entangled twin particle possible states, depending on the BSM result . 18 Alice communicates her BSM result to Bob, who uses this information to suitably rotate his state, thereby recovering the original unknown input state. 19 Quantum teleportation also applies to the transfer of an entangled particle, arguably the ultimate unknown state, a protocol called entanglement swapping [140,141], which is critical for quantum repeaters [142]. Teleportation is not only highly interesting from a fundamental perspective, but also a crucial concept for multi-user quantum networks which could be used for secure communications, interlinking quantum computers, or distributed quantum sensors. The ultimate applications of teleportation or entanglement swapping will use quantum memories with ultra-long storage times (hours, days, or even years). For instance, a space craft could carry a register of stored, entangled quantum bits (qubit), and gradually use them up for quantum communication tasks such as quantum networking, secure communications or super-dense-coding 20 proposed by Bennett and Wiesner [143]. However, while in principle a quantum memory could store information indefinitely, current quantum memory systems typically operate with storage times of order milliseconds or less; see Sect. 2.3.4. Standard quantum theory places no bound on the distance over which entanglement or teleportation may be accomplished. However, the required classical and entanglement channels impose limits on a teleportation protocol. First, the transfer speed for quantum teleportation is limited to luminal signaling due to the classical channel: even if the entangled "receiving" particle is already at its destination, the correct input state can only 18 Note that with linear optical elements (and no ancilla photons), the projection is not perfect: only 2 of the 4 Bell states give unambiguous experimental signatures, the other 2 giving the same signature as each other [134]. As a consequence, all photonic teleportation experiments to date have been limited to an efficiency of 50%. Adding extra single-photon input states to the quantum circuit can boost this efficiency [135][136][137], and complete (i.e., 100% efficient) Bell-state analysis can be achieved using various matter-based qubits. 19 Note that neither Alice nor Bob obtain any knowledge on the input state at any time, and the final unitary transformation depends only on BSM result, not the input state. The 'original' particle loses all quantum information during this process, implying that teleportation does not permit creating a copy of the original state -in accordance with the no-cloning theorem in quantum physics [138,139]. 20 Superdense coding allows one to transmit up to twice the normal amount of information on a successfully transmitted photon. 
State-of-the-art long-range teleportation

Long-range teleportation was first achieved outside a single laboratory in 2002, when a signal was teleported between two stations separated by 55 m using optical fiber [144]. In 2003, the first long-range teleportation with an active unitary operation at the receiver was demonstrated over 500 m [145]: the entangled photon was sent through optical fiber at 2/3 the speed of light, while the BSM result was radioed above ground and "overtook" the entangled photon, arriving in time for an electro-optic modulator to rapidly apply the correct unitary operation. In subsequent years, quantum teleportation was demonstrated over increasing distances, including demonstrations over 100 km [146] and over 144 km [147], and in 2017 from ground to space [128], demonstrating the teleportation of independent single-photon qubits from the ground to a low-Earth-orbit satellite, through an uplink channel, over distances of up to 1400 kilometres. These demonstrations represent major advances, yet all photons involved in the experiments were generated by the same laser pulse on the same optical bench, and only after their creation was the receiver photon transmitted over a large distance; i.e., the actual entangling BSM operation occurred while all the photons were technically still in, or very close to, the original lab, which would largely defeat the purpose in a practical quantum networking application.

Like teleportation, entanglement swapping requires photons generated from independent sources, which is experimentally much more challenging than simply producing entangled pairs, because the different photons must be spectrally and temporally indistinguishable in order to achieve a high-quality BSM, which is based on two-photon interference. Typically, entangled photons have a temporal coherence of ≈200-500 fs, which is right at the limit of laser synchronization. A first demonstration of two-photon interference using two synchronized femtosecond lasers was reported by Kaltenbaek et al. in 2006 [148], and entanglement swapping was shown in 2009 [149]. Another method to realize truly independent optical sources uses two entangled photon sources operated with continuous-wave lasers and very narrow-band filtered photons [150], with associated coherence times of several hundred picoseconds, longer than the timing resolution of the detectors. A particularly promising approach is to generate entangled photons inside high-finesse optical resonators; such sources are intrinsically narrow-band and do not require frequency filtering, which otherwise severely reduces the achievable rates. For example, photon pairs were generated in lithium niobate whispering-gallery resonators with coherence times tunable roughly between 10 and 20 ns [151].
Furthermore, these sources can be engineered to match the wavelengths and bandwidths of atomic transitions [152], an important factor for the implementation of a quantum repeater.^21

Another important aspect of long-range quantum teleportation is the fidelity reduction due to the emission statistics of typical realistic photon sources, including entangled photon sources based on spontaneous parametric down-conversion (SPDC) [153,154] and four-wave mixing [155,156]. Here the thermal statistics of the source constrain the probability of creating exactly one photon pair in a pulse to be ≤ 1/4 (for a thermal distribution the single-pair probability μ/(1 + μ)² peaks at 1/4 when the mean pair number μ = 1); the empty pulses lead to inefficiency, while pulses with two or more pairs lead to noise. One method to ameliorate this problem uses multiplexing [123]. Alternatively, the Jennewein group proposed in 2013 [157] that quantum teleportation implemented with single emitters (e.g., quantum dots) could greatly improve teleportation fidelity for ground and space links. The technical challenges around such emitters make this approach difficult to implement; however, high-efficiency coupling of photons from quantum dot sources into optical fibers has recently been realized [158], as has the generation (though not yet efficient extraction) of high-quality polarization-entangled photon pairs [159,160].

Teleportation in the Earth-Moon system

Expanding quantum teleportation and entanglement swapping over large distance scales would demonstrate truly quantum communication protocols at unprecedented scales and provide crucial insights into the validity of quantum mechanics, leading the path towards deep-space quantum networking and quantum computing. As stated above, long-range "passive" teleportation [161] from ground to space was accomplished with the 2017 Micius mission [128], transferring one of the entangled photons from an SPDC source at a ground station at very high elevation (around 5000 m) to a receiver on board Micius, at around 600 km altitude. With the DSQL we want to extend this range and perform quantum teleportation experiments over the Earth-Moon distance by, for instance, connecting the International Space Station (ISS) and the Lunar Gateway (LG) [9,162]. While atmospheric photon scattering can be avoided in outer space, the photon losses due to the diffraction of optical beams traversing such a distance will be challenging (see Appendix A). Furthermore, the travel time of a light signal from the Earth's surface to the Moon is about 1.3 seconds. This implies that, to complete the protocol of quantum teleportation from Alice on the ISS to Bob on the LG, the quantum state carried by Bob's photon, entangled with Alice's, must be kept for longer than a time scale of order 1 second in Bob's quantum memory while waiting for the final operation (assuming that Bob already possessed his half of the entangled state before Alice made her Bell state measurement, i.e., assuming that the entanglement was pre-shared as shown in the original picture of quantum teleportation (Fig. 22), which has not been the case for most teleportation experiments to date). Fortunately, this may be achievable with emerging technology [163].
In conventional ground-based experiments requiring the transmission of multiple entangled photons (e.g., for a Bell test, or for quantum teleportation verified by full quantum state tomography on a large ensemble of systems), because of the limited spatial separation of the transmitter and receiver, the late events in each agent's worldline would be inside the future light cones of the early events of the other agent's worldline (e.g., [97,98,101]), and thus could causally depend on the outcomes and settings of the early events (e.g., the events M_B and M_A in Fig. 23 (left)), potentially opening up a "memory loophole" [164]. The O(1)-second travel time of light signals along our long baseline offers the possibility of performing sufficiently many resolvable runs within this travel time, so that a whole set of outcomes sampled by one agent for ensemble averaging can each be spacelike separated from the measurement events in the same period by the other agent (Fig. 23 (middle) and (right)). To achieve this, however, the photon emission rate of the source of the entangled photon pairs has to be large enough to compensate for the high transmission loss of photons over this large length scale. In the Bell tests this will eliminate the two-sided memory of the early measurements by the other agent, thereby closing the memory loophole [164] without the need to suppress it by performing sufficiently many runs of an experiment with memory [165,166].

Figure 23 In a conventional Bell test experiment (above-left), a series of identical processes is carried out while a later measurement event can be in the future light cone of earlier events (e.g., M_B and M_A). Between the ISS and the LG we may be able to achieve the Bell test (above-right) and incomplete quantum teleportation (below) with all of Alice's measurements M_A spacelike separated from all of Bob's M_B in the period of sampling.

The long-term quantum memories that Alice and Bob would ideally use to store their quantum states may experience additional effects, which can be treated independently as interactions with their respective environments at non-zero temperatures. For example, if Alice's quantum memory on the (accelerating) ISS is coupled to the vacuum state of quantum fields defined with respect to the Earth, it will experience the Unruh effect [167], seeing an effective temperature due to the acceleration; however, for the ISS acceleration this temperature is only ∼4 × 10⁻²⁰ K, which is much lower than the temperature of the ambient environment and thus negligible. With a smaller acceleration, Bob's quantum memory will see an even lower Unruh temperature in the vacuum state of the fields. Since the coupling between photons and gravitational waves is extremely weak, the gravitational effects on photons in quantum optical experiments at the ISS-LG scale are mainly those for electromagnetic fields in a fixed spacetime background: 1) the gravitational redshift (Δλ/λ₀ ∼ 10⁻⁹), which can be comparable to the transverse Doppler shift, and 2) the Wigner rotation of polarization, where the gravitational field provides a classical background [40,41,168,169]. These are negligible compared to the similar effects due to relative motion, corresponding to the radial Doppler shift (Δλ/λ₀ ∼ 10⁻⁵), which can be suppressed by executing the experiments during periods when the relative radial motion is minimal, or by dynamically correcting according to reference laser beams from the photon sources [128]. Further details can be found in Sect. 2.1, where these effects are thoroughly discussed.
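Two of the magnitudes quoted above can be checked with one-line estimates; the ISS centripetal acceleration used below is our own nominal input.

```python
import math

# Order-of-magnitude checks for the environmental effects quoted above.
hbar, kB, c = 1.0546e-34, 1.3807e-23, 299_792_458.0
a_iss = 8.7                                    # m/s^2, ISS centripetal acceleration
T_unruh = hbar * a_iss / (2 * math.pi * c * kB)
print(f"Unruh temperature: {T_unruh:.1e} K")   # ~4e-20 K, as quoted

# Gravitational shift between ISS altitude (~400 km) and the lunar distance:
G, M, R_E = 6.674e-11, 5.972e24, 6.371e6
dphi = G * M / (R_E + 4e5) - G * M / 3.844e8   # potential difference
print(f"gravitational dlambda/lambda ~ {dphi / c**2:.1e}")   # ~1e-9, as quoted
```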
In the passive teleportation [161] achieved by the Micius mission, the operation that Bob is supposed to perform according to the classical signal from Alice is carried out not physically but virtually, via data analysis [128], to obtain the fidelity of the quantum teleportation. In this approach Bob does not need to maintain the quantum coherence of his photon or quantum memory until he receives Alice's classical signals. To obtain the fidelity more efficiently, Bob can perform the measurement immediately [M_B in gray in Fig. 24 (above-left)], randomly choosing which measurement to perform on different photons in the ensemble. Bob can even perform the measurement on B before Alice's joint measurement on A and C in the bookkeeper coordinates. In this case, Alice can also make a "delayed choice" on whether to perform the joint measurement on A and C [170].

Entanglement swapping is more difficult to achieve than a Bell test or teleportation because more participating photons and observers are involved. In particular, both Alice and Bob may store part of their states [carried by A (B) of the entangled Aa-pair (Bb-pair) in Fig. 24] in a local memory until they perform their local measurements. In each run, a joint measurement on two photons [a and b in Fig. 24 (right)], each belonging to an entangled photon pair produced by Alice or Bob, may be performed by Diana.^22

With potentially all parties (Alice, Bob, Charlie/Diana) in high relative motion with respect to each other, it is important to consider the frame dependence, which is innate to canonical quantization, where one first chooses a coordinate system and specifies the time coordinate. One then writes down the Hamiltonian and the Schrödinger equation, and assigns the quantum state evolved accordingly. Note that simultaneity is relative, and quantum states in different reference frames are in general incommensurate when their associated time slices are different. Suppose Charlie in our quantum teleportation experiment and Diana in our entanglement swapping experiment are placed on a transfer vehicle going from the Earth to the Moon; two separate events on the ISS and the LG that are considered simultaneous by Alice on the ISS (or Bob on the LG) will not occur at the same time when perceived on the transfer vehicle: for Charlie and Diana the LG event would occur before the ISS event. If the transfer vehicle is instead returning from the Moon to the Earth, the same events will have the opposite time order, the event on the ISS occurring first according to Charlie and Diana. The time coordinate of each event in the transfer vehicle's frame is determined by radar signals, namely, after the transfer vehicle receives the echo of the radar signal emitted earlier by itself [104]. In Fig. 24 the green dashed lines represent the time-slices in the reference frame of a transfer vehicle moving from the Earth (Alice) to the Moon (Bob), while the blue lines represent the time-slices in the reference frame of a bookkeeper who is roughly at rest with respect to both Alice and Bob.

22. Just like the incomplete passive quantum teleportation, the time order of the measurements by Alice, Bob, and Diana in different bookkeepers' coordinates can differ once they are space-like separated. In particular, the decision to perform the joint measurement or not can even be made by Diana later than both local measurements by Alice and Bob in the bookkeeper's coordinates, so that the quantum entanglement of Alice's and Bob's photons appears to be determined after the fact by Diana's delayed choice, as seen in that reference frame [171].
Figure 24 Spacetime diagrams of the Bell test (a), quantum teleportation (b) and delayed-choice entanglement swapping, where Diana may choose to perform a joint measurement (c) or not (d). Here, "×" and "•" represent local measurement and operation events, respectively. The black dotted lines and thick solid lines represent the worldlines of the participating quantum objects (Alice) to the Moon (Bob), while the blue lines represent the time-slices in the reference frame of a bookkeeper who is roughly at rest both for Alice and Bob. Flight quantum memory Quantum memories are essential ingredients for implementing large-scale quantum networks or long-distance quantum communication channels, where they are critical for quantum repeaters enhancing the transmission range. In addition, quantum memories based on atomic or solid-state systems enable fundamental physics research, like testing atomic teleportation (see Sects. 2.3.1 and 2.3.3) and performing loophole-free Bell tests (see Sect. 2.2) over long distances. In all these applications the role of the quantum memory is to store single or multiple entangled photons in a long-lived state and to retrieve them reliably on demand. Throughout the last two decades there has been an enormous effort in developing and improving quantum memories for photonic qubits, relying on a variety of physical systems and concepts [161,172,173]. Most systems studied today can be assigned to one of the following categories: rare-Earth-ion doped solids, color centers in diamonds, crystalline solids, hot and cold atomic vapors, molecules, and switchable optical delay lines. The performance of different quantum memories can be compared by characteristic properties like the efficiency to retrieve a photon when requested, the fidelity that the retrieved photon is in the same state as the previously stored one, and the storage time. Other key parameters include the repetition rate and the ability to store multiple photons simultaneously. Furthermore, when coupling to an optical fiber or transmitting device is required, the wavelength and mode structure of the stored and retrieved photons plays an important role. In general, most systems show good performance in one or more of these aspects but have limitations in others, so that the choice of the optimal quantum memory strongly depends on the planned application. In order to support complete atomic state teleportation and Bell tests covering the distance between the Earth and the Moon, the DSQL requires a quantum memory with storage times longer than one second. Since free-space transmission is used, the wavelength and mode structure have to be compatible with the available transceivers (or suitable wavelength/bandwidth converters must be used). The requirements on the efficiency, fidelity and repetition rate depend on the details of the respective protocols and are specified in the corresponding sections. Moreover, size, weight, and power (SWaP) are limiting resources aboard any spacecraft and have to be accounted for when choosing a quantum memory platform for the DSQL. Due to the required long storage times, quantum memories based on molecules and crystalline solids (including semiconductor quantum dots) are currently not suited to being used for experiments spanning the Earth-Moon distance. However, systems employing rare-Earth-ion doped solids, diamond color centers, and atomic vapors are promising candidates and will be briefly discussed in the following. 
For a more detailed comparison we refer to the excellent review articles on this topic [172,173]. Rare Earth ion-doped solids combine long coherence times and good optical access to collective electronic and nuclear spins. The energy difference between the ground and excited state of the memory is typically in the low MHz range. Experiments have demonstrated coherence times of the order of one second [174] up to one minute [175], and even of several hours [176]. With 167 Er 3+ :Y 2 SiO 5 there is also a material available that operates close to the telecom bandwidth [163]. One downside of these systems is the need for cryogenic cooling in the regime of 1-4 K, which could limit its implementation in space missions. However, first steps towards space-compatible cryostats have been made [177]. Vacancy centers in diamond enable the storage of qubits in single electron and nuclear spins, with the latter providing storage times of up to one second at room temperature [178], or even one minute within cryostats [179]. In these systems neighboring spins can interact with each other, allowing for multi-qubit storage and the two-qubit operations [179,180] necessary for advanced quantum repeater applications. In most experiments either neutral or negatively charged nitrogen or silicon are used to create the defect cen-ters, leading to optical wavelengths between 700 and 750 nm and frequency differences ranging from 100 s of kHz to a few MHz. In order to address single spins the photoncoupling typically needs to be enhanced with resonators and cavities [181][182][183]. Atomic vapors made of alkali metals provide large optical depths even at room temperature and are therefore well-suited for photonic quantum memories [184]. The qubit is stored in the collective excitation of the atoms with energy differences of several GHz between the ground and the excited state of the memory. Since the lifetime is mainly limited by atomic motion -the photonic state is mapped onto the distributed states of the atoms at the particular locations when the photon was absorbed -cooling the atomic ensemble and employing dipole traps or optical lattices can improve the storage properties, enabling lifetimes of one second [185] and beyond [186]. In addition, mode matching is a crucial step for atomic vapor systems and can be enhanced by employing cavities [187] or by placing the atomic cloud inside of nanofibers [188]. Fundamental atomic physics experiments generating Bose-Einstein condensates have been realized on a sounding rocket [189] and on the ISS [190] demonstrating the general feasibility of such an apparatus in space. Teleportation mission design The procedure outlined in Appendix A is applied to the quantum teleportation process, characterized by the resultant fidelity of the teleported state compared to the initial qubit state, as determined by state tomography [191], for an initial entangled resource of the formρ Using maximum likelihood estimation techniques, 23 we calculate the resultant fidelity of the teleportation state in the presence of loss and noise events; see Fig. 25. The results are an average over 10 tomographies per data point, with the counts sampled from a Poissonian distribution to take into account normal counting noise. The Bell tests described above require up to two simultaneous optical channels. The single-channel link efficiency expression in Equation (38) characterizes the one-way losses of a teleportation experiment. 
Figure 25 represents the fidelity of quantum tomography as a function of the noise parameter (x-axis) and the number of successful measurement events (y-axis). In analogy to the derivation of the instrument requirements for the Bell tests, we start by deciding what tomographic fidelity is required to meet experimental goals, then derive instrument requirements from the corresponding count rate and noise parameter for a successful demonstration of long-baseline quantum teleportation. For example, achieving a quantum tomography fidelity of 0.90 requires, e.g., a noise parameter of 0.95 and signal counts in excess of 1700. Consider an F clock = 1-GHz clock rate source generating entangled photon pairs at 810 nm with P(1) = 1% pair production per pulse efficiency, used to close the link between a 0.5-m transmitter aperture and a 1.0-m receiver aperture across the baseline of the Earth's diameter, with 10 dB of additional losses assumed contributing to the net link efficiency (see Appendix A). In this configuration, roughly 250 events per second are expected, requiring 6.8 s of integration to obtain the desired counting statistics. Achieving a purity in excess of 0.95 means that for every 20 signal 23 In this specific example, Bayesian estimation did not provide a meaningful benefit to calculation time. Figure 25 The fidelity of simulated tomographies with an input state being the Werner state in Eq. (37), and the target state being a maximally entangled state. The x-axis is parameter p (p = 1 implies all signal, while p = 0 implies all noise) and the y-axis is the total number of successful count events (i.e., for all measurements) integrated over the measurement duration. The colorscale ranges from a tomographic fidelity of 0 to 1. Tomographic fidelities greater than 0.66 are only possible through quantum correlations events, there is at most 1 noise event. Accounting for 2-fold and 4-fold noise counts in a detection system with 500 ps resolution, the required noise count rate is about N noise = 11.25 noise events per second, commensurate with the capabilities of state-of-the-art detector systems [127], with a photon-time-of-arrival resolution t R (and residual timing synchronization error) less than or equal to the optical pulse width of signal photons. Increasing the aperture sizes to e.g., 0.5 m and 3.5 m, results in a higher flux of signal photons and relaxes the noise requirement commensurately. Quantum teleportation: summary The experiments propose to perform a completely quantum mechanical process-the teleportation of the quantum state of one photon to another-in a regime where relativistic effects impact the results. The DSQL will empirically test whether teleportation across long-range links between inertial frames is successful, as predicted by standard theory. Successful demonstration of the quantum teleportation experiments described in this Section will thus provide critical empirical justification for what are currently untested assumptions of QFTCST in the weak-field regime. These experimental regimes are not otherwise achievable in laboratory analog experiments, and truly require spacecraft links. Using one or more quantum memories may enhance the teleportation system performance. 
The key figures of merit of a quantum memory are its bandwidth and wavelength, which should be compatible with the signal photons; its read and write efficiency, which need to be high to avoid introducing more loss to an already lossy channel; its coherence time and storage time, which are linked to the efficiencies and need to be of comparable magnitude to the time of flight between nodes in the network (or the round-trip light time between two nodes); and the number of storable modes, which needs to be high given the high clock rates and low link efficiencies of the long baseline channels. Furthermore, the quantum memory modes should be individually addressable and exhibit continuous read-out capability. Ultimately, the performance parameters of a space-qualified quantum memory that would enhance a proposed DSQL experiment are beyond the current state of the art. Ground station quantum memory systems are marginally more mature. While no specific implementation plan is proposed at this time, the general philosophy of continuing engineering design of the DSQL mission is to ensure system compatibility with future, ground-based quantum memory systems. Potential applications of squeezed light Squeezed states of light are the quantum states that offer a reduced quantum uncertainty in one quadrature of the electromagnetic mode phase space (x, p) while having an increased uncertainty in the conjugate quadrature. If the area in the phase space representing the squeezed state retains the minimal values, the same as for the vacuum or coherent state (such minimum-uncertainty states are sometimes called the intelligent states), then the squeezed state is said to have a unity purity and it remains a pure quantum state. If, however, the phase space area is increased, i.e., the anti-squeezing exceeds the squeezing, as is often the case in experiment, the squeezed state is (partially) mixed, and is characterized by a purity value below unity. The purity is therefore an important parameter in the context of squeezed states applications in quantum information processing. One particularly important example of squeezed states is the so-called squeezed vacuum. This is the state centered on the phase-space diagram so that x = p = 0, which means that its Fock representation contains only even photon-number states |n . In spite of having a zero mean field, the squeezed vacuum carries finite optical energy, which is uniquely determined by the degree of squeezing. It should be noted that squeezed states are fragile quantum states that decohere quickly under loss or other external coupling. Usually squeezed states are measured with continuous variable measurement techniques, i.e., by measuring the continuous spectrum of the electromagnetic field variables. The field variables are typically measured by optical homodyne or heterodyne detection, while the amplitude squeezing can be measured by directly observing the reduced optical power fluctuations. The homodyne detection is mode-selective, i.e., it measures the optical mode defined by a reference beam (the local oscillator). Hence, any changes in the classical mode structure of a beam travelling long distances (e.g., red shifts or change of bandwidth or polarization) will be noticed as decoherence leading to a reduced interferometric visibility. By systematic modification of the reference beam, one can deduce a change in the classical mode structure. 
This technique will enable one to distinguish between various effects that may act on the optical fields along their path in space, and effects that are acting on the field excitation, e.g., the photon statistics. Squeezed states of light have been thoroughly investigated in the context of sensitive interferometric measurements surpassing the shot-noise limit. The first such application of squeezed vacuum was done back in 1987 [192]. A very prominent modern demonstration of this technology geared for gravitational wave detection was performed in the context of the LIGO project [193]. In this work a 10-dB squeezed vacuum source was coupled to the dark port of the LISA interferometer, leading to a 3.5-dB noise suppression below the shot noise level. This limited noise suppression is due to the loss in the optical system -a technical problem that needs to be mitigated in all squeezed light applications. More recently both the LIGO and VIRGO projects have demonstrated sensitivity enhancements using squeezed light [194,195]. Squeezed states and closely related continuous-variable (CV) entangled states of light have been found useful in the area of quantum information processing. For example, these states can serve as a building block for linear quantum computation [196,197]. The quantum network applications include CV quantum key distribution protocols [198][199][200], CV quantum teleportation [147,201,202] and entanglement swapping [203,204]. The CV quantum teleportation typically has higher efficiency (albeit lower fidelity) than the discrete-value quantum teleportation, and may become the protocol of choice in situations where the efficiency is a stretched resource (e.g., over large distances). 24 Another interesting aspect of quantum teleportation, also highly relevant to space applications, is that it can be achieved not only in bipartite but also in tripartite systems, in which case Bob receives classical communications both from Alice and a third party [204]. There may be an even larger number of communicating parties, which presents an opportunity for building a quantum network and implementing various multipartite quantum communication protocols. Spectroscopic and photochemical applications rely on the fact that squeezed light has unusual two-photon absorption properties. It is predicted that one can achieve a linear (rather than quadratic) dependence of the absorption rate on the optical intensity for weak fields, significantly different absorption rates for phase-and amplitude-squeezed beams of the same power, and the possibility of a decreasing absorption rate with increasing intensity [213,214]. Conversely, the enhanced intensity fluctuations of an anti-squeezed state can enhance the two-photon absorption compared to coherent or thermal light [213,215]. Calibration of photo-detectors is enabled by strong correlation of the intensity fluctuations in two-mode squeezed optical beams. Originally proposed by D.N. Klyshko, and often associated with his name, this method uses two photon states (e.g., from SPDC) to calibrate a pair of photon-counting detectors [216]: Treating the detection of one of the photons as a herald guarantees the presence of the other photon directed to the detector being calibrated: after subtracting noise counts, the detection efficiency is simply the coincidence rate divided by the singles rate at the heralding detector (whose efficiency then cancels out) [217][218][219]. 
The method now has been generalized to a pair of analog detectors [211,212], and even to a CCD array [220], in which cases the two-photon light source is replaced by a two-mode squeezed light source, and the photocounts are replaced by the photocurrents' fluctuations. Finally, squeezed states can potentially act as "probe states". Here the idea is that because of the reduced intrinsic noise, any signature imprinted on such states can be recovered with better fidelity. In the DSQL settings, this may help determine if the fragile quantum states travel long distances and along changing gravitational fields without decoherence or dephasing, and whether it is possible to distinguish different mechanisms of decoherence. These questions could be investigated by deploying a squeezed light source on a lunar orbit, and a homodyne detection setup on a second satellite around lunar orbit (or on Earth). However, the feasibility of such a measurement depends on whether one can still detect squeezing given the low collection efficiency typical for such distances, which can be viewed as an additional high loss of the quantum link. Perhaps a positive answer can be obtained by changing the approach to the squeezed states measurement from a quantitative statistical measurement of the squeezed states properties (e.g., variance measurements) to instead merely distinguish the quantum states, i.e., asking whether a measured state is more likely to be a squeezed state (with coherence preserved) or a classical state (e.g., coherent state). A sequence of missions The experiments described in this manuscript can be achieved by a phased deployment of spacecraft and ground infrastructure. 1 Phase 1: Elliptical orbit with multiple ground stations 2 Phase 2: Spacecraft array with multiple ground stations 3 Phase 3: Lunar node with extremely large aperture ground station As indicated throughout the text, spacecraft occupying elliptical orbits are well suited for explorations of relativistic effects. Phase 1 of DSQL could involve a single spacecraft in such an orbit. The spacecraft would be outfitted with an optical payload consisting of: a pair of independently gimballed telescopes; a high-rate entangled photon pair source ; 25 a high performance single-photon detection system capable of performing photonic state tomography; a stabilized fiber optical delay line; and a reconfigurable optical switch array. The flight terminal requires exceptional pointing accuracy to leverage larger apertures for high efficiency links. Recent flight missions have demonstrated performance commensurate with the requirements [93,221,222]. A summary of key technology items is provided in Sect. 3.2 below. The Phase 1 system would enable COW tests, tests of quantum teleportation, and a subset of the Bell tests between inertial frames. An array of ground stations, potentially located around the world, could establish quantum communication links with the spacecraft in support of the experiments described here, as well as supporting new experiments and technology demonstrations by a user community. Phase 2 adds additional spacecraft to the network, in complementary elliptical orbits with longer orbital period (greater orbital semimajor axis) than the Phase 1 spacecraft. This array of spacecraft will perform COW tests at larger baselines, and allow the full range of inertial frames required to achieve the Bell tests and quantum teleportation tests. 
One or more of the spacecraft would be located at a point suitable to support a future human-decision Bell test, either in a 9-day period orbit (roughly corresponding to a "midway between Earth and moon" configuration), or in orbit about the fourth/fifth Lagrange point of the Earth-Moon system. Phase 3 of DSQL provides the capability to perform quantum optical tests well into the regime of 2-body gravitational physics, with baselines long enough to finally perform human-decision Bell tests. Astronauts and large-aperture telescopes on or near the Moon are assumed, e.g., the link analysis given in Appendix A considers a 1-m aperture, nearly diffraction-limited telescope on/near the moon. A ground system on Earth will need to be established with an extremely large aperture telescope; there are a number of development efforts underway to produce Earth-based 10-m class telescopes, e.g., JPL is engaged in the design and deployment of an 8.3-m class telescope suitable for supporting deep-space classical optical communications [223]. The ground system further requires a large collection area coupled efficiently to low-jitter single-photon detectors and readout electronics. Superconducting nanowire single-photon detectors are a technology that has demonstrated system detection efficiencies of 98% [224], photon number resolution [225,226], dark count rates below 10 -4 cps [227], and timing jitter below 3 ps [127] (though not yet all in a single device). DSQL technology challenges The DSQL experiments considered and discussed here are at the bleeding edge of what current technology can accomplish, as is evident from the photon rates estimated in Appendix A. Therefore, in addition to the scientific research, we would like to summarize some of the quantum technology developments that could immensely improve the feasibility of the proposals discussed in this article. The challenges involved in technology advances can be divided into three groups: on superconducting nanowires (SNSPD)) have achieved tremendous performance [127,[224][225][226][227]. However, integrating these devices into a space-amenable system requires further advancement in small-scale cryogenics, and advancing multi-mode optical fiber interfaces. • Large-scale space telescopes for transmitting or receiving the DSQL quantum signals could benefit from larger apertures, and novel approaches including segmented mirrors or "deployable" systems could be beneficial. For ground applications, segmented arrays of optical receivers are another option that could be considered, as the large area of a "photon bucket" may be more important than precise optical imaging. • The COW tests require sets of fiber-optic delay lines on different communications nodes. These delay lines are used in a measurement of photon phase shift induced by gravity. This effect is on the order of several waves, which requires a commensurate length stability of the local delay lines. Stabilization may be achieved through active feedback to an atomic reference. We note here that this concept for stabilizing the fiber delay line is very close to an optical clock, which may also be used as a complimentary tool to explore relativistic effects. • Precision measurement of satellite range and velocity are required. The COW tests, Bell tests between inertial frames, and tests of quantum teleportation all demand precise accounting for the range and the relative velocities between nodes. As described at length in Sect. 
2.1.2, these kinematic contributions to phase and frequency are orders of magnitude larger than the relativistic effects DSQL aims to investigate; if left uncorrected, they will dominate the measurements; advanced spacecraft ranging and velocity measurements are thus required to implement these experiments. • Time synchronization between nodes is required to perform quantum entanglement swapping and teleportation, since the two photons incident on the beam splitter are required to overlap temporally as well as spatially. The allowable time-of-arrival error is less than the optical pulsewidth. In the extreme case, an optical pulsewidth of 1 ps propagating over a 1.3-s light path (between the Moon and Earth) at 1-10 GHz repetition rate sets the time synchronization requirement. 3 Ground system infrastructure • Ground-based systems involving multi-meter aperture telescopes need to be adapted and interfaces for quantum subsystems developed and demonstrated. In particular, given the limited access to such facilities, a very efficient and fast DSQL system should be devised. Other applications of DSQL The instrumentation required to achieve the scientific goals described above is useful for other scientific and technical applications. For example, a local stabilized laser system is required at all nodes for the Einstein equivalence principle test. This subsystem could form the basis of an optical clock, opening the doors to classical clock-comparison experiments. Used in conjunction with the infrastructure required for the quantum teleportation experiments, the fundamental elements of a quantum network of clocks (Re. [129]) will be hosted by the DSQL. As noted above, the extremely long baseline quantum channels require extremely low noise detection systems to achieve meaningful statistical significance. The low-noise channel could be exploited in a demonstration of purely classical optical communication to achieve performance close to the asymptotic channel capacity limit [228,229]. The DSQL telescopes could be directed towards astronomical light sources, where the high-rate, single photon-sensitive receiver could be used for narrow-band, high-speed astronomy [230]. Along similar lines, the quantum state tomography system could be used to assess other astronomical sources by testing for correlations in polarized light emission. The pair of telescopes at each node could also be used for a demonstration of a quantum telescope array [89], where one telescope from each node is directed towards an astronomical target, and all telescopes are fed by coherent non-local single-photon states. Coincidence counts between different telescopes are then used to determine the coherence of the astronomical light coming to the telescopes (as a function of their baseline separations), and thereby information about the spatial distribution of the source itself. Conclusion The evolution of quantum states can be predicted using a variety of means (e.g., the Wigner function formalism), none of which are fully compliant with Lorentz invariance, as demanded by General relativity for all measurements. This fundamental discrepancy is at the heart of modern physics and motivates the body of experiments proposed in this manuscript. As stated in the introduction, QFTCST is a successful theoretical framework supported strongly by astrophysical measurement. The proposed DSQL experiments present a means of testing QFTCST in a complimentary, weak-field setting local to the Earth. 
The results of the DSQL tests will have a significant bearing on theories outside of QFTCST that express coupling between gravitation and quantum states [15,28,231]. We have proposed a set of experiments that conduct quantum optical measurements in a regime where relativistic effects are strong and measurable. The EEP tests propose to assess a hitherto untested prediction of general relativity -that quantum states of light accumulate the expected phase when propagating along geodesic paths defined by the local spacetime. The Bell tests propose to measure violation of Bell's inequality across extremely long baselines, and between relatively moving inertial frames [105]. The latency associated with the long baseline is sufficiently high to close the "free-will" loophole through the involvement of astronauts; such human-decision Bell tests have important philosophical implications as well. Psychological aspects of the "choice" presented to the astronauts must also be considered. Finally, the quantum teleportation tests will validate the prediction that quantum entanglement is maintained over the long baselines associated with proposals to establish global quantum networks [88,232]. One to four spacecraft, and one or more optical ground stations, would be required to execute some or all of the listed experiments. The technology required to execute the experiments is mostly present. Key quantum technology development areas are: high-rate, high-purity, multiplexed entangled photon pair sources; simultaneously high efficiency, low dark-noise, high count-rate single-photon detector systems; and addressable, high efficiency, high repeat-fidelity, and long storage time quantum memories. Key classical technology development areas are large diameter flight telescopes, radiation-hard high-speed read-out electronics, and modified existing optical ground stations, upgraded with the infrastructure required to close quantum optical links. The technology development, instrument development, and mission execution will benefit immensely from international cooperation and long-term strategic planning. Appendix A: DSQL system design The basic link expressions characterizing optical transmission paths are expressed in this Section. These are developed following the procedure of [233,234] and partially carried out in analogy to a previously published evaluation of low Earth orbiting satellite-enabled quantum key distribution [157,235,236]. The basic one-way link efficiency for coupling a Gaussian mode through perfectly aligned circular apertures is In Equation (38) [237], light of wavelength λ is transmitted through an aperture of size D Tx with diffraction limit factor M 2 ≥ 1. The light is directed towards a receiver aperture of diameter D Rx across a displacement vector of R. Most of the DSQL experiments are reasonably characterized using the assumption | R| πD 2 Tx /λ, i.e. the diffracted spot at the receiver is much bigger than the receiver aperture. 
In this limit Equation (38) then reduces to the ratio of the receiver aperture to the spot size formed by the transmitter telescope at the receiver plane: where the prefactor η x characterizes other loss factors: Here, η Rx characterizes the receiver efficiency (except for the detection efficiency) [234], η D is the total photon detection efficiency of the receiver, η Tx characterizes the transmitter efficiency, clipping efficiency and pointing errors [233], η atm ( R) characterizes absorption through the atmosphere (which is a strong function of horizon angle), and η margin accounts for any additional inefficiencies of the link. Equation (38) parametrically describes the link efficiency of the quantum channel in terms of the indicated instrumentation performance parameters. For single photons created at rate F clock , the received photon flux N s in units of counts per second is Similarly, if only one photon is transmitted from an entangled photon pair source operating at clock rate F clock and a per-pulse photon pair production probability p(1), the number of received photons is Figure 26 A greatly simplified orbital model to calculate integration time for quantum optical experiments. A ground station is located on Earth, which rotates at rate e . The spacecraft is in an orbit characterized by orbital altitude h, or semimajor axis a = h + R e , where R e is the Earth mean radius. The orbital frequency of the spacecraft is ω s . The ground station is limited to view θ m above the local horizon angle. The shaded area in the diagram represents the part of the orbit where line of sight is maintained, in a reference frame rotating with the Earth Finally, closing a simultaneous link to a pair of receivers, located at Z 1 and Z 2 relative to the source, has an expected total rate of successful events Note that in the limit of perfect spatial and temporal acquisition, the total number of successfully recovered photons is the integral of either Equation (42) or (43) over the time that the spacecraft maintains clear line-of-sight with the other ends of the network . 26 This can be approximated using the product of the relevant rate equation and the total integration time per orbital passage. The integration time can be extracted through geometry and numerical analysis of the various orbital configurations described in the sections above [237]. Link expressions such as Equation (43) require line of sight between the source and two other nodes, which must be satisfied simultaneously. The integration time is determined by the orbital dynamics. Our example scenario is an Earth-orbiting spacecraft closing link with a single Earth ground station, though this does not capture the diversity of spacecraft-to-spacecraft links, links between Earth and Moon orbiters, or links to multiple ground stations. Nevertheless, since many of the experiments described in this report do rely on links between ground stations on Earth and Earthorbiting satellites, it is an instructive example. This situation is depicted in Fig. 26. The total integration time T is obtained through geometry and orbital dynamics. In the regime of perturbation-free, circular orbits about a circular Earth, the integration time can be computed using Equation (44). A separate calculation estimates the total noise count rate incurred during the measurement process. 
Three sources of noise events are considered: the intrinsic dark count rate of the receiver D r ; the rate of extra photon events from the source S n , and background photons counted by the detection system B sky . The total noise count rate, N noise , in units of counts per second, is then: where FOV is the telescope linear field of view, A is the primary mirror collection area, and BW the filtering bandwidth; W is the background radiance in units of photon flux per area-solid angle per unit bandwidth. Appendix B: The human-decision bell tests in the context of free will A human-decision Bell test requires humans on both sides of the Bell test-a human "Bob" and a human "Alice". In the context of a future NASA space mission, this may involve astronauts on the International Space Station ("Bob") and astronauts on or near the surface of the Moon ("Alice, " or, perhaps "Artemis".) A local explanation only needs to predict or influence the detector settings on one side to violate Bell's Inequality and match the predictions of quantum mechanics, i.e., a local scheme can thus still violate Bell's inequality even if only one side's random number generator is perfectly free will, independent, and unpredictable. Any amount of unpredictability will suffice to show a Bell violation, but the significance of the violation depends on the predictability. While in this experiment, the human choices are not strictly required to pass tests for randomness, these free-will choices must be unpredictable to the measurement on the other side in a way that is different from all prior schemes of generating randomness for Bell tests. At minimum, this requires one of two assumptions, as we now describe. First, one can take the scientifically accepted materialist view that human choices are results of physical processes in the brain, i.e., some probabilistic combination of deterministic computation and randomness, from the external environment or internal thermal, quantum, or chaotic processes. For human (or, perhaps animal) choices to make a difference, one must assume that the complexity of the process that links physical inputs in the brain's past light cone to a decision must exceed some threshold such that the other side cannot compute, predict, or influence the decision. Alternatively, one can drop the assumption that human choices are purely results of physical processes in the brain, and instead adopt a stance like Cartesian dualism, 27 where one invokes some external non-physical mind that is somehow able to inject events into our 3 + 1 dimensional spacetime while not being part of it. This is similar to the assumption that is required to close the freedom-of-choice loophole when quantum random number generators are used in the "loophole-free" experiments cited above-a truly novel bit of information enters the world such that a 0 and 1 are both perfectly compatible with identical past light cones. The only difference here would be that somehow "will" is involved, not just "freedom". The experiment must be implemented in a way that ensures the participants feel their choices come from their own free will, while simultaneously being unpredictable; otherwise, the physical state of their brains might already be zeroing in on a choice. For example, the participants could be asked a series of questions that they did not know or even consider in advance. 
For an astronaut on the moon, it could be questions about their next series of meals-something that matters to them enough to feel they are exercising free will, but something where their answers are not predictable, e.g., "instant coffee or instant tea, " to use an astronaut version of Sam Harris' opening question in his book about free will [238]. Their answers would need to be registered and turned into polarizer settings in a way that is space-like separated from the source and the measurement on the other side. The composition and phrasing of the questions asked of the astronauts would need to be carefully considered by experts in behavioral psychology and philosophy of the mind. Other than choosing basis settings, there is a second way in which the human Bell test addresses foundational principles of quantum theory, going back to Wigner's suggestion [239] that quantum collapse could somehow be caused by conscious minds. There is barely enough time in an Earth-Moon experiment for measurement results to be shown to participants such that each becomes conscious of the results in a way that is space-like separated from the other. If collapse only happens in conscious minds, no experiment to date has actually closed the locality loophole. One may even consider moving macroscopic masses based on the measurement results, to address Penrose's suggestion [240] that collapse takes place between macroscopically distinct gravitational fields. David Hume states that the question of free will is "the most contentious question of metaphysics" [241]. An age-old discussion, beginning in the ancient Western philosophical texts of Plato and Aristotle, and continuing to the present day, the question of whether human decisions are based in genuine free will or are deterministic, is still open. Determinism is the idea that everything in the universe is determined by causal laws. This means that anything in the universe that happens at any given moment is the result of some antecedent cause. Thus, determinism maintains that there is no such thing as an uncaused event. The idea that every event is caused, is one of the fundamental presuppositions of science [242]. According to determinism, since human actions are events, no human action is uncaused, and therefore are not free, and instead are simply the result of some causal process. Additionally, determinism precludes randomness -since everything is the effect of some previous cause, nothing is truly random. If human beings can exercise free will, then humans are able to perform uncaused acts. For instance, at time t 1 , an agent S can perform either act A 1 or act A 2 . So, S's action at t 1 will determine what the world looks like after t 1 , regardless of the pre-existing conditions at t 1 [242]. An uncaused act should be unpredictable, non-hysteretical, and otherwise stochastic. There is a subtle difference between making a "knee-jerk" or instinctual reaction to some stimuli versus taking time to internally deliberate decision before action [243]. This distinction underscores the timing latency requirement in human-decision Bell tests. It also suggests alternative testing schemes with either shorter decision time intervals (forcing an instinctual decision) or longer time intervals (allowing some level of thought before decision making), or even using non-human animals as the decisionmaking agents. Free will may be non-random and predictable by the local observer themself. 
Free will is independent of the observed system, assuming that our universe consists of 1) free will of observers and 2) the observed world; both assumptions follow from causality. The question of free will versus determinism is based in the question of causality: whether every event must have a cause, or if there are events that are causally undetermined, a question that impacts the value of scientific inquiry. In this broader context, a Bell test involving human decision-making creates an empirical framework with which to assess the idea of free will, and to explore the relationship between human decision making and the causal trajectory in nature leading to the moment of decision. Appendix C: Obtaining the parametric model of the COW tests The state of the interferometer output (see Fig. 3) can be written as: |ψ = 1 2 e i(φ GR +θ) -1 |0, 1 + i e i(φ GR +θ) + 1 |1, 0 , where the expression for φ GR is given in Equation (8), and θ is an additional controllable phase that can be tuned to improve the measurement precision, by biasing to the linear part of the fringe. We can model the experimental imperfection by assuming the preparation contains our target state |ψ with probability p, and a noise photon with probability (1p). The flux of noise photons N noise has been defined in Equation (45). The parameter p can be interpreted as the experiment quality factor and can be linked to the system parameters such as timing, receiver aperture, and spectral filtering bandwidth as follows: Here (N noise t R ) is the probability of recording a count due to a noise photon within the expected coincidence time window t R , and F is the fidelity of the source, assumed here to be 0.95. We further assume all the system parameters in N noise to be fixed apart from the spectral filtering bandwidth that needs to change according to the signal photon bandwidth, so that p = p(σ ). Keeping into account the finite bandwidth σ of the photon having frequency ω 0 , the probability of having a detection at detector A is given by: where φ = φ GR + θ and p should be regarded as an overall quality factor for the experiment. A more complete analysis should also take into account error sources such as path length mismatch and attitude determination error; these as well other source of imperfection will be explored in detail elsewhere. Let M be the operator denoting a count at detector A; the error on the gravitationally induced phase shift between the interferometric arms is then given by: Where we have used τ = φ/ω 0 .The quantity M(φ) ≡ M2 -M 2 is often referred to as an "estimator" for the unknown quantity φ. Given an experiment with a certain quality factor p(σ ), we can choose the overall relative phase φ in order to minimize the previous expression. More formally, we can solve the optimization problem: The optimized phase error can then be propagated in order to obtain the error on the parameter α, characterizing violations of UGR. The result of the optimization is shown in Fig. 27, showing that φ opt is essentially constant for the range of the photon bandwidth considered as expected -as long as the interferometers are well matched, so that the path imbalance is less than the coherence length, the fringe visibility will be very high. 
Appendix D: Obtaining the parametric model for Bell tests A simple but somewhat old-fashioned way to estimate the statistical significance of a Bell test is to assume uncorrelated trials along with unbiased and uncorrelated random basis selection, though this approach does not address the detector efficiency or memory loopholes. Under these assumptions, there are two types of effects: systematic imperfections which lower the intrinsic and/or measured entanglement fidelity of the produced entangled Bell state, and purely statistical fluctuations on the measurements due to Poisson photon counting statistics. A lower measured entanglement fidelity means a CHSH parameter S that is less than the quantum mechanical maximum of 2 √ 2, but still hopefully greater than 2, the maximum value that local realism allows. Statistical fluctuations would lead to a measured value of S drawn from a distribution centered around an expectation value (proportional to the entanglement fidelity of the measured quantum state) with a standard deviation of σ . A 5-sigma result would mean that the measured value of S was 5σ above the local-realist limit of 2. Many effects can degrade the measured entanglement fidelity: intrinsic entanglement fidelity of the source, detector dark counts, sky background, and noise from multi-photon events, i.e., having photons from different entangled pairs each arrive within the coincidence window. In all of these cases, the measurement results on each side of the experiment are completely random and uncorrelated with each other. Experimentally, they produce results that are indistinguishable from those produced by a completely incoherent "mixed" state. As in the previous section, to model the mixed state as measured by the pair of detectors, we take a fraction p of a particular Bell state, say | + , and a fraction (1p) of the completely incoherent state of dimension 4. The mixed state that is actually measured, where the (1p) contribution includes both the source's intrinsic incoherence and the external noise, iŝ Given this mixed state, we will first calculate the expected value of the CHSH parameter S and its statistical uncertainty σ , assuming N total coincidence measurements. To this end, we first define an experimentally measured correlation coefficient E(a, b) as a function of measurement basis settings a and b; its value ranges from -1 to +1: where N(a, b) is the number of coincidences where Alice's photon passes through an analyzer with setting a (e.g., for polarization entanglement, a polarizing beam splitter oriented at angle a), and Bob's photon passes through an analyzer with setting b. Similarly, N(a, b ⊥ ) is the number of coincidences where Bob's photon is instead detected in the b ⊥ output of his measurement apparatus (e.g., his polarizing beam splitter). There will be 16 such coincidence measurements; the sum of all 16 counts is N := i N i . The CHSH parameter is then defined as the sum of four correlation coefficients, with the sign flipped on the coefficient with the widest separation between the measurement basis angles: For the mixed state in Eq. (51), the expected CHSH parameter S = 2 √ 2p . 28 This reaches the quantum mechanical maximum of 2 √ 2 for the pure Bell state and 0 for the completely incoherent state, where there are no correlations. 
Assuming each of the 16 counts N i is drawn from a Poisson distribution whose standard deviation σ i is √ N i , the variance of S can be calculated through propagation of error as whose expectation value for the state in (51) is 28 One could also consider state imperfections of the form p| + + | + (1 -p)(|HH HH| + |VV VV|)/2, i.e., a partially entangled state, with perfect correlations only in one basis. In this case the CHSH parameter has an expected value of S = 2 √ 2p + 2(1 -p). To claim an n · σ violation of Bell's inequality, S measured -2 > nσ . The expectation value of n, the number of σ 's of Bell violation, is then A contour plot of this expected number of σ violation as a function of Bell-state fraction p and total coincidence counts N is shown in Fig. 19. If p ≤ 1 √ 2 , no Bell violation would occur, and the result would be compatible with local realism no matter how large N is. Near this threshold, quality (high p) wins over quantity (high N ). Above this threshold, the significance of the result scales as √ N as might be expected. In practice, once the entanglement fidelity threshold is crossed, accumulating enough data does not take very long. This is qualitatively different than many other physics experiments, where one can "average down" the noise. Now we proceed to estimate p for various space scenarios, determine how many coincidences N are required to achieve a given σ -level of Bell violation, and estimate the time required to achieve this. An important parameter is t, the time window inside of which two photons will be counted as a coincidence. This should be as short as possible to avoid accidental coincidences from dark counts, sky background, or incorrectly paired entangled photons. However, it does not help for t to be shorter than the combined effect of the time resolution of the detectors, the jitter in the amount of atmospheric delay, and the timing jitter of the optical and electronic systems; otherwise, the signal of true coincidences is also reduced. The intrinsic jitter of the superconducting nanowire single-photon detectors (SNSPD) is 0.1 ps, determined in part by material parameters of the nanowire. The best reported performance is 3 ps, but more typically 30 ps for optimized nanowire and electronics. The atmospheric jitter is typically 10 ps [244]. Thus, accounting for 10-100 ps of excess atmospheric jitter captures the expected dynamics. Next we will see that this window t sets the scale for maximum source brightness and maximum allowable background. If photons (entangled or not) arrive at a rate r and are Poisson distributed, P(k photons in time window t) = (rt) k e -rt k! . For small rates such that rt 1, P(1 or more photons in time window t) = 1e -rt ≈ rt. If Alice is receiving photons at a rate r a and Bob at a rate r b , the probability of recording (though not necessarily detecting) an accidental coincidence within the small coincidence window t is P accidental = (r a t)(r b t). The rate that accidental coincidences occur is r accidental = r a r b t. Next we turn to an imperfect source of entangled photons. Without knowing more about the source itself and any potential dependence on measurement basis choices, we model the source itself as producing pure entangled photons at a rate r e along with completely incoherent photon pairs at a rate r i . For good sources, r i will be < 5% of r e . If noiseless detectors were to measure the source directly with no other background, p would be r e /(r e + r i ). 
Both of these rates are reduced by η a for Alice's link and by η b for Bob's link. noise events (Eq. (45)) and the quantum fidelity F of the source. First, the density function of a source with fidelity F = (1 + 3λ)/4 is: The receiver sums photon counts over a time interval t R , corresponding to a frameperiod, word-length, or user-defined integration window. The smallest value t R could take would be the total timing jitter of the receiver. The largest value would be the effective frame-rate of the receiver system, which would be no smaller than the inverse of the product of transmitter clock rate and single-side channel loss. The probability of counting a noise event within this period is P N = t R · N noise . A limiting case describing the worse-case effect of noise events on the measurement process is expressed in Equation . Combining Eq. (62) and Eq. (63) with Eq. (51), the noise parameter p can be reduced to source fidelity, noise count rate, and coincidence window time using
41,505.8
2021-11-30T00:00:00.000
[ "Engineering", "Physics" ]
ON RONCUS ALMISSAE N. SP., R. KRUPANJENSIS N. SP., AND R. RADJI N. SP., THREE NEW PSEUDOSCORPIONS (PSEUDOSCORPIONES, NEOBISIIDAE) FROM CROATIA AND SERBIA, RESPECTIVELY – Three new species of the pseudoscorpion genus Roncus L. Koch (Neobisiidae) are described from Croatia (from nr. Omiš, Dalmatia: R. almissae n. sp.) and Serbia (near the town of Krupanj, north-western Serbia, Lukića Pećina Cave and nr. Izvor: R. krupanjensis n. sp., and R. radji n. sp.), and their diagnostic characteristics are illustrated. Their interrelations with phenetically close congeners are analyzed; in addition, the presence/absence of microsetae proximal to the trichobothria eb and esb is established as an important taxonomic characteristic. INTRODUCTION Over the past four decades there has been a marked increase in our knowledge of the Neobisiidae of south-eastern Europe (the Balkan Peninsula), and especially of the representatives of the genus Roncus L. Koch, 1873 which occur in leaf litters, soil and caves (Ćurčić, 1988;Ćurčić et al., 2004;Harvey, 1990).Increased interest in the soil/litter and cave ecosystems and improved sampling techniques have contributed to this knowledge.During a study of postembryonic development and teratology of the pseudoscorpions in Dalmatia and Serbia, three hitherto undescribed species of Roncus were found. This paper provides descriptions of Roncus almissae n. sp., R. krupanjensis n. sp., and R. radji n. sp., with some details on the comparative morphology of both sexes. All specimens are mounted on slides in Swan's fluid (gum chloral medium) and all are deposited in the Institute of Zoology, Faculty of Biology (IZB), University of Belgrade, Belgrade, Serbia. SYSTEMATIC PART RONCUS ALMISSAE, NEW SPECIES Etymology.-After Almissa, the old Latin name of Omiš Material.-Holotype male and allotype female samples residing under stone were collected by Tonći Rađa in the village of Podašpilje, nr.Omiš, on the northern slopes of Mt.Omiška Dinara, Dalmatia, Croatia, 22 September. Galea (cheliceral spinneret) low (Figs. 8 and 15).Cheliceral palm with six setae, movable finger with one seta (both in male and female).Cheliceral dentition as in Figs. 8 (male) and 15 (female).Eight-bladed flagellum (Figs. 5 and 14); one short proximal blade and 7 longer blades distally, all blades denticulate.Apex of pedipalpal coxa with 4 long acuminate setae.Pedipalpal trochanter with a small lateral tubercle and some rare tiny and inconspicuous denticulations dorsally.Pedipalpal femur with a small exterior and lateral tubercle and with interior and dorsal granulations as in Figs. 5 (male) and 12 (female).Tibia smooth; chelal palm with tiny interior granulations or smooth; exteriorly palm with some rare and inconspicuous surface irregularities (Figs. 5 and 12).No microsetae proximal to eb and esb (Figs. 2 and 9); however, 4-6 microsetae present distally or laterodistally to eb and esb.In both sexes, sensillum located between the 10 th and 17 th teeth.The trichobothrium ist slightly closer to isb than est, or equidistant from them (Figs. 2 and 9). Chelal fingers generally as long as the chelal palm and shorter than the pedipalpal femur (Table 1).Pedipalpal femur shorter (male) or slightly longer than carapace (female) (Table 1).Trichobothriotaxy as in Figs. 2 and 9. Tibia IV, basitarsus IV and telotarsus IV each with a single tactile seta (Figs. 6 and 13).Tactile seta ratios are presented in Table 1. Based on present knowledge, R. almissae n. sp. is known from its type locality only. 
Etymology.-After Krupanj, a town near the type-locality of R. krupanjensis n. sp. Trichobothriotaxy: eb, esb, ib, and isb on finger base; it, et, and est in proximal half of finger; ist slightly closer to est than to isb (or equidistant from these).Seta sb only slightly closer to b than to st, st closer to t than to sb.For trichobothrial pattern, see Figs. 17 and 25.Tibia IV, basitarsus IV, and telotarsus IV each with a long tactile seta (Figs.19 and 27; Table 1).For morphometric ratios and linear measurements, see Table 1. Distribution.-Western Serbia, epigean, under stones, and in humus and leaf-litter.Probably endemic to Serbia and the Balkan Peninsula. Remarks.-The present species is distinct from the phenetically close congener R. tintilin Ćurčić, 1993, in many important respects: body size (2.28 mm in the male of R. krupanjensis n .sp. vs. 2.84-3.58mm in R. tintilin), in the pedipalpal length (3.37-3.85mm in krupanjensis vs. 3.97-4.57mm in R. tintilin), in the form of the epistome (apically blunt in krupanjensis vs. triangular i tintilin), in the form of the pedipalpal articles, in the cheliceral dentition, and the presence (in krupanjensis) vs. absence of small setae proximal to eb and esb (in tintilin). Etymology.-The town of Krupanj is the center of the Radjevina region, which was named after Radj, a great knight of Prince Lazar, who defended it from Hungarian and Ottoman conquerors.The new species is therefore named after this nobleman. Fixed chelal finger with 79-81 teeth; distal teeth pointed and asymmetrical, followed by small, closely-set, and square-tapped or rounded teeth proximally.Movable chelal finger with 72-77 teeth; only distal teeth pointed and retroconical, other teeth square-cusped or rounded.Chelal fingers longer than chelal palm and considerably longer than carapace (Table 1).Tiny microsetae proximal to eb and esb absent; chelal palm with 4 microsetae distal to these trichobothria (Fig. 40). Trichobothriotaxy: eb, esb, ib, and isb on finger base, it, et, and est on proximal finger half; ist slightly closer to isb than to est.Seta sb equidistant from b and st, respectively, seta st closer to b than to sb, respectively.For trichobothrial ratios and linear measurements, see Fig. 40 and Table 1. Distribution.-Western Serbia, cave-dwelling.Probably an endemic and relict species. Remarks.-R.radji n. sp. is easily distinguished from R. trojan Ćurčić, 1993 (its phenetically most similar species), from southeastern Serbia, by the (male) body size (3.395-3.74mm vs. 2.415-3.07),by the number of setae on sternite II (20-21 vs. 12-13), by the presence/absence of microsetae proximal to eb and esb (absent vs. present), by the number of teeth on the fixed (79-81 vs. 51-63) and movable chelal fingers (72-77 vs. 56-63), by the pedipalpal femurs length-to-breadth ratio (4.52-5.54 vs. 3.27-3.55),by the pedipalpal chela length-to-breadth ratio (4.125-4.285 vs. 3.27-3.705),etc.The discovery of the described representatives of Roncus in Serbia (and Dalmatia) supports the fact that the taxonomy of this genus is still far from being complete (Ćurčić, 1991, 1992a, 1992b; Ćurčić, et al., 1992a, 2004).The variety of cave-dwelling species of Roncus described elsewhere by Ćurčić (1984, 1991) and by Ćurčić et al. 
(1981, 1988, 2004) offers further proof that this genus is presently subject to intensive radiation, or divergent differentiation into new species. Furthermore, the diversity of Roncus representatives in the Balkan regions bordering on Serbia (Ćurčić, 1984; Ćurčić and Beron, 1981; Ćurčić et al., 2004), compared to the same feature in other areas, points to the Balkan Peninsula as a center of origin and genesis of numerous forms of the genus. In addition, the occurrence of numerous Roncus species with extremely limited distribution areas demonstrates their endemic nature. NOTE With regard to a single diagnostic characteristic (presence/absence of microsetae proximal to eb and esb), it should be noted that this feature is present in R. krupanjensis n. sp. (as well as in R. pannonius Ćurčić, Dimitrijević and Karamata, 1992, in R. trojan Ćurčić, 1993, and in R. lubricus L. Koch, 1873). However, these microsetae are missing in other epigean and cave species of the genus which inhabit the Balkan Peninsula (R. onaemi n. sp., R. radji n. sp., R. parablothroides Hadži, 1937, etc.). Therefore, it is possible that the presence or absence of this characteristic could be useful in distinguishing representatives of two species groups, which we have described as the "Roncus lubricus" group (microsetae present) and the "Roncus parablothroides" group (microsetae absent), respectively. It seems that both groups are widespread in Europe (Ćurčić, 1992a, 1992b); however, their precise taxonomic and biogeographic features are insufficiently known. Therefore, this problem remains one of the main goals for future research.
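Because the Remarks above separate close congeners by non-overlapping numeric ranges, those ranges can be applied quite mechanically. The following minimal sketch (ours, not part of the original description; the dictionary layout and the voting helper are hypothetical) encodes the R. radji vs. R. trojan diagnostics quoted above.

# Diagnostic ranges quoted in the Remarks for the R. radji / R. trojan pair.
DIAGNOSTICS = {
    ("R. radji", "R. trojan"): {
        "male body length (mm)": ((3.395, 3.74), (2.415, 3.07)),
        "setae on sternite II": ((20, 21), (12, 13)),
        "teeth, fixed chelal finger": ((79, 81), (51, 63)),
        "teeth, movable chelal finger": ((72, 77), (56, 63)),
        "pedipalpal femur L/B": ((4.52, 5.54), (3.27, 3.55)),
        "pedipalpal chela L/B": ((4.125, 4.285), (3.27, 3.705)),
    }
}

def assign(measurements, pair):
    """Count how many measured characters fall inside each species' range."""
    votes = {species: 0 for species in pair}
    for character, ranges in DIAGNOSTICS[pair].items():
        value = measurements.get(character)
        if value is None:
            continue  # character not measured for this specimen
        for species, (low, high) in zip(pair, ranges):
            if low <= value <= high:
                votes[species] += 1
    return votes

if __name__ == "__main__":
    male = {"male body length (mm)": 3.5, "teeth, fixed chelal finger": 80}
    print(assign(male, ("R. radji", "R. trojan")))  # {'R. radji': 2, 'R. trojan': 0}

In practice, of course, a determination would rest on the full suite of characters (including the microsetae discussed in the Note) and on direct comparison with type material, not on such a tally.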
1,771
2010-01-01T00:00:00.000
[ "Biology" ]
Introducing the CLEF 2020 HIPE Shared Task: Named Entity Recognition and Linking on Historical Newspapers Since its introduction some twenty years ago, named entity (NE) processing has become an essential component of virtually any text mining application and has undergone major changes. Recently, two main trends characterise its developments: the adoption of deep learning architectures and the consideration of textual material originating from historical and cultural heritage collections. While the former opens up new opportunities, the latter introduces new challenges with heterogeneous, historical and noisy inputs. If NE processing tools are increasingly being used in the context of historical documents, performance values are below the ones on contemporary data and are hardly comparable. In this context, this paper introduces the CLEF 2020 Evaluation Lab HIPE (Identifying Historical People, Places and other Entities) on named entity recognition and linking on diachronic historical newspaper material in French, German and English. Our objective is threefold: strengthening the robustness of existing approaches on non-standard inputs, enabling performance comparison of NE processing on historical texts, and, in the long run, fostering efficient semantic indexing of historical documents in order to support scholarship on digital cultural heritage collections. Introduction Recognition and identification of real-world entities is at the core of virtually any text mining application. As a matter of fact, referential units such as names of persons, locations and organizations underlie the semantics of texts and guide their interpretation. Around since the seminal Message Understanding Conference (MUC) evaluation cycle in the 1990s [11], named entity-related tasks have undergone major evolutions until now, from entity recognition and classification to entity disambiguation and linking [21,25]. Besides the general domain of well-written newswire data, named entity (NE) processing is also applied to specific domains, particularly bio-medical [10,14], and on more noisy inputs such as speech transcriptions [9] and tweets [26]. Recently, two main trends characterise developments in NE processing. First, at the technical level, the adoption of deep learning architectures and the usage of embedded language representations greatly reshapes the field and opens up new research directions [1,16,17]. Second, with respect to application domain and language spectrum, NE processing has been called upon to contribute to the field of Digital Humanities (DH), where massive digitization of historical documents is producing huge amounts of texts [30]. Thanks to large-scale digitization projects driven by cultural institutions, millions of images are being acquired and, when it comes to text, their content is transcribed, either manually via dedicated interfaces, or automatically via Optical Character Recognition (OCR). Beyond this great achievement in terms of document preservation and accessibility, the next crucial step is to adapt and develop appropriate language technologies to search and retrieve the contents of this 'Big Data from the Past' [13]. In this regard, information extraction techniques, and particularly NE recognition and linking, can certainly be regarded as among the first steps. This paper introduces the CLEF 2020 Evaluation Lab 1 HIPE (Identifying Historical People, Places and other Entities) 2 . 
With the aim of supporting the development and progress of NE systems on historical documents (Sect. 2), this lab proposes two tasks, namely named entity recognition and linking, on historical newspapers in French, German and English (Sect. 3). We additionally report first results on French historical newspapers (Sect. 4), which support the idea that such a lab offers various benefits for both the NLP and DH communities. Motivation and Objectives NE processing tools are increasingly being used in the context of historical documents. Research activities in this domain target texts of different natures (e.g. museum records, state-related documents, genealogical data, historical newspapers) and different tasks (NE recognition and classification, entity linking, or both). Experiments involve different time periods, focus on different domains, and use different typologies. This great diversity demonstrates how many and varied the needs, and the challenges, are, but also makes performance comparison difficult, if not impossible. Furthermore, as with language technologies in general [29], it appears that the application of NE processing to historical texts poses new challenges [7,23]. First, inputs can be extremely noisy, with errors which do not resemble tweet misspellings or speech transcription hesitations, for which adapted approaches have already been devised [5,27]. Second, the language under study is mostly of earlier stages, which renders the usual external and internal evidence less effective (e.g., the usage of different naming conventions and the presence of historical spelling variations) [2,3]. Further, besides historical VIPs, texts from the past contain rare entities which have undergone significant changes (esp. locations) or no longer exist, and for which adequate linguistic resources and knowledge bases are missing [12]. Finally, archives and texts from the past are not as anglophone as today's information society, making multilingual resources and processing capacities even more essential [22]. Overall, and as demonstrated by Vilain et al. [31], the transfer of NE tools from one domain to another is not straightforward, and the performance of NE tools initially developed for homogeneous texts of the immediate past is affected when they are applied to historical material. This echoes the proposition of Plank [24], according to whom what is considered standard data (i.e. the contemporary news genre) is more a historical coincidence than a reality: in NLP, non-canonical, heterogeneous, biased and noisy data is the norm rather than the exception. Even though many evaluation campaigns on NE were organized over the last decades 3 , only one considered French historical texts [8]. To the best of our knowledge, no NE evaluation campaign ever addressed multilingual, diachronic historical material. In the context of new needs and materials emerging from the humanities, we believe that an evaluation campaign on historical documents is timely and will be beneficial. In addition to the release of a multilingual, historical NE-annotated corpus, the objective of this shared task is threefold: strengthening the robustness of existing approaches on non-standard inputs; enabling performance comparison of NE processing on historical texts; and fostering efficient semantic indexing of historical documents. Task Description The HIPE shared task puts forward 2 NE processing tasks with sub-tasks of increasing levels of difficulty. Participants can submit up to 3 runs per sub-task.
Task 1: Named Entity Recognition and Classification (NERC) Subtask 1.1 -NERC Coarse-Grained: this task includes the recognition and classification of entity mentions according to high-level entity types (Person, Location, Organisation, Product and Date). Subtask 1.2 -NERC Fine-Grained: this task includes the classification of mentions according to finer-grained entity types, nested entities (up to one level of depth) and the detection of entity mention components (e.g. function, title, name). Task 2: Named Entity Linking (EL). This task requires the linking of named entity mentions to a unique referent in a knowledge base (a frozen dump of Wikidata) or to a NIL node if the mention does not have a referent. Data Sets Corpus. The HIPE corpus is composed of items from the digitized archives of several Swiss, Luxembourgish and American newspapers on a diachronic basis. 4 For each language, articles of 4 different newspapers were sampled on a decade time-bucket basis, according to the time span of the newspaper (longest duration spans ca. 200 years). More precisely, articles were first randomly sampled from each year of the considered decades, with the constraints of having a title and more than 100 characters. Subsequently to this sampling, a manual triage was applied in order to keep journalistic content only and to remove undesirable items such as feuilleton, cross-words, weather tables, time-schedules, obituaries, and what a human could not even read because of OCR noise. Alongside each article, metadata (journal, date, title, page number, image region coordinates), the corresponding scan(s) and an OCR quality assessment score is provided. Different OCR versions of same texts are not provided, and the OCR quality of the corpus therefore corresponds to real-life setting, with variations according to digitization time and preservation state of original documents. For each task and language-with the exception of English-the corpus is divided into training, dev and test data sets, released in IOB format with hierarchical information. For English, only dev and test sets will be released. Annotation. HIPE annotation guidelines [6] are derived from the Quaero annotation guide 5 . Originally designed for the annotation of "extended" named entities (i.e. more than the 3 or 4 traditional entity classes) in French speech transcriptions, Quaero guidelines have furthermore been used on historic press corpora [28]. HIPE slightly recasts and simplifies them, considering only a subset of entity types and components, as well as of linguistic units eligible as named entities. HIPE guidelines were iteratively consolidated via the annotation of a "mini-reference" corpus, where annotation decisions were tested and difficult cases discussed. Despite these adaptations, HIPE annotated corpora will mostly remain compatible with Quaero guidelines. The annotation campaign is carried out by the task organizers with the support of trilingual collaborators. We use INCEpTION as an annotation tool [15], with the visualisation of image segments alongside OCR transcriptions. 6 Before starting annotating, each annotator is first trained on a mini-reference corpus, where the inter-annotator agreement (IAA) with the gold reference is computed. For each language, a sub-sample of the corpus is annotated by 2 annotators and IAA is computed, before and after an adjudication. Randomly selected articles will also be controlled by the adjudicator. 
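As a concrete illustration of the IOB release format mentioned above, the following sketch (ours, not from the lab; the two-column, tab-separated layout and the blank-line sentence separator are assumptions, since the exact column inventory of the HIPE files is not specified here) reads a CoNLL-style file and converts outermost IOB tags into typed entity spans.

def read_iob(path):
    """Parse a CoNLL-style file into (tokens, labels) sentence pairs."""
    sentences, tokens, labels = [], [], []
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            line = line.rstrip("\n")
            if not line or line.startswith("#"):   # blank line or comment ends a sentence
                if tokens:
                    sentences.append((tokens, labels))
                    tokens, labels = [], []
                continue
            columns = line.split("\t")
            tokens.append(columns[0])              # assumed: token in column 1
            labels.append(columns[1])              # assumed: outermost IOB tag in column 2
    if tokens:
        sentences.append((tokens, labels))
    return sentences

def iob_to_spans(labels):
    """Collect (entity_type, start, end) spans from a sequence of IOB labels."""
    spans, start, current = [], None, None
    for i, label in enumerate(labels + ["O"]):     # sentinel flushes the last span
        boundary = label == "O" or label.startswith("B-") or \
                   (label.startswith("I-") and label[2:] != current)
        if boundary and current is not None:
            spans.append((current, start, i))
            current = None
        if label.startswith("B-") or (label.startswith("I-") and current is None):
            current, start = label[2:], i          # also tolerates I- without B-
    return spans

# e.g. iob_to_spans(["B-pers", "I-pers", "O", "B-loc"]) -> [("pers", 0, 2), ("loc", 3, 4)]

Span extraction of this kind is also what the strict evaluation scenario described below operates on: exact (type, start, end) matches between system and gold spans.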
Finally, HIPE will provide complementary resources in the form of in-domain word-level and character-level embeddings acquired from historical newspaper corpora. In the same vein, participants will be encouraged to share any external resource they might use. HIPE corpus and resources will be released under a CC-BY-SA-NC 4.0 license. Evaluation Named Entity Recognition and Classification (Task 1) will be evaluated in terms of macro and micro Precision, Recall, F-measure, and Slot Error Rate [20]. Two evaluation scenarios will be considered: strict (exact boundary matching) and relaxed (fuzzy boundary matching). Entity linking (Task 2) will be evaluated in terms of Precision, Recall, and F-measure, taking into account literal and metonymic senses. Exploratory Experiments on NER for Historical French We made an exploratory study in order to assess whether the massive improvements in neural NER [1,17] on modern texts carry over to historical material with OCR noise. The data for our experiments is the Quaero Old Press (QOP) corpus, 295 OCRed 7 newspaper documents dating from December 1890 annotated according to the Quaero guidelines [28], split by us into train (1.45 m tokens) and dev/test (each 0.2 m). We only consider the outermost entity level (no nested entities or components) and train on the fine-grained subcategories (e.g., loc.adm.town) of the 7 main classes. Modeling NER as a sequence labeling problem and applying Bi-LSTM networks is state of the art [1,4,17,19]. Our experiments follow [1] in using character-based contextual string embeddings as input word representations, which allow the model to "better handle rare and misspelled words as well as model subword structures such as prefixes and endings". These contextualized word embeddings rely on neural forward and backward character-level language models that have been trained by us on a large collection (500 m tokens) of late 19th- and early 20th-century Swiss-French newspapers. In accordance with the literature, a Bi-LSTM NER model with an on-top CRF layer (Bi-LSTM-CRF) works best for our data. As a baseline system, which will also be provided for the shared task, we train a traditional CRF sequence classifier [18] using basic spelling features such as a token's character prefix and suffix, the casing of the initial character, and the presence of punctuation marks and digits. The baseline classifier shows fairly modest overall performance scores of 69.4% recall, 56.2% precision and 62.1 F1 (see Table 1). Trained and evaluated on the QOP data, the neural model relying on contextual string embeddings clearly outperforms the baseline classifier. As shown in Table 1, the Bi-LSTM-CRF model achieves better F1 for all of the 7 entity types and surpasses the feature-based classifier by nearly 12 points F1. Examples in Table 1 show that the CRF model frequently struggles with entities containing misrecognized special characters and/or punctuation marks. In many such cases, the Bi-LSTM-CRF classifier is capable of assigning the correct label. These results indicate that the new neural methods are ready to enable substantial progress in NER on noisy historical texts. Conclusion From the perspective of natural language processing (NLP), the HIPE evaluation lab provides the opportunity to test the robustness of existing NERC and EL approaches against challenging historical material and to gain new insights with respect to domain and language adaptation.
From the perspective of digital humanities, the lab's outcomes help DH practitioners in mapping state-of-the-art solutions for NE processing on historical texts, and in getting a better understanding of what is already possible as opposed to what is still challenging. Most importantly, digital scholars are in need of support to explore the large quantities of digitized text they currently have at hand, and NE processing is high on the agenda. Such processing can support research questions in various domains (e.g. history, political science, literature, historical linguistics), and knowing about its performance is crucial in order to make informed use of the processed data. Overall, HIPE will contribute to advancing the state of the art in semantic indexing of historical material, within the specific domain of historical newspaper processing, as in e.g. the "impresso - Media Monitoring of the Past" project 8 and, more generally, within the domain of text understanding of historical material, as in the Time Machine Europe project 9 , which aims to apply AI technologies to cultural heritage data. This work is supported by the Swiss National Science Foundation under grant number CR-SII5 173719. We would also like to thank C. Watter, G. Schneider and A. Flückiger for their invaluable help with the construction of the data sets, as well as R. Eckart de Castilho, C. Neudecker, S. Rosset and D. Smith for their support and guidance as part of the lab's advisory board.
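To make the baseline described in Sect. 4 concrete, here is a minimal sketch of the kind of spelling features it mentions (token prefix and suffix, initial-character casing, presence of digits and punctuation). The feature names, the window of one previous token, and the suggestion of a CRF trainer such as sklearn-crfsuite are our own assumptions, not details from the lab description.

def spelling_features(tokens, i):
    """Basic spelling features, in the spirit of the CRF baseline above."""
    token = tokens[i]
    features = {
        "prefix3": token[:3],
        "suffix3": token[-3:],
        "is_upper_initial": token[:1].isupper(),
        "has_digit": any(ch.isdigit() for ch in token),
        "has_punct": any(not ch.isalnum() for ch in token),
        "bias": 1.0,
    }
    if i > 0:
        features["prev_token_lower"] = tokens[i - 1].lower()
    else:
        features["BOS"] = True    # beginning-of-sentence marker
    return features

# Feature dicts of this shape can be fed, one list per sentence, to a linear-chain
# CRF trainer (e.g. sklearn-crfsuite) together with the IOB label sequences.

Such hand-built features are exactly what the contextual string embeddings replace: character-level language models learn sub-word regularities, including OCR-mangled variants, instead of enumerating them by hand.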
3,190.6
2020-03-24T00:00:00.000
[ "Computer Science", "History", "Linguistics" ]
PROCEEDINGS OF THE CONFERENCE NEW DEFINITION OF THE MUSEUM: ITS PROS AND CONS – INFORMATION AND CONSIDERATIONS On the cover of the last issue of this journal, we can see the director of the Technical Museum in Brno, Ing. Ivo Štěpánek, opening the conference "New Definition of the Museum: Its Pros and Cons" held on 7 and 8 March 2022. On pages 32-48 of the same issue, a contribution by Mgr. Lucie Jagošová, PhD. and doc. Mgr. Otakar Kirsch, Ph.D. was published, in which they evaluated the results of a questionnaire survey on the museum definition, carried out in spring 2021. Most of the contributions have already been published in both Czech and English versions. On the eve of the ICOM General Conference in Prague, the participants reflected upon whether it makes sense to change the valid definition. Partial questions were also discussed, such as the significance of questionnaire surveys for the creation of a new definition, non-profitability, the issue of tangible and intangible heritage, virtual reality, and also the perhaps still marginal phenomenon of a museum without collections. The employees of the Technical Museum in Brno once again demonstrated their excellent organisational skills 1 and made the participants' stay more pleasant by opening an exhibition dedicated to the life and work of Leonardo da Vinci, which they took over from their Polish colleagues. (1 DOLÁK, Jan and Josef VEČEŘA (eds.). Nová definice muzea aneb její klady a zápory: sborník přednášek ze stejnojmenné konference. Brno: Technické muzeum v Brně, 2022. ISBN 978-80-7685-010-1. The English version of the collected papers was published as DOLÁK, Jan and Josef VEČEŘA (eds.). New Definition of the Museum: Its Pros and Cons. Proceedings of the Conference [online]. Brno: Technical Museum in Brno, 2022 [accessed 2023-05-02]. Available from www: <https://www.tmbrno.cz/produkt/newdefinition-of-the-museum-its-pros-and-cons>.) The collected volume, edited by Jan Dolák and Josef Večeřa, contains 10 written contributions to which Jan Dolák added a short summarizing introduction. In my text, I inform about all contributions, but in more detail only about those in which the authors managed to get closer to the goal of the conference, which was drawing attention to the pros or cons of the (as of March 2022) valid definition. The opening contribution traces the development of the museum from the ancient Temple of the Muses to the basis of today's concept formulated by French encyclopaedists. More attention is paid by its author to the Czech environment and to the wordings in Otto's and Masaryk's academic dictionaries. He cites § 2 of Act No. 54/1959 Coll. on museums and galleries: "Museums and galleries are institutions which on the basis of investigation, or scientific research, systematically collect, professionally manage and process the collections of tangible documentary material on the evolution of nature and society, on artistic creation or other kinds of human activity using scientific methods, and utilise these collections for cultural and educational outreach purposes." The creators of entries in all our later encyclopaedias based themselves on this definition. In the end, the author informs how the personalities of Czech museology (J. F. Svoboda, J. Neustupný, J. Beneš and Z. Z. Stránský) defined the museum and states: "Despite these attempts by major Czech museologists, none of their definitions gained traction amongst the experts. From the mid-1950s, either the definition within the Act on Museums was used, or later the definition adopted by the international organisation ICOM was used".
Critical comments on the proposals for the new definition were presented by Mgr.Tomáš Drobný and Mgr.Pavla Vykoupilová from the Moravian Museum in their contribution The educational function of museum culture and its reflection in the definition of a museum.They consider the official proposals to be misleading: "Producing a definition of a museum under these circumstances as a vision of the institute in future would also mean that we are convinced that all museums have, or should have, the same programme.These observations suggest that the tendency towards an activist approach to formulating a new definition of a museum is not appropriate."Support in museum collections is essential for museum education.A collection is a feature that can be used to distinguish museums from "educational or entertainment projects being set up which use exhibitions or virtual media products to connect with the general public, and which make use of the name 'museum' or 'gallery' for their presentation."In the end, they plead for a "definition minimum for museums and museum culture in general which is fundamentally unchanging, because it captures the method by which the human need to collect is grasped, something that has been part of our culture since the period of Greek thinking to the present day." The same conclusion regarding the museum collections was also reached by PhDr.RNDr.Richard R. Senček, PhD. from the Slovak Mining Museum in Banská Štiavnica in the contribution f -Múzeum.The "f" here stands for either a futuristic true (real) museum or a fictitious (pseudo) museum.The author bases himself on the concept of museality by Z Z. Stránský and, using the knowledge of C. Lévi-Strauss and U. Eco and the argumentation of W. Gluziński and J. Dolák, he analyses the question of truth, legitimisation of knowledge both in the form of storage in collections and, conversely, in the form of reference to collections.He also investigated the possibilities of cooperation between a "classical" museum and the virtual environment and came to the conclusion that the support in collections is essential for professional outputs in a digital form."A museum without authentic exhibits (musealia) is a museum without truth.It is like metallurgy without metal, a library without words in books.A museum without collections is not a museum!"And the author "hit the nail on the head" in another way as well.He explained why some facilities refer to themselves as museums."This is because a museum has a number of forms and approaches to activities and a lucrative trademark."(highlighted by O. B.). An opposing opinion was expressed by Mgr.Jakub Jareš, Ph.D. and Bc.Karolína Bukovská from the MUSEum+ organisation in Ostrava.In the contribution Museum without collections?! 
Museums' new role and discussion of their definition, they refer to the questionnaire survey mentioned in the introduction. Its results, according to them, "showed that emphasis on a collection as the core defining hallmark of museums is not a homogeneous position in the Czech Republic. In terms of frequency, the term 'heritage' was in top place, a term encompassing collections, but which is more universal and emphasises a relationship to that which is handed down from generation to generation. The Czech terms vzdělávání and edukace, both referring to education, were in second and third place, while collections were only in fourth place". However, the use of the word "only" is somewhat manipulative. Of the 499 responses obtained, "heritage" was in first place (411 = 83 %), "learning" was second (394 = 79 %), "education" was third (354 = 71 %) and "collections" were "only" in fourth place (353 = 71 %). There is no difference between the third and the fourth place. The result rather shows that the collection-based foundations are accepted by the majority of museum workers as essential. Their organisation is somewhat different: "The museum which we work for - the new state-subsidised organisation MUSEum+ - does not yet have any collection either. While the museum will gradually create one, it is not meant to be one of its primary activities. The principal emphasis will be placed on presentation, education and participation as activities which form the essence of the museum, just as a collection does." (highlighted by O. B.) On the website, they cite museums (sic!) as their inspiration, such as the Ars Electronica Center in Linz, Berlin's Futurium, the Museum of Tomorrow in Rio de Janeiro or the Zollverein complex in Essen. They refer to them as museums, although only one facility out of the four named has this designation in its name, and only the last of them would be suitable for this designation as a technical monument. This contribution deserves attention because it concerns the use or misuse of the name "museum" as a brand. Not everyone does this; e.g. VIDA! - although the amusement science park in Brno competes with the Technical Museum in a certain sense, the name does not lie. There is no doubt about the usefulness of these facilities, but their efforts should be focused on enhancing their own prestige and promoting their own grant programmes, not on diluting the world of museums.
The author of the text New Museum Definition 2022. What do Slovak museologists think?, Mgr. Františka Marcinová from the Association of Slovak Museums, states the following about their questionnaire survey: "The result indicated that Slovak museologists are rather conservative and do not really feel the need for a new definition." How did the Slovak colleagues respond? The basic question asked whether or not the current definition should be changed. The result was 21.9 % yes, 40.6 % no, and the rest were not interested. Even non-profitability is not necessary in the definition according to the majority of Slovak museum workers (68.8 %). They are also clear about the issue of collections. "Museums cannot exist without collections." And the final summarization: "The consensus of all museologists in our environment is that a museum without collections is not a museum. We are open to the not-for-profit concept, for we all agree that profit should not be the main reason for establishing and operating museums. Museologists in Slovakia see museums as permanent scientific and educational institutions and organizations which not only acquire, preserve, manage, and present, but above all protect their collections." The contribution by Mgr. Václav Rutar from the National Technical Museum in Prague, Why do we need a new museum definition, after all?, is also based on the results of the questionnaire survey carried out in the Czech Republic. The author is not one of those who consider the method of composing a definition from the statistically most frequent words to be unproductive. "The methodology selecting the right terms appears to be the right one - we need, however, to remember that a definition isn't just a set of selected words, but rather words put together and, according to a number of practitioners I agree with, also a clear and succinct definition allowing for an understandable translation." This method was also chosen for the preparation of draft definitions for the 26th ICOM General Conference in Prague in August 2022. The author recalls the nearly fifty-year validity of the existing definition and describes the broader context of terminological works within ICOM. He also points to the activities of the documentation committee (CIDOC) and the publication of the dictionary Dictionarium museologicum, in the creation of which I, together with Zbyněk Z. Stránský, participated in 1985-1986, and describes how the work on the new definition has progressed in recent years. However, the question of whether we need a new definition of the museum must be answered by the readers themselves. The topic of the conference was treated in full detail by PhDr. František Šebek from the Faculty of Arts and Philosophy, University of Pardubice. In the text Where is the museum world heading in the midst of early 21st century changes?, he states: "In terms of logic, we need to observe certain rules in defining a term. [...] It is particularly important to enshrine the vital role of museums in creating museum collections within the definition." He justifies the importance of differentiating museums from other facilities with an exhibition and education programme, because "...
it appears that those voices which claim that the core essence of a museum is not creating collections, that some 'museums' need not be institutions with collections and it is enough when just 'some museum functions' are fulfilled are growing stronger. I think this is a grave error..." He critically evaluated the proposals for a new definition: "There are a large number of ambiguous expressions, often close or identical in meaning. The primary attributes of the formulated meaning of the term are hard to find, and they do not create a coherent whole. From a formal perspective, it is not the definition of a term, but rather a proclamation on the recommended focus of museum activities, almost with the characteristics of an ideological political manifesto." In the text and several times in the discussion, he proved that the way in which the proposals for the new definition of the museum were created is not the right one - mostly, however, in the absence of those who should have listened to his words above all. And the words that the author incorporated into the text are almost prophetic: "If museums are not acknowledged this irreplaceable (crucial) role in the definition of the term, the museum world will begin to crumble and collapse. The word 'museum' will only carry on as a marketing tool..." A single contribution also offered an alternative proposal for the definition of a museum. It was presented under the title Moving on the definition of a museum - without philosophy or poetics by doc. PhDr. Jan Dolák, Ph.D. from the Comenius University in Bratislava. First, he distinguished two basic approaches - the philosophical-museological and the practical one, where the aim of the latter is to create "a simple, apt, concise definition which is also understandable and substantive, and certainly strictly apolitical". He recommended that "the Kyoto wording remain in the history books of the discipline and that something else be focused on", with the remark that "it is better to leave the current definition as it is than to adopt a worse definition." He also thought about why other memory professions do not bother with redefinition - why librarians or archivists, for example, have not been engaged in this issue for years. The reasons may be different, but the most likely one, to him, seems to be that "the museum world has succumbed (and not for the first time) to the endless desire for gnoseological manifestations and self-definitions which do not result in much of any use." Regarding the method of creating the definition, he remarked that before the meeting in Kyoto, the proposal was drawn up by experts, which is in principle correct. He does not see failure in the method, but in the execution. In four points, he elaborated on what everyone who intends to deal with terminology and the creation of definitions not amateurishly or emotionally, but in a qualified manner, should know. He considers the work based on a questionnaire survey to be useful for investigating how museum workers perceive their field, but not for drawing up a definition. From this, he inferred that "the final wording could comprise three parts: a) A preamble - a descriptive discussion of museums, in which some terms from the ICOM questionnaire, or from the ICOM Code of Ethics could be used - this part is not essential, b) the actual definition, c) comments, explanations."
After further elaboration of his reasoning, he offered a new definition as a working version: "A museum is a permanent organisation which communicates its collections. A museum is open to the public and generally does not make profit." What to say in conclusion? The conference had the ambitious goal of "summarizing the results and, through the Museological Commission of the Czech Association of Museums and Galleries, handing them over to the ICOM Czech National Committee, which can use them as a basis for further negotiations and work." The participants actually attempted the impossible, and the speakers mostly did not even aim for agreement, but rather presented their opinions in parallel, or some tried to gain support for them. However, the meeting was useful in its diversity. In the end, a document was created that our museology does not need to be ashamed of. OSKAR BRŮŽA, freelance museologist, Brno, Czech Republic
3,568.6
2023-01-01T00:00:00.000
[ "Computer Science" ]
Development of microsatellite loci and optimization of a multiplex assay for Latibulus argiolus (Hymenoptera: Ichneumonidae), the specialized parasitoid of paper wasps Microsatellite loci are commonly used markers in population genetic studies. In this study, we present 40 novel and polymorphic microsatellite loci developed for the ichneumonid parasitoid Latibulus argiolus (Rossi, 1790). Reaction condition optimisation procedures allowed 14 of these loci to be co-amplified in two PCRs and loaded in two multiplex panels onto a genetic analyser. The assay was tested on 197 individuals of L. argiolus originating from ten natural populations obtained from the host nests of paper wasps. The validated loci were polymorphic, with high allele numbers ranging from eight to 27 (average 17.6 alleles per locus). Both observed and expected heterozygosity values were high, ranging between 0.75 and 0.92 for HO (mean 0.83) and from 0.70 to 0.90 for HE (mean 0.85). The optimized assay showed a low genotyping error rate and negligible null allele frequency. The designed multiplex panels could be successfully applied in relatedness analyses and genetic variability studies of L. argiolus populations, which would be particularly interesting considering the coevolutionary context of this species with its social host. The markers of choice for many areas of population research, such as mating systems, kinship structure, demography or conservation genetics, are microsatellite loci [6][7][8] . The usefulness of these markers results from their codominance, high polymorphism, and abundance throughout the genome 9 . The high popularity of microsatellite loci, resulting in their frequent application in population genetics studies, also stems from the favourable ratio between the amount of information gained and the financial costs and labour expended. Thus far, no such loci have been described for L. argiolus; therefore, one purpose of this study was to develop them. Several methods for novel microsatellite marker development exist. One of them uses third-generation sequencing on a PacBio RS II platform. The de novo loci so described must first be verified for reliable amplification and high polymorphism. Microsatellites fulfilling the criteria of error-free amplification, high genotyping quality, and polymorphism may be applied for population genetic analysis. In studies applying microsatellites, more than a dozen loci are usually required to obtain high statistical power of the calculated results 10,11 . Therefore, concurrent amplification of markers in multiplex polymerase chain reactions (PCR) is desirable because this procedure significantly shortens laboratory work and reduces the cost of analysis. Such assays first need to be optimized and validated to justify the use of the loci in future research projects [12][13][14] . This study aimed to develop a set of novel microsatellite loci and then to optimize and verify the newly designed multiplexes that could be applied in population and evolutionary studies of the specialized ichneumonid parasitoid L. argiolus. Material and methods Sampling and DNA extraction. The Pacific Biosciences RS platform sequencing. The PacBio library was constructed using 9 µg of genomic DNA consisting of 16 (12.5 µl) equally mixed female DNA samples. DNA was fragmented using miniTube (cat. no.
520064) in the Covaris System to a size of 1.5 to 3 kb according to protocol, with minor modifications in which the intensity was set to 0.2 while the sonication time was reduced to 40 s. The sheared sample was purified by applying a 0.6× volume ratio of AMPure PB Beads (cat. no. 100-265-900, Pacific Biosciences). Furthermore, the 2 kb library was prepared using 750 ng of DNA according to the Pacific Biosciences protocol available online 15 and using the SMRTbell Template Prep Kit 1.0 (cat. no. 100-259-100, Pacific Biosciences). The procedure included DNA damage and end repair followed by blunt ligation of hairpin adaptors at both ends of the DNA fragments. Failed ligation products were removed with ExoIII and ExoVII enzymes. Purification steps separating the enzymatic reactions were performed by applying a 0.6× volume ratio of AMPure PB Beads. Size distribution of DNA fragments during the library preparation procedure and for the final library was checked by electrophoresis on 0.5% agarose gels. The resulting library was doubly purified and prepared for sequencing using the DNA/Polymerase Binding Kit P6 v2 (cat. no. 100-372-700, Pacific Biosciences) and applying the MagBeads Kit v2 (cat. no. 100-676-500, Pacific Biosciences) for loading onto the sequencer according to the protocol generated with Binding Calculator v.2.3.1.1 (available online: https://github.com/PacificBiosciences/BindingCalculator). Single-molecule real-time (SMRT) sequencing was carried out on a PacBio RS II sequencer running two SMRT Cells 8Pac v3 (cat. no. 100-171-800, Pacific Biosciences) and a 360-min data collection mode. Data gathered from sequencing were subject to the RS_ReadsofInsert.1 protocol analysis with default settings to generate reads of insert (ROI). The ROI reads were subsequently subjected to microsatellite analysis and primer design using msatcommander v.1.08 16 with a threshold of at least eight or ten repetitions for tri- and tetra-nucleotide repeats, excluding mononucleotide repeats. Primer design parameters, including the msatcommander option "combine loci", consisted of several parameters: (1) size of primers ranging from 18 to 22 bp, (2) annealing temperature (Tm) ranging from 58 to 62 °C, (3) GC content ranging from 30 to 70%, and (4) amplicon product size range of 80 to 400 bp. Finally, only microsatellite sequences containing a tri- or tetra-nucleotide motif with at least 10 repeats were used for marker selection and amplification testing. Microsatellite marker selection and testing. Loci with the highest number of tandem repeats were chosen from the set of identified microsatellite sequences. The DNA sequences of the newly developed loci were aligned against the NCBI database of already described nucleotide sequences using BLAST algorithms (BLASTN 2.10.1+ and BLASTX 2.10.1+ programmes) [17][18][19] . In the next step, loci were filtered for their optimal amplification performance by applying the following criteria calculated in silico: (1) maximal PCR efficiency and no dimer formation, (2) difference in annealing temperature between the primers of a pair below 2 °C, (3) minimal penalty score, and (4) sequence length of the microsatellite ranging from 100 to 350 bp. For multiplex design purposes, the selected markers were further ordered according to the predicted PCR product size into three categories: long (>300 bp), medium (150-300 bp), and short (<150 bp). The finally selected markers were amplified on DNA of 16 diploid females.
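The in-silico screening criteria listed above translate directly into a filter. The sketch below is ours, not from the paper; the field names and candidate-record layout are assumptions, and the penalty and dimer scores from the primer-design software are assumed to be checked upstream.

def gc_percent(seq):
    """GC content of a primer sequence, in percent."""
    seq = seq.upper()
    return 100.0 * sum(base in "GC" for base in seq) / len(seq)

def passes_filters(candidate):
    """Screen one candidate locus against the stated in-silico criteria.

    `candidate` is a hypothetical dict, e.g. assembled from msatcommander
    output: {"primer_fwd": {"seq": ..., "tm": ...}, "primer_rev": {...},
             "product_bp": ..., "motif_len": ..., "repeats": ...}.
    """
    fwd, rev = candidate["primer_fwd"], candidate["primer_rev"]
    checks = [
        18 <= len(fwd["seq"]) <= 22 and 18 <= len(rev["seq"]) <= 22,  # primer size
        58.0 <= fwd["tm"] <= 62.0 and 58.0 <= rev["tm"] <= 62.0,      # Tm window
        abs(fwd["tm"] - rev["tm"]) < 2.0,                             # Tm difference
        30.0 <= gc_percent(fwd["seq"]) <= 70.0,
        30.0 <= gc_percent(rev["seq"]) <= 70.0,
        100 <= candidate["product_bp"] <= 350,                        # selected length range
        candidate["motif_len"] in (3, 4) and candidate["repeats"] >= 10,
    ]
    return all(checks)

Applied over all ROI-derived candidates, a filter of this form yields the shortlist from which the most repeat-rich loci were taken forward for wet-lab testing.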
Polymorphisms and the allele range of the chosen markers were tested using the universal primer labelling method 20 . Positive controls were not used, as the DNA concentration and quality of all samples were sufficient to permit successful PCR reactions. All loci were amplified separately, and the reaction efficiencies were visualized on 2% agarose gels. For loci in which no amplification products were observed, the amplifications were repeated with the annealing temperature lowered to 57 °C. PCR products were separated using an automated ABI 3500XL Genetic Analyzer, applying GeneScan 600 LIZ as an internal lane size standard (cat. no. 4366589, Applied Biosystems). PCR products labelled with different fluorescent dyes were mixed for joint analysis. The resulting fragment sizes were read using GeneMapper v.4.1 (Applied Biosystems) software. Out of the tested set, only loci fulfilling the following criteria were selected for multiplex panel optimization: (1) clear peak morphology, (2) no co-amplification of artefacts, and (3) a high number of alleles. Multiplex panel optimization. Out of 41 de novo described loci, 14 microsatellite markers were selected and organized into two multiplex PCR sets (seven loci each) by maximizing the number of loci labelled with the same dye (while avoiding allele overlap between loci), as shown in Fig. 1. The gaps between the neighbouring loci were set to a size of 66 to 137 bp according to the genotype data set of 16 females obtained during single microsatellite marker testing. The forward primer of each marker was fluorescence-labelled (Table S2). Results. Only six tetra-nucleotide loci fulfilling the microsatellite selection criteria (out of this set) were detected (Supplementary Table S1). From this set, 41 tri-nucleotide loci, named LA1-LA41, were selected for PCR amplification and testing (Supplementary Table S3) and then deposited in GenBank under accession numbers MN531308-MN531348. Out of 41 amplified markers tested in 16 diploid L. argiolus females, 40 (97.6%) were successfully amplified and were polymorphic (Supplementary Table S4). For four markers (LA1, 14, 16, and 38), the annealing temperature had to be lowered to 57 °C in order to obtain PCR products. Most of the markers (78%) amplified well and yielded clear and readable products. Twenty-four (60%) of the markers gave single peaks, while in the other 16 (40%), additional peaks introducing potential uncertainty into allele scoring were observed. These peaks consisted of stutter bands or after-peaks. In seven cases, two peaks differing by one bp in length were observed. The latter effect occurs when the polymerase does not add adenine to the 3′ end of the newly synthesised strand during replication (resulting in the occurrence of so-called −A and +A bands) 27 . BLASTX analysis returned no results for any of the sequences of the 41 tested loci. BLASTN results indicated that eight DNA sequences were similar to previously described DNA or cDNA sequences deposited in GenBank (Supplementary Table S5). However, in all results, the obtained query cover values were too low for reliable annotation to genes or transcripts already described in GenBank. This indicates that the selected loci are probably located in the non-coding regions of the L. argiolus genome. The exception could be locus La36, for which the query cover values exceeded 50%, together with low E value scores and relatively high percentage identity of the aligned sequences for several insect species.
In most of these cases, the BLAST search identified transcription factor sequences as the most probable results. The observed allele number in the tested loci ranged from 4 to 18, with 25 loci exhibiting at least 10 alleles. The best amplified, readable, and most polymorphic loci (Supplementary Table S4) were considered for multiplex development. LA11 was excluded from this set since it exhibited difficulties in raw read rounding, which could have been caused by its hypervariability (NA = 18). Development and characterization of the microsatellite multiplex assay. For multiplex optimization purposes, 14 loci were selected and organized into two multiplex panels (Table 1, Fig. 1). The combined microsatellite primer pairs in these assays successfully amplified the chosen loci to yield peaks that were clear and easy to score. For most of the loci, the initial primer concentration had to be adjusted to achieve balanced amplification of the markers (Table 1). We did not detect any evidence of genotypic errors using the MICRO-CHECKER 2.2.1 software, a finding also supported by the low frequencies of null alleles explored with CERVUS 3.0.3 (Table 1). The amplification success of the multiplexes was high, yielding complete microsatellite profiles for all 197 individuals (100%). Robustness of the optimized assay was further verified by obtaining 100% correctly genotyped samples for 20 duplicated samples (~10% of the studied sample set, as recommended by Pompanon et al. 27 ). No significant linkage disequilibrium among any pair of the loci under study was found after Bonferroni correction (adjusted value of P < 0.004), indicating that the studied loci are most probably not linked. Deviations from Hardy-Weinberg equilibrium were significant (P < 0.001) for only one (La24) of the 14 loci tested after Bonferroni correction (Table 1). The average number of alleles (NA) per locus was 17.6, ranging from 8 in locus La23 to 27 in loci La19 and 21. The set of diploid female samples was characterized by high observed heterozygosity (mean HO = 0.83), ranging from 0.75 in locus La24 to 0.92 in locus La8, and by high expected heterozygosity (mean HE = 0.85), ranging from 0.70 in locus La23 to 0.90 in loci La8 and 29. The polymorphic information content (PIC) ranged from 0.65 in locus La23 to 0.89 in loci La8 and 29 (Table 1). Discussion In this study, we describe new loci and the optimization of a multiplex assay for L. argiolus, the highly specialized parasitoid of social paper wasps. PacBio RS II platform sequencing allowed for reliable discovery of microsatellite markers. This method has been proven to perform very well in de novo microsatellite description and has recently been applied often [28][29][30] . Application of this method is especially useful in non-model organisms for which no genomic information exists. In such cases, it also outperforms other methods by being more efficient and accurate 30,31 . This method is also relatively cost- and time-efficient compared to the classical methods of microsatellite development 9,28 . Out of 11,355 DNA sequences, approximately 1.7% contained tri- or tetra-nucleotide motifs exceeding ten repeats. Tri- and tetra-nucleotide loci were selected as they have been proven to cause fewer difficulties in scoring and rounding 32 . This finding is important, especially since insects are considered problematic for microsatellite isolation and genotyping and are often biased with particularly high frequencies of null alleles 9,33,34 .
Tri- and tetra-nucleotide loci have been suggested to be less frequent than di-nucleotide loci in the genome; however, they are less prone to amplification errors, especially stuttering 29 . For this reason, primers and in silico amplifications were very carefully chosen. Sequences with high repeat numbers were selected as it has been proven that such loci have higher polymorphism due to a higher mutation rate caused by polymerase slippage 35 . The amplification success of the designed markers was very high (97.6%), and most of the loci amplified well, which points to the high quality of the RS II sequencing and well-defined parameters for primer design [28][29][30][31] . The relatively long reads (on average, exceeding 2 kb) facilitated the design of primers that enabled high PCR performance 28 . The approach of applying universal primer labelling 20 has been reported to be very efficient and cost-effective in other studies 21,32,36 . In our current study, it also proved to be a very useful and suitable method for primer labelling. The selected markers amplified well, in most cases (78%) produced clear bands, and in 60% produced single peaks that permitted confident reading. Almost all of the tested markers were polymorphic, which may have been the consequence of selecting sequences with more than ten repeats. The rate of polymorphic loci in conventional methods of de novo microsatellite isolation may be as low as 30% 30,35 . This finding indicates that appropriate microsatellite sequences were selected for amplification tests in single specimens. This finding may also point to a high intrapopulation polymorphism of the studied species. However, a trade-off between high variability and possible artefacts may exist. Extremely polymorphic markers may be biased with a high null allele frequency as a consequence of a high mutation rate in these loci 9,27,33 . This bias could have occurred in the LA11 locus. For multiplex optimization, direct fluorescent labelling was applied to reduce primer-dimer formation and enhance the PCR efficiency 32 , which was 100% in the studied sample set. Our data show that the 14 microsatellite loci selected for assay development are reliable markers for genetic analyses of L. argiolus individuals, with a good quality of detection, high polymorphism, and a low frequency of null alleles. It is true that one of the loci was not in HWE, but it should be emphasized that the HWE test was performed for pooled diploid individuals that came from different populations, most probably resulting in a Wahlund effect. We believe that the presented tool will be very useful in L. argiolus studies, especially in the context of its life history strategies, considering the co-evolution of this haplodiploid parasitoid with its social host. Because oviposition decisions of female parasitoids primarily influence their fitness, we hypothesize that females of L. argiolus lay more eggs in larger nests of the paper wasp and exhibit adaptive sex ratio manipulation in response to the number of their offspring produced in a single nest of the host. The previously mentioned hypotheses can now be addressed by applying the developed marker set, which would allow determination of kinship and could also be a very good tool for the rapid identification of the sex of preimaginal stages of L. argiolus. We believe that application of microsatellite markers will yield many answers concerning the biology and genetic structure of this specialized parasitoid.
Additionally, the deposited sequence data may be subject to further microsatellite searches and testing in other closely related species of this interesting group of parasitoids.
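To make the summary statistics reported above concrete, here is a small illustrative sketch (ours, not from the paper) computing the quantities in Table 1 - allele number (NA), observed and expected heterozygosity (HO, HE), and PIC - from diploid genotypes; the data layout is an assumption. The Bonferroni threshold quoted above follows from 0.05/14 ≈ 0.0036, i.e. P < 0.004.

from collections import Counter

def summarize_locus(genotypes):
    """Compute NA, HO, HE, and PIC for one locus.

    `genotypes` is a list of (allele_a, allele_b) tuples, one per diploid female.
    """
    alleles = Counter(a for pair in genotypes for a in pair)
    total = sum(alleles.values())
    freqs = [count / total for count in alleles.values()]
    ho = sum(a != b for a, b in genotypes) / len(genotypes)   # observed heterozygosity
    he = 1.0 - sum(p * p for p in freqs)                      # expected heterozygosity
    pic = he - sum(                                           # Botstein's PIC
        2 * freqs[i] ** 2 * freqs[j] ** 2
        for i in range(len(freqs)) for j in range(i + 1, len(freqs))
    )
    return {"NA": len(alleles), "HO": ho, "HE": he, "PIC": pic}

# Example with three individuals typed at one locus (allele sizes in bp):
print(summarize_locus([(101, 104), (101, 101), (104, 107)]))

Note that pooling individuals across populations, as discussed above for the HWE test, deflates HO relative to HE (the Wahlund effect), which is one reason a single locus can appear out of equilibrium.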
3,816.4
2020-09-30T00:00:00.000
[ "Biology", "Environmental Science" ]
μW-Level Microresonator Solitons with Extended Stability Range Using an Auxiliary Laser The recent demonstration of dissipative Kerr solitons in microresonators has opened a new pathway for the generation of ultrashort pulses and low-noise frequency combs with gigahertz to terahertz repetition rates, enabling applications in frequency metrology, astronomy, optical coherent communications, and laser-based ranging. A main challenge for soliton generation, in particular in ultra-high-Q resonators, is the sudden change of circulating intracavity power during the onset of soliton generation. This sudden power change requires precise control of the seed laser frequency and power or fast control of the resonator temperature. Here, we report a robust and simple way to increase the stability range of the soliton regime by using an auxiliary laser that passively stabilizes the intracavity power. In our experiments with fused silica resonators, we are able to extend the pump laser frequency stability range of microresonator solitons by two orders of magnitude, which enables soliton generation by slow and manual tuning of the pump laser into resonance and at unprecedentedly low power levels. Both single- and multi-soliton mode-locked states are generated in a 1.3-mm-diameter fused silica microrod resonator with a free spectral range of ~50.6 GHz, at a 1554 nm pump wavelength at threshold powers < 3 mW. Moreover, with a smaller 230-μm-diameter microrod, we demonstrate soliton generation at 780 μW threshold power. The passive enhancement of the stability range of microresonator solitons paves the way for robust and low-threshold microcomb systems with substantially relaxed stability requirements for the pump laser source. In addition, this method could be useful in a wider range of microresonator applications that require reduced sensitivity to external perturbations. The formation of a single soliton in microresonators typically requires the pump frequency to be red-detuned relative to the thermally-shifted cavity resonance [14]. However, due to the thermal and Kerr response of the cavity modes to laser fluctuations, red-detuned pump frequencies are unstable while blue-detuned frequencies are stable [42]. As a result, accessing soliton states is experimentally challenging. To trigger the soliton states, different methods have been developed, such as power kicking [43,44], frequency kicking [45,46], and thermal control [47]. Direct soliton generation can also be achieved by optimizing the laser tuning speed and stopping at the right frequency. This method works in materials with a weak thermo-optic effect, such as MgF2 [14]. Power and frequency kicking methods are based on abrupt changes in the pump power or frequency, respectively. These changes are much faster than the thermal drift of the resonator modes. Recent work in Si3N4 and MgF2 microresonators demonstrated that multi-soliton states can be deterministically switched to single-soliton states by reducing the number of solitons one by one through backward tuning of the pump frequency [20]. In addition, very recently, it has been demonstrated that spatial mode-interaction in microresonators can support soliton generation [21,48]. A key challenge for most applications using microresonator-based frequency combs is the stable long-term operation of the comb. In this work, we report the passive enhancement of the stability range of microresonator solitons by using an auxiliary laser.
The auxiliary laser passively compensates thermal fluctuations and Kerr shifts of the resonator modes due to drifts and fluctuations of the soliton pump laser. In addition, the auxiliary laser compensates sudden intracavity power changes when the microresonator enters the soliton regime. Using this method, the length of the soliton stability range (soliton steps) is extended from 100 kHz to 10 MHz, which enables access to single-soliton states without specific requirements for pump laser tuning speed or power kicking techniques. Soliton states can be reached by arbitrarily slow tuning of the laser into resonance, which significantly simplifies the soliton generation process. In particular, this enables access to soliton states in ultra-high-Q resonators with flawless mode spectra (no mode crossings), which has previously been challenging. The enhanced stability range enables us to generate solitons at a very low threshold power of 780 μW in a 230-μm-diameter microrod resonator (280 GHz mode spacing). In addition, we demonstrate single- and multi-soliton states at 3 mW threshold power in 1.3-mm-diameter glass rods (50 GHz mode spacing). The single-soliton optical spectrum has a smooth, sech2-like shape without significant imperfections due to mode-crossings. Low power consumption of microresonator solitons is particularly important for out-of-the-lab applications of frequency combs, e.g. in battery-powered systems [49]. Caption of Fig. 1(d)-(h): panels (d)-(f) illustrate, step by step, how the fixed-frequency auxiliary laser passively compensates changes in coupled power as the pump laser is tuned into resonance and the soliton forms; in the soliton state, the pump resonance splits into a C-resonance (for light arriving out-of-sync with the soliton) and an S-resonance (for light arriving in-sync with the soliton), and the resulting reduction in coupled pump power moves the auxiliary resonance back towards the auxiliary laser, compensating the power loss. Panel (g) shows the temporal evolution of the intracavity power during such a pump scan with a fixed-frequency auxiliary laser, and panel (h) shows an experimental trace following this scheme, in which the 1.3-μm auxiliary laser passively compensates changes in the circulating power of the pump laser. Figure 1(a) shows the concept of using an auxiliary continuous wave laser to increase the stability range for microresonator solitons. The auxiliary laser at 1.3 μm is kept at a fixed frequency on a resonator mode while a soliton is generated by a second laser at 1.5 μm. The 1.3 μm laser provides a background signal that compensates fluctuations of the 1.5 μm soliton laser at time scales that are slower than the cavity build-up time. At the output, the soliton pulse can be separated using a wavelength division multiplexer (WDM). Figure 1(b) illustrates the intracavity power before and after entering the soliton state. A small portion of the 1.3 μm auxiliary laser power is coupled into the resonator prior to the soliton generation. Once the soliton is formed, the intracavity power of the auxiliary laser passively rises to compensate the temperature variation of the resonator caused by the loss of intracavity pump power.
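The passive compensation described above can be caricatured in a few lines of code. The following toy model is entirely ours, not from the paper: both lasers sit on the blue side of Lorentzian resonances that red-shift in proportion to the stored power, and the drop of intracavity pump power at soliton onset is crudely mimicked by a step in pump input power. All parameters are arbitrary illustrative values.

import numpy as np

g = 1.0          # thermal red-shift of the resonances per unit stored power (toy value)
tau = 1.0        # thermal relaxation time of the resonator (toy value)
dt = 1e-3

def buildup(detuning):
    """Normalized intracavity buildup for a Lorentzian resonance."""
    return 1.0 / (1.0 + detuning ** 2)

theta = 0.0                      # temperature rise above ambient
d_aux0, p_aux_in = 0.5, 1.0      # auxiliary laser: fixed frequency, blue-detuned
d_pump0 = 0.3                    # pump detuning, also blue of the cold resonance
time = np.arange(0.0, 20.0, dt)
p_pump_in = np.where(time < 10.0, 1.0, 0.3)   # step down: crude stand-in for the
                                              # intracavity power drop at soliton onset
log = []
for k in range(len(time)):
    # heating increases the effective blue detuning of both lasers (negative feedback)
    p_pump = p_pump_in[k] * buildup(d_pump0 + g * theta)
    p_aux = p_aux_in * buildup(d_aux0 + g * theta)
    theta += dt * (p_pump + p_aux - theta) / tau   # temperature tracks stored power
    log.append((p_pump, p_aux, theta))

before, after = log[int(9.9 / dt)], log[-1]
print("total stored power before/after the step: %.3f / %.3f"
      % (before[0] + before[1], after[0] + after[1]))

Running this shows the auxiliary buildup rising after the step, so the total stored power, and hence the temperature, changes noticeably less than the pump contribution alone: the qualitative behaviour of Fig. 1(g) and (h), though only a caricature of the real thermo-optic dynamics.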
In the soliton state, the intracavity field consists simultaneously of an intense soliton pulse, a weak 1.5 μm CW background, and a 1.3 μm auxiliary CW background. The resonator used in the experiment is a fused silica microrod resonator, shown in Fig. 1(c) [50], with a Q-factor of 2×10⁸ at 1.3 μm and 3.7×10⁸ at 1.5 μm. By controlling the curvature of the resonator sidewalls during CO2 laser machining, the microrod resonator can be engineered to have an ultrahigh optical quality factor and minimal avoided crossings between different mode families. Figures 1(d)-(f) show how the auxiliary laser stabilizes the optically circulating power within the resonator to enhance the stability range of the soliton generation. First, the frequency of the 1.3 μm auxiliary laser is tuned into a high-Q optical resonator mode from the blue side and fixed on the blue side of the resonance, as shown in Fig. 1(d). Due to absorption of light, the resonator heats up, red-shifting the resonances at 1.3 μm and 1.5 μm. Note that the 1.3 μm resonance shifts by a factor of 1.16 = f1330/f1550 more than the 1.5 μm resonance as a result of its higher mode number. The 1.5 μm pump laser is now tuned into its resonance from the blue side, passing through a chaotic four-wave mixing regime. The rising intracavity pump power has the same thermal effect as the 1.3 μm auxiliary laser and also red-shifts both resonances (shown in Fig. 1(e)). This red shift causes the 1.3 μm intracavity power to decrease, counteracting the temperature rise induced by the 1.5 μm pump laser. The temperature (and intracavity optical powers) of the microresonator will reach a stable equilibrium since the frequencies of both lasers are on the blue side of their optical resonance modes [42]. The intracavity power of the 1.3 μm auxiliary laser will be further reduced as the 1.5 μm pump laser scans into the resonance (t0-t1 in Fig. 1(g)). Once the pump laser gets close to zero detuning, the resonator abruptly transitions into the soliton regime. In this regime the pump resonance splits into a soliton resonance and a cavity resonance [20]. As shown in Fig. 1(f), the low-frequency, smaller peak is the soliton-induced 'S-resonance'. Upon entering the soliton regime, the intracavity pump power decreases abruptly, blue-shifting the resonances. As a result, the intracavity auxiliary power passively rises to stabilize the temperature of the resonator (t1-t2 in Fig. 1(g)). Figure 1(h) shows experimental results when fixing the frequency of the 1.3 μm auxiliary laser on the blue-detuned side of its resonance while simultaneously tuning the 1.5 μm pump laser into the pump resonance. We indeed see the 1.3 μm laser compensating the intracavity power variation, keeping the total circulating power inside the resonator stable, thereby passively stabilizing the temperature of the microresonator. This effect can be optimized by varying the parameters (optical power and frequency detuning from the resonance) of the 1.3 μm laser, allowing the effective thermal response of silica resonators to be reduced by two orders of magnitude. An animation of the concept is available online [51].

EXPERIMENTAL SETUP

Figure 2(a) shows the schematic of the experimental setup. A 1.5 μm external cavity diode laser (ECDL) with a short-term linewidth of <10 kHz is used as the pump laser for generating a soliton frequency comb. A 1.3 μm laser is used as an auxiliary laser to compensate changes of the circulating power in the resonator.
As mentioned above, in addition to choosing a high-Q mode family with minimal avoided crossings for soliton generation, we also choose a high-Q optical mode at 1334 nm, which enables us to operate the auxiliary laser at a low power level similar to the pump laser power. The two lasers are combined with a WDM and evanescently coupled to the microresonator via a tapered optical fiber. Two fiber polarization controllers (PCs) are used to optimize the coupling efficiency of the auxiliary and pump light into the microresonator. As shown in Fig. 1(c), a 1.3-mm-diameter microrod resonator is used for soliton generation with an FSR of 50.6 GHz. The optical modes used to generate the soliton frequency comb have a quality factor of 3.7×10⁸ with a ~520 kHz linewidth (measured at 1554 nm), while the chosen auxiliary optical mode has a quality factor of 2×10⁸ with a ~1.1 MHz linewidth (measured at 1334 nm). At the resonator output, the auxiliary and pump light are separated by another WDM. One part of the 1.5 μm light is sent to an optical spectrum analyzer (OSA), and the rest is sent into a fiber Bragg grating notch filter to separate the generated comb light from the pump light. The comb light is sent to two photodetectors: one for monitoring the comb power (PD1), and the other (PD2), with a 50 GHz bandwidth, for detecting the repetition rate of the generated soliton frequency comb on an electronic spectrum analyzer (ESA). The auxiliary light is monitored by a third photodiode (PD3).

By optimizing the optical power of the 1334 nm auxiliary laser and its detuning from the optical resonance, both multi- and single-soliton states can be accessed by manually forward-tuning the pump laser into the soliton steps. Figure 2(b) shows the optical spectrum of a single soliton at a pump power of ~80 mW while ~60 mW of 1334 nm auxiliary power is used to compensate the thermal effect. The optical spectrum of the single soliton has a smooth, sech²-like shape (red dashed line in Fig. 2(b)). Note that there is no significant avoided-crossing behavior visible in the spectral range. The 3 dB bandwidth of the spectrum is around 1.3 THz, corresponding to a 240 fs optical pulse. Once excited, the soliton states can survive for many hours without active feedback locking. For longer soliton lifetimes, active feedback locking can be used [44]. The soliton state is further confirmed by measuring the RF spectrum at the comb's 50.6 GHz repetition rate (frep). The beat note from the photodiode is amplified and mixed down to 13.6 GHz with a 37 GHz microwave signal from a signal generator. The down-converted spectrum at 13.6 GHz is analyzed with an electronic spectrum analyzer. The inset in Fig. 2(b) shows the frep beat note when the microcomb is in the single-soliton state. Figure 2(c) shows the optical spectrum of a two-soliton state at a pump power of ~80 mW. The demonstrated technique constitutes a simple and robust way to access single solitons in microresonators, making the system insensitive to pump laser frequency and power fluctuations. In particular, this technique enables access to soliton states in resonators without mode crossings, which naturally exhibit very narrow soliton steps.

Fig. 3. Enhancement of soliton stability range. (a) Experimental traces of the 1550 nm intracavity power when scanning the pump laser frequency from blue to red detuning "without auxiliary laser" (upper panel) and "with auxiliary laser" (lower panel). The laser tuning speed is ~35 MHz/ms.
The inset in the upper panel shows a trace of the narrow soliton step with a width of ~100 kHz without the auxiliary laser. The lower panel shows the same resonance while the auxiliary laser is coupled into the resonator. The overall thermally broadened width of the resonance is reduced while the soliton stability range is increased by two orders of magnitude to ~10 MHz. (b) Experimental traces of the 1550 nm intracavity power for different multi-soliton states (with auxiliary laser).

To explore the full advantages of our proposed technique, we obtain a single-soliton frequency comb with both the pump laser and the auxiliary laser operating at very low optical powers. Figure 2(d) shows the optical spectrum of a single-soliton state pumped by 3 mW optical power at 1554 nm, with a few mW of 1334 nm auxiliary power compensating the thermal effect. The spectrum has a smooth sech²-like shape (red dashed line). In addition, using a smaller-diameter (230 μm) microrod, a single-soliton state is accessed with 780 μW optical power (pump power in the tapered optical fiber), as shown in Fig. 2(e). To the best of our knowledge, this is the first demonstration of a soliton microcomb at sub-mW power levels.

INCREASE OF THE SOLITON STABILITY RANGE

Soliton mode-locked states are generated by scanning the pump laser frequency from blue detuning to red detuning with respect to the resonator mode. Due to thermally induced (and Kerr effect induced) resonance frequency shifts, the measured power of the generated comb modes has a triangular shape [42]. This is shown in the upper panel of Fig. 3(a) for a measurement without the auxiliary laser. The pump wavelength, optical power, and laser scan speed are ~1554 nm, ~20 mW, and ~35 MHz/ms, respectively. The width of the thermal triangle is ~700 MHz. At the end of the triangle shape (marked with a dashed circle), "step-like" features are observable, which indicate the presence of soliton states. The inset in the upper panel of Fig. 3(a) shows a zoom into a single soliton step without the auxiliary laser, with a width of ~100 kHz (corresponding to a few microseconds for the used laser sweep speed). These kHz-scale soliton steps are usually too narrow to reliably generate solitons, since any jitter of the pump laser frequency would lead to a loss of the thermally locked microresonator resonance. In contrast, the lower panel in Fig. 3(a) shows the microresonator resonance when the 1334 nm auxiliary laser is simultaneously coupled into the resonator. The 1554 nm pump laser is operated at the same parameters (laser scan speed and optical power) as in the measurement without the auxiliary laser. With the auxiliary laser coupled into the resonator, the length of the single-soliton step is increased by two orders of magnitude to more than 10 MHz. Note that the overall width of the thermally broadened resonance is reduced to ~25 MHz, such that the soliton regime spans nearly half of the resonance frequency range. Figure 3(b) shows a few examples of different soliton steps at 1554 nm when the auxiliary laser is close to its resonance. Each step represents a different integer number of solitons circulating inside the resonator. Note that, with the auxiliary laser, the length of the soliton steps in time depends on the laser scanning speed rather than on the thermal relaxation speed. As a result, when scanning more slowly (~1 MHz/ms), the soliton steps can even last for tens of milliseconds, which is four orders of magnitude longer than without the auxiliary laser.
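As a quick consistency check on the quoted numbers, the step length in time is simply the step width in frequency divided by the laser scan speed; the short sketch below reproduces the durations quoted above.

```python
# Soliton step duration = step width / laser scan speed.
def step_duration_ms(step_width_mhz, scan_speed_mhz_per_ms):
    return step_width_mhz / scan_speed_mhz_per_ms

# Without auxiliary laser: ~100 kHz step at ~35 MHz/ms -> a few microseconds.
print(step_duration_ms(0.1, 35.0) * 1e3, "us")   # ~2.9 us

# With auxiliary laser: ~10 MHz step at the same scan speed -> sub-millisecond.
print(step_duration_ms(10.0, 35.0), "ms")        # ~0.29 ms

# Slower scan (~1 MHz/ms) with auxiliary laser -> tens of milliseconds.
print(step_duration_ms(10.0, 1.0), "ms")         # ~10 ms
```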
Figure 4(a) shows the influence of the auxiliary laser detuning from its resonance on the thermally broadened width of the 1554 nm resonance. During this measurement, the auxiliary resonance thermally stabilizes itself to the auxiliary laser. It can be seen that a smaller detuning of the auxiliary laser leads to a better stabilization of the circulating intracavity power and thus a reduced thermal broadening effect at the 1554 nm resonance. A mathematical description of the narrowing can be found in the Supplementary Information. Figure 4(b) shows the numerical calculations, based on the dynamical thermal behavior of the microresonator resonance in the presence of an auxiliary laser in a second resonance. The results are consistent with the experimental measurements in Fig. 4(a). Figure 4(c) shows both the experimentally measured and the numerically calculated width of the 1554 nm resonance as a function of the detuning of the 1334 nm laser from the auxiliary resonance. All the previous calculations take into account a larger thermally induced resonance frequency shift of the mode at 1334 nm as a result of its higher mode number. We verify this by measuring the relative resonance shift of the pump resonance and the auxiliary resonance when heating up the resonator with one of the lasers (see Supplementary Information for more details). The results are displayed in Fig. 4(d) and show a slope of f1550/f1330 = 0.862, which is close to the expected value of 0.859 based on the ratio of the mode numbers. This larger resonance shift at 1.3 μm gives the auxiliary laser more leverage on the resonator temperature and enables its operation at reduced power compared to the pump laser.

CONCLUSION

In summary, we have demonstrated that the stability of microresonator mode spectra can be greatly enhanced by coupling an auxiliary laser into a second high-Q resonance. This increases the laser frequency range in which a microresonator generates Kerr solitons by two orders of magnitude and significantly reduces the sensitivity of microcomb states to pump laser frequency and power fluctuations. The scheme enables long-term optical frequency comb generation without active stabilization of the pump laser frequency or power. This greatly relaxes the stability requirements for laser sources in future fully chip-integrated microcomb systems. The enhanced stability enables us to demonstrate the first microresonator solitons at sub-mW power levels. In addition, the auxiliary laser could be used as an active actuator to stabilize a soliton frequency comb, as shown in [52]. We believe that our technique of passively stabilizing microresonator mode spectra could be applied to other resonator systems and applications that require insensitivity to perturbations by an external laser.
Measuring Technological Progress of Smart Grid Based on Production Function Approach

Production function theory combined with data envelopment analysis (DEA) and ridge regression analysis (RRA) is applied to evaluate the technological progress of the smart grid. The feasible conditions of the production function models are determined by the DEA algorithm. RRA is applied to estimate the relevant parameters of the evaluation models under study. One of the significant steps in the design of the assessment algorithm is the structure of the production function models. Therefore, the Cobb-Douglas, constant elasticity of substitution, and translog production functions are each employed to evaluate the technological progress of the smart grid. The results of the analysis and calculation mainly include the DEA relative efficiency, slacks in inputs and outputs of inefficient units, estimated parameters, and quantitative indices of technological progress.

Introduction

The smart grid is a modern power grid that incorporates advanced metering technologies, information and communication technologies, analysis and decision technologies, automatic control technologies, and highly integrated physical infrastructures [1]. Unlike the traditional power grid, intelligence is its most significant attribute and also the core value of the smart grid, improving the socioeconomic benefits for the public. Generally, the intelligent technologies of the smart grid mainly include advanced technologies and equipment in the generation, transmission, substation, distribution, and dispatching fields, and they will enhance the self-healing ability, the integration of information and communication, the efficiency of management, and the interaction with consumers, thereby optimizing the operation of the system [2].

The evaluation of technological progress not only reflects the technological level of a smart grid but also measures the economic benefits brought by the applied advanced technologies. However, the smart grid, as a comprehensive engineering undertaking, involves a long construction period, intensive investment, and high technical difficulty. It is very hard to quantitatively identify the development level of the smart grid. Hence, how to evaluate the construction effect of a smart grid and the availability of intelligent technology has become one of the challenges for current assessment research on the smart grid. It is necessary to present an evaluation methodology to measure the technological progress of a smart grid.

Although the construction of the smart grid is still in its initial stage, evaluation research and practice for the smart grid have been reported preliminarily. The US Electric Power Research Institute (EPRI) designed an assessment system for smart grid programs over the planning and construction periods, for the purpose of identifying the technological levels and metrics of smart grids. Furthermore, this work is helpful for performing cost-benefit analysis of smart grids in the US [3,4]. Different from the EPRI, the US Department of Energy only outlined the overall development ideas and some major metrics for the smart grid in its report, without describing specific interpretations in detail [5]. The European Network of Transmission System Operators (ENTSO) also constructed an evaluation index system for investment grant projects for the European smart grid, but did not analyze the technological benefits [6].
As is well known, the smart grid has been a hot topic in the electrical engineering sector. An effective and scientific evaluation method is beneficial for identifying the problems of smart grid construction and advanced technology application. Hence, in this paper we attempt to present a methodology to evaluate the technical level of the smart grid based on the production function. The production function specifies the maximum output that can be produced with a given quantity of inputs. It is defined for a given state of engineering and technical knowledge. From the economic point of view, technology innovation occurs when new engineering knowledge improves production techniques for existing products. Such technological change is equivalent to a shift in the production function. Consequently, the production function models are developed to assess the technological progress and socioeconomic benefits of the smart grid. Moreover, we consider the application of the DEA methodology to determine the efficient production state of the smart grid, because efficient production is the necessary condition for the application of production function theory [7][8][9]. Through the analysis of DEA, the production function models are built based on the output, input, and technology items. The advantage of the proposed methodology is to quantitatively evaluate the impact of the smart grid technologies on economic benefits, which will show the intellectualization effects of a power grid. In addition, ridge regression is employed to estimate the parameters of the production function models. Finally, case studies demonstrate the effectiveness of the proposed approaches [10].

This paper is organized as follows. Section 2 constructs the mathematical models with DEA and production function theory. In Section 3, the application of the proposed methodology is presented. Some discussion about the properties of the production functions is given in Section 4. Finally, Section 5 summarizes the main conclusions and contributions of this paper.

Data Envelopment Analysis. The DEA is an efficiency modeling approach that can be widely used to measure the relative efficiency of different decision-making units (DMUs). The DEA can not only analyze the simple input-output ratio but also handle multiple input-output variables. The purpose of applying the DEA is to provide a judging standard showing that the production state of the smart grid is efficient, after which the production function can be used to analyze the technological change. Otherwise, supposing the production based on the given inputs and outputs is inefficient, the slack analysis of DEA will offer an improvement measure enabling an efficient input-to-output state. Mathematically, the DEA algorithm is in essence a linear programming procedure. The formulation for the DEA methodology, in its input-oriented envelopment form, can be described as follows:

$$\min\; \theta - \varepsilon\left(e^{T}s^{-} + \hat{e}^{T}s^{+}\right)$$
$$\text{s.t.}\quad \sum_{j=1}^{n}\lambda_{j}x_{ij} + s_{i}^{-} = \theta x_{i0}, \quad i = 1,\dots,m,$$
$$\sum_{j=1}^{n}\lambda_{j}y_{rj} - s_{r}^{+} = y_{r0}, \quad r = 1,\dots,s,$$
$$\lambda_{j} \ge 0,\quad s^{-} \ge 0,\quad s^{+} \ge 0,$$

where the multipliers v_i and u_r in the equivalent dual form are, respectively, the weight coefficients of the input and output variables, x_ij is the amount of input i utilized by DMU j, y_rj is the amount of output r produced by DMU j, the subscript 0 denotes the designated unit for an optimization run, ε is a small positive number, e is the m-dimensional unit vector, and ê is the s-dimensional unit vector. The above model is the Charnes-Cooper-Rhodes (CCR) model, which is suitable for DEA-based study of electric utilities [11]; θ is the scalar technical efficiency score, λ_j is the decision variable of DMU j, and s⁻ and s⁺ are, respectively, the slack and surplus variables.
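As an illustration of how the CCR envelopment model can be evaluated numerically, the sketch below drops the non-Archimedean slack term from the objective for simplicity and solves the resulting linear program with an off-the-shelf solver; the input/output matrices are small hypothetical placeholders, not the plan data of Table 1.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0 (slack term in the objective
    omitted for simplicity; slacks can be recovered in a second stage).
    X: (m, n) inputs, Y: (s, n) outputs, columns are DMUs."""
    m, n = X.shape
    s = Y.shape[0]
    # decision vector z = [theta, lambda_1, ..., lambda_n]
    c = np.r_[1.0, np.zeros(n)]                      # minimize theta
    # inputs:  sum_j lambda_j x_ij - theta * x_i,j0 <= 0
    A_in = np.hstack([-X[:, [j0]], X])
    # outputs: -sum_j lambda_j y_rj <= -y_r,j0  (i.e., Y @ lambda >= y_j0)
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    bounds = [(None, None)] + [(0, None)] * n        # theta free, lambdas >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# hypothetical inputs (capital, labor) and one output (benefit) for 5 DMUs
X = np.array([[4.0, 6.0, 5.0, 8.0, 7.0],
              [3.0, 2.0, 4.0, 5.0, 3.0]])
Y = np.array([[6.0, 7.0, 6.5, 9.0, 8.0]])
for j in range(X.shape[1]):
    print(f"DMU {j+1}: theta* = {ccr_efficiency(X, Y, j):.3f}")
```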
Once the optimal solution satisfies θ* = 1, s⁻* = 0, and s⁺* = 0, the DMU is called DEA efficient [12,13]. The slack vectors, comprising the input excesses and the output shortfalls, are defined as

$$\Delta X = \theta^{*}X_{0} - X\lambda^{*} = s^{-*}, \qquad \Delta Y = Y\lambda^{*} - Y_{0} = s^{+*},$$

where ΔX is the gap vector in inputs, ΔY is the gap vector in outputs, X is the input vector, and Y is the output vector.

Production Function Theory. The production function focuses on the relationship between the amount of input required and the amount of output that can be obtained. Suppose Y is the output vector and x_1, x_2, ..., x_n is the combination of input variables; the production function can then be generally described as

$$Y = f(x_{1}, x_{2}, \dots, x_{n}).$$

It is noticeable that technological progress is an implicit variable, and it is difficult to calculate by universal methods. Based on the idea of the "residual value," the technology as an independent variable can be separated from the production function. Hence, technological progress, regarded as a residual value, is calculated indirectly in this way. Some typical production functions are introduced as follows.

Cobb-Douglas (C-D) Production Function. The C-D specification is described as a function relating the inputs and the maximum amount of output that can be produced using a combination of applied production technology [14]. The inputs consist of the capital investment and the labor resource. The mathematical expression of a C-D production function is presented in [15] as follows:

$$Y = AK^{\alpha}L^{\beta},$$

where A is the technological progress variable, K is the capital investment variable, L is the labor resource variable, Y is the output variable, and α and β are, respectively, the output elasticities of capital and labor. It is remarkable that some assumptions play a key role in the derivation of the C-D production function: constant returns to scale and perfect competition. Under the law of constant returns to scale, the sum of α and β is equal to one. Moreover, it is also assumed that the technological progress of the production is neutral.

Constant Elasticity of Substitution (CES) Production Function. The classical CES production function, derived by Arrow, Chenery, Minhas, and Solow in 1961, is one of the most widely used production functions. The CES production function is developed based on the assumption that the relationship between Y/L (output per unit of labor) and w (the wage rate) is independent of the stock of capital. However, the CES production function is also subject to the limitation that the value of the elasticity of substitution is constant, although not necessarily equal to one. The explicit formula of a CES production function is described in [16]:

$$Y = A\left[\delta K^{-\rho} + (1-\delta)L^{-\rho}\right]^{-m/\rho},$$

where δ is the proportional distribution parameter, m is the scale parameter (m > 1, m = 1, or m < 1, respectively, corresponds to increasing, constant, or decreasing returns to scale), and ρ is the substitution parameter. In particular, as ρ tends to zero, the CES production function transforms into the C-D production function, so the C-D production function is a special form of the CES production function.

Translog Production Function. The translog production function does not impose the restrictions on returns to scale and the elasticity of substitution that the production functions above do. The translog production function is recommended in [17], and its mathematical representation is defined as follows:

$$\ln Y = \beta_{0} + \beta_{K}\ln K + \beta_{L}\ln L + \beta_{t}t + \tfrac{1}{2}\beta_{KK}(\ln K)^{2} + \tfrac{1}{2}\beta_{LL}(\ln L)^{2} + \tfrac{1}{2}\beta_{tt}t^{2} + \beta_{KL}\ln K \ln L + \beta_{Kt}t\ln K + \beta_{Lt}t\ln L,$$

where β_0, β_K, β_L, β_t, β_KK, β_LL, β_tt, β_KL, β_Kt, and β_Lt are the undetermined parameters and t is the time variable.
A major advantage of the translog production function is that the elasticity of substitution for each input component is variable. Besides, the translog production function enables a richer specification of the relationships between the inputs compared to the other production functions described above. Nevertheless, the translog production function has more parameters than the C-D and CES production functions, which means that the complexity of parameter estimation for the translog production function poses a significant challenge.

Parameter Estimation. Solving the parameter estimation problem is one of the most significant steps in the evaluation procedure using the production functions [18]. Considering the characteristics of the estimated parameters in the proposed production functions, such as their collinearity and correlation properties, ridge regression analysis (RRA) is adopted to perform the parameter estimation in this paper. The RRA is a biased-estimation regression method dedicated to the analysis of collinear data, and it is in essence an improved least-squares estimation method [19]. Therefore, it is suitable to employ the RRA to minimize the correlation effects of the variables. The fundamental principle of parameter estimation by the RRA is shown in brief [19]. Given the linear model

$$y = X\beta + \varepsilon,$$

where X is the variable matrix, y is the observed vector, β is the estimated parameter, and ε is the error term, the ridge estimator of β is

$$\hat{\beta}(k) = \left(X^{T}X + kI\right)^{-1}X^{T}y,$$

where X^T is the transposed matrix of X, I is the identity matrix, and k is the scalar parameter.

Quantity Property of Technological Progress. Using the production functions, some quantitative indices representing technological progress in the production process need to be calculated. The evaluation indices are described in detail as follows.

Rate of Technical Progress. The rate of technical progress denotes the effect of saving inputs per unit output in the assessment period. The derivation of the index is generally introduced as follows. Consider the following form of the general production function:

$$Y = A(t)f(K, L).$$

The differential form of this production function is obtained as follows:

$$y = \frac{\dot{Y}}{Y} = \frac{\dot{A}}{A} + \alpha\frac{\dot{K}}{K} + \beta\frac{\dot{L}}{L},$$

where y is the actual output growth speed; the rate of technical progress is then obtained as the residual after subtracting the weighted growth speeds of capital and labor from y.

Technical Merit. The technical merit index indicates the technological level of the smart grid, and it can be measured by the following form:

$$A(t) = A(t-1)\left[1 + a(t-1)\right],$$

where a(t−1) is the growth rate of technical progress at the time point t−1.

Application

In this section, the application of production function theory combined with DEA and RRA is implemented. Figure 1 shows the overall evaluation process, in which the technological level of smart grid technologies and the technological progress of the smart grid can be displayed by means of the selected evaluation indices.
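To make the estimation and index steps concrete, a minimal sketch: the C-D function is log-linearized to ln Y = ln A + α ln K + β ln L, its coefficients are estimated with the ridge formula above, and the rate of technical progress is then computed as the growth-accounting residual. The yearly data arrays are hypothetical placeholders, not the values of Table 1.

```python
import numpy as np

# hypothetical yearly observations (placeholders, not Table 1 values)
K = np.array([3.2, 3.9, 4.5, 5.3, 6.0, 6.8])   # capital investment
L = np.array([1.1, 1.2, 1.2, 1.3, 1.4, 1.4])   # labor
Y = np.array([2.0, 2.5, 3.1, 3.8, 4.6, 5.5])   # output

# log-linearized C-D model: ln Y = ln A + alpha*ln K + beta*ln L
X = np.column_stack([np.ones_like(K), np.log(K), np.log(L)])
y = np.log(Y)

def ridge(X, y, k):
    """Ridge estimator: beta_hat = (X'X + kI)^(-1) X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

lnA, alpha, beta = ridge(X, y, k=0.11)          # k matches the paper's smallest value
print(f"A = {np.exp(lnA):.3f}, alpha = {alpha:.3f}, beta = {beta:.3f}")

# rate of technical progress: a = y_growth - alpha*k_growth - beta*l_growth
gY, gK, gL = np.diff(np.log(Y)), np.diff(np.log(K)), np.diff(np.log(L))
a = gY - alpha * gK - beta * gL
print("yearly rate of technical progress:", np.round(a, 3))
```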
The smart grid, associated with a group of various technologies, attributes, and objectives, covers comprehensive construction in which major breakthroughs in key technology and equipment should be achieved. It is a challenge to evaluate the technical level of a smart grid considering all concerns. Thus, it is suitable to select a specific attribute or goal of the smart grid to study in detail. One representative objective is integrating more clean energy, including solar and wind energy, into electric power grids; this objective is taken as a classic example in this paper to implement the evaluation of the technology level and technological progress for the integration of clean energy. The integration of large-scale clean energy is an important part of smart grid technologies. Generally, clean energy turbine technology, grid integration technology, bulk storage devices, and power forecast technology have a significant impact on clean energy development. Specifically, a highly efficient clean energy turbine can reduce cost and improve reliability. Optimal operation and sustainable construction are regarded as effective measures for improving the integration of clean energy into power systems. Flexible bulk storage devices and power forecast technology may overcome the volatility of renewable energy, so as to promote the wider utilization of clean energy. Therefore, to support the development of clean energy in the smart grid, the input-output relationship for this objective and the development level of the intelligent technologies will be analyzed in depth.

The data about the clean energy development plan in a regional power grid are given in Table 1, which includes the forecast values of the output and input variables for the next decade. The inputs contain labor and three kinds of investments: the clean energy investments in capacities (CEIC), the bulk energy storage devices (BESD), and the construction investments of power grids (CIPG). The outputs include the reduced paying carbon taxes (RPCT), the benefits from the reduced fossil energy (BRFE), and the electricity sales of the clean energy (ESCE). In Table 1, the units of measurement of investments and incomes 1, 2, and 3 are hundred million dollars, and the unit of measurement of the labor force is ten thousand people.
According to the data about the clean energy development plan in Table 1, the feasibility of the production functions should first be analyzed by the DEA technique. The DEA optimization model is solved with the MATLAB optimization toolbox. The optimal solution of relative technical efficiency is obtained, and the performance of the intelligent technologies can also be understood. Figure 2 shows the results of relative technical efficiency at each time point. It turns out that the relations between the inputs and outputs are DEA efficient in years 1, 2, 3, 5, 8, 9, and 10. Furthermore, this also illustrates that the technical efficiencies are available and the returns to scale are constant in these years. However, the ratio efficiency between the inputs and outputs is inefficient in years 4, 6, and 7. In order to identify the reason why the relative technical efficiency is not available, it is necessary to analyze the gaps in the outputs and inputs. The analysis provides suggestions on how to adjust the original outputs and inputs so that the application of the production functions can be achieved. Figure 3 shows the state of the slack variables of the inputs for power grids over the evaluation cycle. It shows that the slack variables of investment 1, investment 2, and investment 3 are unequal to zero, which means that these investments are partly idle; the values of the slack variables equal the idle quantities of the power grid investments.

The parameter estimation is also a significant step in the overall procedure of the technological progress evaluation. Because the RRA is capable of coping with the collinearity between the variables in the production functions, the results of the estimated parameters are more accurate using the RRA technique. Tables 2 and 3 show the estimated parameter values using the C-D, CES, and translog production functions, respectively, with the ridge regression scalar parameter k = 0.11, 0.13, and 0.2. All the numerical results in Tables 2 and 3 are calculated with the SPSS software based on the observed measures of the input and output historical data [20]. The results in Table 2 indicate that the economic scale exhibits constant returns to scale, owing to α + β = 1 for the C-D function and m ≈ 1 for the CES function. The results in Table 3 demonstrate that all the coefficients of the translog production function are positive, which means that the technical progress is neutral and its trend will be accelerating over the evaluation cycle. In addition, the results from SPSS show R² = 0.987, 0.951, and 0.865 for the C-D, CES, and translog production models, which indicates that the coefficients of determination R² are highly significant. Consequently, these estimated parameters are reasonable and conform closely to the actual condition of the smart grid.
Through the DEA examination of the multiple input-output variables and the parameter estimation of the production functions, the assessment of the technological progress of the smart grid can be performed. The rate of technical progress, the technical contribution to output growth, and the technical merit, representing the indices of the technological progress level of the smart grid, are calculated by the proposed production functions over the evaluation cycle. Figures 4 and 5 show the index results for the rate of technical progress and the technical contribution to output growth, respectively. The results in Figure 4 indicate that the values of the rate of technical progress obtained from the translog production function are significantly smaller than those from the other production functions. Except for the results in the second year and the last year, the index values calculated by the CES and C-D production functions are approximately the same. The results in Figure 5 demonstrate that the numerical values of the technical contribution to output growth calculated by the translog production function are distinctly less than the results obtained from the others. Moreover, the difference between the translog production function and the other production functions in the initial stage is slightly bigger than in the later stage. With respect to the rate of technical progress, the values calculated by the CES and C-D production functions are also approximately uniform. Synthesizing the data analysis of Figures 4 and 5, we can generally summarize that the calculation results obtained from the translog production function tend to be rather conservative, while the calculation results of the C-D and CES production functions display an optimistic tendency. Another observation from the data analysis is that the calculation results obtained from the CES production function are close to the results of the C-D production function. The main reason is that, on the one hand, they have similar function expressions and, on the other hand, they share the same application condition of constant returns to scale.

The index values of the technical merit are shown in Figure 6, and the results indicate that the technical merit of the smart grid improves annually over the evaluation periods. The growth pattern of the technical merit is similar to an exponential function. Due to the significant difference in the technical merit between the translog production function and the other production functions, the numerical results of the translog production function are again less than the results of the others. The technical merit results reveal that the technological level of smart grid technologies is enhanced yearly. This also means that intelligent technologies are being widely used in the smart grid.
The indices of the rate of technical progress, technical contribution to output growth, and technical merit represent the intelligent properties of the smart grid. In particular, the data in Figure 5 show that the technological proportion is about 20% to 30% for the C-D production function, which illustrates that the revenues attributable to the intelligent technology are smaller than those attributable to investment and labor. It is necessary for managers to take effective measures to improve the intelligent level of the smart grid and further promote the extensive application of intelligent technologies.

As for the proposed assessment models based on the production function approaches in this paper, a more important question is how to choose among these production functions to reflect the actual smart grid. In the authors' opinion, the C-D production function could be used to evaluate the technological progress most properly in most cases. The reason is that the CES and translog production functions require more complex computation to estimate their parameters, which may result in a deterioration of calculation precision. Although a critical application condition of the C-D production function is its limitation to constant returns to scale, most actual situations, not only in power systems but also in other industries, are generally thought to satisfy it. However, if the decision-maker tends to implement a more complex and detailed analysis to evaluate the technological progress, the CES and translog production functions are recommended. Under the same conditions, the C-D production function can be applied more widely and conveniently. Under some special circumstances, it should be noted that different production models have their corresponding and unique applied scopes, which will be discussed in the next section.

Discussion

As to the mathematical formulations of the production functions, the structure of the C-D production function is similar to that of the CES production function, and they both have the same inputs, namely technology, capital investment, and labor. Moreover, the C-D and CES production functions can be widely used because of their simple expressions. However, the translog production function, though it has a more complex mathematical representation than the other production functions, need not comply with the restrictions of fixed inputs and a constant elasticity of substitution. Therefore, the translog production function can be applied in more areas, accommodating more and more nonroutine factors. For example, the environmental factor is an important objective for smart grid development. Supposing an additional load demand E, as an environmental factor, is contained in a new translog production function, the specific formulation becomes

$$\ln Y = \beta_{0} + \beta_{K}\ln K + \beta_{L}\ln L + \beta_{E}\ln E + \beta_{t}t + \tfrac{1}{2}\beta_{KK}(\ln K)^{2} + \tfrac{1}{2}\beta_{LL}(\ln L)^{2} + \tfrac{1}{2}\beta_{EE}(\ln E)^{2} + \tfrac{1}{2}\beta_{tt}t^{2} + \beta_{KL}\ln K\ln L + \beta_{KE}\ln K\ln E + \beta_{LE}\ln L\ln E + \beta_{Kt}t\ln K + \beta_{Lt}t\ln L + \beta_{Et}t\ln E,$$

where E is the additional load demand.

The additional load demand can be approximately forecast according to the smart grid plans, technological innovation, and policy orientations. In [21], the forecast results of the additional load demand are given. Based on the data of the forecast additional load demand, the index values of the technical contribution to output growth obtained from the different production functions are shown in Figure 7.
Translog production function I includes the additional load demand factor, while translog production function II is the original form of the translog production function previously mentioned. The difference between the results indicates that the influence of the additional load demand cannot be ignored. In other words, additional factors may lead to different index results for the technological level; consequently, it is necessary to focus on the impact of the various factors on the technological progress assessment of the smart grid.

The proposed methodology for evaluating the technological progress of smart grids describes an empirical relationship between specific outputs and inputs for a power grid. In the modeling process, the production functions are used to represent the output production generated from investment and labor inputs, as well as technology. For the application of the production function approaches, we assume that the input variables include A, K, and L and that the output variable is Y. Hence, the technological progress of smart grids can be measured by this approach, which is a parametric method in operations research and economics for the estimation of power system production. In addition, the DEA technique, as a nonparametric method, is used to select the optimal inputs and outputs of the production functions. DEA is used as a preprocessing step underlying the application of the production function approaches. The evaluation of technological progress poses a mathematical problem of time-series estimation of the production state of smart grids based on multiple inputs/outputs in power system planning and operation models. In the solving procedure, the following properties of the evaluation framework can be obtained. (i) The production function models are built on the assumption of DEA feasibility. (ii) Besides the input variables determined by DEA, the parameters α and β also have an impact on the results of the technology assessment. (iii) Data for a portion of the technological progress evaluation can provide the primary basis for exploration of the production function model, while the data used to implement the assessment play a significant role in the accurate estimation.

The evaluation framework can perform the technological progress assessment of smart grids from a macro view. Moreover, we assume that the technical contribution to output growth represents technological progress in the evaluation models. Different from common evaluation methods, such as the comprehensive assessment approach and cost-benefit analysis, this paper presents a novel evaluation model based on parametric and nonparametric estimation methods to implement a technology-based assessment for smart grids. The proposed methodologies can be used to evaluate the effects of the adopted intelligent technologies in smart grid construction, which is helpful for directing future power system planning and operation.
Conclusion

This paper presents an evaluation methodology to measure the technological progress of the smart grid based on production function theory. The proposed method is mathematically formulated to analyze the relationship between multiple input and output variables of the smart grid. In the evaluation process, the DEA test is regarded as an important step to ensure the application condition of the production functions in economic law. The indices representing the technological progress characteristics of the smart grid are obtained from the adopted C-D, CES, and translog production functions. Moreover, the simulation results in the case studies indicate that the technological level generally improves over the evaluation period. A comparative analysis of the different production functions is performed in the discussion, from which the application scope, modeling mechanism, and engineering value of the production functions can be understood. Finally, this study is a first strategic approach for the evaluation of the technological progress of the smart grid from the macro view.

Figure 1: Block diagram of technological progress evaluation.
Figure 2: Input-output relative technical efficiency for power grids in the evaluation cycle.
Figure 3: Schematic drawing of slack variables of inputs for power grids.
Figure 4: The rate of technical progress calculated by production functions.
Figure 5: The technical contribution to output growth calculated by production functions.
Figure 6: The technical merit calculated by production functions.
Figure 7: Technical contribution to output growth (%) calculated by translog production models I and II.
Table 1: The input data of the clean energy development plan.
Table 2: The results of parameter estimation of the C-D and CES production functions.
Table 3: The results of parameter estimation of the translog production functions.
A Comparative Investigation of the Bile Microbiome in Patients with Choledocholithiasis and Cholecystolithiasis through Metagenomic Analysis

While the precise triggers of gallstone formation remain incompletely understood, it is believed to arise from a complex interplay of genetic and environmental factors. The bile microbiome is being increasingly recognized as a possible contributor to the onset of gallstone disease. The primary objective of this study was to investigate distinctions in the microbial communities within bile specimens from patients with choledocholithiasis (common bile duct stones) and cholecystolithiasis (gallbladder stones). We employed massively parallel sequencing of the 16S rRNA gene to examine the microbial communities within bile samples obtained from 28 patients with choledocholithiasis (group DS) and cholecystolithiasis (group GS). The taxonomic composition of the bile microbial communities displayed significant disparities between the group DS and the group GS. Within the 16 prevalent genera, only Streptococcus, Ralstonia, Lactobacillus, and Enterococcus were predominantly found in the group GS. In contrast, the group DS displayed a more diverse range of genera. The alpha diversity of bile specimens was also notably lower in the group GS compared to the group DS (p = 0.041). Principal coordinate analysis unveiled distinct clustering of bile microbial communities depending on the location of the gallstone. Linear discriminant analysis effect size analysis, with a score threshold of >3 and the Kruskal-Wallis test (α < 0.05), recognized Bacilli and Lactobacillales as potential taxonomic markers for distinguishing patients with cholecystolithiasis limited to the gallbladder. Significant variations were found in the distribution and diversity of bile microbial communities between patients with choledocholithiasis and cholecystolithiasis. This observation suggests that alterations in the bile microbiome may contribute to the development of gallstones in these patients.
Introduction

Gallstone disease is a prevalent medical condition impacting millions of individuals across the globe. Gallstones are solid deposits that can develop in the gallbladder or bile duct, composed of a mixture of various substances such as calcium bilirubinate, calcium carbonate, calcium palmitate, calcium phosphate, glycoprotein, fatty acids, and cholesterol [1]. The formation of gallstones is an intricate process influenced by a range of factors, including genetic and environmental elements. Additionally, the bile microbiome, the collection of microorganisms residing in the bile, has been increasingly acknowledged as a potential contributor to gallstone disease development [2,3]. The gut-biliary microbiome, which encompasses the microbial communities in the gut and bile, plays a crucial role in all phases of gallstone formation. The gut microbiota can impact bile acid metabolism and absorption, thereby influencing bile composition and properties, potentially leading to gallstone formation [4,5]. The biliary microbiome itself can also contribute to gallstone development by facilitating the precipitation and accumulation of cholesterol and other biliary components. Research has revealed distinct differences in the gut-biliary microbiome between individuals with gallstone disease and those without. Patients with gallstone disease tend to have a less diverse bile microbiome, with a higher prevalence of certain bacterial taxa, such as the Enterobacteriaceae, including Klebsiella and Escherichia. This has been demonstrated using both molecular analysis [6] and cultivation techniques [7,8]. The microbial communities in the bile duct resemble those in the duodenum more than those in other gastrointestinal regions, suggesting the duodenal microbiome's significance. However, the diversity of the biliary microbiota is lower compared to the duodenal microbiota [9]. Another pertinent concern is the origin of bile microbial communities. It is suspected that retrograde infection with intestinal bacteria from the duodenum serves as the primary source of biliary infections [10,11]. The major duodenal papilla acts as the sole anatomical barrier separating the duodenum and bile duct, appearing to be the gateway for the potential ascending invasion of intestinal bacteria [9]. Protective mechanisms, such as the sphincter of Oddi, the immunological defense system, and the antimicrobial activity of bile salts, serve as defenses against the invasion of intestinal bacteria [7]. Previous studies demonstrated that the microbial communities of three upper gastrointestinal (GI) tract sites (saliva, stomach, and duodenum) shared similarities in bacterial types. Given the proximity of the upper GI tract to the biliary tract, it is more likely to be the primary source of bile bacteria than the lower GI tract [12].
Modern sequencing technologies, like massively parallel 16S rRNA gene sequencing and shotgun metagenomics, have facilitated more comprehensive examinations of the bile microbiome. These techniques offer detailed insight into the taxonomic composition and functional capabilities of the biliary microbiome, including the identification of previously uncultured or unknown bacterial species. Massively parallel sequencing (MPS) analysis can also unveil changes in the biliary microbiome associated with specific health conditions like gallstones or inflammation, potentially revealing valuable biomarkers or therapeutic targets. Thus, MPS has significantly broadened our knowledge of the biliary tract microbiome and its role in health and illness [6,9,13]. Various studies have employed metagenomic sequencing to illustrate variations in the metabolic profile, bacterial diversity, and physiological states of the microbial communities in the common bile duct (CBD) for choledocholithiasis [6,14,15] and in the gallbladder for cholecystolithiasis [16,17]. However, as far as we are aware, no research has compared metagenomic differences in bile microbial communities based on the gallstone's location in patients with choledocholithiasis or cholecystolithiasis. Such a comparative investigation is essential to assess the potential impact of the microenvironment on biliary bacteria between the gallbladder and bile duct and to determine the extent of bacterial compositional changes in the less hospitable biliary system.

This study aimed to explore the distinctions in microbial communities within the bile of patients affected by these two gallstone-related ailments and to assess whether any substantial disparities existed between the two groups. By scrutinizing the bile microbiome in individuals with choledocholithiasis and cholecystolithiasis, this research may contribute to a deeper understanding of the microbiome's role in these medical conditions.

Results

The study comprised 28 patients, of whom 11 (39%) were male (Table 1). The median age of the entire patient cohort was 62 years, with an age range spanning from 27 to 88 years. Within this patient population, ten individuals with choledocholithiasis (group DS) presented gallstones exclusively in the CBD, while seven had gallstones in both the CBD and the gallbladder. In contrast, all 11 individuals with cholecystolithiasis (group GS) exclusively had gallstones within the gallbladder. Notably, there were no significant differences in the size, number, or radiopacity of the gallstones between the two groups. However, it is important to mention that gallstone recurrence was only observed in the group DS. Additional baseline characteristics for both the DS and group GS are shown in Table 1.
Taxonomic Composition Proportions in Bile Microbial Communities

Five dominant phyla and 16 prevalent genera were identified within the microbial communities in both the DS and group GS. Notably, there were significant variations in the relative abundance and prevalence of these bacterial taxa between the two groups. The dominant phyla in the group DS included Proteobacteria, Firmicutes, Fusobacteria, Bacteroidetes, and Actinobacteria. In contrast, the group GS was primarily dominated by Proteobacteria and Firmicutes (Figure 1a). Within the 16 prevalent genera, only Streptococcus, Ralstonia, Lactobacillus, and Enterococcus were predominantly found in the group GS. In contrast, the group DS displayed a more diverse range of genera (Figure 1b). The proportion of Proteobacteria was notably higher in the group DS compared to the group GS, with significant differences observed in the composition of Proteobacteria, particularly the presence of Ralstonia. In contrast, Firmicutes, including Enterococcus, Lactobacillus, and Streptococcus, were more abundant in the group GS than in the group DS. However, the composition of Firmicutes itself did not significantly differ between the two groups. Additionally, Verrucomicrobia was exclusively found in the group DS.

Figure 1. Representation of the averaged taxonomic composition proportions within the bile microbial communities at the phylum and genus levels for the group DS and the group GS. Stacked bar charts illustrate the taxonomic composition at the phylum (a) and genus (b) levels. ETC (et cetera) refers to the population of identified phylum or genus strains of less than 1%. The taxonomic profiling of the microbiome at the genus level reveals that 1% of the composition in higher taxonomic ranks is classified as unassigned in group DS, whereas none is observed in group GS. At the phylum level, unassigned taxa were not identified in either group DS or GS. The x-axis represents the value in percentage.

We also conducted a comparative analysis of the taxonomic composition within four selected taxa (Bacteroides, Enterobacteriaceae, Prevotella, and Proteobacteria) known for their significance in human gut microflora. Significant differences in the relative abundance of these taxa were observed between the DS and group GS (Figure 2).

Alpha Diversity Analysis in Bile Microbial Communities

To assess species richness and diversity in both the group DS and the group GS, we employed rarefaction curves based on the number of sequences obtained for each sample. In this study, the rarefaction curves in both group DS (Figure 3a) and group GS (Figure 3b) suggest that the sequencing depth was adequate for estimating species richness and diversity within each sample. Rank-abundance curves were used to compare the microbial community structures of bile samples in group DS (Figure 3c) and group GS (Figure 3d). The differences observed in the shapes of the rank-abundance curves suggest distinct microbial community structures between the two groups. An in-depth analysis of alpha diversity using diversity indices, such as ACE, Chao1, and Jackknife, was performed to explore the variations in bile microbial communities between the groups. The results of these indices indicate that bile specimens from the gallbladder in the GS group showed no statistically significant difference in bacterial diversity compared to those from the CBD (Figure 4a-c). However, when we normalized the reads of these specimens to a uniform number based on gene copy numbers, the alpha diversity, as measured by the number of identified species for species richness, was significantly lower in the group GS compared to the group DS (Wilcoxon rank sum test, group DS vs. group GS, p = 0.041) (Figure 4d). Additionally, alpha diversity analysis using NPShannon (p = 0.001), Shannon (p = 0.002), Simpson (p = 0.001), and phylogenetic (p = 0.034) diversity indices demonstrated statistically significant differences between the groups (Figure 4e-h).
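As a minimal illustration of this kind of alpha diversity comparison, the sketch below computes Shannon and Simpson indices from per-sample genus count vectors and compares the two groups with a Wilcoxon rank-sum (Mann-Whitney) test; the count table is simulated placeholder data, not the study's sequencing reads.

```python
import numpy as np
from scipy.stats import mannwhitneyu  # Wilcoxon rank-sum test

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over nonzero taxa."""
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum()

def simpson(counts):
    """Simpson diversity 1 - sum(p_i^2)."""
    p = counts / counts.sum()
    return 1.0 - (p ** 2).sum()

rng = np.random.default_rng(0)
# hypothetical genus-level count tables (rows = samples, cols = genera)
group_ds = rng.multinomial(5000, np.full(20, 1 / 20), size=8)                # even community
group_gs = rng.multinomial(5000, np.r_[0.6, np.full(19, 0.4 / 19)], size=8)  # one dominant genus

h_ds = [shannon(s) for s in group_ds.astype(float)]
h_gs = [shannon(s) for s in group_gs.astype(float)]
si_ds = [simpson(s) for s in group_ds.astype(float)]
si_gs = [simpson(s) for s in group_gs.astype(float)]

stat, p = mannwhitneyu(h_ds, h_gs, alternative="two-sided")
print(f"Shannon medians: {np.median(h_ds):.2f} vs {np.median(h_gs):.2f}, p = {p:.4f}")
print(f"Simpson medians: {np.median(si_ds):.2f} vs {np.median(si_gs):.2f}")
```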
Beta Diversity Analysis in Bile Microbial Communities We performed beta diversity analysis of the bile microbial communities within the groups at the genus level utilizing massively parallel 16S rRNA gene sequencing. Our analysis involved principal coordinate analysis (PCoA) employing generalized UniFrac and UniFrac metrics. The outcomes demonstrated distinct clustering patterns between the group DS (represented by blue solid dots) and the group GS (represented by green solid dots) based on the gallstone location (Figure 5a). We also utilized the unweighted pair group method with arithmetic mean (UPGMA) hierarchical clustering analysis to reveal differences in abundance and diversity between the group DS (blue empty boxes) and the group GS (green empty boxes) using generalized UniFrac (Figure 5b) and UniFrac (Figure 5c). We calculated and presented diversity indices in a representative box plot using permutational multivariate analysis of variance (PERMANOVA) to quantitatively assess the diversity differences between the two groups (Figure 5d,e). The results highlighted significant dissimilarities in the bile microbial communities between patients with gallstones in the CBD and those with gallstones in the gallbladder, suggesting a potential role for these communities in the formation of gallstones (p = 0.008).
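A minimal numerical sketch of this kind of ordination and group comparison is given below. It substitutes Bray-Curtis distances, a classical PCoA via eigendecomposition, and a simplified permutation test for the generalized UniFrac/PERMANOVA analysis actually used; the count table and group labels are hypothetical placeholders.

```python
# Minimal sketch of ordination + permutation testing on a bile-microbiome distance matrix.
# Bray-Curtis and a crude permutation statistic stand in for UniFrac/PERMANOVA here.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def pcoa(dist):
    """Classical (metric) multidimensional scaling of a square distance matrix."""
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    B = -0.5 * J @ (dist ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    return (vecs * np.sqrt(np.clip(vals, 0, None)))[:, :2]   # first two coordinates

def permanova_like(dist, labels, n_perm=999, seed=0):
    """Simplified permutation test: overall vs within-group mean distance."""
    rng = np.random.default_rng(seed)
    def stat(lab):
        within = np.mean([dist[np.ix_(lab == g, lab == g)].mean() for g in np.unique(lab)])
        return dist.mean() / within
    observed = stat(labels)
    perms = np.array([stat(rng.permutation(labels)) for _ in range(n_perm)])
    return observed, (np.sum(perms >= observed) + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
counts = rng.poisson(5, size=(28, 200))            # 17 DS + 11 GS samples (hypothetical)
labels = np.array(["DS"] * 17 + ["GS"] * 11)
rel = counts / counts.sum(axis=1, keepdims=True)   # relative abundances
dist = squareform(pdist(rel, metric="braycurtis"))
coords = pcoa(dist)                                # points for a PCoA-style scatter plot
f_like, p = permanova_like(dist, labels)
print(coords[:3], f_like, p)
```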
Discovery of Taxonomic Biomarkers in Bile Microbial Communities We utilized linear discriminant analysis effect size (LEfSe) analysis to pinpoint taxonomic biomarkers that exhibited significant differences between the group DS (comprising patients with gallstones in the CBD) and the group GS (comprising patients with gallstones in the gallbladder). We identified six taxa as potential biomarkers, including Bacilli and Lactobacillales, which displayed the most substantial distinctions between the two groups. In our analysis, we employed a linear discriminant analysis (LDA) score threshold of >3, along with the Kruskal-Wallis test utilizing a significance level of α < 0.05. The findings indicate that these specific taxa have the potential to serve as biomarkers for distinguishing patients with different types of gallstones (Figure 6).
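The screening logic behind such a biomarker analysis can be sketched as a per-taxon Kruskal-Wallis filter followed by an effect-size ranking, as below. This is a simplified stand-in for LEfSe, not the tool itself; the relative-abundance table and the crude effect-size measure are illustrative assumptions.

```python
# Simplified stand-in for a LEfSe-style biomarker screen: Kruskal-Wallis filter at
# alpha < 0.05, then ranking by a crude effect-size proxy. The real analysis used the
# LEfSe implementation with an LDA score threshold > 3; the data here are hypothetical.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(2)
taxa = [f"taxon_{i}" for i in range(50)]
rel_ab = rng.dirichlet(np.ones(50), size=28)      # 28 samples x 50 taxa (hypothetical)
labels = np.array(["DS"] * 17 + ["GS"] * 11)

candidates = []
for j, name in enumerate(taxa):
    ds, gs = rel_ab[labels == "DS", j], rel_ab[labels == "GS", j]
    stat, p = kruskal(ds, gs)
    if p < 0.05:
        effect = abs(ds.mean() - gs.mean())       # crude proxy; LEfSe uses an LDA-based score
        candidates.append((name, p, effect))

for name, p, effect in sorted(candidates, key=lambda t: t[2], reverse=True):
    print(f"{name}: p = {p:.3f}, effect = {effect:.4f}")
```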
Discussion The inquiry into the contribution of bacteria to gallstone formation is a longstanding question. With the advent of omics technologies, bacterial genes associated with gallstones have been unequivocally identified. In this study, we employed massively parallel 16S rRNA gene sequencing to explore the differences in the metagenomic profiles of microbial communities in bile, focusing on the location of gallstones. Specifically, we compared microbial communities in bile from the CBD in group DS and from the gallbladder in group GS. Our investigation confirmed the presence of several bacterial phyla in bile, including Proteobacteria, Firmicutes, Fusobacteria, Bacteroidetes, and Actinobacteria. These findings are consistent with prior research, which identified these phyla as among the most abundant in bile. Of note, in contrast to some earlier studies, our analysis did not detect the phylum Synergistetes in the samples [6,14,16]. The bile microbial communities in group DS exhibited higher bacterial diversity and shared more similarities with intestinal microbiota than those in group GS. This observation suggests that the environment within the CBD of choledocholithiasis patients resembles the normal digestive tract. Previous studies indicated a correlation between gut microbiota and increased inflammation, as well as the development of gallstones [18,19]. Additionally, research has highlighted connections between the microbiota in the CBD and conditions like primary sclerosing cholangitis [20], gallstones [6], and the upper GI tract [9]. Our study found that the microbial composition in the bile of choledocholithiasis patients was more intricate than that of cholecystolithiasis patients and resembled typical intestinal microbiota. Genera like Prevotella, Bacteroides, and Bifidobacterium may serve as reliable indicators for assessing dietary habits and lifestyle [21]. For example, Prevotella has been linked to non-industrial, agrarian societies with diets primarily based on vegetables rich in polysaccharides and fiber [22]. In contrast, Firmicutes (specifically the Enterococcus genus) and Proteobacteria (particularly the Enterobacteriaceae family) were commonly found in the bile of gallstone patients [23]. The microbial communities in the bile of individuals in group DS and group GS exhibited substantial differences in terms of the types and quantities of specific bacterial strains. Prior research indicated that individuals with choledocholithiasis tend to have higher levels of Firmicutes and Proteobacteria in their bile, particularly Enterobacteriaceae [24]. Factors like bacterial slime, bacterial resistance in bile, and the formation of biofilms are presumed to play crucial roles in gallstone formation. Notably, prolonged exposure to bile salts is known to induce biofilm formation among enteric pathogens within the Enterobacteriaceae family. This pertains to extensively studied bacteria, such as Salmonella and Shigella species, as well as emerging pathogens, including E. coli, K. pneumoniae, Enterococcus spp., and Clostridium spp. [25]. Furthermore, biofilm formation and anaerobic energy metabolism are considered potential microbial mechanisms involved in gallstone formation. Previous analyses of the bacterial composition of stones have identified enterobacteria such as Enterobacter spp., Enterococcus spp., Escherichia spp., Klebsiella spp., and Salmonella spp.
as contributors to gallstone formation. In our investigation, we observed a reduced diversity of bacteria in the bile samples from individuals in group GS compared to those from individuals in group DS. This decreased bacterial diversity in the bile of cholecystolithiasis patients may be attributed to the stagnant bile conditions that are often associated with this condition. Stagnant bile can foster the overgrowth of particular bacterial species, such as Enterococcus spp., which can contribute to the development of gallstones. Additionally, the group GS exhibited significantly lower alpha diversity, as determined by species richness, compared to the group DS. The microbial composition in the bile of patients with gallstones determined by 16S rRNA amplicon sequencing displayed lower diversity compared to the microbiota found in the duodenum [9]. While many bacterial taxa were reduced in bile samples, there were notable abundances of Enterobacteriaceae genera such as Klebsiella and Escherichia, as well as Pyramidobacter [26]. The connection between decreased microbial diversity and gallstone disease has been emphasized by previous research [9,16,27,28], highlighting the need to consider an individual's overall gut microbiota composition when assessing their risk of developing this condition. Patients with recurrent cholelithiasis may exhibit an imbalance in their bile microbial communities, potentially contributing to gallstone formation [15]. Bile microbial communities in patients with primary CBD stones appear to be more evenly distributed than those in patients with recurrent CBD stones. Among patients with recurrent CBD stones, Proteobacteria and Firmicutes are the predominant phyla, with high levels of Proteobacteria and Synergistetes and lower levels of Bacteroidetes and Actinobacteria. In this study, Bacilli and Lactobacillales could serve as potential biomarkers for distinguishing patients with cholelithiasis in the gallbladder, as identified by LEfSe analysis with an LDA score threshold of >3 and the Kruskal-Wallis test (α < 0.05). Metabolic profiling of bile microbial communities in a prior study [6] demonstrated that bile samples were enriched in pathways related to glutathione reductase and putative iron-dependent peroxidase, which are associated with oxidative stress resistance. This implies that bile microbial communities play a role in maintaining redox metabolism and bacterial balance. The observed increase in flagellar assembly suggests that the microbes in the biliary environment may be more mobile. Bile specimens displayed enrichment in pathways related to ascorbate/aldarate metabolism, propanoate metabolism, and glycolysis/gluconeogenesis, whereas starch/sucrose and pentose phosphate metabolism pathways were depleted [29].
This study had several limitations.Firstly, both groups of patients underwent invasive procedures, such as endoscopic retrograde cholangiopancreatography (ERCP) or laparoscopic cholecystectomy (LC), which makes it challenging to distinguish the impact of the procedure itself from the effects of the underlying disease.Secondly, patients with choledocholithiasis or cholelithiasis had complex medical histories, including conditions like diabetes, obesity, and high cholesterol, which could have influenced the composition of their bile microbial communities.Thirdly, while we excluded patients who had used antibiotics within the past three months, some of them may have taken other types of medications during this period, introducing potential bias into the results.Future studies should collect more comprehensive information on medication usage in the months leading up to ERCP to mitigate this potential bias.Fourthly, it is important to acknowledge that the study had a relatively small sample size, which may constrain the ability to detect smaller effects and generalize the findings to a broader population.However, it is essential to consider the challenges associated with obtaining bile samples from healthy individuals.Ethical and safety considerations are paramount, given that collecting bile samples usually involves invasive procedures such as ERCP, percutaneous transhepatic cholangiography, or surgery.These procedures inherently carry risks, including infection, bleeding, and pancreatitis.Subjecting healthy individuals to such risks for research purposes raises ethical concerns that must be carefully addressed.Even though LEfSe is a valuable tool for differential abundance analysis and can help mitigate spurious bias related to a small sample size, it still has limitations due to the sample size.This suggests that there may be some aspects that were not explored in this study and that further investigations with larger sample sizes would be advantageous. Specimen Collection A total of 28 patients who had been diagnosed with either choledocholithiasis or cholecystolithiasis were enrolled at the Department of Internal Medicine, Daejeon St. 
Mary's Hospital (Daejeon, Republic of Korea). The first group (DS; n = 17) consisted of patients with choledocholithiasis who were treated by ERCP. The second group (GS; n = 11) consisted of patients with cholecystolithiasis who were treated by LC. The inclusion criteria for patient selection in the study were stringent, requiring individuals to meet specific requirements. These criteria included having no history of endoscopic sphincterotomy or biliary surgery, the absence of acute cholangitis or acute cholecystitis at the time of diagnosis, no antibiotic therapy for three months preceding procedures, and not taking probiotics or any other medications known to significantly impact the gut microbiome. This was performed to ensure that the study results were not confounded by previous treatments or acute illnesses that could impact the microbial communities in the bile. A 10 mL bile specimen was collected from each patient during either ERCP or LC, and the collection procedure was performed in a sterile manner. During ERCP, bile specimens were collected using side-viewing endoscopes (TJF240/JF-260V; Olympus, Tokyo, Japan) and sterile sphincterotome catheters to avoid contamination. During LC, bile specimens were aspirated from the gallbladder into sterile disposable syringes before removing the gallbladder. All specimens were immediately placed in sterile Falcon 15 mL conical tubes (Corning Inc., New York, NY, USA) and stored at −80 °C until further analysis. This procedure ensured that the bile specimens remained in a stable condition until the bile microbiome was analyzed. DNA Extraction Total DNA was extracted from each non-centrifuged bile specimen using the FastDNA® SPIN Kit for Soil (MP Biomedicals, Santa Ana, CA, USA) following the manufacturer's protocol. The quantity of DNA was measured using a Qubit 2.0 Fluorometer (Life Technologies, Carlsbad, CA, USA), and the quality of DNA was estimated using the E-Gel electrophoresis system (Life Technologies). Massively Parallel 16S rRNA Gene Sequencing Polymerase chain reaction (PCR) amplification was performed with the extracted DNA using fusion primers that targeted the V3 to V4 regions of the 16S rRNA gene. These fusion primers were designed for bacterial identification and were as follows: 341F (5′-AATGATACGGCGACCACCGAGATCTACACXXXXXXXXCGTCGGCAGCGTCAGATGTGTATAAGAGACAGCCTACGGGNGGCWGCAG-3′) and 805R (5′-CAAGCAGAAGACGGCATACGAGATXXXXXXXXGTCTCGTGGGCTCGGAGATGTGTATAAGAGACAGGACTACHVGGGTATCTAATCC-3′). The target region primer sequences are underlined in the original publication. The fusion primers were constructed in the following order: the P5 (P7) graft binding site, the i5 (i7) index, the Nextera consensus sequence, the sequencing adaptor, and the target region sequence. The PCR amplifications followed these conditions: initial denaturation at 95 °C for 3 min, followed by 25 cycles of denaturation at 95 °C for 30 s, primer annealing at 55 °C for 30 s, and extension at 72 °C for 30 s, with a final elongation at 72 °C for 5 min. The amplified PCR products underwent purification, and non-target products were removed using CleanPCR (CleanNA, Waddinxveen, The Netherlands). The purified PCR product was assessed using 1% agarose gel electrophoresis and the Bioanalyzer 2100 (Agilent, Palo Alto, CA, USA) with a DNA 7500 chip. Mixed amplicons were pooled and subjected to 2 × 250 bp paired-end sequencing covering the amplified 16S V3-V4 region using the MiSeq Sequencing system (Illumina Inc., San Diego, CA, USA) at Chunlab, Inc.
(Seoul, Republic of Korea), following the manufacturer's instructions. Bioinformatic Analysis The raw reads were initially subjected to quality checks, and low-quality reads (Phred quality score < Q25) were filtered using Trimmomatic version 0.32. After quality control, the paired-end sequence data (mean, 35,750 reads; range, 20,239 to 72,371) were merged with the fastq_mergepairs command of VSEARCH version 2.13.4 with the default parameters [30]. Primer sequences were trimmed using Myers and Miller's alignment algorithm with a similarity cutoff of 0.8. Non-specific amplicons that did not encode 16S rRNA were detected using nhmmer in the HMMER software package version 3.2.1 with HMM profiles [31]. Unique reads were extracted, and redundant reads were clustered using the derep_fulllength command of VSEARCH [30]. Analysis of Taxonomic Profiling The 16S-based microbial taxonomic profiling (MTP) platform of EzBioCloud Apps (ChunLab, Inc., Seoul, Republic of Korea, https://www.ezbiocloud.net/; accessed on 2 February 2022) was utilized to estimate the metagenomic differences in bile microbial communities. Briefly, taxonomic assignments were performed using the EzBioCloud 16S rRNA database [32], followed by more precise pairwise alignment [30]. Chimeric reads were filtered out using the UCHIME algorithm for reads with <97% similarity [33], and the non-chimeric 16S rRNA database from EzBioCloud was used. Sequences were clustered into operational taxonomic units (OTUs) at 97% identity, and taxonomic positions of representative sequences in each OTU cluster were assigned [34]. Following chimeric filtering, reads that could not be matched to a specific species with less than 97% similarity in the EzBioCloud 16S rRNA database were gathered, and the cluster_fast command [30] was employed to carry out de novo clustering to create additional OTUs. Subsequently, OTUs consisting of a single read were excluded from further analysis. After performing taxonomic profiling on each specimen, we employed the comparative MTP analyzer within the EzBioCloud Apps for a comparative examination of the specimens. For the purpose of comparing diversity indices among specimens, read numbers were normalized via random subsampling, and the diversity indices were computed using Mothur [35]. The computed alpha diversity indices included ACE, Chao, Jackknife, NPShannon, Shannon, Simpson, and phylogenetic diversity, in addition to rarefaction curves and rank abundance curves. An alpha significance level of 0.05, along with an effect size threshold of 3, were employed as criteria for this study. Beta diversity distances were calculated to assess variations in species complexity using various algorithms, such as Bray-Curtis, Fast UniFrac, Generalized UniFrac, and Jensen-Shannon. PCoA clustering analysis was conducted using the comparative MTP analyzer to evaluate differences in species complexity. PCoA plots were generated to facilitate a comparison of microbiota composition among specimens [36]. LEfSe analysis was employed to identify significantly differential taxa between groups based on functional profiles predicted by the PICRUSt [37] and MinPath [26] algorithms. LEfSe places importance on both statistical significance and biological relevance in the identification of biomarkers [38].
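The read-number normalization by random subsampling mentioned above can be illustrated with a short routine that rarefies every sample to the same depth before diversity indices are computed. The OTU table, the chosen depth, and the random values below are hypothetical; the study itself performed this step within the Mothur/EzBioCloud pipeline.

```python
# Minimal sketch of rarefying (random subsampling) per-sample read counts to a common
# depth prior to diversity calculations. Data and depth are illustrative placeholders.
import numpy as np

def rarefy(counts, depth, rng):
    """Randomly subsample `depth` reads from one sample's OTU count vector."""
    reads = np.repeat(np.arange(counts.size), counts)   # expand counts to individual reads
    picked = rng.choice(reads, size=depth, replace=False)
    return np.bincount(picked, minlength=counts.size)

rng = np.random.default_rng(3)
counts = rng.poisson(30, size=(28, 200))                 # hypothetical OTU table (samples x OTUs)
depth = counts.sum(axis=1).min()                         # normalize to the smallest sample
rarefied = np.array([rarefy(row, depth, rng) for row in counts])
print(rarefied.sum(axis=1))                              # every sample now has `depth` reads
```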
Statistical Analysis The metagenomic disparity in bile microbial communities between the two groups was assessed using statistical tests, including the Kruskal-Wallis test, Mann-Whitney U test, and Wilcoxon rank-sum test, in the R software, version 3.1.2 (R Foundation for Statistical Computing, Vienna, Austria). A significance level of p < 0.05 was applied to all statistical analyses. Conclusions Our study revealed that the bile microbial community in patients with choledocholithiasis exhibited higher diversity and increased abundance of specific bacterial strains compared to patients with cholelithiasis. The heightened diversity in the bile microbial community of choledocholithiasis patients suggests the presence of a more intricate and dynamic ecosystem within their bile. Notably, the increased prevalence of certain bacterial strains, namely Ralstonia, Lactobacillus, and Enterococcus, is of particular interest. These strains are known to be associated with inflammation, and inflammation is a recognized risk factor for gallstone formation. Therefore, it is plausible that these bacteria could contribute to gallstone formation by promoting inflammation within the bile ducts. Our study represents a valuable addition to the field of gallstone disease research and holds potential in the identification and development of novel and more effective strategies for the prevention and treatment of this condition. Further experiments are warranted, including the joint analysis of serum or urine metabolomics along with bile microbiota. Author Contributions: Conceptualization, W.P.; methodology, J.P.; software, J.P.; validation, W.P.; formal analysis, J.P. and W.P.; investigation, W.P. and J.P.; resources, W.P.; data curation, W.P.; writing-original draft preparation, W.P. and J.P.; writing-review and editing, W.P. and J.P.; visualization, W.P.; supervision, W.P.; project administration, W.P.; and funding acquisition, J.P. All authors have read and agreed to the published version of the manuscript. Informed Consent Statement: Written informed consent was obtained from the subjects for participation in the clinical and molecular analyses and the publication of the data included in this study. Figure 1. Representation of the averaged taxonomic composition proportions within the bile microbial communities at the phylum and genus levels for the group DS and the group GS. Stacked bar charts illustrating the taxonomic composition at the phylum (a) and genus (b) levels. ETC (et cetera) refers to the population of identified phylum or genus strains less than 1%. The taxonomic profiling of the microbiome at the genus level reveals that 1% of the composition in higher taxonomic ranks is classified as unassigned in group DS, whereas none is observed in group GS. At the phylum level, unassigned taxa were not identified in either group DS or GS. The x-axis represents the value in percentage. Figure 2. Comparative analysis of taxonomic composition within four selected taxa known for their significance in the human gastrointestinal tract. The group DS exhibited a higher relative taxonomic abundance of Bacteroides (a), Enterobacteriaceae (b), Prevotella (c), and Proteobacteria (d) compared to the group GS.
Figure 3. Visualization of rarefaction and rank abundance curves for the group DS and the group GS. Rarefaction curves and species richness indices indicate the extent of comprehensive sampling in group DS (a) and group GS (b). The broader span of rank abundance curves reflects higher relative species abundance, and the smoother curve on the Y-axis signifies greater evenness in the bacterial distribution in group DS (c) and group GS (d). Figure 4. Analysis of alpha diversity in bile microbial communities in the group DS and the group GS at the genus level using massively parallel 16S rRNA gene sequencing. Evaluation of alpha diversity for species richness (a-d) and diversity index (e-h) within bile microbial communities collected from the common bile duct in the choledocholithiasis (group DS) and the gallbladder in the cholelithiasis (group GS).
Figure 5. Beta diversity assessment within bile microbial communities in the group DS and the group GS at the genus level using massively parallel 16S rRNA gene sequencing. (a) Principal coordinate analysis (PCoA) highlighting distinct clustering, indicative of differences in overall bile microbial communities between the group DS (blue solid dot) and the group GS (green solid dot) based on gallstone location. (b,c) Hierarchical clustering analysis using the unweighted pair group method with arithmetic mean (UPGMA) revealed variations in abundance and diversity between the group DS (blue empty box) and the group GS (green empty box) based on generalized UniFrac (b) and UniFrac (c). (d,e) Calculation of diversity index differences between the group DS and the group GS presented in a representative box plot using permutational multivariate analysis of variance (PERMANOVA). Figure 6. Illustration of the taxonomic distribution in the group DS and the group GS generated by linear discriminant analysis effect size (LEfSe) with an LDA score threshold > 3 and the Kruskal-Wallis test set at a significance level of 0.05. Funding: This paper was supported by the Fund of Biomedical Research Institute, Jeonbuk National University Hospital. Institutional Review Board Statement: This study was approved by the Institutional Review Board (IRB) of Daejeon St. Mary's Hospital, The Catholic University of Korea (approval number: DC17TESI0045; date of approval: 14 June 2017). Table 1. Baseline characteristics of patients with choledocholithiasis (group DS) and with cholelithiasis (group GS).
8,976.8
2024-03-01T00:00:00.000
[ "Medicine", "Biology", "Environmental Science" ]
Macroscopic bursting in physiological networks: node or network property? Activity pattern modalities of neuronal ensembles are determined by node properties as well as network structure. For many purposes, it is of interest to be able to relate activity patterns to either node properties or to network properties (or to a combination of both). When in physiological neural networks we observe bursting on a coarse-grained time and space scale, a proper decision on whether bursts are the consequence of individual neurons with an inherent bursting property or whether we are dealing with a genuine network effect has generally not been possible because of the noise in these systems. Here, by linking different orders of time and space scales, we provide a simple coarse-grained criterion for deciding this question. Introduction Neuronal bursting activity is a ubiquitous physiological state described by a precipitate train of spikes followed by a quiescent period. The phenomenon can be observed on the node or on the network level, and it can be produced by neurons bursting by their own virtue ('inherent bursting') or by neurons that individually respond with regular spiking but exhibit a bursting behavior when embedded as a node into a network (possibly conditional on a particular input to the network that drives the latter into a particular functional mode). Bursting has several distinct functional roles. Within the neocortex's layer IV, bursting activity emerges as a collective phenomenon [1], enabling the amplification of weak thalamic input into the network. In neuronal embryonic cultures, we generally observe bursting activity after a few days of implementation, when the neuronal network starts to develop its structure. In this case, bursting is the fingerprint of the search by the network for its optimal configuration, which is indicated by an avalanche structure of the firing events (see figure 1), the size of which has a power-law characteristic [2][3][4][5]. The most obvious instance of a functional role of bursting is the neuronal contact to muscles, where bursting is required for muscle contraction [6]. Besides that, frequency features of bursting neurons have also been shown to be important due to their resonant properties, which allow them to transmit reliably selective information between neuronal circuits [7,8]. Not least, bursting or synchronized activity hallmarks a number of important conditions in human health, as it has been shown to be closely related, e.g., to the emergence of epilepsy [9] and migraine [10] and to pacemaker function [11]. The experimental situation that we focus on in our investigation is a coarse-grained, macroscopic one, where we have no microscopic access to individual neuronal spiking or where we have too many elements to deal with on an individual basis. We will show that the two main alternatives that lead to bursting produce different effects on the network level (that may have physiological relevance), and we will provide a mixed qualitative and quantitative analysis of the difference between the two situations. As bursting is an intrinsically nonlinear phenomenon, we compare two extreme cases of nonlinear models: the weak nonlinear coupling of linear phase oscillators (Kuramoto case [12,13]) and the coupling of intrinsically nonlinear oscillators ('Rulkov neurons' [14]). Because of their distinguished position among the extant models of neuronal dynamics, both have been abundantly used for modeling neurons and neuronal networks. 
Authoritative surveys and examples of their potential use for these tasks are provided, e.g., in [15,16] for the Kuramoto model and in [17,18] for the Rulkov model. Due to their prominence, it is, unfortunately, impossible to do justice to the vast literature available. As Rulkov's model recovers essentially all behaviors of neuronal firing observed in physiology (even regular firing), we may see it as the generic nonlinear model covering the whole of the modeling space, beyond simple phase oscillators onwards to the strongly nonlinear behavior of bursting neurons. Bursting is characterized in all cases by two time scales: a fast one responsible for individual spiking and a slow one responsible for bursting. From a macroscopic and large time-scale point of view, a regular interburst interval between two successive bursts can be considered as the correspondence to one complete oscillation of a regular neuron; i.e., the burst is considered as a single event. The interburst frequency will then be defined by the number of bursts per unit time. In this sense, a phase θ can be associated with the angular position of an equivalent rotating oscillator, and the interburst angular frequency ω is the average of the angular frequencies evaluated over some time series. When coupled, inherently bursting neurons are able to show regular firing and phase synchronization [14]. Based on this, the emergence of neuronal phase synchronization can be seen as equivalent to the mechanism of phase synchronization for coupled Kuramoto oscillators. Regarding phase dynamics, an explicit mapping between the Rulkov model, within a given regime of firing, and the Kuramoto model was recently developed [19]. We will see that, surprisingly, this no longer holds if we, instead, consider frequency as the observable; the frequency ω of coupled bursting neurons and the frequency of Kuramoto oscillators depend distinctively on the coupling strength between the neurons. In contrast to synchronized coupled Kuramoto oscillators, where the coarse-grained frequency (that often displays burst-like characteristics) does not vary with coupling strength (figure 2(a)), for synchronized coupled inherently bursting Rulkov neurons, the mean interburst frequency (MIF) decreases if the coupling strength ε is increased. This unexpected response of inherently bursting neurons is corroborated by physiological observations of coupled pyloric dilator neurons from the lobster stomatogastric ganglion. (For these neurons, it is well known that when they are coupled with artificial electrical synapses, their MIF changes with the coupling strength [20].) One aim of this paper is to explain this unexpected behavior and to exhibit that for the modeling of certain collective neuronal phenomena, the choice of phase oscillators for node dynamics may be insufficient. To work these points out, we first closely follow and then extend, to some extent, the original analysis provided by Rulkov [14]. Frequency dependence of synchronizing Kuramoto neurons We first recollect how the emergent macroscopic frequency depends on the coupling for the prominent Kuramoto oscillator model of neurons (i.e., weakly coupled linear phase oscillators). It is well known [13] that increased coupling among the oscillators leads to the emergence of synchronized phase behavior.
For our investigations, the frequency of oscillation ω_i of the oscillator θ_i, i = 1,…, N, where N is the network size, will be chosen randomly from a symmetric and unimodal probability density function g(ω), and the coupling that we shall consider is global, i.e., of the standard Kuramoto form θ̇_i = ω_i + (ε/N) ∑_{j=1}^{N} sin(θ_j − θ_i), where θ̇_i is the temporal phase evolution of the ith neuron and ε is the global coupling strength. For the simplest case of the coupling of two oscillators (for the earliest experiments regarding biological neurons see [21,22]), we may perform the transformation to the phase difference φ = θ_2 − θ_1 and consider the condition for frequency synchronization, φ̇ = 0. Frequency synchronization will be achieved for all values of the coupling strength ε > ε_c [23,24]. Although for a larger number of oscillators the analytics become increasingly difficult [25], it is known that the frequency average of the coupled Kuramoto oscillators continues to be the average frequency of the uncoupled Kuramoto oscillators, where this property is independent of network topology and network size. This invariance of the mean frequency of Kuramoto oscillators is, however, for most modeling of physiological systems, an unrealistic limitation.
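This invariance is easy to check numerically. The sketch below integrates two globally coupled oscillators of the standard Kuramoto form written above and reports their time-averaged frequencies; the Euler scheme, the natural frequencies, and the coupling values are illustrative choices rather than settings taken from the paper.

```python
# Minimal sketch: two coupled Kuramoto oscillators. The mean observed frequencies lock
# for sufficiently strong coupling, while their average stays at the average natural
# frequency. Integration scheme and parameter values are purely illustrative.
import numpy as np

def mean_frequencies(omega, eps, dt=0.01, steps=100_000):
    theta = np.zeros_like(omega)
    start = theta.copy()
    for _ in range(steps):
        coupling = eps / omega.size * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta = theta + dt * (omega + coupling)            # explicit Euler step
    return (theta - start) / (steps * dt)                  # time-averaged phase velocity

omega = np.array([0.9, 1.1])                               # natural frequencies
for eps in (0.05, 0.5):                                    # below and above locking
    f = mean_frequencies(omega, eps)
    print(f"eps={eps}: mean frequencies {f.round(3)}, average {f.mean():.3f}")
```

For weak coupling the two mean frequencies stay distinct, for strong coupling they lock, and in both cases their average remains the average of the natural frequencies.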
Burst-frequency dependence of Rulkov neurons Most physiological ensembles of neurons that undergo a synchronization process also contain neurons with inherent bursting behavior. (As we will see, neurons may also change from regular to bursting behavior, depending on physiological conditions.) A generic and convenient framework of neurons with different physiological responses is given by Rulkov's two-dimensional (2D) map [14], based on a fast x-variable, x_{n+1} = α/(1 + x_n²) + y_n, and a slow y-variable whose update depends on x_n through a small parameter that sets the slow time scale. Through a decomposition of the fast from the slow time-scale, it is possible to arrive from the 2D system at a 1D system, where the slow variable is replaced by the parameter γ as x_{n+1} = F(x_n, γ) = α/(1 + x_n²) + γ. This dimensional reduction simplifies the bifurcation analysis [14]. As we shall see, the behavior of the 2D system is well described by the simplified system. The gray region in figure 4 shows the accessible parameter space for the 2D Rulkov model. The reduced Rulkov model hosts three important dynamical features: a saddle-node bifurcation, a flip bifurcation, and a crisis [26]. The saddle-node bifurcation, which consists in a collision between a stable and an unstable fixed point, occurs when four conditions are satisfied [27]. From equation (4) for F(x_0, γ_0), the α and γ that satisfy the four conditions are given by an explicit relation [28]. In this case, α and γ must obey a relation that is represented in figure 4 by a full red line. A crisis is a sudden change in the chaotic attractor (here: its disappearance). This happens if the maximum of the function F(x) is mapped into a stable fixed point [29]. This condition is satisfied along a curve in the (α, γ) plane, which is represented in figure 4 by the full green line. From the bifurcations of the reduced Rulkov model, we may understand how the individual dynamics separates into different regimes. Whereas for α ≲ 2.0 the x- and y-trajectories will be attracted to a fixed point (region I in figure 4), for 2.0 ≲ α ≲ 2.58 the y-variable will oscillate between two saddle-node bifurcation points, and the x-variable will be confined to a periodic motion (region II in figure 4). For 2.58 ≲ α < 4.0 we have coexistence of the saddle-node bifurcation and of the flip bifurcation points, which produces the triangle bursting. In this case, when the y-variable reaches the largest saddle-node bifurcation point (γ_max), the x-variable starts to oscillate rapidly as the y-variable decreases towards the flip bifurcation point (γ_fp), and the amplitude of the oscillation decreases. When the fast oscillations disappear, the bursting terminates, and the trajectory is attracted to the fixed point. After the orbit reaches the stable fixed point, the y-variable slowly increases towards γ_max, which completes the loop (region III in figure 4). The square bursting happens if 4.0 < α ≲ 4.62: in this case the y-variable grows up to γ_max. At this point, a chaotic attractor emerges, and the y-variable decreases towards the crisis point (γ_cs). At this point the chaotic attractor vanishes, and, consequently, bursting is terminated. After that, the orbit is attracted by the stable fixed point, and the y-variable increases its value towards γ_max again (region IV in figure 4). Already in Rulkov's original paper [14], a hint can be found that in some area of the parameter space, ω might be basically inversely proportional to the distance |γ_max − γ_fp|. However, no indication was given as to whether this holds generally, or for what network topologies this would be the case. We infer from figure 4 that if the argument holds, then the strength of ε may have a noticeable impact on the burst frequency ω. To pinpoint this, we numerically simulated different network topologies (for diffusive coupling, small-world networks, and scale-free networks) that all exhibit essentially identical behavior. In this paper, though, we restrict ourselves to the case of globally coupled networks. A burst starts whenever y_n = γ_max. Defining the oscillation period as the time between the beginning of two successive bursts, we obtain a corresponding phase φ as φ(n) = 2πk + 2π(n − n_k)/(n_{k+1} − n_k) for n_k ≤ n < n_{k+1}, where n_k denotes the time step at which the kth burst begins. As is shown in figure 5(a), in the triangle-bursting regime, the distance between the flip bifurcation point (γ_fp) and γ_max increases linearly in α. ω, being inversely proportional to the distance between these two points, ω ∝ |γ_max − γ_fp|⁻¹, thus decreases (figures 5(b) and (c)). If we increase α within the square-bursting regime, the distance between the crisis bifurcation point γ_cs and γ_max decreases. The mean interburst frequency thus increases until, towards the end of the square-bursting regime, the proximity to another regime of neuronal activity at α = 8√3/3 ≈ 4.62 and the small distance between the bifurcation points cause other types of modulations to become relevant for the interburst frequency, forcing the MIF to decay again. We proceed from the single-neuron case to the collective behavior by investigating a globally coupled set of Rulkov neurons, where ε is the coupling strength, N is the network size, and i = 1,…, N. Also for synchronized Rulkov neurons, a reduction of the 2D model to a 1D model is possible, so that we may use the same arguments that we have used for isolated Rulkov neurons. The critical points for global coupling corresponding to equations (5)-(7) are given in [26]. With increased coupling (dashed lines), there is a displacement of the bifurcation lines and, as a consequence, a change in the MIF. It is now evident that increased coupling increases the distance between the bifurcation points (figure 6(a)). For triangle bursting, the MIF decreases almost linearly with the coupling, and for square bursting it decreases with an approximately quadratic dependence (figure 6(b)).
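The single-neuron MIF discussed here can be estimated directly from a simulated trajectory by detecting burst onsets and averaging interburst intervals, as in the sketch below. The slow-variable update y_{n+1} = y_n − μ(x_n + 1) + μσ, the parameter values, and the threshold-based burst detector are assumptions made for illustration and are not claimed to match the exact formulation or settings used in the paper.

```python
# Minimal sketch: estimate the mean interburst frequency (MIF) of one Rulkov-type neuron.
# Fast map: x_{n+1} = alpha/(1+x_n^2) + y_n (as in the text).
# Slow update and parameters below are assumed for illustration only.
import numpy as np

def rulkov_mif(alpha, mu=0.001, sigma=0.0, n_steps=200_000, burn_in=20_000, gap=50):
    x, y = -1.0, -3.5
    prev_x = x
    spike_times = []
    for n in range(n_steps):
        x, y = alpha / (1.0 + x * x) + y, y - mu * (x + 1.0) + mu * sigma
        if n >= burn_in and prev_x <= 0.0 < x:        # upward crossing of 0 = one spike
            spike_times.append(n)
        prev_x = x
    spike_times = np.array(spike_times)
    if spike_times.size < 2:
        return 0.0
    # spikes closer than `gap` iterations belong to the same burst; keep burst onsets only
    burst_starts = spike_times[np.concatenate(([True], np.diff(spike_times) > gap))]
    return 1.0 / np.diff(burst_starts).mean() if burst_starts.size > 1 else 0.0

for alpha in (3.0, 4.2):                              # triangle- vs square-bursting range (illustrative)
    print(f"alpha = {alpha}: MIF ≈ {rulkov_mif(alpha):.4f} bursts per iteration")
```

A sweep over α (or, for coupled maps, over the coupling strength) with such a routine gives a quick qualitative picture of how the MIF shifts between regimes.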
While the dependence between frequency and bifurcation point distances decreases in an approximately linear manner for triangle bursting, for square bursting the decrease is nonlinear (figure 6(c)). These results demonstrate that, indeed, inside a given neuronal activity regime (triangle bursting, square bursting), the increase of the distance between the bifurcation points in consequence of an increase of ε (or of α) decreases the interburst frequency ω. This dependence is observed at the neuronal level (figure 5(c)), as well as if the neurons are coupled (figure 6(c)). If the increase (or decrease) of the coupling strength is too big, there will be additional, non-continuous changes in neuronal activity. Abrupt changes can be seen upon a change between activity regimes, e.g., when, upon the increase of α, we change from triangle bursting into square bursting (figure 5(b)), or in dependence of the coupling strength. The difference between the coupling-independent frequency of the Kuramoto model and the essentially strictly monotonous dependence on the coupling strength ε, interrupted by bifurcation events, exhibited by Rulkov neurons should therefore be seen as the hallmark of a preponderance of inherently versus non-inherently bursting neurons. Conclusions Despite the differences between the Rulkov and Kuramoto neuron models, in the periodic regime 2.0 ≲ α ≲ 2.58 the coupling effect on the spike frequency can be neglected, and the Kuramoto model can be used to model phase and frequency synchronization. In the bursting regime of neurons, α ≳ 2.58, the differences between the models matter, as for inherently bursting neurons, consistently a decrease of the MIF with the coupling strength was observed for global, small-world, and scale-free topologies of sizes varying from N = 100 up to N = 10000. Related observations made earlier for different systems ([30] for diffusive coupling, [31] for small-world networks, and [32] for scale-free networks) corroborate the interpretation that our observation deals with a general feature of inherently bursting neurons. We have focused on the MIF of Rulkov's model of bursting neurons. This should, however, not be considered a particular modeling case, but rather as a generic framework that reflects dynamical properties essential for both regular and bursting behavior. Neurons that can become inherently bursting may therefore be the more typical case than neurons that do not offer this possibility. The dependence of the MIF on the coupling exhibited by these neurons (figure 6) then would (at least in matters of frequency) prohibit a modeling by simple phase oscillators, which still is the predominant approach. The apparent independence of the observed phenomenon with respect to network topology makes it a rare example of a nontrivial invariant of network topology. In view of the exhibited different origins that can underlie bursting, we suggest that our observation will be helpful on different levels of physiological coarse-graining. Particularly on higher levels of abstraction of the representation of physiology (i.e., working with higher levels of modularity), it will be to a lesser degree evident whether for modeling subsystems, a phase oscillator model suffices, or whether a bursting model is necessary. Our results provide a simple experimentally accessible indicator for deciding this question.
It is conceivable that for neuronal networks with well-controlled architecture, experimental procedures (e.g., the addition of serotonin or related substances) could be developed to mimic an increase or decrease of the coupling strength ε among the nodes of a biological neural network, obtaining in this way the desired information about the dominant nature of the nodes. Traditionally, most emphasis in the analysis of complex physiological networks has been dedicated to the microscale and the macroscale. Our results, however, put a warning sign against too straightforward an extrapolation from a microscopic level to the macroscopic scale: mesoscale effects, such as the pattern of bursting investigated here from this angle, can generate quite unexpected effects that can cause these extrapolations to fail. This implies that increased efforts on the mesoscale will be necessary to better understand physiological neuronal systems that span different levels of hierarchical organization.
4,061.4
2015-05-27T00:00:00.000
[ "Computer Science" ]
Biology for biomimetics I: function as an interdisciplinary bridge in bio-inspired design In bio-inspired design, the concept of ‘function’ allows engineers and designers to move between biological models and human applications. Abstracting a problem to general functions allows designers to look to traits that perform analogous functions in biological organisms. However, the idea of function can mean different things across fields, presenting challenges for interdisciplinary research. Here we review core ideas in biology that relate to the concept of ‘function,’ including adaptation, tradeoffs, and fitness, as a companion to bio-inspired design approaches. We align these ideas with a top-down approach in biomimetics, where engineers or designers start with a problem of interest and look to biology for ideas. We review how one can explore a range of biological analogies for a given function by considering function across different parts of an organism’s life, such as acquiring nutrients or avoiding disease. Engineers may also draw inspiration from biological traits or systems that exhibit a particular function, but did not necessarily evolve to do so. Such an evolutionary perspective is important to how biodesigners search biological space for ideas. A consideration of the evolution of trait function can also clarify potential trade-offs and biological models that may be more promising for an application. This core set of concepts from evolutionary and organismal biology can aid engineers and designers in their search for biological inspiration. Introduction From mini-drones inspired by insect flight [1,2] to natural product discovery [3,4] and the naked mole rat as a study organism in cancer biology [5,6], we have much to learn from the over 10 million species of organisms on earth. Bio-inspired design is a problem-solving approach that looks to how organisms tackle problems analogous to ours through evolutionary adaptations acquired over millions of years [7,8]. Bio-inspired approaches have become increasingly common over the last two decades in fields as diverse as engineering, chemistry, medicine and architecture [9][10][11][12]. Taking inspiration from biology greatly expands the generation of novel ideas and technologies [13][14][15], especially when engineers and designers are collaborating with biologists [16][17][18]. Bio-inspired approaches often improve design solutions [14,19] and can be more sustainable in terms of material and energy use [20]. While bio-inspired design can be a powerful problem-solving approach, it comes with challenges of being incredibly interdisciplinary [21,22]. In many cases, practitioners of bio-inspired design are limited by the siloed nature of human work, and biologists are often not involved in the design process-only 10%-40% of collaborations involve biologists [9,23]. Conducting bio-inspired design without an extensive biology background is possible, but difficult. Practitioners new to biology can be overwhelmed by biological diversity, trapped by the classic examples in biomimetics, limited by search terms, or misguided in their selection of biological models [24,25]. To overcome these challenges, we seek to bring more biological concepts, and biologists themselves, into the entire biomimetic process [9,17]. 
Here, we give an overview of the idea of function in biology and bioinspired design to help biodesigners generate better search terms, access a greater diversity of innovative biological models, and lay the groundwork for selecting the most relevant biological models. This review is as much for engineers interested in bioinspired design as it is for biologists who want to work with designers and engineers in this collaborative space. An overview of 'function' as a bridge For an engineer or designer looking to biology for creative ideas, the concept of 'function' provides a bridge from human applications to analogous biological traits [18,[26][27][28][29][30][31][32]. For instance, the online database AskNature categorizes the diversity of biological organisms based on the function of their traits [32]. These same functions describe human challenges that commonly come up in various engineering and design applications. As a result, an engineer interested in improving air filtration systems might search 'how does nature filter' to find dozens of biological mechanisms for filtration-from flamingoes to marine tunicates. A number of other databases and tools for bio-inspired design also use the concept of function as the bridge between biology and engineering [33][34][35]. The idea of function is not new to biology; it has long played a central role across biological disciplines [36][37][38]. For instance, in organismal biology, 'form and function' often speaks to how trait characteristics affect trait performance for the individual [39,40] and in ecosystem ecology, biologists often speak of traits that drive the 'functional roles' of different species [41,42]. The functional approach in bio-inspired design has allowed an explosion of biomimetic research over the last two decades ([9,27]; note we use bioinspired design and biomimetics interchangeably [43]). However, distinct approaches to learning from nature and profession-specific jargon can get in the way of interdisciplinary work between biology and engineering [44][45][46]. Engineers and biologists often approach the idea of function in different ways [47,48], and functions of human products do not necessarily map easily onto biological functions [49]. Engineers and designers studying biology often focus on the 'immediate function' of a trait, and often assume that natural selection has molded traits to perform perfectly in their environment (e.g. [50]). Such misconceptions of natural selection predispose biomimetic approaches to overlook the limits of applying evolved, biological traits to human design [51][52][53]. Indeed, misunderstandings about biology, and separation from the biologists who could provide clarity, are common issues in bio-inspired design [54,55]. While there are many tools to aid designers in the biomimetic process, we are missing a more thorough integration of biology throughout the biomimetic process [9,17,21,48,56]. In this manuscript, we review core biology content relevant to the use of 'function' as a bridge between biology and engineering in bio-inspired design. We do not expect engineers to become biologists or biologists to become engineers. Rather, we want to integrate essential insights from evolutionary and ecological biology into the biomimetics process so that engineers and biologists can be more effective collaborators [48,56,57]. We structure this 'conceptual review' as a companion to a 'problem-based,' 'challenge-to-biology,' or 'technology-pull' approach in bio-inspired design (figure 1 [21,58]).
Such an approach begins with an analysis of the human challenge and its translation into functions [32]. These functions are used to find relevant biological analogies, which inspire solutions to the original challenge. In this paper, we synthesize key concepts in evolutionary biology (e.g., adaptation, tradeoffs, function) that are relevant to the steps of this bio-inspired design approach. Throughout this text, we will use the action of 'crushing' as a guiding example with which we can apply and practice the key concepts in evolutionary biology. We focus in on this singular example for simplicity and continuity of communication, not because 'crushing,' or engineering more broadly, are the only relevant fields. We encourage the reader to bridge from this example to their domain of interest. Articulate design function In a top-down approach to bio-inspired design, where we move from challenge to biology, we often begin with a problem analysis to refine the initial problem statement [32,[59][60][61]. For example, say we were interested in improving the design of a jackhammer. We know that these machines can produce tremendous forces to crush materials like rock and concrete. However, these same forces risk harming the machine operator due to strong vibrations (e.g., Raynaud's syndrome [62]). Thus, a broad challenge of 'improving machines' becomes the more specific task of 'protecting the operators' hands from harmful vibrations'. After narrowing the problem, we must articulate it in a way that we can then bridge to biology. Searching biology journals for 'jackhammers' or even 'protecting hands' will not yield relevant results. This is where the idea of 'function' helps by creating a bridge between the problem statement and an expanded biological solutions space; this is the first step in allowing biodesigners to move beyond literal analogies. In design and engineering, 'function' can speak to uses and activities, structural or mechanical systems, or any number of ways that a product or building works or operates, such as energy use or insulation. In order to tap into the wealth of biological models, we must describe an engineering or design challenge using language that is less connected to the particularities of human design and thus more applicable to searching biological knowledge [21,58]. Figure 1. The bio-inspired design process of [58] (steps 1-8 on the right) for technology-pull or problem-driven approaches. In this manuscript, we use this process as a backdrop and layer on a more direct consideration of the concept of 'function,' shown in blue. After identifying and analyzing a problem (step 1), we first identify 'design functions' as part of abstracting a technical problem (step 2). In doing so, we can use the concept of function as a bridge to biology, and transpose our problem to organisms and biological traits (step 3). Next, we can broaden the range of potential biological models by exploring not only analogous functions, but also functions across fitness contexts, extremes across species, and indirect analogies (step 4). By considering tradeoffs across biological functions, we can help refine the most appropriate biological model (step 5). While this review focuses on finding biological models, subsequent steps in the bioinspired design process work to understand and abstract the biological strategies, then transpose these mechanisms to technology. [58] © 2017 IOP Publishing. Reproduced with permission. All rights reserved.
In the case of improving jackhammer design, we might be interested in finding biological models that 'crush,' or 'apply force.' We will be especially interested in those biological models that perform these functions 'without causing damage to self.' With these search terms in hand, we might use various databases to start exploring ideas, starting with a list of synonyms for 'crush,' such as 'smash, break, shatter, or mash.' As in most design processes, the first step is to generate a list of ideas that is as broad as possible. We next review biological concepts related to function that allow us to expand the search of biological models in the biomimetic process (figure 1). Explore analogous functions Now that we have our design function, we can start to explore the biological world for possible solutions to our human challenges (figure 1). If we are trying to improve the design of machines that crush things or 'generate force,' we might first consider animals that are also crushing or smashing things in their environment, such as their food. The beak of the finch is an example of a biological trait that comes to mind for the function of 'crush,' and also a classic model to study how evolution works. We will consider this example to illustrate how biologists understand the idea of function, as well as what other biological concepts inform the bio-inspired design process. Imagine a finch beak crushing a seed (figure 2), which allows the finch to access energy and nutrients from the seed. The amount of force generated depends on the depth and length of the finch beak: shorter beaks generate more force (consistent with a basic lever system), and thicker beaks withstand this force, thereby preventing damage to the finch's skull (figure 2 [63,64]). In the Galapagos Islands, finches with shorter, thicker beaks are more likely to survive droughts because they can access the energy and nutrients in seeds that are hard to crack; in dry periods, food is hard to come by and these tough seeds become an important resource (figure 2 [65,66]). Finches with shorter and thicker beaks that survive the drought pass any genes related to beak shape to the offspring that form the next generation, resulting in a population shift in beak shape and underlying gene frequencies over time [65,66]. Such changes in gene frequencies in a population over a few generations are often termed 'microevolutionary' processes [65,67]. Over time, these gradual shifts can result in a biological trait adapted to an environmental challenge, such as cracking a seed, and divergence across species to match different environments ('macroevolution'). We can build on this initial finch example to search biological diversity for more biological models that 'crush' or 'smash.' What other species or traits crush or smash? While the finch beak is a classic example of 'crushing' in biology, it is far from the winner in a contest across species to generate the greatest crushing force. For instance, smasher mantis shrimp can generate 1500 N of force as they attack their prey [68] and hyena jaws can crush bone to extract marrow [69,70] (figure 3). We can expand our list of organisms for inspiration to the extremes of crushing and smashing, in this case, all in the context of nutrition.

Figure 2. The beak of the finch: form and function. Ground finches with shorter and thicker beaks produce stronger bite forces [63]. During droughts, thick, armored seeds increase in relative abundance, selecting for finches with thicker beaks. Following a severe drought in 1977, the relative frequency of finches with deep beaks increased [65,66]. Images by Lizzie Harper, graphs modified from [63] and [66]. Illustration by Lizzie Harper www.lizzieharper.co.uk ©2022.
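To make the lever intuition above concrete, the following minimal sketch (in Python) computes bite force from a simple lever relationship (bite force = muscle force × in-lever/out-lever); all values are illustrative placeholders, not measurements from the finch literature.

# Toy lever model of beak bite force; the numbers are illustrative only.
def bite_force(muscle_force_n, in_lever_m, out_lever_m):
    # F_bite = F_muscle * (in-lever / out-lever): a shorter beak (smaller
    # out-lever) transmits more of the same muscle force to the seed.
    return muscle_force_n * (in_lever_m / out_lever_m)

print(bite_force(10.0, 0.010, 0.020))  # longer beak  -> 5.0 N
print(bite_force(10.0, 0.010, 0.012))  # shorter beak -> ~8.3 N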
Explore function across fitness contexts In our examples so far, we have focused on 'crushing' with respect to extracting nutrients or energy from food (figures 2 and 3). However, we know that organisms crush or smash things for many different reasons. While we may be interested in the immediate function of a biological trait, to generate a more complete list of biological models for consideration, we should explore across the many ways an immediate function of a trait (e.g. "beak crushing") may contribute to the 'fitness' of an organism. Fitness is a central concept in evolutionary biology: it is the reproductive contribution of an individual to the gene pool of subsequent generations [71,72]. However, it is challenging to quantify as fitness goes beyond just 'number of offspring'-it is affected by a range of traits that describe how an individual (and its subsequent relatives) survive and reproduce in an environment. These traits are sometimes termed 'life history traits' [73][74][75], and range from survival to reproductive age and lifespan to number of offspring in a given reproductive event and frequency of reproductive events [76]. Life history traits are affected by underlying traits related to the necessities of life-energy and nutrient gain, avoiding disease and predators, and buffering oneself against abiotic onslaughts. We refer to these as 'fitness contexts,' summarized in table 1. Considering how a function relates to a range of fitness contexts can help us to further broaden our initial list of biological models in the biomimetic process.

Figure 3. Finding a diversity of biological models that emulate the direct analogy of 'crushing.' Mantis shrimp, hyenas and crocodiles all crush or smash their prey. To consider extremes across species, we must control for variation across animals in body size. Modified from [68]. Images by Lizzie Harper (hyena), Daniel Yudi Miyahara Nakamura (mantis shrimp, Wikimedia Commons, CC BY 4.0), or in the public domain (crocodile). Reproduced with permission from [67]. © 2019, Oxford University Press. Illustration by Lizzie Harper www.lizzieharper.co.uk ©2022.

In the context of mate competition, many species use weapons to crush, pry, or smash opponents in contests over mates (figure 4) [77]. What about in the realm of homeostasis and physiology? Here we might think of physical crushing or degradation as part of digestion, such as the gizzard of a bird, a muscular organ filled with grit that crushes food particles after swallowing (figure 4). Use immediate and ultimate functions to identify tradeoffs Considering the immediate function of a biological trait, and how it ultimately contributes to fitness, can point us to a diversity of potential biological models (figure 4), and can also give clues to possible tradeoffs which may be relevant in bio-inspired design [78]. To illustrate this, let us return to the beak of the finch (figure 2), which, it turns out, does more than just crush seeds (figure 5).
While a shorter and thicker beak is more likely to break through a tough seed and result in survival through droughts [69,79], short, thick beaks also result in simpler songs that are preferred less by mates [80][81][82]. In addition, small overhanging structures at the end of the beak are particularly effective for parasite removal, but not necessarily crushing seeds [83,84]. Thus, we may be primarily interested in how the beak crushes seeds for food, but the fact that the beak is also used for singing to attract a mate and preening to remove parasites can result in tradeoffs. Evolution may not 'optimize' the trait with respect to the function an engineer cares the most about in their own design-evolution instead is acting on 'fitness.' These observations illustrate an inherent tension between how an engineer and a biologist might come at the idea of 'function,' which can influence how we move between the disciplines [48,49]. For a human-built machine, 'function' speaks to what the machine does, which may be evaluated with specific performance metrics such as how fast the machine moves or how much force it generates. To a biologist, this is similar to what philosophers call the 'causal role' or 'proximate function' [38,85,86]. In other words, this is what a trait is doing in the here and now for the organism-what we have termed here the 'immediate function' (figure 6). However, when biologists discuss the function of an organism's trait, they also refer to the evolutionary forces that brought the trait into existence. This is often called the 'ultimate function' of a trait, or what philosophers call the 'etiological function' [87]. This distinction matters to biodesigners because delineating the immediate versus the ultimate functions of a trait can help in understanding tradeoffs across functions, and thus limitations to copying a particular trait in a human application. The fact that biological traits do not serve just one immediate function can result in tradeoffs. In the finch, the beak may have an immediate function of crushing a seed, but it is the range of ways the beak contributes to fitness which explains how the beak came into existence through evolution by natural selection [88]. In other words, the immediate function of a trait that an engineer may be interested in is not always the same as the ultimate function (figure 5 [85,89,90]). Engineers and designers do recognize the long-term causes of function in their own design, such as the iterative history of a product [91]. However, in general, human problem solvers tend to focus on 'function' in an immediate sense, while biologists often look to the natural processes that brought a current function into existence [87,92]. We can use this broader evolutionary view to overcome some of the trade-offs and limitations inherent in drawing inspiration from biological traits. Overcoming trade-offs: different organisms, different tradeoffs Biological traits are not always 'optimized' with respect to a design function of interest as they are often doing a range of things for the organism. In general, we can overcome this challenge by looking across species for ideas, as different species often come at the same function in different ways, and sometimes through innovations that allow them to overcome or reduce a tradeoff. First, we can look to organisms that have reached the extremes of a form-function relationship that is relevant to biodesign applications (e.g., figure 3).
Species at the extreme of a form-function relationship often have unique mechanisms that underlie that adaptation. For example, mantis shrimp generate forces by punching their prey with their front limbs; adaptations in this limb and supporting structures (the saddle) allow fast movements and shock-absorption during the blow [68] (figure 3). We might also consider extreme selective conditions, which may prioritize selection on a function of interest over other functions. For example, variation in beak structure may be more closely tied to seed crushing in a bird species in a desert environment where resources are limited and selection for crushing performance is strong, while selection on song quality and mate choice is relaxed (e.g. [93]). Second, trade-offs between multiple functions performed by the same trait may be navigated differently across different species. Some species may generate extreme forces because their biology results in different form-function relationships between a trait and different fitness contexts. For example, woodpecker skulls can withstand very high decelerations (on the order of 1000 g) when hitting their heads on trees [94,95]. The same beak used for acquiring energy is also used to advertise to mates because woodpeckers attract mates by banging their heads on things (drumming), not by singing, as we see in finches [96]. This alignment of the form-function relationship across selective contexts suggests that perhaps the biological trait is more likely to be 'optimized' with respect to an immediate function of interest (withstanding force), relative to a species where this relationship varies across fitness contexts (figure 7). Often, over evolutionary time, the emergence of specialization around a particular biological function may further shift the tradeoff landscape, reducing possible tradeoffs. For example, the evolution of organs specialized for crushing, such as a gizzard (figure 4), reduces interactions between potential functions of a trait. In other words, the emergence of a new trait (the gizzard) reduces the crushing role of the beak, altering the trade-off landscape. Move beyond the direct analogy Initial explorations in bio-inspired design often gravitate towards direct analogies between a design function and a biological function, such as a machine crushing and a beak crushing. However, we can expand the range of biological models we might consider by moving beyond a direct analogy (figure 8). We often discover relevant traits by considering the opposite function to the function initially considered [32]. For instance, 'withstanding force' may be relevant to improving jackhammer design as much as 'generating force.' The shape and materials in turtle shells or mussel shells may give ideas on how the structure of a hammer handle could absorb the shock of impact (figure 8 [97,98]). To creatively expand our list of possible biological analogies, we can think about biological models where the evolved function of a trait is entirely unrelated to the design function. For example, tree roots are capable of moving and crushing rock and cement as they snake their way through the ground in search of nutrients or water [99,100]. While roots obtaining resources does contribute to the individual's fitness, the design function of 'crushing' did not evolve due to selection on crushing. To emphasize this point, we use a somewhat absurd example. Tree branches are capable of crushing entire cars when a tree comes crashing down in a windstorm (figure 8).
In the first example, crushing arose as a byproduct of selection on root 'foraging' [101], whereas in the second example, crushing arises as a byproduct of a very large organism that is susceptible to falling in windstorms [102]. In the tree crushing examples (figure 8), the designer might still be inspired by the biological trait, even if the evolved function is not aligned with the design function. For example, perhaps the architecture of tree roots provides an idea for generating force. Here, a designer is building on a biological trait in a novel design context [103]. For example, NASA has recently engineered sound absorbing devices based on clusters of reed stalks, which happen to be highly effective sound absorbers [104]. However, these wetland plants did not evolve to absorb sound; instead, the acoustic properties of reeds are a byproduct of selection on reed structure and emerge at the level of a group of reeds.

Figure 7. (A) In finches, beaks contribute to fitness by generating forces when cracking seeds to acquire energy and nutrients. However, they contribute to fitness in other functional contexts where force generation is less important (preening, singing). In this example, there are tradeoffs for trait characteristics such as beak depth between the function of interest in an engineering context (force) and other aspects of trait function from an evolutionary perspective (preening, singing). In other words, evolution is not necessarily optimizing 'generating force': the improvement of one function performed by this trait will likely come at the expense of the deterioration of another function performed by the same trait. All finch images are by Lizzie Harper (www.lizzieharper.co.uk ©2022). (B) Functional alignment: In contrast, for woodpeckers, beak traits related to force generation are related to fitness in similar ways in both a foraging context and a mate selection context, as they drum to advertise for mates, not sing. Thus, we might say the functions are 'aligned.' Additionally, woodpeckers often rely on non-beak traits for parasite removal, such as anting behavior, resulting in no link between the trait of interest and performance in a disease avoidance context. In this case, evolutionary selection for better food acquisition and mate attraction are aligned around beak and skull characteristics related to generating force. Top and bottom woodpecker images are by Lizzie Harper (www.lizzieharper.co.uk ©2021); the middle image is by the von Wright brothers (Svenska Faglar), in the public domain.

Identifying and clarifying examples of such coopted and emergent function is important in bioinspired design in part because it allows the designer to move into new creative space. Determining the evolutionary origins of a trait, relative to the design applications, helps to refine how a bio-designer will search biological space for inspiration. For example, to explore other examples of organisms incidentally crushing things in their environment (like the tree crushing a car), one might first explore the evolution of large size in animals [105] or wind resistance in trees [106,107]. Such a search might lead to studies of how elephants withstand forces while running [108] or how whales experience forces while jumping out of the water [109], which could generate insights related to withstanding force.
In these examples, the biological model did not evolve to crush, but considering crushing as an emergent property of that trait can link it to the focal problem. Bio-inspired design can benefit from clarifying the functional alignment between design and evolution [110]. Finally, we note that in many cases, the evolutionary context for a biological trait may be uncertain. The immediate function of a trait may be clear (butterfly wing scales reflect light in a particular way), but the evolutionary function of the trait may be unclear (e.g., does it function in mate choice or predator avoidance?). We may be able to surmise function based on other examples, or we may need to study the model further. And in other cases, knowing the full evolutionary story of a trait may not always be necessary for the utility of the trait in bio-inspired design. For instance, the ultimate function of the shark denticles that inspired 'Sharklet' is unclear, but the product is still useful [111]. Regardless, a more thorough understanding of the related biological traits opens more creative doors for bio-inspired design and reduces some of the limitations of copying biological traits that may be 'imperfect' from an engineering perspective. Conclusions and next steps In this review, we have developed a companion guide to the idea of 'function' for bio-designers to consider alongside the steps of a biomimetic process (figure 1 [58]). In the first steps of the bio-inspired design process, we translate our challenge of interest to design functions. These functions are used as a bridge between application and biology. We then offer key steps to generate a greater range of biological models for a given function: (1) explore analogous functions in the biological world, (2) explore across fitness contexts, (3) explore across extremes of a function, (4) consider tradeoff structure, and (5) explore indirect analogies. Considering the immediate versus the ultimate function of a biological trait can give clues to tradeoffs across different functions. To avoid the limitations of copying biological traits that may not be optimized for a particular function of interest, explore a range of biological models, as they each come with different tradeoff structures. The topics reviewed here allow us to navigate the idea of function as a bridge between biology and engineering in the biomimetic process. While this paper gives an overview of these steps, we have also built a set of activities that can be used in the classroom or in a design exploration (see appendix). The process reviewed here is the first step in building a diverse set of biological ideas for inspiration in the design process. The next step, as we detail in the next paper in this series, involves expanding this list even further, by building a toolset for navigating the vast space of biological diversity. While this paper has focused on the top-down, or challenge-to-biology approach in biomimetics, it is important to note that the concept of function also works to move from biology to challenges, or the bottom-up approach in biomimetics. In considering the immediate and ultimate functions of biological traits and adaptations, we can brainstorm which human applications may benefit from further studying that organism or trait. Data availability statement No new data were created or analysed in this study.
Acknowledgments This work was supported by a grant from the John Templeton Foundation on Function as a Bridge between Biology and Design, within the broader "Science of Purpose" program (Award 10996). We are grateful to students in ESR's course in bioinspired design (GCC3015/5015) and animal behavior (EEB3412W) for input and comments over the years on the concepts included in this manuscript. We are grateful to comments and critiques provided by the Snell-Rood lab, and members of the broader Templeton project, including Mary Guzowski, William Weber, Jessica Rossi-Mastracci, Amanda Hund, Mike Travisano, Ruth Shaw, Alan Love, Mark Borrello, and Gillian Roehrig. Certain images in this publication have been obtained by the author(s) from the Wikipedia/Wikimedia website, where they were made available under a Creative Commons licence or stated to be in the public domain. Please see individual figure captions in this publication for details. To the extent that the law allows, IOP Publishing disclaim any liability that any person may suffer as a result of accessing, using or forwarding the image(s). Any reuse rights should be checked and permission should be sought if necessary from Wikipedia/Wikimedia and/or the copyright owner (as appropriate) before using or forwarding the image(s). Author contributions E S R led conceptualization and funding acquisition. Content was developed by both authors, with writing led by E S R. D S led content critiques and revisions, with both authors editing the manuscript. Key terms Biological trait: A feature or subunit of an organism, such as a leg, liver, or behavioral response. Evolution: A change in gene frequencies in a population over time or space. Natural selection: Variation across individuals in survival and reproduction underlain by differences in traits (aka "phenotypes"). Can lead to evolution by natural selection when variation in fitness is tied to underlying genetic variation. Fitness: The genetic contribution of an individual to future generations, generally a function of reproduction (of self or relatives) and survival (of self, offspring and relatives). Immediate function: What is the trait doing right now for an individual? For instance, the immediate function of 'hunger' is to drive an individual to eat. Ultimate function: The evolutionary or longerterm function of a biological trait-how does a trait contribute to fitness, and how does this explain how it came into existence? For instance, the ultimate function of 'hunger' is to obtain nutrients, which contribute to fitness; thus, genes tied to this physiological drive increased in frequency in populations over time. Emergence of function: When a function of interest in a human application emerges from a system of evolving parts, but without selection on that particular function. For instance, ecosystems may store carbon but are not necessarily selected to do so. Appendix: Companion activities to 'Function as a bridge in Bio-inspired Design' In this activity, students will go through the initial steps of a bio-inspired design process, using a topdown or challenge-to-biology approach. We will draw on a more extensive exploration of the biology to help expand the idea space in creative ways. The current activity is written for a non-majors undergraduate course lab period, but can be modified depending on time constraints and participant background. Problem analysis 1. Mind map a big problem. 
Choose an overarching problem-depending on the class, this could be 'climate change,' 'pandemics,' or 'building envelopes' . Explode this problem into components through a mind map-with the problem at the center of a page, start drawing out as many pieces of this problem as one can think of, linking related pieces with lines. Choose and refine a sub-problem. Highlight common themes in sub-problems (either within or between mind-maps). Choose one of these subproblems to explore in more detail. Find a relevant Wikipedia page or article to learn a little more about the sub-problem to refine it further. For example, you might go from 'climate change' to 'green energy' to 'solar panels' to 'solar cells that work well at high latitudes' . Generate a list of design functions. After learning more about a sub-problem, list as many related 'functions' that you might be trying to build into a new product or application. Think about what you want this product to do-what verbs come to mind? For the solar cell example, this may be things like 'capture or harvest light,' especially in low light conditions. Consult a thesaurus to help expand your list of design functions. Biological analogies: explore immediate and evolutionary function 4. Start a list of biological analogies. For your focal problem, begin a list of biological analogies-what biological traits come to mind that are performing an analogous function for an organism? List what initially pops into your head, and then use various resources to expand this list, such as the database 'Asknature,' field guides to different taxonomic groups, a walk in the woods (or natural history museum), or talking to someone with expertise in different organisms. We will continue to add to this list of 'biology analogies' . Choose one trait and map to fitness. From your initial list, choose one analogy that plays out at the level of an individual organism, not a system or group of organisms. For instance, in the solar cell example, you might focus on the pigments of a butterfly wing, but avoid system-level analogies like how a forest reflects light. How does this trait contribute to the survival and reproduction (fitness) of this organism-consider the function that originally led you to this organism, but push yourself to think of how this same trait would apply to other fitness contexts (see table 1). It may help to do a little research on the biology or natural history of the species using field guides, Wikipedia, or other resources. Explore function across fitness contexts. Take a look at your initial list of biological analogies. Can you assign each one to the different ways in which a trait may contribute to fitness (table 1). Is there a category that is missing from your list? If so, continue to explore; for instance, perhaps most of the ideas that initially come to mind have to do with energy or nutrition-can you find examples having to do with defense? In looking at your list, are there traits that show up where the evolutionary function is unclear? Biological analogies: explore function extremes 7. Find the record holders. Start to push your list of biological analogies into the extremes. Can you find the organisms that stand out with respect to your function of interest. This may require some literature searching (see step #12) beyond a Google search, which often first hits on the charismatic record holders that get attention in the media. Map the alignment between form and function for a few of these systems. 
Choose a handful of organisms from your growing list and try to sketch out the relationship between form and function-for instance, as beak depth goes up, 'function' of the beak in terms of bite force goes up, but performance of the beak in terms of singing rate goes down. In many cases, you will likely be limited by research on these form-function relationships and you may have to hypothesize a relationship based on knowledge of related species, physical interactions, etc. Does the likelihood of tradeoffs apply differently across your species? Move beyond the direct analogy 9. Explore the opposite function. Return to your list of design functions (#2). Expand this list by considering the opposite of these action verbs. Does this add to your list of biology analogies? Explore when there is no biological function. In your list of biology analogies, which stand out as having no analogous evolutionary function-for instance, the traits 'do' something that may be of a design interest, but it is unclear whether they evolved to do this. (In some cases, it may be unclear!) Can you expand this list of biological models where a function of interest is a byproduct of selection on something else? Move beyond basic databases-using literature searches. Many bio-inspired design databases are currently the tip of the iceberg of biological diversity. You can expand your list of biological analogies further with searches of the biology literature. The database Web of Science is particularly useful as you can search using Boolean operators, and also select databases that go far back in the literature. Google Scholar can be complementary as it searches full text, but you have less control over your search (a minimal programmatic example of casting a wide net with a public scholarly database is sketched at the end of this appendix). An example search for papers related to bird beaks that produce force might be: (beak* or bill*) and (function* or performanc* or 'form-and-function') and (bite* or force*). Move beyond literature searches-ask a biologist. While there is a lot we can do with the help of the internet, sometimes it is even more helpful to ask an expert. How do you find a biologist to talk with? One of the easiest ways is through Web of Science: for a given search, look at the 'authors' side tab; you may want to restrict your search to the most recent ten years to get people actively working in an area. You can further narrow the list to researchers in a given location. Send them an email (or two)-biologists often love to talk about their organism!
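As a companion to the literature-search step above, the following minimal sketch (in Python) queries the public Crossref REST API (api.crossref.org) for scholarly records. Note that this is a plain keyword query rather than Web of Science-style Boolean/wildcard syntax, and the search phrase is only an example to adapt to your own design function.

import requests

def search_crossref(keywords, rows=20):
    # Free-text bibliographic query against the Crossref "works" endpoint.
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": keywords, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

# Example: a broad sweep for beak/bite-force papers; refine the keywords as needed.
for item in search_crossref("finch beak bite force form function"):
    title = item.get("title", ["(no title)"])[0]
    print(item.get("DOI", "?"), "-", title)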
8,904
2023-07-10T00:00:00.000
[ "Biology" ]
Spherical α-MnO2 Supported on N-KB as Efficient Electrocatalyst for Oxygen Reduction in Al–Air Battery Traditional noble metal platinum (Pt) is regarded as a bifunctional oxygen catalyst due to its highly catalytic efficiency, but its commercial availability and application is often restricted by high cost. Herein, a cheap and effective catalyst mixed with α-MnO2 and nitrogen-doped Ketjenblack (N-KB) (denoted as MnO2-SM150-0.5) is examined as a potential electrocatalyst in oxygen reduction reactions (ORR) and oxygen evolution reactions (OER). This α-MnO2 is prepared by redox reaction between K2S2O8 and MnSO4 in acid conditions with a facile hydrothermal process (named the SM method). As a result, MnO2-SM150-0.5 exhibits a good catalytic performance for ORR in alkaline solution, and this result is comparable to a Pt/C catalyst. Moreover, this catalyst also shows superior durability and methanol tolerance compared with a Pt/C catalyst. It also displays a discharge voltage (~1.28 V) at a discharge density of 50 mA cm−2 in homemade Al–air batteries that is higher than commercial 20% Pt/C (~1.19 V). The superior electrocatalytic performance of MnO2-SM150-0.5 could be attributed to its higher Mn3+/Mn4+ ratio and the synergistic effect between MnO2 and the nitrogen-doped KB. This study provides a novel strategy for the preparation of an MnO2-based composite electrocatalyst. Introduction The efficiency of oxygen reduction reactions (ORR) and oxygen evolution reactions (OER) are critical to the energy conversion efficiency of metal air batteries because of their sluggish reaction kinetics [1]. In recent years, Pt-based materials have been developed as typical catalysts for ORR/OER with high catalytic activity [2][3][4]. However, their widespread application in commerce is seriously hindered, because the noble metal Pt is very expensive, and its reserves in earth are scarce. Therefore, in recent years, non-precious metal-based materials were extensively studied for developing a comparable candidate for Pt-based catalysts. Especially, a manganese-based material (such as manganese oxide) with satisfying catalytic activity has been developed recently, because it possesses a lot of advantages, including cheapness, abundance, environmental friendliness, structural flexibility, and bifunctional catalytic activity for ORR/OER [5][6][7][8][9]. Nevertheless, many factors have been found to play important roles in improving the catalytic activity of manganese dioxide for ORR. Firstly, the crystalline phase of manganese dioxide is critical. It has been reported that α-MnO 2 exhibits better catalytic activity than other crystalline phases, because of its abundant di-µ-oxo bridges [10][11][12]. Secondly, micromorphology also has a great influence on its performance. Previous studies had shown that metal oxides with nanostructures exhibited good electrocatalytic performance because of their relatively large surface area and big pore volume, which exposes more active sites and facilitates full contact with electrolyte [13][14][15][16]. Thirdly, Mn 3+ is believed to favor ORR/OER due to the single electron occupation in σ*-orbital (e g ). Therefore, more content of Mn 3+ in MnO 2 could promote its electrocatalytic performance [17,18]. Although manganese-based materials have good electrocatalytic performance as reported, the superiority of synergic catalysts cannot be neglected. 
Recently, the research emphasis of manganese oxide has focused on ion doping and its composition with other materials, and results indicate that such materials have better catalytic properties than bare manganese oxide. For example, Fe (or Co) ion-doped MnO2 nanosheets (MONSs) grown on the internal surface of macroporous carbon showed improved ORR catalytic activity compared with the un-doped one, because of the co-electrocatalytic function of MnO2 and the Fe (or Co) ion [19]. Moreover, a catalyst of Mn2O3-doped MnO supported by reduced graphene oxide (rGO) was proved to have better ORR catalytic performance and stronger stability than pure MnO. It was believed that the coexisting metal oxides with different valences and the rGO had promoted the catalytic performance [20]. In addition, carbon is one of the most important materials for electron transfer, and it could improve the catalytic activity. The coupling of MnO2 with carbon materials may improve its catalytic activities. Herein, MnO2 spheres are synthesized by the redox reaction between K2S2O8 and MnSO4 in acid conditions (denoted as the SM method). These MnO2 spheres mixed with nitrogen-doped Ketjenblack (N-KB) are used as a catalyst for ORR/OER applications. This study examines the morphology, structure, and electrochemical properties of the MnO2 samples by scanning electron microscopy (SEM), X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), and electrochemical testing. The relationships between the synthesis conditions and the electrocatalytic activity of these MnO2-N-KB catalysts are also discussed. Finally, the proposed enhanced catalytic mechanism of these composite catalysts is investigated by controlling the Mn3+ content in MnO2. This work offers a new strategy for the scalable preparation of more efficient MnO2 bifunctional oxygen catalysts for ORR and OER. Preparation of MnO2 First, 1.69 g of manganese sulfate monohydrate, 1.74 g of potassium sulfate, and 2.71 g of potassium persulfate were dissolved into 80 mL of deionized water, agitating at least 5 min to form a homogeneous aqueous solution. Then, 4 mL of 37% hydrochloric acid was added into the aqueous solution. After agitating another 5 min, the resulting solution was transferred into a 100-mL Teflon-lined stainless steel shell autoclave, heated to 120 °C in an oven, and kept at that temperature for 12 h. After it was cooled to room temperature naturally, the as-obtained product was filtrated under vacuum with a membrane of 0.15-µm pore diameter and dried overnight at 80 °C. The obtained product was denoted as MnO2-SM120-12 (S: potassium persulfate; M: manganese sulfate). For comparison, the control samples MnO2-SM150-0.5 and MnO2-SM120-0.5 were also prepared. For MnO2-SM150-0.5, the synthesis process was the same as that of MnO2-SM120-12, but the reaction temperature and reaction time were 150 °C and 30 min, respectively. For MnO2-SM120-0.5, the synthesis process was the same as that of MnO2-SM120-12, but the reaction temperature and reaction time were 120 °C and 30 min, respectively. Preparation of N-KB First, 0.2 g of Ketjenblack (KB) and 1.2 g of melamine were dispersed in 80 mL of deionized water by ultrasonic treatment for 30 min. Then, the resulting solution was enclosed into a 100-mL Teflon-lined stainless steel shell autoclave, heated to 120 °C in an oven, and kept at that temperature for 24 h.
After it was cooled down to room temperature naturally, the as-obtained product was filtrated under vacuum with a membrane of 0.15-µm pore diameter, and dried overnight at 80 °C. After being ground in an agate mortar for more than 10 min, the resulting powder was transferred to a porcelain boat, which was then covered with another porcelain boat, and further wrapped with copper foil. The treated porcelain boat was placed into a tube furnace, and then heated to 650 °C for 2 h at a heating rate of 5 °C min−1 in an argon flow. After that, it was naturally cooled down to room temperature, and the as-prepared sample was denoted as N-KB. Characterization The morphologies of the as-prepared catalysts were characterized by using a scanning electron microscope (FIB 600i, FEI, Hillsboro, OR, USA). The structures of these samples were characterized by X-ray diffraction (XRD, Rigaku D/Max 2550, Tokyo, Japan) with Cu-Kα radiation (λ = 1.5406 Å). Elemental and valence state analyses were carried out by X-ray photoelectron spectroscopy (XPS, K-Alpha1063 spectrometer, Thermo Scientific Co., Waltham, MA, USA). Electrochemical Measurements For electrochemical measurements, 2 mg of as-prepared MnO2 and 4 mg of N-KB were dispersed in 950 µL of anhydrous ethanol by sonication for 20 min. Then, 50 µL of Nafion solution (5 wt %) was added and sonicated for another 20 min to get a homogeneous catalytic ink. Then, 8 µL of the ink was loaded onto the surface of a glassy carbon disk (5 mm in diameter, homemade electrode), and the catalyst loading amount was 0.0815 mg cm−2 (calculated by the mass of MnO2). For comparison, the commercial 20 wt % Pt/C (Johnson Matthey, Royston, UK) was also prepared with the same method. Linear sweep voltammetry (LSV) and cyclic voltammetry (CV) measurements for the ORR were performed using an RDE (rotating disk electrode) as the working electrode, a double fluid boundary Ag/AgCl electrode as the reference electrode, and a platinum wire as the counter electrode in 0.1 M KOH solution saturated with oxygen on a CHI760E electrochemical workstation. All of the potentials were finally converted to values versus the reversible hydrogen electrode (RHE). The ORR catalytic stabilities were evaluated by the half-wave potential decay (ΔE1/2) before and after the accelerated durability test (ADT). The ADT was performed by using these catalysts in the ORR for 5000 cycles. These experiments were carried out in O2-saturated 0.1 M KOH solution at room temperature, and the voltage was selected from 0.57 V to 0.82 V (versus RHE) with a scan rate of 100 mV s−1. Methanol tolerance testing of the catalysts was carried out in an O2-saturated mixed electrolyte of 0.1 M KOH and 1.0 M CH3OH [21]. To further verify the ORR mechanism, the RRDE (rotating ring disk electrode) technique was used, and the peroxide percentage and the electron transfer number were calculated based on the following equations [22]: n = 4Id/(Id + Ir/N) and H2O2 (%) = 200 × (Ir/N)/(Id + Ir/N), where Id represents the disk current, Ir represents the ring current, N represents the current collection efficiency of the Pt ring (0.37), and n means the electron transfer number [22,23]. The OER activities of the as-prepared samples were also measured by RDE experiments at a scan rate of 10 mV s−1 with a rotation speed of 1600 rpm. The current density was operated from 0 to 16.5 mA cm−2. The long-term durability OER measurements of the catalysts were performed by using chronopotentiometry.
The tests were conducted at a current density of 10 mA cm−2, and the test time was 8000 s. Finally, the EIS tests were scanned in the frequency range of 10^5-0.1 Hz at 1.665 V (versus RHE) with an amplitude of 5 mV in 0.1 M KOH solution [24]. Electrochemical Test of Al-Air Batteries For the Al-air full battery test, a polished aluminum strip was used as the anode, and a 6 mol L−1 KOH solution containing 0.01 mol L−1 of Na2SnO3, 0.0075 mol L−1 of ZnO, and 0.0005 mol L−1 of In(OH)3 was used as the electrolyte. The air electrodes were composed of a gas diffusion layer, a foam nickel current collector, and a catalytic layer. The catalytic layer was fabricated as follows. The as-prepared catalysts (10 mg), N-KB (30 mg), and the 60 wt % polytetrafluoroethylene (PTFE) aqueous solution (~50 mg) were mixed and agitated continuously until a paste appeared. Then, this paste was rolled with a glass rod until it turned into a 2 cm × 2 cm film. In the end, the film and the gas diffusion layer were pressed onto the two sides of nickel foam under a pressure of 10 MPa, and dried at 60 °C overnight. For comparison, the air electrode using commercial 20 wt % Pt/C catalyst was also fabricated with the same method. The full battery performance was measured with a Neware Battery Testing System (Shenzhen, China). A homemade electrochemical cell was used for Al-air battery measurements, with a net volume size of 50 mm × 32 mm × 50 mm, and an air hole with a diameter of 10 mm was used for the test [22,25]. Micromorphological and Microstructural Properties The micromorphology of the synthesized samples was characterized by SEM. Figure 1 shows the SEM images of MnO2-SM120-12 (a-c), MnO2-SM120-0.5 (d-f), and MnO2-SM150-0.5 (g-i). As we can see from these images, all three samples display spherical morphology. The average size of the MnO2-SM120-12 sample was ~5.0 µm, and these spheres were composed of nanorods with average diameters of 50 nm. The average diameter of the MnO2-SM150-0.5 spherical particles was ~4.2 µm, and the spheres are composed of nanorods with an average diameter of 27 nm. The spherical MnO2-SM120-0.5 sample shows an average diameter of ~2.9 µm, and is composed of nanorods (whose average diameter is about 20 nm). MnO2-SM150-0.5 shows a smaller size than MnO2-SM120-12, which is mainly because of the higher hydrothermal reaction temperature and shorter hydrothermal reaction time. Although the manganese dioxide formed at a higher reaction speed at the higher temperature, the shorter reaction time produced samples with a smaller particle size [10]. However, the MnO2-SM150-0.5 sample shows a larger size than MnO2-SM120-0.5; this should be attributed to the effect of the higher reacting temperature, which causes a quicker reaction speed, and the quicker speed leads to a bigger size in the same reaction time. The diffraction peaks in the XRD patterns could be indexed to the facets of α-MnO2, including the (541) and (312) facets. These sharp peaks indicated the good crystallinity of the samples, and no other diffraction peaks were observed in these patterns, further indicating the purity of these samples. The standard card of α-MnO2 (PDF#44-0141) was presented for comparison, and the diffraction peaks matched well with this standard card. This revealed that the as-synthesized MnO2 particles are α-MnO2 particles.
Obviously, the diffraction peak intensities of MnO2-SM150-0.5 (orange line) were lower than those of the other two samples; this is probably because more defects existed in this sample, which could improve its catalytic activities. Moreover, it has been generally accepted that α-MnO2 exhibits better catalytic activity than other crystalline phases because of its abundant di-µ-oxo bridges [11]. Thus, the α-MnO2 particles prepared here could be used for high-performance catalytic applications. XPS measurements were carried out for further elemental and valence state analysis of the MnO2-SM120-12, MnO2-SM120-0.5, and MnO2-SM150-0.5 samples. As shown in Figure 3, the high-resolution XPS spectra of Mn 2p for MnO2-SM120-12 (Figure 3a), MnO2-SM120-0.5 (Figure 3b), and MnO2-SM150-0.5 (Figure 3c) are presented, and four peaks located at 642.30, 643.25, 653.80, and 654.80 eV were obtained by a peak-differentiating technique. These peaks were assigned to the Mn3+ (2p3/2), Mn4+ (2p3/2), Mn3+ (2p1/2), and Mn4+ (2p1/2) species, respectively [6,26,27]. Moreover, based on the XPS results, the peak areas of Mn3+ and Mn4+ in each sample are presented in Table 1, and the ratio values of Mn3+/Mn4+ (Figure 3d) for MnO2-SM120-12, MnO2-SM120-0.5, and MnO2-SM150-0.5 were calculated as 0.813, 0.512 and 0.965, respectively. As shown in Figure 3d, the Mn3+/Mn4+ ratio in MnO2-SM150-0.5 (0.965) was obviously higher than that in MnO2-SM120-12 (0.813), which should be ascribed to the higher reacting temperature that causes the faster reaction rate. It is believed that the faster reaction rate could cause a higher Mn3+ content in MnO2. A higher content of Mn3+ in MnO2 can lead to a better electrocatalytic performance, due to the single electron occupation in the σ*-orbital (eg) of Mn3+ [17,18,28]. In addition, a shorter reacting time could produce MnO2-SM150-0.5 with a smaller size (Figure 1), which would impact its electrocatalytic performance. The Mn3+/Mn4+ ratio in MnO2-SM150-0.5 (0.965) was also higher than that in MnO2-SM120-0.5 (0.512), which should be because the faster reaction rate could cause a higher Mn3+ content in MnO2 [10]. ORR Activity and Stability The LSV curves for the ORR of MnO2-SM120-12, MnO2-SM120-0.5, and MnO2-SM150-0.5 are shown in Figure 4a; these measurements were carried out in 0.1 M KOH solution at a rotation speed of 1600 rpm. The MnO2-SM150-0.5 sample showed a better ORR catalytic performance than MnO2-SM120-12 under the same test conditions, because the size of the particles in the MnO2-SM150-0.5 sample was smaller, and it contained more Mn3+ in comparison with the MnO2-SM120-12 sample [28,29]. It is clear that MnO2-SM150-0.5 also exhibited a better ORR catalytic performance than the MnO2-SM120-0.5 sample, mainly because of the higher Mn3+ content. As shown in Figure 4b, MnO2-SM150-0.5 (supported on N-KB) exhibited a much better ORR catalytic performance than bare N-KB, with a half-wave potential of 0.76 V and a limiting current density of 6.0 mA cm−2. This phenomenon should be attributed to the synergetic catalytic activity of α-MnO2 and N-KB. It is believed that the intrinsically abundant di-µ-oxo bridges in α-MnO2 could facilitate the ORR process [7,12,19,28]. As observed, the limiting current density (6.0 mA cm−2) of MnO2-SM150-0.5 (supported on N-KB) was higher than that of Pt/C (~5.0 mA cm−2), despite its lower half-wave potential (0.76 V) compared with Pt/C (~0.82 V).
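As a check on how the electron transfer number reported below follows from the measured currents, the following minimal sketch (in Python) evaluates the standard RRDE relations quoted in the Methods (collection efficiency N = 0.37); the current values used here are placeholders, not data from this study.

# RRDE metrics from disk/ring current magnitudes; N is the Pt ring collection efficiency.
N_COLLECTION = 0.37

def rrde_metrics(i_disk, i_ring, n_eff=N_COLLECTION):
    # n = 4*Id / (Id + Ir/N), H2O2(%) = 200*(Ir/N) / (Id + Ir/N)
    ring_corrected = i_ring / n_eff
    n = 4.0 * i_disk / (i_disk + ring_corrected)
    h2o2_pct = 200.0 * ring_corrected / (i_disk + ring_corrected)
    return n, h2o2_pct

# Example: a small ring current relative to the disk current gives n close to 4,
# i.e., a nearly complete four-electron reduction of O2.
n, h2o2 = rrde_metrics(i_disk=1.0e-3, i_ring=1.5e-5)
print(round(n, 2), round(h2o2, 1))   # ~3.84 and ~7.8 %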
As shown in Figure 4c, the average n value of MnO2-SM150-0.5 was 3.85 (from 3.78 to 3.95), confirming a four-electron (4e−) oxygen reduction mechanism. The catalytic stabilities of the MnO2-SM150-0.5 and Pt/C samples in the ORR were evaluated by the half-wave potential decay (ΔE1/2) before and after the accelerated durability test (ADT) [30]. The ADT was performed by using these catalysts in the ORR for 5000 cycles. These experiments were carried out in an O2-saturated 0.1 M KOH solution at room temperature, and the voltage was selected from 0.57 V to 0.82 V (versus RHE) with a scan rate of 100 mV s−1. As shown in Figure 5a, the half-wave potential of the MnO2-SM150-0.5 sample (supported on N-KB) exhibited a negative shift of ~33 mV after 5000 cycles; this shift is slightly larger than that of Pt/C (~22 mV, Figure 5b), which is probably because of the inferior electrical conductivity of MnO2 compared with the noble metal Pt. OER Activity and Stability In addition, the OER activities of the three as-prepared samples (MnO2-SM120-12, MnO2-SM120-0.5, and MnO2-SM150-0.5) were measured to further compare their electrocatalytic performances, and they were tested by RDE experiments at a scan rate of 10 mV s−1 with a rotation speed of 1600 rpm. Generally, OER activities are judged by the potential as the current density increases from 0 mA cm−2 to 16.5 mA cm−2 [31]. As presented in Figure 6a, MnO2-SM150-0.5 showed a more negative shift than the MnO2-SM120-12 and MnO2-SM120-0.5 samples when the current density increased from 0 mA cm−2 to 16.5 mA cm−2, which means that MnO2-SM150-0.5 could catalyze the OER at a lower overpotential than the MnO2-SM120-12 and MnO2-SM120-0.5 samples. In other words, the MnO2-SM150-0.5 sample exhibited much better OER kinetic behavior than the MnO2-SM120-12 and MnO2-SM120-0.5 samples. Similar to the ORR, the higher content of Mn3+ and smaller size of the α-MnO2 nanorods also played a vital role in the OER. Moreover, the stability tests of the OER activity of the MnO2-SM150-0.5 and Pt/C samples are presented. The long-term durability measurements of the catalysts were performed by using chronopotentiometry. The tests were conducted at a current density of 10 mA cm−2, and the test time was 8000 s. As presented in Figure 6b, after reaction for 8000 s, MnO2-SM150-0.5 showed good stability in the OER, though slightly worse than Pt/C, indicating that the practical catalytic performance of MnO2-SM150-0.5 needs further improvement to replace Pt/C. EIS Performance The charge transfer efficiency of the catalyst plays an important role in the OER process, as a high electron transfer efficiency indicates a high catalytic activity. Electrochemical impedance spectroscopy (EIS) is a good method for understanding charge transfer efficiency, because the arc radius of the EIS curve indicates the magnitude of the electrical resistance. A lower resistance of the sample implies a higher conductivity. Thus, the EIS method is used for deeper insights into the OER process. The EIS tests were scanned in the frequency range of 10^5-0.1 Hz at 1.665 V (versus RHE) with an amplitude of 5 mV in 0.1 M KOH solution [17,32]. The Nyquist plots are shown in Figure 7, in which the EIS data (Figure 7 left) have been fitted according to the equivalent circuit (Figure 7 right).
The equivalent circuit consisted of Rs, Rf, Rct, C, and CPE, representing the uncompensated solution resistance, intrinsic resistance of the catalyst, charge transfer resistance, capacitance of the catalysts, and constant phase element of the double layer, respectively. All of the fitting parameters are listed in Table 2. Methanol Tolerance Performance The methanol tolerance of catalysts is usually used to evaluate the performance of ORR catalysts in DMFCs (direct methanol fuel cells). The LSV and CV experimental groups (using the MnO2-SM150-0.5 sample (supported on N-KB) and the Pt/C sample as catalysts) and a control group were carried out in an O2-saturated 0.1 M KOH electrolyte for testing the methanol tolerance. These results are presented in Figure 8. As we can see in Figure 8a, when the ORR process was carried out in an O2-saturated 0.1 M KOH electrolyte with 1.0 M of methanol, the MnO2-SM150-0.5 catalyst exhibited excellent methanol tolerance properties, because there was no negative shift of the onset potential and no oxidation currents of methanol, but rather only a slight decrease of the limiting current density (~0.25 mA cm−2) (Figure 8b). However, a strong oxidation current of methanol is shown in Figure 8d with the Pt/C sample by comparison with the background line. Moreover, a larger negative shift of the onset potential for the ORR (from ~1.0 V to ~0.52 V) is observed in Figure 8c, indicating the poor methanol tolerance of the Pt/C sample in comparison with the MnO2-SM150-0.5 catalyst [9]. Application in Al-Air Battery To further evaluate the practical catalytic performance of the MnO2-SM150-0.5 sample in an Al-air battery, cell voltages at various current densities and constant current discharge tests were carried out. The commercial Pt/C sample was also investigated for comparison. As the results show in Figure 9a, overall, the cell polarization curve with the MnO2-SM150-0.5 sample was better than that of Pt/C. Specifically, when the discharge current density was lower than 150 mA cm−2, the cell voltages of MnO2-SM150-0.5 were higher than those of the Pt/C sample. However, the cell voltages of the MnO2-SM150-0.5 sample were almost equal to those of Pt/C in the range of 150-180 mA cm−2. As shown in Figure 9b, MnO2-SM150-0.5 showed a discharge voltage platform of ~1.24 V, which was slightly higher than that of Pt/C (~1.19 V) at the end of the discharge test with a constant current density of 50 mA cm−2 in homemade Al-air batteries. However, it can be observed that the MnO2-SM150-0.5 sample took about 2 h to reach a smooth discharge voltage platform, which was longer than that of Pt/C (less than 1 h), indicating that the practical catalytic performance of MnO2-SM150-0.5 needs further improvement to replace Pt/C. Conclusions In this work, three kinds of α-MnO2 microspheres composed of nanorods were synthesized in acid conditions using K2S2O8 and MnSO4 as raw materials by a facile hydrothermal process. The influences of the Mn3+ content on the electrocatalytic activity for the ORR/OER were also studied. These results demonstrated that catalysts with a higher Mn3+ content perform better in electrocatalytic applications. In particular, the MnO2-SM150-0.5 sample with the higher Mn3+ content showed a better electrocatalytic performance than the MnO2-SM120-0.5 and MnO2-SM120-12 samples.
The half-wave potential (E1/2) of the MnO2-SM150-0.5/N-KB sample was 0.76 V (versus RHE), and the limiting current density was about 6.0 mA cm−2. This result is comparable to those of Pt/C (0.82 V and ~5.0 mA cm−2, respectively). Moreover, the MnO2-SM150-0.5 sample showed an excellent methanol tolerance compared to the Pt/C sample. In addition, the MnO2-SM150-0.5 sample exhibited good ORR catalytic stability, as its half-wave potential only shifted negatively by ~33 mV after 5000 cycles. Besides, the MnO2-SM150-0.5 sample exhibited a higher discharge voltage (1.28 V) at a current density of 50 mA cm−2 than the Pt/C catalyst (1.19 V) when used in homemade Al-air batteries as the cathode catalyst. Thus, this strategy for the preparation of α-MnO2 could provide a scalable preparation method for ORR/OER applications.
5,801.6
2018-04-01T00:00:00.000
[ "Chemistry" ]
Influence of Temperature-Dependent Properties of Aluminum Alloy on Evolution of Plastic Strain and Residual Stress during Quenching Process To lessen quenching residual stresses in aluminum alloy components, theory analysis, quenching experiments, and numerical simulation were applied to investigate the influence of temperature-dependent material properties on the evolution of plastic strain and stress in the forged 2A14 aluminum alloy components during quenching process. The results show that the thermal expansion coefficients, yield strengths, and elastic moduli played key roles in determining the magnitude of plastic strains. To produce a certain plastic strain, the temperature difference increased with decreasing temperature. It means that the cooling rates at high temperatures play an important role in determining residual stresses. Only reducing the cooling rate at low temperatures does not reduce residual stresses. An optimized quenching process can minimize the residual stresses and guarantee superior mechanical properties. In the quenching process, the cooling rates were low at temperatures above 450 ◦C and were high at temperatures below 400 ◦C. Introduction Heat-treatable aluminum alloys are widely used to fabricate forged components used in aerospace and aircraft industry for weight reduction.Solution quenching and aging treatments are applied to the aluminum alloy components to obtain high mechanical properties [1].For this purpose, fast cooling rates are required to avoid or limit precipitation during the quenching process [2].However, high cooling rates result in serious inhomogeneous deformations and lead to high residual stresses [3], which deteriorates the mechanical properties and dimensional stability [4,5], and also have important impact on fatigue properties [6][7][8].Therefore, it is important to study how to control the residual stresses. Residual stresses can be relieved by plastic deformation.For example, Koç et al. [9] found that compression and stretching processes could reduce the residual stress of 7050 forged blocks by more than 90%.However, this technique cannot be used for complicated cross-section components [10].Many researchers employed vibration methods to release the residual stresses [11,12] and studied their mechanisms [13].However, this technique is confined to large components since the high amplitude of vibration deforms the thinner or smaller components [5].Heat treatment methods are also applied to relieve the residual stresses.Dong et al. [10] successfully lowered the residual stress with uphill quenching and thermal-cold cycling processes.Sun et al. [14] relieved the residual stress by repeated heating of the samples at high heating rates and subsequent artificial aging treatment.Although the technologies described above are effective in relieving the residual stresses, the need of special tools and procedures increases their costs.Therefore, an economical approach is to minimize the residual stresses during quenching.Residual stresses decrease with reducing cooling rates.Our previous work showed that, as compared to the mechanical properties, the residual stress decreased faster with decreasing cooling rates [15].Dong et al. [10] balanced the residual stress and mechanical properties of the thin aluminum alloy plates with warm water at 80 • C. 
The mechanical performances are mainly determined by the cooling rates in the quenching sensitivity temperature range. The quenching sensitivity can be studied by time-temperature-transformation/properties (TTT/TTP) curves. For example, by using TTP curves, Li et al. [16] showed that the mechanical performances of the 6063 aluminum alloy are determined by the cooling rates in the quenching sensitivity temperature range from 410 to 300 °C. Based on these results, they proposed a step quenching method to balance the mechanical properties and residual stresses. Its cooling rates are high in the quenching sensitivity temperature range to guarantee mechanical performance, and low in the other ranges to reduce residual stresses. Our previous work [15] showed that such a step quenching method could balance the residual stress and mechanical performance. However, the step quenching technology is mainly designed based on the characteristics of the quench-induced phase transformation of the material. It is better to utilize the characteristics of both quenching residual stress evolution and quench-induced phase transformation to design the quenching technology. Nallathambi et al. [17] studied the influence of thermal, metallurgical, and mechanical properties on the final distortion and residual stresses during quenching. However, the effect of material properties on the evolution of residual stress at different temperatures is still unclear. This information is needed to guide the design of quenching technology. This work investigated the effect of the temperature-dependent material properties of forged 2A14 aluminum alloy on the evolution of stress and plastic strain during the quenching process using a constructed model and numerical simulation methods. Moreover, the effect of cooling rates on residual stresses was studied by quenching the samples with different quenching technologies. Table 1 lists all the nomenclature used in this paper.
Theoretical Analysis Residual stresses are determined by inhomogeneous plastic strains. They usually increase with increasing magnitude of plastic strain. During quenching, the inhomogeneous temperature distribution causes inhomogeneous thermal expansion strain (εT), which produces thermal stresses and strains to maintain the force and shape balance. When the thermal stress exceeds the yield strength, plastic strain occurs. Material properties change with temperature. The yield strengths and elastic moduli of 2A14 aluminum alloy decrease with increasing temperature. This implies that the characteristics of the evolution of thermal stresses and plastic strains may be different at different temperatures during quenching. In this work, a model with two units was proposed to investigate these characteristics (Figure 1). As shown in Figure 1a, the temperatures of the two units are initially the same, leading to the same thermal expansions. The thermal expansion was obtained using Equation (1), where 0 °C was taken as the reference temperature. As quenching proceeded, a temperature difference (Tg) between the two units appeared, as shown in Figure 1b; it resulted in strains to balance the unequal thermal expansions. To simplify the analysis, we presumed that the final heights of the two units are the same (Equation (2)). Moreover, the two units have the same magnitude of thermal stress. The strain comprises elastic strain and plastic strain, as described by Equation (4) in reference [18]. In this model, we presumed that the thermal stress always exceeds the yield strength (Equation (3)). Combining Equations (1)-(4), the plastic strain of unit i is obtained from Equation (5). Equations (6)-(8) introduce three variables, A, B, and C, to simplify the form of Equation (5). Equation (9) was obtained by combining Equations (5)-(8), and was used to investigate the influence of temperature-dependent material properties on plastic strains at different temperatures but with the same temperature difference. As shown in Table 2, the plastic moduli were obtained from a previous work [19]. The other material properties are described in Section 2.3. In these equations, α, E, Ep, and σ0.2 are the thermal expansion coefficient, elastic modulus, plastic modulus, and yield strength, respectively; T is the temperature of the unit; ∆T is the temperature difference between T and the reference temperature of 0 °C; εT, ε, εe, εp, and σ are the thermal expansion, total strain, elastic strain, plastic strain, and thermal stress, respectively; and the subscript i denotes the number of the unit.
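As a concrete illustration of the balance described above, the following minimal Python sketch solves a two-unit problem numerically rather than through the closed form of Equation (9). It assumes bilinear stress-strain behaviour (elastic up to σ0.2, then linear hardening with plastic modulus Ep), equal and opposite stresses in the two units, and a common final height; the property curves in demo_props are invented placeholders, not the measured 2A14 data of Tables 2 and 5.

```python
# Minimal numerical sketch of the two-unit model described above.
# Assumptions (placeholders, not the paper's data): bilinear stress-strain
# behaviour, force balance between the two units (sigma_1 = -sigma_2), and a
# common final height so thermal + mechanical strains of both units match.

from scipy.optimize import brentq

def mechanical_strain(sigma, E, Ep, sigma_y):
    """Total mechanical strain (elastic + plastic) at a given stress."""
    if abs(sigma) <= sigma_y:
        return sigma / E
    sign = 1.0 if sigma > 0 else -1.0
    return sigma / E + sign * (abs(sigma) - sigma_y) / Ep

def plastic_strain(sigma, Ep, sigma_y):
    """Plastic part of the strain (zero inside the elastic range)."""
    return max(abs(sigma) - sigma_y, 0.0) / Ep

def two_unit_plastic_strain(T_hot, T_gap, props):
    """Return the plastic strain of the hot unit for a temperature gap T_gap.

    props(T) must return (alpha, E, Ep, sigma_y) at temperature T (deg C);
    the reference temperature is 0 deg C, as in the text.
    """
    T_cold = T_hot - T_gap
    a1, E1, Ep1, sy1 = props(T_hot)
    a2, E2, Ep2, sy2 = props(T_cold)
    eps_T1, eps_T2 = a1 * T_hot, a2 * T_cold   # free thermal expansions

    # Compatibility: eps_T1 + eps_m1(sigma) = eps_T2 + eps_m2(-sigma)
    def residual(sigma):
        return (eps_T1 + mechanical_strain(sigma, E1, Ep1, sy1)
                - eps_T2 - mechanical_strain(-sigma, E2, Ep2, sy2))

    sigma = brentq(residual, -2e9, 2e9)        # stress in the hot unit (Pa)
    return plastic_strain(sigma, Ep1, sy1)

# Illustrative, made-up temperature dependence (NOT the measured 2A14 data):
def demo_props(T):
    alpha   = 22e-6 + 4e-9 * T          # 1/deg C
    E       = 70e9 - 40e6 * T           # Pa
    Ep      = 1.0e9                     # Pa, plastic modulus
    sigma_y = 80e6 - 0.15e6 * T         # Pa, yield strength
    return alpha, E, Ep, sigma_y

for Tg in (50, 100, 150):
    print(Tg, two_unit_plastic_strain(400.0, Tg, demo_props))
```

The loop at the end mirrors the comparison made later with Tg = 50, 100, and 150 °C; with realistic property data the same function can be used to examine how plastic strain grows with both the temperature difference and the temperature itself.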
Quenching Experiment: Material and Heat Treatment Quenching residual stresses are affected by the product's material properties, its structure, and the inhomogeneous temperature distribution during quenching. The temperature distribution is determined by the cooling rates; increasing the cooling rate increases the temperature gradient. The influence of cooling rates on residual stress was investigated by quenching forged 2A14 aluminum alloy samples with different quenching technologies. The samples were machined from a commercial forged component 78 mm in thickness; their chemical compositions are listed in Table 3. As shown in Figure 2a, the size of the samples is 110 mm × 100 mm × 70 mm, and the thickness direction is along the z-axis. Their x, y, and z directions are along the long transverse direction, short transverse direction, and thickness direction of the component, respectively. Similar to our previous work [15], the two end surfaces (110 mm × 100 mm) were taken as the main heat-dispersing surfaces, while the other four surfaces were encapsulated with about 20-mm-thick asbestos. As shown in Table 4, the samples were quenched immediately after the solution treatment at 500 °C for 4 h. Sample A1 was quenched with room temperature water (about 20 °C). Sample A2 was first quenched in room temperature water until the temperature at the central point P0 decreased to about 410 °C, and then it was cooled in room temperature air; the transfer time was smaller than 1 s. Sample A3 was quenched by a step quenching technology using a spray-quenching device designed by us, as shown in Figure 3. Detailed information on the device is presented in our submitted patent (CN 201710016344.5). As shown in Figure 2a, the temperatures at points P0 and P1 during quenching were monitored by using Φ1 mm naked type K thermocouples embedded deep in the samples. The cooling history of P0 was used to represent the cooling rates of the samples with different quenching technologies. The temperature difference between points P0 and P1 was used to estimate the inhomogeneous temperature distribution of the samples. Referring to References [10,20], the residual stresses of the as-quenched samples were measured by the slitting method. As shown in Figure 2b, a wire-electrode cutting machine was used to cut incrementally along the cutting plane. With the incremental increase in the depth (aj) of the cutting plane, the residual stresses of the blocks were released. The strains (εy) in the y direction were measured by using foil strain gauges (BX120-5AA) with 5 mm gauge lengths, which were connected in 1/4 bridging mode. The central line of the gauge was at the mid-length of the sample. The strains were functions of the cutting depth (aj). We presumed that the residual stress σy(z) along the cutting plane is a function of z. As in Equation (10), it can be described by the polynomial terms Pi(70 − z) and the undetermined coefficients Ai. At the same time, as in Equation (11), the measured strains (εy) at different depths (aj) can be described by the undetermined coefficients Ai and the compliance functions Ci(aj). Ci(aj) is the strain at the measured point, as the depth (aj) of the cutting plane increases incrementally, when the Pi(70 − z) stress is applied along the z direction of the sample. The numerical simulation method was used to calculate Ci(aj). As shown in Equation (12), the least squares method is used to obtain Ai from the measured strains and the calculated strains given in Equation (11). In this paper, the residual stress (Equation (10)) was not calculated; instead, the measured strains were used to estimate the residual stresses of the samples, which underwent different quenching treatments.
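The coefficient fit behind Equations (10)-(12) can be sketched as an ordinary least-squares problem, as below. The compliance matrix and "measured" strains are synthetic placeholders; in the paper the compliance functions Ci(aj) come from finite-element calculations, and the polynomial terms Pi(70 − z) are only assumed here to be simple powers for illustration.

```python
# Hedged sketch of the least-squares fit of Equations (10)-(12): the released
# strain at each cutting depth a_j is modelled as a linear combination of
# compliance functions C_i(a_j), and the coefficients A_i are fitted.
# All numbers below are illustrative placeholders, not the measured data.

import numpy as np

def fit_stress_coefficients(C, eps_measured):
    """Least-squares fit of A_i in eps(a_j) = sum_i A_i * C_i(a_j).

    C            : (n_depths, n_terms) compliance matrix, C[j, i] = C_i(a_j)
    eps_measured : (n_depths,) strains measured at each cutting depth
    Returns the coefficient vector A of length n_terms.
    """
    A, *_ = np.linalg.lstsq(C, eps_measured, rcond=None)
    return A

def residual_stress_profile(A, z, thickness=70.0):
    """Evaluate sigma_y(z) = sum_i A_i * P_i(70 - z), with P_i taken here,
    for illustration only, as simple power terms (70 - z)**i."""
    terms = np.stack([(thickness - z) ** i for i in range(len(A))], axis=0)
    return A @ terms

# Illustrative use with synthetic numbers (not the experimental data):
depths = np.linspace(5.0, 60.0, 12)                            # cutting depths a_j (mm)
C = np.stack([depths ** i * 1e-6 for i in range(4)], axis=1)   # fake compliances
eps = C @ np.array([2.0, -0.05, 1e-3, -5e-6])                  # synthetic "measured" strains
A = fit_stress_coefficients(C, eps)
print(residual_stress_profile(A, np.linspace(0.0, 70.0, 8)))
```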
The tensile test samples in the x direction were machined from the mid-thickness part of the aged samples and were used to evaluate the mechanical properties. Three test specimens were taken from every sample, with dimensions of 2 × 8 mm² and a gauge length of 30 mm, utilizing a 25 mm gauge length extensometer according to GB/T 1685-2013. The samples were tested at a strain rate of 0.0011 s−1. Numerical Simulation The numerical simulation software ABAQUS standard with coupled temperature-displacement analysis was used to further study the influence of material properties and cooling rates at different temperatures on plastic strain and residual stress during quenching. The size of the model was the same as that of the samples used for the heat treatment experiments. Only the two end surfaces (110 mm × 100 mm) exchanged heat with water during quenching. Due to symmetry, only an eighth of the sample was modeled to reduce calculation time, as shown in Figure 4. The displacement in the normal direction of the three symmetry planes was restricted. Similar to the experiments, only the end surface of this model exchanged heat with the environment. The element type used in this model is an 8-node thermally coupled brick, trilinear displacement and temperature element (C3D8T), and the number of elements was 6160. The incremental time in the analysis is chosen automatically by the program, and the solution technique is the full Newton method.
The density of this material is set constant at a value of 2800 kg·m−3. Figure 5 shows the yield stress at different plastic strains used in the simulation model, where the yield points at different temperatures are the lowest values of the curves. As shown in Table 5, the thermal expansion coefficients, elastic moduli, conductivities, and specific heat capacities were obtained from the literature [21,22]. The convective heat transfer coefficient of aluminum/air was set constant at 0.2 kW·m−2·s−1. The temperature-dependent material properties change with temperature and affect the evolution of thermal stress and plastic strain at different temperatures during the quenching process. The thermal expansion coefficient, elastic modulus, and yield strength play key roles in this evolution. Numerical simulation was used to analyze these effects by comparing samples M0-M3 with different material properties, as shown in Table 6. Sample M0 used the measured material properties of the studied material, as shown in Table 5 and Figure 5. For samples M1-M3, only one of the measured material properties was adjusted: the thermal expansion coefficient, elastic modulus, and yield strength, respectively. The material properties were adjusted by taking the value of the corresponding property at 20 °C as the reference value and multiplying the reference value by the multiplier factors at different temperatures (shown in Figure 6) to obtain the corresponding values at those temperatures. The Mises stress and equivalent plastic strain (PEEQ) along the central line (L0) were used to estimate the level of residual stresses and plastic strains of the samples. ABAQUS was also used to study the influence of cooling rates on the residual stresses by simulating the quenching process. Prior to this, the heat transfer coefficients of aluminum/water were obtained by using the Deform 2D inverse heat transfer module, with the measured temperature vs. time curves at point P0 as input. The Mises stresses, principal stresses in the y direction (S22), equivalent plastic strain (PEEQ), and plastic strains in the y direction (PE22) along the central line (L0) were used to estimate the plastic strains and stresses of the samples.
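To illustrate how the adjusted property sets for samples M1-M3 are constructed from the 20 °C reference values and the multiplier factors of Figure 6, a small hedged sketch follows; the factor curves and reference numbers are placeholders rather than the measured 2A14 data, and in the actual simulations the resulting tables would feed the ABAQUS material definition.

```python
# Hedged sketch of the property-adjustment step described above: one property
# at a time (thermal expansion coefficient, elastic modulus, or yield strength)
# is rebuilt as value(T) = value(20 degC) * factor(T). Factor curves and
# reference values are illustrative placeholders, not Table 5 / Figure 6 data.

import numpy as np

def adjusted_property(temps_c, reference_value_20c, factor_of_T):
    """Return a (T, value) table: the 20 degC reference value scaled by factor(T)."""
    temps_c = np.asarray(temps_c, dtype=float)
    factors = np.array([factor_of_T(t) for t in temps_c])
    return np.column_stack([temps_c, reference_value_20c * factors])

# Placeholder factor curves (invented shapes, for illustration only)
flat_factor      = lambda T: 1.0                                     # property held at its 20 degC value
softening_factor = lambda T: max(0.2, 1.0 - 0.0015 * (T - 20.0))     # property that weakens with heating

temps = np.arange(20, 520, 50)
# An "M1-like" case: only the thermal expansion coefficient is replaced
alpha_table = adjusted_property(temps, 22e-6, flat_factor)
# An "M3-like" case: only the yield strength is replaced
yield_table = adjusted_property(temps, 300e6, softening_factor)
print(alpha_table[:3], yield_table[:3], sep="\n")
```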
Influence of Material Properties at Different Temperatures on the Evolution of Stress and Strain Material properties change with temperature. Thus, the evolution of stress and plastic strain is different at different temperatures. This section investigates the evolution of plastic strain at different temperatures during quenching, and studies how the changes in material properties affect the residual plastic strain and stress after the quenching treatment. First, the two-unit model (Figure 1) was used to investigate the plastic strains at different temperatures with the same temperature differences (Tg). Replacing the coefficients of Equation (9) with the material properties and setting Tg to 50 °C, 100 °C, and 150 °C, respectively, the plastic strains (εp,i) of unit i at different temperatures were obtained. As shown in Figure 7, the plastic strain increases with increasing temperature difference and temperature. The results indicate that, to produce a certain plastic strain, the temperature difference must be increased as the temperature decreases. Consequently, the temperature differences at high temperatures should be reduced to minimize the residual stresses. Figure 8 shows the simulation results for the samples (M0-M3) with different material properties. The equivalent plastic strain (PEEQ) and Mises stresses along the central line (L0) represent the plastic strains and residual stresses of the samples, respectively. As shown in Figure 8b,c, the plastic strains and residual stresses of samples M1-M3 are lower than those of sample M0. Figure 8a compares the adjusted material properties used for samples M1-M3 with those used for sample M0. Like the treatments in Section 2.3, the value of each kind of measured material property at 20 °C was taken as the reference value, and the material properties at different temperatures were divided by the corresponding reference value. Compared with the material properties used for sample M0, the thermal expansion coefficients used for sample M1 are smaller, the elastic moduli used for sample M2 are smaller, and the yield strengths used for sample M3 are bigger. Reducing the thermal expansion coefficients reduced the thermal expansion and thermal stresses during quenching, which resulted in a decrease of the plastic strains. Reducing the elastic modulus resulted in an increase in the allowable elastic strain at a certain thermal strain, which led to a reduction in the plastic strain during quenching. The increase in the yield strength reduced the plastic strain at a certain thermal stress. Therefore, the plastic strains and residual stresses of samples M1-M3 are smaller than those of sample M0. Influence of Cooling Rates at Different Temperatures on Residual Stress The section above indicates that, to produce a certain plastic strain, the magnitude of the temperature difference must be increased as the temperature decreases. For real components, the residual stresses are determined by inhomogeneous plastic deformations. The evolution of plastic strain and stress is affected by the temperature distributions in the components during quenching. The temperature difference increases with increasing cooling rate. At the same time, the cooling rates determine the mechanical properties, and this relationship has been studied using time-temperature-properties (TTP) curves in our previous work [15]. To optimize the quenching process, this section investigates the influence of cooling rates at different temperatures on the evolution of plastic strain and the final residual stress by quenching the samples with different quenching technologies.
Figure 9 shows the cooling rates at P0, the temperature differences between points P0 and P1, the strains measured by the slitting method, and the tensile properties of the samples. The results show that reducing the cooling rates in a low temperature range does not reduce residual stresses, whereas an optimized step quenching process can minimize the residual stresses and result in good mechanical properties. As shown in Figure 9a,b, the cooling rates and temperature differences between P0 and P1 of Samples A1 and A2 are almost the same in the temperature range from 500 to 420 °C, and the cooling rates and temperature differences of Sample A2 are much lower than those of Sample A1 at temperatures below 400 °C. However, the residual stresses of the two samples are almost the same, as shown in Figure 9c. This means that changing the cooling rates at low temperatures does not change the residual stresses for Samples A1 and A2. In the case of Sample A3, the cooling rates are lower than those of Sample A1 at temperatures above 300 °C, especially above 450 °C, and higher than those of Sample A1 at temperatures below about 300 °C. The temperature differences of Sample A3 are lower at temperatures above 400 °C and slightly higher at temperatures below 350 °C. However, the residual stresses of Sample A3 are much lower than those of Sample A1 due to the lower cooling rates at high temperatures. This is because plastic strain occurred more easily at high temperatures, according to the results and analysis presented in Section 3.1. Moreover, as shown in Figure 9d, the tensile properties of Sample A3 are close to those of Sample A1, because the cooling rates are high in the quenching sensitivity range from 300 to 400 °C [15]. This means that residual stresses and mechanical performances can be balanced by employing an optimized quenching technology with low cooling rates in the high temperature range and high cooling rates in the other temperature ranges. The numerical simulation method was used to further study the influence of cooling rates on the evolution of residual stress. Figure 10 shows the evolution of the plastic strain at P0 and the plastic strains and residual stresses along L0. As shown in Figure 10a, the plastic strain (plastic strain in the y direction (PE22) and equivalent plastic strain (PEEQ)) at P0 vs. temperature curves of Samples A1 and A2 are almost the same. The plastic strains reached their final magnitudes at about 490 °C and remained unchanged thereafter, even though the cooling rates of Sample A2 are much lower below 400 °C.
According to the results and analysis presented in Section 3.1, to produce a certain plastic strain, the temperature difference must be increased as the temperature decreases. This explains why reducing the cooling rates in the low temperature range does not reduce the plastic strains at P0. Consequently, as shown in Figure 10b,c, the plastic strains (PE22 and PEEQ) and residual stresses (Mises stress and principal stresses in the y direction (S22)) along L0 of the two samples are almost the same. Moreover, in the case of Sample A3, the step quenching technology produces lower plastic strains and residual stresses along L0 due to the low cooling rates at high temperatures. The results coincide with the experimental results in the paragraph above. Discussion The influence of temperature-dependent material properties on the evolution of plastic strain at different temperatures during the quenching process was investigated, and the influence of cooling rates at different temperatures on the evolution of plastic strain and residual stress for forged 2A14 aluminum alloy components during the quenching process was also analyzed. During the quenching process, the thermal expansions at high-temperature locations were larger than those at low-temperature locations. The difference in the thermal expansions resulted in thermal strains and stresses to balance the inhomogeneous thermal expansions. This usually produced inhomogeneous plastic strains, which resulted in residual stresses after the quenching treatment. Equation (9) shows that the thermal expansion coefficients, elastic moduli, and yield strengths play key roles in determining the magnitude of plastic strains. Figure 8 shows that, compared with sample M0, decreasing the thermal expansion coefficients and increasing the yield strengths at high temperatures decreases the residual plastic strains and stresses along the central line L0 of samples M1 and M3, respectively; further, decreasing the elastic moduli decreases the residual plastic strains and stresses along line L0 of sample M2. This can be explained as follows: during the quenching process, decreasing the thermal expansion coefficient decreased the thermal expansion and thermal stress, which resulted in a decrease in the plastic strain; increasing the yield strength reduced the plastic strain at a certain thermal stress; and reducing the elastic modulus resulted in a rise of the allowable elastic strain at a certain thermal strain, which led to a reduction in the plastic strain. The reduction in the plastic strains during quenching decreased the residual plastic strains and stresses of the components. The paragraph above implies that increasing the thermal expansion coefficient and decreasing the yield strength cause an increase in the plastic strain, whereas decreasing the elastic modulus decreases the plastic strain. For the studied material, the thermal expansion coefficient increases with increasing temperature, while the elastic modulus and yield strength decrease with increasing temperature. Using the model in Section 2.1, the evolution of the plastic strain at different temperatures during the quenching process was studied. Figure 7 shows that the plastic strain increases with increasing temperature at the same temperature difference between the two units, and increases with increasing temperature difference at the same temperature. This means that the influence of the thermal expansion coefficients and yield strengths changing with temperature on the plastic strains is stronger than the effect of the changes in the elastic moduli. Thus, the plastic strains increase with temperature.
Quenching residual stresses of aluminum alloy components are caused by the inhomogeneous temperature distribution. Decreasing the temperature gradients decreases the plastic strain during the quenching process, resulting in lower residual stresses, and decreasing the cooling rates decreases the temperature gradient. Many papers reported that reducing the cooling rate can minimize residual stresses [10,15,23]. However, the paragraphs above imply that, for this material and component, the residual plastic strains and stresses are mainly determined by the cooling rates in the high-temperature zone. Figure 9 shows that, compared with Sample A1, reducing the cooling rates at low temperatures below 400 °C does not reduce the strains of Sample A2 measured by the slitting method; in contrast, reducing the cooling rates at high temperatures above about 450 °C sharply reduces the measured strains of Sample A3. The strains are proportional to the residual stresses of the samples. Figure 10 shows the simulated residual strains and stresses along the central line L0 of Samples A1-A3, and they show a trend similar to the quenching experiment results. These results confirmed that the cooling rate in the high-temperature zone significantly affects the residual stress of the studied material and component. Time-temperature-properties (TTP) curves of the studied material [15] show that the mechanical performances are mainly determined by the cooling rates in the quenching sensitivity temperature range. According to this conclusion and the results presented above, an optimized step quenching technology was proposed to balance the residual stresses and mechanical properties. By quenching Sample A3 with this cooling path, the residual stresses were reduced significantly while the tensile properties changed only slightly, compared with Sample A1 quenched with water at 20 °C, as shown in Figures 9 and 10. In this quenching treatment, the cooling rates were low at high temperatures above 450 °C to minimize the residual stresses, and they were increased in the other temperature ranges, resulting in good mechanical properties. Conclusions After analyzing and summarizing the results above, several main conclusions were inferred. Plastic strains increase with temperature when the temperature difference remains unchanged. The cooling rates at high temperatures play a key role in determining the magnitude of residual stresses; reducing the cooling rates only at low temperatures cannot reduce the residual stresses and plastic strains. Residual stresses and mechanical properties can be balanced with an optimized quenching technology, in which the cooling rates are low at high temperatures to reduce residual stress and high at the other temperatures to improve mechanical properties. Figure 1. Two-unit model: (a) uniform temperatures at the beginning of quenching; and (b) temperature difference appearing during quenching. Figure 2. Schematic of quenching of a sample: (a) sizes and measured points (the unit used in this figure is "mm"); and (b) cutting plane of the slitting method.
Figure 6. Multiplier factors of thermal expansion coefficient, elastic modulus, and yield strength at different temperatures. Figure 7. Plastic strains of unit i of the model at different temperatures with a certain temperature difference (Tg). Figure 8. Residual equivalent plastic strain (PEEQ) and Mises stresses along the central line L0 of the sample (110 mm × 100 mm × 70 mm) in Figure 2a using different thermal expansion coefficients, elastic moduli, and yield strengths: (a) normalized measured and adjusted material properties; (b) PEEQ; and (c) Mises stresses. Figure 9. Results of Samples A1-A3 with different quenching technologies: (a) cooling rates at P0 during the quenching process; (b) temperature differences between points P0 and P1 during the quenching process; (c) strains measured by the slitting method after the quenching treatment; and (d) tensile properties after aging treatments. Figure 10. Simulation results of the quenched samples: (a) evolution of the plastic strain (plastic strains in the y direction (PE22) and equivalent plastic strain (PEEQ)) at point P0; (b) plastic strains (PE22 and PEEQ) along the central line L0; and (c) residual stresses (Mises stress and principal stresses in the y direction (S22)) along L0. Table 2. Plastic moduli at different temperatures. Table 3. Chemical composition of the studied material (wt %). Table 4. Heat treatments applied to 2A14 aluminum alloy samples.
10,931.2
2017-06-21T00:00:00.000
[ "Materials Science", "Engineering" ]
Prolonged Postdiapause: Influence on some Indicators of Carbohydrate and Lipid Metabolism of the Red Mason Bee, Osmia rufa Bees of the genus Osmia are being used in crop pollination at an increasing rate. However, a short life expectancy of adult individuals limits the feasibility of their use. Cocoons of the red mason bee, Osmia rufa L. (Hymenoptera: Megachilidae), can be stored at 4° C in a postdiapause state, and adult bees can be used for pollination outside their natural flight period. The period of storage in this form has an unfavorable influence on the survival rate, life expectancy, and fertility of the bee. It was suggested that these negative effects are connected with exhaustion of energy reserves. To test this hypothesis, the present study examined the contents of protein, carbohydrates, and lipids, as well as the activities of some enzymes involved in their degradation, in red mason bees that emerged in spring according to their biological clock and in summer after elongated diapause. It was found that postdiapause artificially elongated by 3 months caused significant decreases in body weight, total sugar, glycogen, lipid, and protein content in O. rufa. Glucose level was highest in bees that emerged in the summer, which was coincident with increased activities of maltase and trehalase. The activities of sucrase and cellobiase were not changed, while amylase activity was considerably decreased. The activities of triacylglycerol lipase and C2, C4, and C10 carboxylesterases were highest in bees that emerged in July. Low temperatures restrict O. rufa emergence, and during prolonged postdiapause, metabolic processes lead to significant reductions of structural and energetic compounds. Introduction Diapause is a fundamental process that allows insects to synchronize their life cycle with seasonal weather changes. Obligatory diapause occurs in the red mason bee, Osmia rufa L. (Hymenoptera: Megachilidae), which allows it to survive the winter. This solitary bee overwinters as a fully enclosed, cocooned, and unfed imago. Its diapause is not dependent on photoperiod. The duration of the overwintering period depends on the temperature and is important in both bee survivability and bee usability in crop pollination in the following vegetation year (Bosch and Blas 1994; Bosch and Kemp 2004). The metabolic rate of O. rufa decreases in late summer, just after transforming into an imago. This decreased metabolic rate is a very important phenomenon, as survival of the winter period depends on the amount of food stored in the organism during the larval stage (Bosh et al. 2010). According to the general diapause model (Kostal 2006), O. rufa overwintering consists of 2 phases: diapause and postdiapause quiescence. This concept of diapause in the Osmia genus has been confirmed by many studies (Bosh and Kemp 2003; Kemp et al. 2004; Krunic and Stanisavljevic 2006). During diapause, supercooling point values in O. rufa decline (Krunic and Stanisavljevic 2006). Placing diapausing O. rufa at 20° C leads to their death. Diapause lasts about 100 days and seems to be independent of temperature variation. After this period, bees of the Osmia genus can develop normally, but their development is inhibited by the temperature. This period is called the stage of postdiapause quiescence. At the beginning of postdiapause in Osmia cornuta and O. rufa, the supercooling point value begins to rise and continues to rise until spring (Krunic and Stanisavljevic 2006).
This growth could be the result of a decrease in protective compounds, such as glycerol, sorbitol, trehalose, etc. (Storey and Storey 1991). It is known that O. rufa can be kept in postdiapause quiescence for a long time by being stored in a cooler. This practice allows beekeepers to activate bees and use them for pollination at the desired time. For example, the stored bees can be used to pollinate plants that flower in the summer, a time when under natural conditions the bees would have already finished their flight period. Artificially prolonging postdiapause in O. rufa has an unfavorable effect on their survivability and fertility (Bosch and Blas 1994; Giejdasz and Wilkaniec 1998; Bosh and Kemp 2003, 2004; Sgolastra et al. 2010). Furthermore, prolonging wintering can cause a partial loss in the effectiveness of antioxidant systems (Dmochowska et al. 2012). It seems that the main cause of these undesired occurrences is the exhaustion of reserve substances. This suggestion is supported by the fact that bees of the Osmia genus with larger body weights have a higher overwintering survival rate than lighter bees (Tepedino and Torchio 1982; Bosh and Kemp 2004); however, this hypothesis had not been examined experimentally on the molecular level until the present study. In insects, the predominant materials stored in the fat body are lipids, mainly as triacylglycerols, and the polysaccharide glycogen (Canavoso et al. 2001; Arrese and Soulages 2010). Besides the energetic value that lipids and carbohydrates have for insects wintering at below-zero temperatures, they also play an important role as the substrates for synthesis of cryoprotectants such as glycerol, trehalose, or other polyols (Hahn and Delinger 2007). The aim of the present study was to determine and compare selected biochemical parameters of newly-emerged O. rufa after a natural overwintering period (in April) and an artificially prolonged postdiapause quiescence under laboratory conditions (in July). Body weight, total protein, carbohydrate and lipid contents, glycogen and glucose levels, and the activities of selected enzymes of lipid and carbohydrate metabolism were analyzed. The results obtained will help determine how elongation of postdiapause quiescence influences the energetic stores of O. rufa and how the metabolism of O. rufa is optimized for its role as a pollinator. It should be highlighted that this is the first report on any elements of lipid and sugar metabolism in O. rufa. Bees O. rufa were reared in artificial reed tube nests. The O. rufa cocoons and the artificial nests were placed in nesting shelters situated at the Swadzim Biological Station of the Department of Apidology, Poznań University of Life Sciences, Poznań, Poland. During the nesting period (from April to June 2009), O. rufa females occupied the nest tubes, which were transferred to the laboratory in February. In the laboratory, the nest tubes were dismantled, and the adult bees in cocoons were removed from nest cells. The wintering bees were kept in a SANYO cooler (www.us.sanyo.com) at 4° C. On April 5th and July 2nd, randomly selected cocoons were placed in an incubator at 25° C for emergence. Sample preparation The emerged bees were weighed, then placed in Eppendorf tubes and immediately frozen in liquid nitrogen. Until analyses, the material was stored at -71° C. Forty females were randomly selected from bees that emerged in April or July. They were divided into 20 samples, each consisting of 2 individuals.
The samples were homogenized in an ice bath for two minutes with 0.9% NaCl at a 1:10 (w/v) ratio. The homogenate was centrifuged at 4° C for 15 minutes at 15000× g. The supernatant was carefully collected from under the fatty layer for analysis of protein, total sugar, glucose, and glycogen content, and the activities of α-amylase, maltase, sucrase, trehalase, cellobiase, triacylglycerol lipase, and carboxylesterases. Lipids were extracted separately with a mixture of chloroform and methanol (2:1) according to Folch et al. (1957). Biochemical assay The protein content was assayed spectrophotometrically (A 280) using a NanoDrop apparatus (www.nanodrop.com) and NanoDrop 1000 version 3.6.0 software. Total carbohydrate content was assayed using the anthrone method according to Roe (1955). To 1 mL of reagent was added 0.5 mL of extract (first diluted 20 times with deionized water). After 14 minutes of incubation at 95° C, samples were chilled, and absorbance at 620 nm was measured. Total carbohydrate content was expressed as mg/g of fresh body weight. Glucose was assayed enzymatically using the Liquick Cor-GLUCOSE 500 kit (Cormay, www.pzcormay.pl) according to the manufacturer's instructions. 10 µl of extract was added to 1 mL of 1-GLUCOSE reagent. Glucose level was expressed as µg/100 mg of fresh body weight. Glycogen was isolated from the extract by the micro-method described by Sölling and Esmann (1975). A 20 µl sample was pipetted onto square Whatman No. 3 filter paper (10 mm side). In the next step, glycogen was precipitated by addition of 5 mL of 10% trichloroacetic acid in 70% ethanol, and then rinsed 3 times for 20 minutes with 5 mL of ethanol. Finally, the squares were rinsed in cold acetone for 10 minutes, dried, and cut into small pieces to fit in the NanoDrop test tube. Then 0.5 mL of 0.2 M acetate buffer (pH 4.8) and 30 µl of amyloglucosidase (25.8 mU) (cat. nr A-7255, Sigma Aldrich, www.sigmaaldrich.com) were added to each tube. Mixtures were incubated for 15 minutes at 55° C with careful shaking. At the same time, a sample of the standard glycogen solution (5 mg/mL) was treated in an identical manner. Glucose released from glycogen by amyloglucosidase was determined by the enzymatic method. Results were expressed as µg of glucose per g of tissue. The activity of α-amylase was assayed with a modified Caraway (1959) method. The incubation mixtures contained 50 µl of extract, 0.85 mL of 0.2 M acetate buffer (pH 4.8), and 0.1 mL of starch solution (0.75%). The incubation lasted 120 minutes at 37° C. After this time, 4 mL of iodine solution was added. For every sample, a control was prepared, which was not incubated. The activity of the enzyme was expressed as mg of starch decomposed during 1 hr of incubation at 37° C per 1 mg of protein. The activities of the disaccharidases maltase, sucrase, trehalase, and cellobiase were assayed by Dahlqvist's (1968) method. The activities were assayed by measuring the amount of glucose released by these enzymes from their specific substrates: maltose, sucrose, trehalose, and cellobiose, respectively. The assay mixture contained 0.380 mL of 0.2 M acetate buffer (pH 5.4), 20 µl of extract, and 0.1 mL of the 50 mM substrate. The incubation lasted one hour at 37° C. The released glucose was determined by the enzymatic method. The enzymatic activities were expressed in international enzymatic units (U). The activity of triacylglycerol lipase was assayed by the Jurado et al. (2006) method. 100 µl of extract was added to 1 mL of tributyrin emulsion.
Samples were incubated for 2 hours at 37° C and titrated with 0.01 M NaOH. The activity of lipase was expressed as nmol of fatty acids released during 60 minutes of incubation at 37° C per 1 mg of protein. Lipid content was assayed by the sulfo-phospho-vanillin reaction (Frings et al. 1972). The lipid precipitate was dissolved in absolute ethanol (75 µl/100 mg fresh body weight). Concentrated sulphuric acid (0.2 mL) was added to 20 µl of lipid solution. Samples were placed in boiling water for 10 minutes and then cooled. Next, 10 mL of sulfo-phospho-vanillin reagent was added, and after 15 minutes of incubation at 37° C, the mixture was chilled, and absorbance was measured at 540 nm. Lipid content was expressed as mg per 100 mg of fresh body weight. All analyses were performed on 20 samples. All samples were tested in triplicate. Statistical analysis The obtained results were statistically analyzed using Statistica 9 software (StatSoft Inc., www.statsoft.pl) at the significance level p < 0.05. Average body weights, protein, sugar, glucose, glycogen and lipid contents, and the activities of sucrase and C10 esterase were compared with Student's t-test. Due to the non-homogeneity of the variances of the mean activities of amylase, cellobiase, maltase, trehalase, lipase, and C2 and C4 esterases, comparison of those mean values was performed using a t-test with separate variance estimation, the Cochran and Cox test. Results The obtained results for the weight and chemical composition of O. rufa bodies are shown in Figure 1. Lipids constituted about 20% of the fresh weight of the emerged females, and carbohydrates only about 2%. Glycogen constituted almost half of the whole carbohydrate pool. Free glucose was present in very small quantities (0.027% of body weight). Proteins constituted about 30% of the body weight (Figure 1). By comparing the results for bees that emerged in April and July, it was found that body weight, protein content, total sugar content, and glycogen and lipid levels were significantly higher in O. rufa that emerged in April than in those that emerged in July. In contrast, the glucose level was higher in insects that emerged in the summer (Figure 1). A considerable decrease (around 14%) was noticed in the body weight of bees that emerged in July in comparison to those that emerged in April. The general loss of carbohydrates was greater (~13.2%) than that of lipids (~9.4%) and proteins (~7.1%). The activity of α-amylase in newly emerged bees from both time periods was not high. However, a higher α-amylase activity was observed in O. rufa that emerged in spring compared to those that emerged in the summer. Among the studied disaccharidases, the highest activity was observed for maltase, followed by sucrase and trehalase. Cellobiase showed the lowest activity among the studied disaccharidases (Figure 2). The activities of maltase and trehalase were significantly higher in bees that emerged in July. The activities of sucrase and cellobiase were similar in bees from both times of emergence (Figure 2). The activity of lipase was very low (5.35 nmol fatty acid mg-1 of protein). Carboxylesterases were much more active, especially C4 esterase. Butyric acid esters (C4) were the best substrate for them. Esters of acetic acid were hydrolyzed at a lower rate, and decomposition of the decanoic acid ester was the weakest (Figure 3). A higher activity of the analyzed enzymes of lipid metabolism was seen in the O. rufa that emerged in the summer than in those that emerged in spring. For C4 esterase and lipase, the differences were statistically significant.
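The two-group comparisons described under Statistical analysis (Student's t-test when variances are homogeneous, a separate-variance t-test otherwise) can be sketched in Python as follows. This is an illustration only: the values are simulated rather than the measured data, and Welch's correction in scipy is used here as a readily available stand-in for the Cochran and Cox procedure.

```python
# Illustrative sketch only: hypothetical values, not the measured data from this study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
april = rng.normal(loc=115.0, scale=8.0, size=20)   # e.g., body weight in mg (hypothetical)
july = rng.normal(loc=99.0, scale=12.0, size=20)

# Check homogeneity of variances (Levene's test) to decide which t-test to use
_, p_var = stats.levene(april, july)

if p_var >= 0.05:
    # Variances homogeneous: classical Student's t-test
    t, p = stats.ttest_ind(april, july, equal_var=True)
else:
    # Variances non-homogeneous: separate-variance t-test (Welch's correction),
    # used here as a stand-in for the Cochran and Cox procedure
    t, p = stats.ttest_ind(april, july, equal_var=False)

print(f"t = {t:.2f}, p = {p:.4f}, significant at 0.05: {p < 0.05}")
```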
Discussion There are few data about the metabolism of bees of the Osmia genus during their ontogenesis. Only the decrease in body weight and the reduction in fat body size during overwintering have been well documented (Bosch and Kemp 2003; Kemp et al. 2004; Bosch et al. 2010; Sgolastra et al. 2010). Until now, there has been no information about the metabolism of the main biomolecules and the activity of the responsible enzymes. The biochemical consequences of prolonged postdiapause, a procedure commonly used in the commercial rearing of O. rufa, have not been studied. As O. rufa do not take in any food from the environment during the overwintering period, they must rely on energy reserves obtained during larval development. Larvae of O. rufa feed on pollen with some addition of nectar. As O. rufa collect pollen from various types of plants (Wilkaniec et al. 1997), the pollen provided to their larvae may be chemically diverse (Pacini 1996). This hypothesis is supported by the results of Konrad et al. (2009), who found large variations in sugar content in the crops of newly emerged O. rufa. The differences in the amount and quality of pollen eaten during larval development can explain the wide range of values of the analyzed parameters, namely body weight and lipid, protein, and total sugar content, which were noted among the individuals in both groups of bees. This suggestion is confirmed by the results of previous studies concerning O. rufa body weight and its changes in response to different amounts of eaten pollen (Giejdasz 2002; Wilkaniec et al. 2004). The main reserve materials in insects during dormancy are lipids (Hahn and Denlinger 2007). Lipids were also the main energetic material during overwintering in O. rufa, constituting 20% of the wet weight of the bees. Our results coincide with those of Buckner et al. (2004), who determined as much as 20% of the body weight to be lipids in diapausing prepupae of Megachile rotundata, which belongs to the same family as O. rufa. This percentage is considerably higher than has been recorded in other insects studied so far (Fast 1970). Lipids in fat bodies originate from the larval diet, and they are also partly synthesized by conversion from carbohydrates (Beenakkers et al. 1985; Canavoso et al. 2001; Ziegler and Ibrahim 2001; Hahn and Denlinger 2007). The extension of postdiapause led to a substantial increase of lipase activity and a reduction of lipid content in the body of O. rufa. This observation confirmed previous studies (Dmochowska et al. 2011). Lipids stored in the insect's fat bodies are probably also used for the production of reserve materials for oocytes in the ovary of O. rufa. An increase in the size and number of oocytes in O. rufa takes place during the entire overwintering period (Wasielewski et al. 2011). Intensive lipid mobilization is stimulated by adipokinetic hormone and octopamine through the activation of fat body triacylglycerol lipases (Canavoso et al. 2001). Judging by the level of lipase activity in our study, this phenomenon does not occur at the moment of emergence of O. rufa. In newly emerged O. rufa, the activity of lipase was very low and coincided with high levels of lipids. Both facts may be important for protecting the energy store for the maiden flight of the females. In insects, esterases are involved in important physiological processes, including the catabolism of juvenile hormone (Zera et al. 1992), pesticide resistance (Whyard et al. 1995; Rosario-Cruz et al.
1997), digestion (Kerlin and Hughes 1992; Argentine and James 1995), and reproduction (Richmond and Senior 1991; Karotam and Oakeshott 1993). Carboxylesterases participate in the metabolism of lipid compounds. These enzymes can hydrolyze endogenous substances or promote xenobiotic detoxification (Shen and Dowd 1991). They play an important role in resistance to insecticides and plant secondary metabolites (Cai et al. 2009). Thus, the activity of carboxylesterases is important for bee health after emergence. Regardless of the time of emergence, the esterases of O. rufa showed the highest activity towards esters of butyric acid (C4), and the activity of this esterase increased significantly in summer. High activities of C4, C2, and C10 carboxylesterases were also observed in an APIZym test (Dmochowska et al. 2011). This result is in agreement with that obtained for another solitary bee, M. rotundata. Similar to O. rufa, esters of aliphatic acids of 3C and 4C length were metabolized by this bee more easily than esters of acids with shorter or longer chains (Frohlich 1990). It was different in Apis mellifera, whose esterases were more active towards acetic acid esters (C2), and whose activity decreased towards esters formed by acids with longer aliphatic chains (Dziuban et al. 2010). In our study, a significantly higher activity of lipase and an only slightly higher C2 esterase activity were found after prolonged wintering. In Hyalomma dromedarii, these enzymes play a principal role in the interconversion of lipovitellins during embryogenesis (Fahmy et al. 2004). They may play a similar role in O. rufa oogenesis. Glycogen is the main storage carbohydrate in the animal kingdom. In insects, it is synthesized and stored mainly in the fat bodies and muscles. Hypertrehalosemic hormone (HrTH) is responsible for the mobilization of glycogen to glucose, which is essential for further trehalose synthesis, trehalose being the main sugar of insect hemolymph (Arrese and Soulages 2010). The level of glycogen in emerged O. rufa was twice that of hibernating Osmia cornifrons (Hoshikawa et al. 1992) and 2 to 3 times lower than that of honey bee workers (Farjan 2008). The low glycogen level in O. rufa may be connected to its transformation into glycerol or trehalose. Both are necessary cryoprotectants for surviving freezing weather. This process was observed in Pytho americanus and P. deplanatus (Ring and Tesar 1980; Ring 1982). Glycogen and total carbohydrate levels were lower in bees that emerged in summer. Total sugar content in emerged O. rufa was high and close to the value found in newly emerged honey bee workers (Farjan 2008). The lack of clear differences in total sugar content between O. rufa and honey bees is puzzling, as the diet of A. mellifera is high in carbohydrates while the diet of O. rufa is rich in proteins and lipids. Pollen, the main component of the diet of O. rufa larvae, contains mainly proteins and lipids, whereas carbohydrates, present as starch and soluble sugars, constitute only a minor part of its composition (Pacini 1996; Speranza et al. 1997). The level of glucose in O. rufa was low, which is characteristic of many insects. The fluctuation of glucose in the hemolymph is an important signal regulating the rate of metabolism (Arrese and Soulages 2010). The content of glucose was significantly higher in O. rufa that emerged in summer. This result was due to a significantly higher activity of trehalase and maltase, which degrade disaccharides to glucose.
Krunic and Stanisavljevic (2006) found that during postdiapause, concentrations of cryoprotectants decline significantly, even under constant external temperature conditions. Carbohydrates such as trehalose have a dual role as cryoprotectants and sources of energy. Glucose released by the action of trehalase can be built into glycogen or immediately catabolized (Hahn and Denlinger 2007). Just after emergence, the activity of O. rufa trehalase was clearly lower than that of maltase and sucrase (Figure 2). Similar findings were observed in newly-emerged honey bees and in hawk moth development, and may be an adaptation to a diet appropriate for an adult individual (Sobiech et al. 1984; Żółtowska et al. 2012). On the other hand, the low activity of α-amylase was somewhat surprising, because this enzyme is important in digesting starch from pollen (Ohashi et al. 1999), the main component of the O. rufa diet. Starvation during the overwintering period may be a reason for the low activity of amylase and cellobiase before emergence. Most likely, higher activities of amylase appear only when O. rufa eat their first nourishment after emergence, because it is one of the digestive enzymes induced by diet. Diapause is a dynamic process (Denlinger 2002), and the prolongation of the overwintering period will lead to a greater depletion of energy reserves. The results of our study confirmed this hypothesis. As expected, total carbohydrate, glycogen, fat, and protein contents were significantly lower in bees that emerged in July compared to those that emerged in April (Figure 1). The obtained results are in agreement with earlier studies on other bee species from the Megachilidae family (Bosch and Kemp 2003, 2004). The changes in the studied biochemical indicators in O. rufa that emerged in summer may have resulted from the acceleration of O. rufa metabolism. Such a phenomenon was observed by Kemp et al. (2004), who analyzed oxygen usage in Osmia lignaria wintering at a stable temperature of 4° C. O. rufa are worth more detailed biochemical studies because of their usefulness in the pollination of crops. Particularly interesting is their life cycle, especially with regard to the possibility of regulating overwintering time. The role of wild bees as alternative pollinators will become more and more important in agriculture due to the decline of honey bee populations.
5,175.8
2013-08-10T00:00:00.000
[ "Biology", "Environmental Science" ]
Damage experiment with superconducting sample coils - experimental setup and observations during beam impact The damage mechanisms and limits of superconducting accelerator magnets due to the impact of high-intensity particle beams have been the subject of extensive studies at CERN in the past years. Recently, an experiment with dedicated sample coils made from Nb-Ti and Nb3Sn strands was performed at CERN's HiRadMat facility. This paper describes the design and construction of the sample coils as well as the results of their qualification before the beam impact. In addition, the experimental setup is discussed. Finally, measurements during the beam experiment, such as the beam-based alignment, the observations during the impact of 440 GeV protons on the sample coils, and the achieved hot-spot temperatures and temperature gradients, are presented. Introduction and description of the experimental setup To study the damage limits of superconducting coils due to proton beam impact, a multistage experimental campaign has been devised and carried out at the CERN HiRadMat experimental facility [1] over the past years. Prior experiments aimed at deriving the damage mechanisms and limits of superconducting strands made of Nb-Ti, Nb3Sn, and high-temperature superconducting materials, both at room [2] and at cryogenic temperatures [3]. This latest experiment aims to study additional damage mechanisms in the coil as a whole, using sample coils wound with low-temperature superconducting strands that were impacted with 440 GeV/c proton beams below 5.5 K. The experiment was carried out in October 2022 and is the focus of this paper. A set of small sample coils was wound at the Karlsruhe Institute of Technology (KIT) using strands of polyimide-insulated Nb-Ti, as used for the LHC dipole and quadrupole magnets, and of fibre-glass-insulated RRP Nb3Sn, as used for the HL-LHC final focusing quadrupole magnets [5]. The strands, with a length of about 1.7 m, were wound around two half-moon-shaped copper pieces, as shown in Fig.
1, which are electrically insulated by a Macor ceramic sheet. For the winding of the Nb-Ti coils, a tension of 80 N was applied; for the Nb3Sn coils, a tension of 50 N was used. Stainless steel wire-blocking parts were placed on the copper body to prevent the winding from shifting upwards. The finished structure was held together with a holding piece made out of epoxy glass cloth laminated sheets (G10). The Nb-Ti coils were soldered to half-moon copper terminals along with two pairs of voltage taps for critical current measurements before and after beam impact. The Nb3Sn coils were heat-treated at the University of Geneva (UniGe) using a standard temperature profile [5]. Then the leads and voltage taps were soldered to the copper terminals. Finally, the Nb3Sn coils were impregnated with CDK101K epoxy at the CERN polymer lab and equipped with a G10 clamp to hold them in place during impregnation and prevent movement during powering. Based on the damage limits derived in the previous beam impact experiment at 4 K [6] with Nb-Ti and Nb3Sn superconducting strands, the hot-spot temperatures in the windings of the sample coils were chosen to reach between 300 and 900 K for Nb-Ti and between 200 and 750 K for Nb3Sn. Dedicated FLUKA [7] Monte Carlo simulations were conducted to define and optimize the experimental layout to reach these hot-spots. These simulations and results are discussed in detail in [8]. The final layout consists of three batches of five coils, aligned along the beam axis. The coils of each batch are separated by 1 cm thick copper blocks used to adjust the peak energy deposition along the batch by creating secondary particle showers. The first batch contains five Nb-Ti coils, the second batch two Nb3Sn and three Nb-Ti coils, and the third batch five Nb3Sn coils. Sn foils of 0.1 mm thickness were inserted downstream of the copper blocks to allow visualising the beam impact and beam size and also to benchmark the hot-spot temperature simulations. A schematic view of the arrangement of a batch is shown in Fig. 2. For the beam impact, the coils were cooled down to 5.5 K using a cryogen-free system [6], as shown in Fig. 3, mimicking the failure case of part of the LHC beam impacting a superconducting magnet at cryogenic temperatures. The system comprises a vacuum vessel that houses a two-stage cryocooler (Sumitomo RP-082B2) and a radiation shield. The first stage cools the radiation shield, which surrounds the 50 cm wide second-stage copper plate where the coils are installed. Each stage and the radiation shield are wrapped in multi-layer aluminium insulation to reduce radiative thermal losses. The vessel was placed on a horizontally and vertically movable stage to allow the precise alignment of the samples with the beam for the impact of the three batches. Two diamond detectors [9] were installed outside the vessel to measure the particle showers during the Beam-Based Alignment (BBA) phase at the beginning of the experiment, described below.
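The target hot-spot temperatures above were derived from detailed FLUKA energy-deposition simulations [8]. As a purely illustrative sketch of the underlying relation between deposited energy density and adiabatic temperature rise, the fragment below integrates a specific heat over temperature until the deposited energy is absorbed. The specific-heat model and the dose value are placeholders for illustration only; they are not the material data or simulation results used in the experiment.

```python
# Illustrative sketch: adiabatic hot-spot temperature from an assumed peak energy
# deposition per unit mass. All numbers below are placeholders, not experiment data.

def cp_toy(T):
    """Toy specific heat in J/(g*K): rises roughly cubically at low temperature
    and saturates near 0.39 J/(g*K) at high temperature. Placeholder model only."""
    x = (T / 300.0) ** 3
    return 0.39 * x / (1.0 + x)

def hot_spot_temperature(dose_j_per_g, T0=5.5):
    """Find T_hot such that the integral of cp_toy from T0 to T_hot equals the dose."""
    T, dT, absorbed = T0, 0.1, 0.0
    while absorbed < dose_j_per_g and T < 2000.0:
        absorbed += cp_toy(T) * dT   # simple forward integration of the enthalpy
        T += dT
    return T

# Hypothetical peak dose per shot (J/g); the real values come from FLUKA [8]
print(f"Estimated hot-spot temperature: {hot_spot_temperature(60.0):.0f} K")
```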
Qualification of the samples For the pre-irradiation qualification, the critical current of the sample coils was measured at UniGe in a dedicated cryostat in liquid helium. The Nb-Ti coils were measured in self field, while the Nb3Sn coils were qualified in an external field of 7 T. Finite Element Method simulations were performed to derive the relation between transport current and peak magnetic field, giving a load line of 2.44 T/kA ± 5% [10]. The expected quench current was derived from critical current measurements performed on non-irradiated strand samples [6]. Figure 4 shows the measured critical currents of the Nb-Ti and Nb3Sn sample coils, which were derived from fits of the measured coil voltages during the transition from the superconducting to the normal state. The number of training quenches for Nb3Sn was up to three times higher compared to Nb-Ti. This could be caused by the fact that the Nb3Sn sample coils were measured in an external magnetic field of 7 T, whereas the Nb-Ti coils were measured in self field. The Nb-Ti coils reached critical currents between 976 and 1014 A (94-98% of the short-sample limit), while the critical currents of the Nb3Sn coils reached between 1027 and 1128 A (91-100% of the short-sample limit). Note that the last three Nb-Ti coils in the second batch were not qualified before the experiment; therefore, their critical currents are not shown. The position of each coil within the three batches was determined by the number of quenches and the critical current, such that the coils with fewer training quenches and higher critical currents were placed upstream of the others, as these coils are expected to be more sensitive to induced damage. Beam time The BBA in both transverse planes was performed by impacting the alignment piece, the base plate, and the shower-development copper blocks of batch one with low-intensity beams. The losses were detected by a combination of the permanently installed ionisation chamber Beam Loss Monitors (BLMs) and the diamond detectors mounted on the experimental setup. The BBA confirmed the correct positioning and movement of the device, and the batches of samples were then successively irradiated with three high-intensity beam shots. The beam impact positions were separated by 62.5 mm, so the device was moved after each high-intensity shot. The temperature of both stage plates was monitored during the experiment. After each shot, the temperature of the second-stage plate rose to values between 25 and 35 K, and about 45 minutes were required to cool down below 5.5 K before the next shot (see Fig. 5). The beam sizes and intensities for the three successive shots were measured using a beam screen (BTV) and a fast beam current transformer (FBCT). The measured values are shown in Table 2. The intensities matched the target intensities within 8%, while the beam sizes were within 40% of the expected targets. These values were then used to recompute the expected hot-spot temperatures; more detailed results of the simulations are described in [8]. A subsequent visual inspection of the samples and partially melted tin witness foils finally confirmed the correct impact of the beam on each sample.
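To illustrate how the qualification numbers above relate to one another, the short sketch below applies the FEM-derived load line (2.44 T/kA) to a transport current and expresses a measured coil critical current as a fraction of its short-sample limit. The short-sample value used in the example is an assumed placeholder, not the measured strand data from [6].

```python
# Minimal sketch of the load-line relation used during sample qualification:
# the peak field at the conductor scales linearly with transport current (2.44 T/kA).
# The short-sample current below is a placeholder, not the measured strand value.

LOAD_LINE_T_PER_KA = 2.44  # FEM-derived load line, quoted as 2.44 T/kA +/- 5% [10]

def peak_field(current_a: float) -> float:
    """Peak magnetic field (T) at the winding for a given transport current (A)."""
    return LOAD_LINE_T_PER_KA * current_a / 1000.0

def short_sample_fraction(i_c_measured_a: float, i_c_short_sample_a: float) -> float:
    """Measured coil critical current as a fraction of the expected short-sample limit."""
    return i_c_measured_a / i_c_short_sample_a

# Example: an Nb-Ti coil reaching 1014 A against an assumed ~1040 A short-sample limit
print(f"Peak field: {peak_field(1014):.2f} T")
print(f"Short-sample fraction: {short_sample_fraction(1014, 1040):.0%}")
```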
Summary and outlook For the first time, a damage experiment has been performed using Nb-Ti and Nb3Sn sample coils at cryogenic temperature with a 440 GeV/c proton beam at the CERN HiRadMat facility. A total of 15 samples grouped in three batches have been impacted by shots of up to 3.86 × 10¹² protons, creating hot-spot temperatures in the coil windings between 206 K and 863 K. The visual inspection of the irradiated samples and partially melted Sn witness foils confirmed the correct alignment of the sample plate for each beam shot and provided a further validation of the expected hot-spot temperatures. The post-irradiation critical current qualification measurements will be performed as soon as the activation levels have decayed sufficiently, in order to evaluate the damage as a function of the hot-spot temperature and temperature gradients. Furthermore, thermo-mechanical simulations to calculate the stress in the coil windings caused by the beam impact will be performed. Figure 2. Schematic view of the arrangement of the different elements in a batch: the coils (green) are aligned along the beam axis, and copper blocks (orange) are used to tune the hot-spot temperatures through the creation of additional secondary particle showers. Tin foils (silver) are used to record the beam impact with imprint marks. Note: this schematic only shows two out of the five coils contained in one batch. Figure 3. Internal view of the vacuum vessel: (1) second-stage plate with the samples installed, (2) alignment piece, (3) first cooling stage, (4) multi-layer insulation, and (5) vacuum vessel lid. The radiation shield and the external vacuum tank are not shown. Figure 4. Critical current of the Nb-Ti and Nb3Sn coils expressed as a fraction of the short-sample limit. The Nb-Ti coils (red crosses) were measured in self field. The Nb3Sn coils (green and blue squares) were measured in an external field of 7 T. Note: the last three Nb-Ti coils in the second batch were not qualified before the experiment, and therefore their critical current is not shown here. Figure 5. Measured temperature on the first stage (green) and on the second stage (purple) during the beam experiment. The temperature increase after each high-intensity shot is clearly visible. Table 1. Properties of the Nb-Ti and Nb3Sn strands and filaments used to wind the experimental sample coils (from [4] and [5]). Table 2. Measured parameters (intensity, pulse length, and transverse beam sizes) for the three beam shots.
2,464
2024-01-01T00:00:00.000
[ "Physics", "Engineering" ]
A Survey on Blockchain Technology Concepts, Applications and Security In the past decade, blockchain technology has become increasingly prevalent in our daily lives. This technology consists of a chain of blocks that contains the history of transactions and information about its users. Distributed digital ledgers are used in blockchain. A transparent environment is created by using this technology, allowing encrypted secure transactions to be verified and approved by all users. As a powerful tool, blockchain can be utilized for a wide range of useful purposes in everyday life, including cryptocurrency, Internet-of-Things (IoT), finance, reputation systems, and healthcare. This paper aims to provide an overview of blockchain technology and its security issues for users and researchers, in particular those who conduct their business using blockchain technology. This paper includes a comparison of consensus algorithms and a description of cryptography. Further, the paper focuses on the most common blockchain applications, analyzes real attacks, and summarizes security measures in blockchain. Even though blockchain holds a promising scope of development in several sectors, it is prone to security and vulnerability issues that arise from the different types of blockchain networks and remain a challenge to deal with. Finally, we highlight future research challenges that the research community can address to improve security in blockchain systems. I. INTRODUCTION Blockchain is based on a decentralized, unchangeable database that makes it simpler to record assets and keep track of transactions in a corporate network. An asset may be tangible or intangible. On a blockchain network, virtually anything of value may be stored and traded, reducing risk and improving efficiency for all users. Generally, a blockchain is a digital ledger in which transactions are recorded. It is decentralized and is not controlled by any individual, group, or company [1]. As a structured technology, blockchain can be very difficult to change without the approval of the people who use it. Blockchain stores data as a decentralized ledger. Participants in this network can read, write, and verify transactions. Transactions cannot be modified or deleted. To support and secure the blockchain system, digital signatures, hash functions, and other cryptographic functions are used. These primitives ensure that transactions recorded in the ledger are integrity-protected and authenticated. This technology is called blockchain because new blocks are linked to older ones to form a chain. The first appearance of this concept was in a publication written by S. Haber and W.S. Stornetta in 1991 [2]. In general, blockchain technology is credited to Satoshi Nakamoto, who developed the theory and implemented the technology in 2008 and 2009, respectively, in the cryptocurrency Bitcoin, the most well-known blockchain application. Blockchain technology has attracted significant attention from academia and industry in recent years because of its advanced features. It can be applied to a variety of applications beyond cryptocurrencies. Blockchain technology has become a leading technology of internet interaction systems, including the Internet of Things (IoT) [3].
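To make the chaining idea above concrete, the following minimal Python sketch links blocks by the hash of their predecessor, so that tampering with any earlier block invalidates every later link. It is an illustration only: the block structure, field names, and toy transactions are hypothetical and do not correspond to any particular blockchain implementation.

```python
import hashlib
import json
from dataclasses import dataclass
from typing import List

@dataclass
class Block:
    index: int
    data: str        # simplified stand-in for a list of transactions
    prev_hash: str   # hash of the previous block, which links the chain

    def hash(self) -> str:
        # Hash the block's contents; any change in data or prev_hash changes this digest
        payload = json.dumps({"index": self.index, "data": self.data,
                              "prev_hash": self.prev_hash}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records: List[str]) -> List[Block]:
    chain = [Block(0, "genesis", "0" * 64)]
    for i, rec in enumerate(records, start=1):
        chain.append(Block(i, rec, chain[-1].hash()))
    return chain

def is_valid(chain: List[Block]) -> bool:
    # Each block must reference the hash of its predecessor
    return all(chain[i].prev_hash == chain[i - 1].hash() for i in range(1, len(chain)))

chain = build_chain(["Alice pays Bob 5", "Bob pays Carol 2"])
print(is_valid(chain))                  # True
chain[1].data = "Alice pays Bob 500"    # tamper with an earlier block
print(is_valid(chain))                  # False: the change breaks every later link
```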
Our motivation in this paper is to inform and assist readers in becoming familiar with blockchain technology and its security issues, particularly those who carry out transactions using blockchain technology and researchers interested in developing blockchain technology and evaluating its security issues. To search publications and information on the Internet, the first step is to identify keywords such as blockchain, consensus algorithm, cryptography, cryptocurrency, and blockchain security. A second approach is to review papers that have been published in top conferences and journals that deal with blockchain. In this paper, we provide the following main contributions: • A detailed survey was conducted on blockchain technology. • A systematic survey of blockchain applications is conducted in this paper; 10 application areas are considered. • Security and privacy issues were also addressed. Therefore, we encourage further efforts to survey and develop blockchain technology for widespread adoption. The rest of this paper consists of the following sections: In Section II, we provide an overview of the history of blockchain technology. Typical consensus algorithms used in blockchain are described in Section III. In Section IV, we focus on blockchain applications. In Section V, we summarize the technical risks, attacks, and challenges of security in this area, and in Section VI, we conclude this paper. II. HISTORY OF BLOCKCHAIN Chaum's Ph.D. thesis, published in 1982, was the first to suggest a blockchain-like protocol. A paper by Haber and Stornetta published in 1991, titled "How to Time-Stamp a Digital Document", detailed the concept of time-stamping digital data cryptographically [3]. In 1998, Nick Szabo proposed the creation of Bit Gold, an early attempt at the creation of a decentralized virtual currency. Although the project was never implemented, Szabo's Bit Gold is generally regarded as the basis for Satoshi Nakamoto's Bitcoin protocol [4]. Modern-day blockchain technology is widely believed to have been first implemented by Satoshi Nakamoto in 2008. He proposed direct online payments between parties without the use of a third-party intermediary. Rather than relying on trust, that paper presented an electronic payment system based on cryptographic proof [5]. Ethereum was introduced in 2013 as a blockchain technology for executing smart contracts on a decentralized platform. With Ethereum, it is possible for developers to create markets, store transactions, and move funds according to written instructions, all without the involvement of middlemen. Unlike Bitcoin, Ethereum is a ledger technology that is being used by companies to develop new programs, expanding blockchain beyond the realm of currencies for the first time [6]. With the launch of the Ethereum platform in 2015, blockchain could be used for storing and processing loans and contracts. Using an algorithm known as a smart contract, this technology ensures the execution of an agreed action between two parties. Due to Ethereum's ability to provide a faster, safer, and more efficient environment, it became extremely popular. Unlike many other blockchain projects, Ethereum enables trustless distributed applications to communicate on its own blockchain, which later gave rise to a new concept called Ethereum 2.0 [7]. Hyperledger is open-source software for blockchains that was announced by the Linux Foundation in 2015.
The Hyperledger blockchain framework aims to build enterprise blockchains, which are different from Bitcoin and Ethereum. Blockchain attracted interest with its capability to enable anonymity, but the real appeal lies in its capability to enable complete privacy. As will be discussed in Section IV, many applications of blockchain technology have been discovered across a wide range of industries. The following Fig. 1 summarizes the history of blockchain technology. Since everyone can participate in Bitcoin and Ethereum's blockchain networks, they are considered public blockchains. Because participants must be verified before joining the network, Hyperledger blockchain networks are considered private blockchains, also known as permissioned blockchains. The following Table I summarizes the differences between Hyperledger and Ethereum, two popular blockchain platforms and networks. A. Blockchain Layers According to Melanie Swan, blockchain technology has passed through two stages. The first stage is blockchain 1.0, represented by Bitcoin, and the second stage is blockchain 2.0, represented by Ethereum. In general, blockchain-based technologies include Bitcoin, Ethereum, Hyperledger, etc. [8]. Even though the implementations are varied, there are some similarities in the basic architecture. Blockchain environments can be classified into five layers, as shown in Table II: application, network, contract, consensus, and data layers. Consensus mechanisms are the main component of the consensus layer. In the contract layer, smart contracts are included. Various protocols for data transmission and verification are included in the network layer. In addition, it is pertinent to note that the blockchain is a typical peer-to-peer network. There is no central node, and all nodes are connected through a planar topology [9]. It is possible to transact between any two nodes. Each node within the network is free to join or leave at any time. A number of applications are included in the application layer, such as Bitcoin, Ethereum, and Hyperledger. B. Consensus Algorithms Among the many desirable characteristics of blockchain technology is the ability to verify the honesty of anonymous users when they enter transactions into the ledger. This is done by validating each transaction to ensure that it is legal before adding it to a block. Consensus algorithms are used to determine whether new blocks will be added to the blockchain, to ensure trust between the parties involved in the blockchain system, and to store transactions. As a result, consensus algorithms are the core of all blockchain transactions [10]. Every participant must follow a consensus protocol. Several consensus mechanisms have been developed for blockchains. These include Proof of Stake, Delegated Proof of Stake, Proof of Work, Proof of Elapsed Time, Directed Acyclic Graph, and so on. We will take a look at the most common algorithms shown in Table III. Proof of Work (PoW): The objective of this algorithm is to solve a computational problem through repeated guessing. Bitcoin and Ethereum employ PoW as the algorithm for their consensus. Because PoW requires a great deal of electricity and time, it is not widely used [11]. Proof of Stake (PoS): It ranks second in popularity as a consensus algorithm, and it involves fewer computations than PoW. It minimizes the time and energy waste issues that PoW has.
This consensus algorithm replaces Proof-of-Work puzzle solving as the method for reaching consensus in a distributed system. BlackCoin was the first cryptocurrency to use PoS [12]. Proof of Elapsed Time (PoET): It is a consensus algorithm for blockchain networks that keeps the process efficient by avoiding the over-utilization of resources and high energy consumption. The PoET method resembles the Proof-of-Work method (PoW) but requires less power, because it allows the processor to switch to other tasks after a period of time, which increases efficiency [13]. Byzantine Fault Tolerance (BFT): It is aimed at solving problems where some parties are untrustworthy but consensus still needs to be achieved. PBFT is designed to improve BFT. With PBFT, if hostile nodes represent fewer than one-third of all nodes, then the current state of the blockchain will be agreed upon by all participants. Blockchain systems are more secure when there are more nodes involved. Currently, Hyperledger Fabric is based on PBFT [14]. Directed Acyclic Graph (DAG): It consists of vertices and edges, which differentiates it from the other consensus algorithms. Transactions are represented by the vertices of the structure. This approach does not use blocks, nor does it need a mining process to add transactions. Each transaction is built upon the previous one rather than being grouped into a block. Several applications of DAG technology can be found in fields that require high speed and no fees, such as the Internet of Things (IoT) [15]. C. Smart Contract The smart contract, also called chaincode, is an essential feature of blockchain because it not only offers distributed, immutable execution of all activities, but also allows the creation of a non-subjective computer program that specifies how the process will be implemented. Such a contract encodes an agreed activity, and no parties beyond those involved need to take part. The Ethereum smart contract was designed to overcome some of the limitations of Bitcoin [16]. Enterprise blockchain applications are based on smart contracts, which will revolutionize the way businesses operate. Smart contracts can be developed by anyone without the need for an intermediary. With a smart contract, the process is autonomous, accurate, and cost-effective. D. Cryptography of Blockchain Blockchains enable confidential and secure transactions between anonymous parties. This trust is established through cryptography, thus eliminating the necessity for centralized institutions. By using cryptography, blockchain data is kept on the ledger. The cryptographic building blocks used in blockchain technology are as follows [17]: • Public Key Cryptography: Designed to create digital signatures and encrypt data. • Zero-Knowledge Proof: Prove that you know a secret without divulging it. • Hash Functions: Mathematical functions that map any input to a fixed-size, pseudo-random-looking output. 1) Public key cryptography: This method allows a transaction to be proven to have been created by the right user. Using a private key, a user can sign a message, producing what is known as a digital signature. Digital signatures are used in Hyperledger and Ethereum transactions to verify the authenticity of the sender and that the information has not been changed since it was signed. The Elliptic Curve Digital Signature Algorithm (ECDSA) is widely used to generate a combined set of private and public keys. 2) Zero-knowledge proofs: These are primarily used when users request to transfer money to other users.
Before committing a transaction, the blockchain must verify that the participant who is transferring funds has enough to complete the transaction. However, the blockchain does not need to know how much money the user has in total or who is spending it, so it has no idea who the user is or how much money the user owns. 3) Hash functions: Hash functions form an essential part of blockchain technology. There are five properties of a hash function that are critical for cryptography [18]: Fixed size: The hash function can accept any input and create an output of a fixed size. In order to provide digital signatures, blockchains employ hash functions to condense messages. Preimage resistance: Given an input, it is easy to produce its hash result. Despite this, reverse engineering the original input from the hash output is computationally infeasible; the only way to achieve the same result is to randomly select data to enter into the hash algorithm. 2nd preimage resistance: Given an input and its hash result, obtaining a second input that produces the same hash result is computationally infeasible. Collision resistance: It is computationally infeasible to find two distinct inputs that produce the same hash output. Big change: An entirely different hash output will be produced if any single bit of the input is changed (the avalanche effect). IV. BLOCKCHAIN APPLICATIONS According to the survey, blockchain applications include cryptocurrency, Internet-of-Things (IoT), finance, reputation systems, healthcare, security and privacy, advertising, copyright protection, society applications, energy, mobile applications, defense, digital records, supply chain, digital ownership management, automotive, intrusion detection, the agricultural sector, voting, identity management, education, law enforcement, property title registries, asset tracking, and so on [19]. An illustration of the spiraling applications of blockchain can be found in Fig. 2. More applications of blockchain systems are predicted to be developed in the future. To provide further information, we have selected the following 10 blockchain-based applications: A. Healthcare Prescription medications are being tracked and traced throughout supply chains using blockchain technology. The tool enables the easy and rapid prevention and regulation of counterfeit pharmaceutical distribution as well as the recall of ineffective and unsafe medications. Security of customer data is a primary goal in healthcare, as is the exchange of data between hospitals, governments, and research institutes, which facilitates the improvement of healthcare services. In one such project, Nokia has used wearable devices to track daily steps and hours of sleep and stored the data on the blockchain [20]. B. IoT People, places, and products can be connected via the Internet of Things (IoT), providing new opportunities for the generation of value in products and business processes. On the other hand, implementing this technology on a large scale is fraught with security concerns. Combining blockchain and IoT offers the following benefits. Blockchain technology can provide a robust framework for the quick and accurate detection of data manipulation. Due to the size of IoT networks, it can be difficult to detect failure patterns. Each IoT endpoint is assigned a unique key by blockchain technology, which facilitates the identification of inconsistencies. By combining IoT with smart contracts, it becomes possible to authorize automated responses.
Decentralization enhances security: Blockchain technology is decentralized, making it impossible for cybercriminals to compromise the system by hacking and corrupting a single server. Additionally, the use of blockchain technology allows tracking of user actions to provide information on who, when, and how users have used a particular device [21]. C. Government Blockchain technology can be used in the public sector to improve the quality and quantity of services. It can also be used to improve transparency and accessibility, as well as to share information between different organizations. In addition to being secure against online attacks, the blockchain is publicly available. Transactions are not editable or deletable once they have been added. This makes data transactions safe, secure, and accessible to anyone [22]. D. Power Grid The development of blockchain-based smart grids is aimed at improving energy distribution on a large scale. There is a significant amount of inefficiency in electricity distribution at the retail level. The use of blockchain technology and Internet-of-Things (IoT) devices for these types of services can reduce electricity bills by bypassing retailers and directly connecting consumers to wholesale distributors. Consumers connected to the smart grid can also shop around for the best rates from a variety of providers. This levels the playing field in an industry that has traditionally been dominated by a single provider. Several projects are leading the way in this area, including Grid+ and Energy Web Token [23]. E. Copyright and Royalties Music, films, and other creative mediums are subject to copyright and royalties. These are artistic mediums and do not appear to be linked to blockchain in any way. In the creative industries, however, this technology is quite critical in terms of ensuring security and transparency. It is common for music, films, art, etc., to be plagiarized without proper credit being given to the original creators. A detailed ledger of artist rights can be maintained on the blockchain to rectify this issue. The use of blockchain technology can also provide a secure and transparent record of artist royalties and deals with large production companies. Digital currencies, such as Bitcoin, can also be used to manage the payment of royalties [24]. F. Cryptocurrencies In 2008, it was announced that Bitcoin would be the first cryptocurrency. It was launched in 2009. Its total supply is capped at 21 million bitcoins. The miner receives a block reward and transaction fees upon finding a value that satisfies the difficulty target. Currently, about 90% of all BTC has been mined. Ethereum (ETH) is regarded as the second largest cryptocurrency based on market capitalization after Bitcoin (BTC). According to Cryptoslate [25], there are 2403 top cryptocurrencies ranked by market capitalization. Table IV below shows seven popular cryptocurrencies. Blockchain technology can be applied to the use of cryptocurrencies, thus taking full advantage of the features of this technology, including: • There is no intermediary involved in the payment process. • Processing fees are low. • Money can be sent at any time without delay or restriction. A few disadvantages of cryptocurrencies include: • Black money may circulate due to a lack of control. • Digital assets may be lost as a result of a security attack, which we will discuss in more detail later. • Some commentators claim that investing in cryptocurrencies is highly speculative and risky.
Tesla, for instance, advised investors to be aware of Bitcoin's volatility. G. Dubai Blockchain Strategy The Dubai Blockchain strategy is the result of a collaboration between the Dubai Future Foundation and the Digital Dubai Office. The purpose of this initiative is to continuously explore and evaluate the latest technological innovations that can be used to enhance the quality of life in cities through seamless, efficient, safe, and impactful solutions [26]. The strategy represents a powerful and innovative tool to influence the future of the Internet through the provision of safe and simple transactions. This will help to achieve the vision of making Dubai the world's first blockchain-powered city. When this strategy is successful, Dubai will contribute substantially to the future economy. H. Cloud Computing Cloud computing has had a major impact on the software technology industry due to its impressive benefits. There are many uses for cloud computing among businesses worldwide, including data storage and backup, software development and testing, disaster recovery, and more. Many industries are using cloud computing to build innovative solutions, including healthcare, automotive, and retail. Even with its advantages, cloud computing has limitations, and blockchain can help overcome them. Due to its transparency, security, and decentralized nature, blockchain technology is being used by millions of businesses for a variety of industrial applications. The use of blockchain and cloud technology together, however, can further revolutionize industries. While blockchain technology provides better network security, privacy, and decentralization, cloud computing provides high scalability and elasticity. Therefore, cloud technology and blockchain technology can be combined to produce innovative solutions [27]. I. e-Commerce Constant evolution is taking place in the e-commerce industry due to the development of new technologies and the creation of new ways to buy and sell products and services. Using blockchain technology, it is possible to create a decentralized database for storing information about products and customers. By doing so, customers would be able to obtain information about products, such as their origin and supplier, which would also reduce the possibility of fraud. A blockchain-based payment system can also ensure enhanced security and reduce the risk of fraudulent payments. As a distributed database, blockchain technology provides secure, transparent, and tamper-proof transactions. It is anticipated that this technology will revolutionize the e-commerce industry by improving the security of transactions and simplifying the fulfillment process. The system also enhances buyer-seller trust and transparency. Blockchain technology allows e-commerce businesses to track the history of orders and transactions to improve the customer experience. Customers would be able to track their orders more easily and find information about previous purchases. Additionally, blockchain can reduce the risk of fraud and facilitate the tracking and verification of transactions more reliably and securely. The implementation of this technology could prove to be a game changer for the e-commerce industry, which is currently plagued by issues of fake reviews, fraudulent transactions, and other security risks.
Businesses that use blockchain technology can reduce costs associated with processing transactions and shipping products, as well as improve the speed at which new products are introduced to the market [28]. J. Advertising A blockchain advertising application is a type of distributed ledger technology that promotes decentralization with the highest level of security and transparency. On the blockchain, digital records are immutable, which means that individuals have access to read but cannot amend the records. Blockchain can allow advertisers to track their advertising expenditures in real time since it stores information and transactions. It provides a level of transparency that cannot be achieved with existing systems. Transparency is not the only advantage. In advertising, speed is crucial, as it is difficult to track inventory and ensure high-quality products. Blockchain technology has the capability of keeping up with these challenges [29]. V. BLOCKCHAIN SECURITY A. Attacks on Blockchains Blockchains are distributed systems, so it makes sense to conduct research on their security. In this section, we will discuss the security risks associated with this technology. In order to gain a deeper understanding of blockchain security, it is essential to first understand the differences between private and public blockchain security, particularly regarding data access and participation capabilities, as we mentioned above. The following are the top security issues associated with blockchains [30]: 1) Sybil attack: In this attack, several fake network nodes are generated by hackers. Through the use of these nodes, the attacker can gain majority agreement and interrupt transactions. 2) Endpoint vulnerabilities: Another vital concern in the security of blockchain is the vulnerability of endpoints. Electronic devices such as mobile phones and computers are used to interact with the blockchain network. Observing the behavior of users and targeting their devices allows hackers to steal the user's key. This is perhaps one of the most prominent security issues associated with blockchain technology. 3) 51% attack: A 51% attack occurs when one user or institution controls more than half of the hash rate and takes control of the entire system. Hackers can then modify transactions and prevent them from being confirmed. They can even reverse transactions that have already been completed, leading to double spending. 4) Phishing attacks: Phishing attacks are designed to steal user credentials. An email that appears to be legitimate is sent to the wallet key owner. A fake hyperlink is attached to the email that requires the user to enter their login details. By gaining access to a user's credentials and private information, it is possible to cause damage to the user and the blockchain network as a whole. 5) Routing attacks: In this attack, participants are usually unaware of the threat because the transmission of data and the conduct of operations continue as usual. A potential danger is that such attacks could reveal sensitive information or generate revenue without the user's permission. Blockchain applications and networks rely critically on the real-time movement of enormous amounts of information, and due to the anonymity of accounts, hackers may be able to intercept information as it is transmitted through Internet service providers. 6) Private keys: You will need a private key in order to access your funds. A hacker can easily guess the private key if it is weak. Your funds could be accessed as a result.
Keeping your private key secret is extremely critical, and it should be strong enough not to be guessed easily. 7) Malicious nodes: Additional security problems related to blockchain technology include the threat of malicious nodes. An attempt to disrupt the network will occur once a dishonest actor has joined the network. In order to accomplish this, they will attempt to reverse transactions or flood the network with transactions. B. Security Measures of Blockchain To ensure the security of blockchain applications, security must be considered at all layers, including permission management through several security measures [31]. The following are some of the security measures of blockchain: 1) Blockchain governance: Determining how existing organizations or users leave or join the network, and providing mechanisms to prevent malicious actors, manage errors, secure data, and address issues between parties. 2) Data security: While data compression is generally regarded as the most effective method for identifying what data should be kept on-chain, additional privacy measures should be implemented to hash data, cloud storage, and data in transit. 3) Security of blockchain network: Blockchain is a distributed system, which requires network connections from various participants beyond a single organization to interact. All of these factors have the potential to introduce security exploits or flaws. Part of governance, therefore, includes reviewing security protocols for users [32]. 4) Blockchain application security: Security applications are vulnerable points and should be protected with effective user identification and endpoint security measures. For private blockchains, where access and use are limited to authorized participants, it may be necessary to provide different levels of authorization that may change with time. 5) Smart Contracts Security: Smart contracts consist of a set of codes within the blockchain, triggered by a set of programmed conditions. This presents another point of vulnerability as their reliability determines whether the operation and the results can be trusted. 6) Use of trusted third-parties: Security evaluations, penetration checks, and reviews of the source code of smart contracts and blockchain implementations should be performed only by trusted individuals. Use these to protect against new security threats, such as unauthorized access to cryptographic algorithms [33]. VI. CONCLUSION During the past few years, blockchain technology has attracted a great deal of attention due to its advanced characteristics of decentralization, autonomy, integrity, immutability, verification, and fault tolerance. In terms of the future scope, the primary priority will be addressing the security concerns arising from the various types of blockchain networks. Furthermore, consensus algorithms such as PoW implemented on blockchain have several drawbacks. Thus, the development of a consensus algorithm that is more efficient will result in more cost-effective blockchain networks. This survey introduces an in-depth overview of blockchain technology. A brief historical overview of blockchain was presented, followed by a comparison of the most widely used consensus algorithms. It has been discussed in detail how public key cryptography and hash functions applied to blockchains can be used for security, identification, and non-repudiation purposes. In addition, it provides detailed information and comparisons of some cryptocurrencies used in blockchain. 
Also, we focus on various categories of top security risks associated with blockchain technology. Finally, through this effort, we hope that readers will gain a deeper understanding of blockchain technology. We also hope that individuals will pay more attention to the security of blockchain systems.
6,638.8
2023-01-01T00:00:00.000
[ "Computer Science" ]
Planar Inductor for Biological Experimentation in Pulsed Magnetic Fields In this paper, we present a novel design of a multilayer planar inductor for biological experimentation in pulsed magnetic fields. The presented planar inductor, together with the developed high-voltage generator, is capable of delivering 1 T homogeneous magnetic fields in a volume of 17 µl with a pulse repetition frequency of up to 10 kHz. The finite element method was applied to evaluate the magnetic field distribution and heat dissipation of the proposed multilayer planar inductor on an aluminium oxide Al 2 O 3 substrate. The computed and experimental results are presented as well. DOI: http://dx.doi.org/10.5755/j01.eie.24.2.20632 I. INTRODUCTION The application of pulsed magnetic fields has increased dramatically during the last few decades. Scientists and engineers have found applications of magnetic fields in areas such as military applications (rail guns) [1], [2], the food industry (food preservation) [3], [4], medicine ("Hall effect" imaging, tomography) [5]-[7], biology (delivery of nanoparticles) [8]-[11], or even in space programs (active shielding from radiation) [12], [13]. One of the novel areas of application of high pulsed magnetic fields is magnetoporation, the permeabilization of living biological cells using high pulsed magnetic fields. The original and seminal papers [14]-[17] on the permeabilization of living cells using pulsed magnetic fields have attracted considerable interest in the scientific community, as the method can serve as a contactless counterpart of the well-known electroporation phenomenon [18]-[22], in which electric fields of up to 30 kV/cm are applied across the cell membrane, causing an increase of the cell transmembrane potential. When the transmembrane potential reaches 200 mV-1 V, depending on the cell type, the cell membrane becomes permeable to small molecules due to the appearance of permanent nanopores in the cell membrane. Despite all its advantages, electroporation requires tens of kilovolts to be applied, which can lead to cell death [23]-[26]. In contrast, the application of high magnetic field pulses with high dB/dt induces an electric field that increases the membrane potential. In this way, magnetoporation allows contactless cell membrane permeabilization without the side effects of electroporation [14], [27]-[29]. The existing magnetoporation systems can generate pulsed magnetic fields up to 20 T. To generate such pulsed magnetic fields, complicated pulse-forming networks and heavy, large-volume, and expensive pulsed magnetic field generators with limited transport possibilities are used. Usually these systems are used to investigate the phenomenon itself. Also, the lack of such generators, due to their complexity and price, impedes their applicability for small research groups or individual researchers. In this paper, we present a novel multilayer planar inductor together with a magnetic field generator for biological experiments in pulsed magnetic fields. Planar technology is well developed; therefore, different forms of inductors can be replicated and produced by photolithography. The proposed assembly of the planar inductor is capable of generating pulsed magnetic fields up to 1 T with a repetition frequency up to 10 kHz. II. PLANAR INDUCTOR
II. PLANAR INDUCTOR

A. Geometry

Figure 1 shows the proposed 3D model of the planar inductor for cell permeabilization in pulsed magnetic fields; the figure gives both the combined and the expanded view of the multilayer planar inductor. As can be seen from the expanded view, four separate planar inductors are stacked along the axis of revolution, each rotated by 90° relative to the previous one. The planar inductor is fabricated using a photolithography process and can be produced on a standard FR-4 textolite substrate as well as on our proposed aluminium oxide Al2O3 substrate. Since both FR-4 and Al2O3 are good insulators, the substrate acts as electrical insulation between the stacked coils operated at high voltage. The substrate also works as a heat sink and protects the structure from overheating during repetitive operation. Because planar technology is well developed, the proposed inductor geometry is easy to replicate without complex and expensive manufacturing techniques, which makes it more accessible to small research groups. After evaluating standard electroporation cuvette sizes, an inductor with an internal diameter of 3 mm was selected for the prototype; the metalized layer is 0.1 mm thick in the standard case. It should be noted that the proposed geometry is not limited in the number of stacked inductors as long as sufficient heat dissipation from the inductor plates is ensured.

B. Magnetic Field Simulation

The physical parameters of the coil and the simulation parameters for the magnetic flux density and heat exchange calculations are given in Fig. 2. A simplified mathematical model of the planar inductor was built using the "COMSOL Multiphysics" finite element analysis tool and is presented in Fig. 3. A time-dependent solver was chosen to capture the transient parameters of the current pulse delivered by the pulsed magnetic field generator, and the "Magnetic Fields (mf)" physics interface was chosen to evaluate the magnetic flux density B. Within this physics module, the "Coil Group Domain" option was selected to simulate the four coils stacked on top of each other, with a time-varying current source defined as the coil excitation. The concentric cylinders together with the inductor pads are treated as an axisymmetric model, which is revolved by 360° after simulation. As can be seen from Fig. 3, the four stacked inductor loops can be described as a Helmholtz-coil arrangement, and the magnetic flux densities generated by the separate planar inductors can be added by superimposing the four constituent fields. The simulation results showed that the homogeneous flux density generated in a volume of 17 µl is equal to 1 T. The proposed geometry is not limited in the number of stacked coils and allows generation of higher homogeneous magnetic fields in larger volumes by adding more planar inductors to the stack. The magnetic field lines for the developed geometry are given in Fig. 3.
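Since the four stacked loops act as a Helmholtz-like arrangement whose fields superimpose, the on-axis flux density can be approximated analytically before running the finite element model. The sketch below sums the textbook on-axis field of circular current loops; the layer spacing and drive current are assumed round numbers for illustration, not the simulated values.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def b_axis(z, loop_z, radius, current):
    """On-axis flux density (T) of identical circular loops centred at
    heights loop_z, added by superposition as described in the text."""
    z = np.asarray(z, dtype=float)
    b = np.zeros_like(z)
    for zk in loop_z:
        b += MU0 * current * radius**2 / (2.0 * (radius**2 + (z - zk)**2) ** 1.5)
    return b

# Assumed geometry: 4 layers spaced 1 mm apart, R = 1.5 mm, I = 1.5 kA
layers = [-1.5e-3, -0.5e-3, 0.5e-3, 1.5e-3]
z = np.linspace(-2e-3, 2e-3, 9)
print(b_axis(z, layers, radius=1.5e-3, current=1.5e3))
```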
C. Heat Dissipation

In biological experiments, as in electroporation or magnetoporation, it is important to maintain a constant temperature, or at least to ensure that the peak temperature does not exceed permissible limits during the experiments. The warm-up processes in the proposed planar inductor geometry were therefore also estimated with the "COMSOL Multiphysics" finite element analysis tool; the results are presented in Fig. 4. As can be seen, in a multilayer planar inductor on a glass textolite (FR-4) insulating substrate, a temperature of 85 °C can be reached at a distance l of 0.2 mm from the internal conductor surface when 30 repetitive pulses with a duration of 5 µs are applied. Such warming can be harmful to the biological objects under investigation. The thermal conductivity of FR-4 textolite is only about 0.4 W/mK; therefore, to increase heat transfer in the structure, aluminium oxide Al2O3 substrates with a thermal conductivity of 24 W/mK were proposed instead. The warming simulations of the multilayer planar inductor on textolite and on Al2O3 substrates are compared in Fig. 4: with the Al2O3 substrate, the maximal heating drops by 40 °C, resulting in a maximal temperature of 55 °C. Moreover, increasing the measurement distance l from the planar inductor reduces the temperature rise to no more than 25 °C, which ensures a temperature suitable for the biological substances [30].
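A zeroth-order check of these simulation results is the adiabatic limit, in which all Joule heat of a pulse train stays in the copper trace. The sketch below implements this estimate; the current, trace resistance, and copper volume are assumed round numbers, not values extracted from the model.

```python
RHO_CU = 8960.0   # copper density, kg/m^3
CP_CU = 385.0     # copper specific heat, J/(kg*K)

def joule_temp_rise(i_amp, r_ohm, pulse_s, n_pulses, copper_vol_m3):
    """Temperature rise (K) if all I^2*R heat of the pulse train is
    absorbed by the copper trace (no conduction into the substrate)."""
    energy = i_amp**2 * r_ohm * pulse_s * n_pulses   # dissipated energy, J
    mass = RHO_CU * copper_vol_m3                    # trace mass, kg
    return energy / (mass * CP_CU)

# Assumptions: 1.5 kA peak, 5 mOhm trace, 30 pulses of 5 us, ~10 mm^3 copper
print(joule_temp_rise(1.5e3, 5e-3, 5e-6, 30, 10e-9))  # roughly 50 K
```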
III. PROPOSED EXPERIMENTAL SETUP

A. Magnetic Field Generator

Figure 5 presents the prototype of the pulsed magnetic field generator for biological experiments in pulsed magnetic fields. The proposed generator consists of four independent and galvanically isolated high-voltage sources, an IGBT switch, and the planar inductor that generates the magnetic field. The proposed generator topology together with the multilayer inductor is not limited in the number of coils and generators, as each inductor is powered by a separate power source. A 32-bit ARM Cortex-M0 microcontroller (LPC1114) generates control signals with the following pulse parameters: pulse duration (10 µs-30 µs), pulse frequency (1 kHz-10 kHz), and pulse number (1-999), with an amplitude of 3.3 V. The signal is sent to a galvanically isolated driver, which drives two MOSFET transistors (PSMN2R6-40YS). When the microcontroller outputs 3.3 V, the left MOSFET (SW2) is in the ON state and the right one (SW3) is OFF, so the gate-emitter voltage of the IGBT (IXYX100N120B3) is +25 V; when the IGBT turns on, current flows through the planar coil. When the microcontroller signal is low (0 V), the left transistor (SW2) of the half bridge is turned off and the right one (SW3) is turned on, which results in a gate-emitter voltage of -9 V. This half-bridge control stage with MOSFET transistors provides a more powerful driver for IGBT control than ordinary IGBT drivers. During operation, turn-on and turn-off transients of the IGBT switch occur because of the inductive load; therefore, clamping diodes should be connected to protect the IGBT from transient overloads.

B. Magnetic Field and Heat Dissipation Measurements

To verify the simulation results for the pulsed magnetic field and the heat dissipated during field generation, a setup for measuring the magnetic field distribution and the temperature in the center of the planar inductor is proposed; its simplified circuit is shown in Fig. 6. The proposed generator produces magnetic field pulses with durations in the microsecond range, and measuring such high-gradient fields requires a fast, physically small magnetic field sensor. As can be seen from Fig. 3, the direction of the generated magnetic field is known, so a loop sensor was chosen for the axial magnetic field measurements. The magnetic field loop sensor was made at the Institute of High Magnetic Fields of Vilnius Gediminas Technical University (VGTU, Lithuania) and calibrated using a "Lakeshore 455" gaussmeter. The loop sensor was wound from 0.05 mm copper wire, consisted of 5 turns, and was placed in the middle of the planar inductor. Its signal, proportional to dB/dt, is integrated and monitored with an oscilloscope. To evaluate the temperature change inside the planar inductor, a Tektronix DMM4050 digital multimeter together with a PT1000 platinum resistance temperature sensor, chosen for its small size, is used. To simulate the experimental environment, the planar inductor with the aluminium oxide Al2O3 substrate should be filled with distilled water, and the PT1000 sensor placed in the middle of the inductor.

IV. CONCLUSIONS

The proposed multilayer planar inductor geometry, consisting of four planar inductors with an internal diameter of 3 mm and an outer diameter of 7 mm, together with the proposed experimental setup, can generate pulsed magnetic fields with amplitudes of up to 1 T. According to the finite element analysis, aluminium oxide Al2O3 substrates are recommended instead of FR-4 textolite to improve the heat-exchange characteristics of the treatment area: the warm-up of the multilayer pulsed inductor on the Al2O3 substrate is 40 °C lower than that of the textolite prototype, which matters for the outcome of biological experiments.

Figure captions (figures not reproduced here): Fig. 1. Expanded and combined view of the novel planar inductor for biological experiments in pulsed magnetic fields. Fig. 2. Physical and mathematical simulation parameters of the proposed planar inductor. Fig. 3. Planar inductor configuration for cell treatment in high pulsed magnetic fields, modelled as four concentric cylinders placed on top of each other and separated by textolite FR-4 or Al2O3 substrates for insulation and heat dissipation. Fig. 4. Calculated heat dissipation in the planar inductor. Fig. 6. The magnetic field and temperature measurement setup.
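As a closing illustration of the measurement chain in Section III-B, the sketch below recovers B(t) from the loop-sensor voltage, which is proportional to dB/dt, by numerical integration. The 5-turn count follows the text, but the loop radius and the stand-in waveform are assumptions; in practice the digitized oscilloscope trace and the Lakeshore calibration would be used.

```python
import numpy as np

def b_from_loop_voltage(v, t, n_turns, area_m2):
    """Recover B(t) from a loop-sensor trace: V = N * A * dB/dt, so
    B(t) = (1 / (N * A)) * integral of V dt (cumulative trapezoid)."""
    dt = np.diff(t)
    integral = np.concatenate(([0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * dt)))
    return integral / (n_turns * area_m2)

# 5-turn loop with an assumed 0.5 mm radius, fed a stand-in constant trace
t = np.linspace(0.0, 5e-6, 501)
v = 0.12 * np.ones_like(t)   # placeholder for the measured waveform (V)
b = b_from_loop_voltage(v, t, n_turns=5, area_m2=np.pi * (0.5e-3) ** 2)
print(b[-1])                 # peak field estimate in tesla
```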
Samsung: Align-and-Differentiate Approach to Semantic Textual Similarity

This paper describes our Align-and-Differentiate approach to the SemEval 2015 Task 2 competition for English Semantic Textual Similarity (STS) systems. Our submission achieved the top place on two of the five evaluation datasets. Our team placed 3rd among 28 participating teams, and our three runs ranked 4th, 6th, and 7th among the 73 runs submitted by the 28 teams. Our approach improves upon the UMBC PairingWords system by semantically differentiating distributionally similar terms. This novel addition improves results by 2.5 points on the Pearson correlation measure.

Introduction

Since its inception in 2012, the annual Semantic Textual Similarity (STS) task has attracted an increasing amount of interest in the NLP community. The task is to measure the semantic similarity between two sentences on a scale ranging from 0 to 5 (Agirre et al., 2012; Agirre et al., 2013; Agirrea et al., 2014), where 0 means unrelated and 5 means complete semantic equivalence. For example, the sentence "China's new PM rejects US hacking claims" is semantically equivalent to the sentence "China Premier Li rejects 'groundless' US hacking accusations" even though there are many word-level differences between the two sentences. Improvements in the STS task can advance or benefit many research areas, such as paraphrase recognition (Dolan et al., 2004), automatic machine translation evaluation (Kauchak and Barzilay, 2006), ontology mapping and schema matching (Han, 2014), Twitter search (Sriram et al., 2010), image retrieval by captions (Coelho et al., 2004), and information retrieval in general.

Measuring semantic similarity is difficult because it is easy to express the same idea in very different ways. Both word choice and word order can have a great impact on the semantics of a sentence, or none at all. For example, the sentences "A woman is playing piano on the street" and "A lady is playing violin on the street" have a semantic similarity score of only 2, because pianos are not violins, so the two events in the sentences must be different. This is problematic because common solutions, such as bag-of-words representations, parse trees, and word alignments, measure word choice and word order. We improve upon existing word-choice approaches with better measures to semantically differentiate distributionally similar terms, and we use these measures to also improve the word alignment. Our solution is an Align-and-Differentiate approach, in which we greedily align words between sentences before penalizing non-matching words in the differentiate phase. Our system improves upon the successful UMBC PairingWords system by about 2 points of Pearson correlation. The success of the PairingWords system is largely due to its high-quality distributional word-similarity model, described in (Han et al., 2013). The distributional similarity model can tell that "woman" and "lady" in the above example are highly similar, which is usually correct, but it also says that "piano" and "violin" are very similar, which in many contexts is incorrect. While distributional similarity measures can be criticized for producing high similarity scores for antonyms and contrasting words, we find that this property is actually advantageous when performing word alignment between two sentences.
We take advantage of this property by first aligning with distributional similarity, and then differentiating by penalizing alignments of words that are semantically disjoint (e.g., antonyms). This technique of first aligning and then differentiating is our key improvement. The remainder of the paper proceeds as follows. Section 2 briefly revisits the UMBC PairingWords system. Section 3 presents our new Align-and-Differentiate approach. Section 4 presents and discusses our results.

UMBC PairingWords System

The PairingWords system (Han et al., 2013) uses a state-of-the-art word similarity measure to align words in the sentence pair and computes the STS score using a simple metric that combines the individual term-alignment scores.

Precompute Word Similarities. First, a distributional model was built on an English corpus of three billion words (the UMBC WebBase corpus, available for download at http://ebiq.org/r/351), separated into paragraphs. Words are POS tagged and lemmatized. A small context window of ±4 words is used to count word co-occurrences. The vocabulary has a size of 29,000 terms, which primarily includes open-class words (i.e., nouns, verbs, adjectives, and adverbs). Singular Value Decomposition (SVD) (Landauer and Dumais, 1997; Burgess et al., 1998) was used to reduce the 29K word vectors to 300 dimensions. The distributional similarity between two words is measured by the cosine similarity of their corresponding reduced word vectors. The distributional similarity is then enhanced with WordNet (Fellbaum, 1998) relations in eight categories (see (Han et al., 2013)). Finally, it is wrapped with surface-similarity modules to handle the matching of out-of-vocabulary words.

NLP Pipeline. The Stanford POS tagger is applied to tag and lemmatize the input sentences. A predefined vocabulary, POS tags, and regular expressions are used to recognize multi-word terms, including noun and verb phrases, proper nouns, numbers, and time expressions. Stop words are ignored; the stop-word list was augmented with adverbs that occurred more than 500,000 times in the corpus.

Word Alignment Between Two Sentences. The alignment function g for a target word w in one sentence S is simply defined as the most similar word w' in the other sentence S' with respect to the aforementioned word similarity measure:

g(w) = arg max_{w' in S'} sim(w, w'). (1)

Score. The PairingWords system yields an STS score in the range [0, 1], linearly scaled to the standard STS score. This score is computed from the word-level semantic similarity of the aligned words. The PairingWords system uses a similarity threshold to decide whether a term can be aligned; if a term cannot be aligned, a penalty is imposed. The PairingWords STS score is therefore the result of subtracting the penalty score P from the overall term-alignment score T:

sts(S1, S2) = T(S1, S2) - P(S1, S2), (2)

where S1 and S2 are the sets of words/terms in the two input sentences.

Align-and-Differentiate Approach

Our system extends the UMBC PairingWords system by differentiating distributionally similar terms, resulting in a conceptually new framework for the STS challenge. Figure 2 illustrates our system. After preprocessing there are four main algorithms: align, differentiate, score, and rescore.

Precompute Word Similarities. We reused the distributional model built for the UMBC PairingWords system.
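As an illustration of this precomputation, the sketch below reduces a toy word-by-context co-occurrence matrix to 300 dimensions with SVD and measures similarity as the cosine of the reduced vectors. Only the 300-dimension figure comes from the text; the toy matrix, vocabulary size, and library choice are assumptions.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

rng = np.random.default_rng(0)
# Toy co-occurrence counts: 1,000 "words" x 5,000 context features;
# the real model uses ~29,000 terms from a 3-billion-word corpus.
cooc = rng.poisson(1.0, size=(1000, 5000)).astype(float)

svd = TruncatedSVD(n_components=300, random_state=0)
vectors = svd.fit_transform(cooc)            # (1000, 300) word vectors

def word_sim(i: int, j: int) -> float:
    """Cosine similarity of two reduced word vectors."""
    a, b = vectors[i], vectors[j]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(word_sim(3, 7))
```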
NLP Pipeline. In addition to the basic NLP techniques used by PairingWords (Section 2.2), we use the Stanford dependency parser to translate the input sentences into their dependency-graph representations.

Word Alignment Between Two Sentences. For alignment we upgraded the PairingWords approach (see Equation 1) with candidate disambiguation. If multiple candidates exist (ambiguity), we use their neighboring words in the sentences and dependency graphs to carry out disambiguation: for two mapping candidates, we find their neighboring words in terms of dependency relations and choose the candidate with the highest neighbor similarity. This alignment method is directional. In domains where we have high confidence that the dependency parser will correctly parse both sentences, we require mutual agreement in both directions; mutual alignment is computed by finding g such that g(w) = w' and g(w') = w. The similarity function sim(w, w') is the word similarity function described in Section 2.1.

Following the PairingWords system, we use a similarity threshold of 0.05 to determine whether a vocabulary word has at least some minimum similarity with any of the words in the other sentence. We call a word Out Of Context (OOC) if the threshold is not satisfied. The appearance of OOC words can be an indicator of different sentence semantics, as in the example "A beautiful red car" vs. "A beautiful red rose", where "car" is an OOC word with respect to the other sentence. The impact of OOC words on semantic equivalence is disproportionately high; therefore, we penalize semantic similarity scores in proportion to the number of OOC words. However, we observed that OOC words that merely supply additional detail should not be penalized. For example, in the two sentences "Matt Smith to leave Doctor Who after 4 years" and "Matt Smith quits Doctor Who", the word "year" is an OOC word that does not significantly reduce the semantic equivalence. We found that many of these extraneous and benign OOC words do not represent physical objects, i.e., something that can be touched. Hence, we chose to penalize only OOC words that are physical objects. WordNet has a physical object synset, and we use its descendants to collect the set of physical objects.

Differentiate. This subsection defines Disjoint Similar Concepts and then describes how we identify them. The semantic similarity of two words is the degree of semantic equivalence between them; we may also say it is the ability to substitute one term for the other without changing the meaning of a sentence. Many distributionally similar terms are not semantically similar. Examples include "good" vs. "bad", "cat" vs. "dog", "Tuesday" vs. "Monday", and "France" vs. "England". Existing research on distributional models has mainly focused on antonyms or contrasting words (Mohammad et al., 2008; Scheible et al., 2013; Mohammad et al., 2013). However, as the above examples show, the scope of distributionally similar but not semantically similar terms goes far beyond antonyms. Hereafter, we refer to this category of terms as Disjoint Similar Concepts (DSCs). To the best of our knowledge, collecting Disjoint Similar Concepts is a novel research problem. General statistical methods are not readily available, but we can extract such information from human-crafted ontologies, such as WordNet. For this work, we identify Disjoint Similar Concepts as siblings under a common parent in an ontology, such as WordNet.
For example, in the electronics domain, we can assert that smart phone and tablet are DSCs if they are siblings with the same parent, electronics, in the ontology. We use a semi-automatic method to produce several sets of potential DSCs for our STS system, including animals, countries, vehicles, weekdays, and colors. First, we decide which types of DSCs are likely to appear in a dataset; for example, animals and vehicles will likely appear in the images training dataset. We penalize each aligned word pair that constitutes Disjoint Similar Concepts: if the two words are antonyms, they are DSCs; if they share the same hypernym in WordNet and that hypernym is a potential DSC category, they are DSCs; otherwise, the concepts are considered semantically similar.

Score. We create a base similarity score E_i and then apply penalties for OOC words (O_i) and Disjoint Similar Concepts (D_i). Our primary method of producing the STS score is given by Equations 3 to 7 and is based on the directional alignment function described in Section 3.3; a code sketch of the scheme appears at the end of this section. E_i is the base score, where i indicates the alignment direction and SS_i represents the collection of pairs of semantically similar terms for direction i. O_i is the sum of penalties applied to OOC terms for direction i; in our current system, the function alpha(t) has a constant value of 1.0. D_i is the sum of penalties applied to Disjoint Similar Concepts for direction i. We normally set beta(t, g(t)) to 0.5, but the beta coefficient can also be tuned for different types of Disjoint Similar Concepts (e.g., animal vs. color) if a training dataset is available.

Rescore by Learning STS Offset Scores. We learn an offset score to account for and correct systematic biases in the Align-and-Differentiate algorithm using supervised machine learning. For domains with labeled data, we used bag-of-words Support Vector Machines (SVMs) in regression mode, with a linear kernel, to compute an offset score measuring the difference between our Equation 7 STS score and the gold-standard training STS score; we then add this offset to the Equation 7 STS score. This process improved our Pearson correlation from .7936 to .8162 on the 2014 STS data in a ten-fold cross-validation setting. The SVM was trained on a length-normalized bag of words with additional non-normalized meta-features for (1) the length difference between the sentence pair, (2) the percentage of exact word-to-word matches between the sentences, and (3) the STS score produced by Equation 7. The bag-of-words feature values were calculated as the absolute difference between the number of times a word occurred in the first sentence and in the paired sentence; the bag of words was built from both single words and word bi-grams.
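The sketch below pulls the align, differentiate, and score steps together. It is a simplified reading of Equations 1 and 3-7, not the exact system: sim(), is_ooc(), and is_dsc() stand in for the distributional similarity, the physical-object OOC test, and the WordNet-sibling DSC test, and the normalization and the rescaling to [0, 5] are our assumptions. The constants follow the text (threshold 0.05, alpha = 1.0, beta = 0.5).

```python
def align(words, other, sim):
    """Directional greedy alignment: map each word to its most similar
    word in the other sentence (Equation 1 style)."""
    return {w: max(other, key=lambda x: sim(w, x)) for w in words}

def directional_score(words, other, sim, is_ooc, is_dsc,
                      thr=0.05, alpha=1.0, beta=0.5):
    g = align(words, other, sim)
    base = ooc_pen = dsc_pen = 0.0
    for w in words:
        s = sim(w, g[w])
        if s < thr:
            # only OOC words that are physical objects are penalized
            ooc_pen += alpha if is_ooc(w) else 0.0
        elif is_dsc(w, g[w]):
            dsc_pen += beta          # disjoint similar concepts
        else:
            base += s                # semantically similar pair
    return (base - ooc_pen - dsc_pen) / max(len(words), 1)

def sts(s1, s2, sim, is_ooc, is_dsc):
    """Average the two directional scores and rescale to [0, 5]."""
    a = directional_score(s1, s2, sim, is_ooc, is_dsc)
    b = directional_score(s2, s1, sim, is_ooc, is_dsc)
    return 5.0 * max(0.0, (a + b) / 2.0)
```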
Table 1 shows the official results of our three runs, alpha, beta, and delta, in the 2015 STS task. Each entry gives a run's Pearson correlation on a dataset and the rank of the run among all 73 runs submitted by the 28 teams; the last row shows the weighted mean and the overall ranks of our three runs. The alpha run was produced by applying the align-and-differentiate algorithm to the five datasets with the same parameter settings. The beta run was produced without penalizing OOC terms, except on the images dataset; the results with OOC penalties are slightly better, but fall just short of a 95% confidence level (using paired t-tests). On the images dataset, we exploited the dependency structure in the align-and-differentiate algorithm. We used the supervised ML model to rescore our STS scores only for the delta run, on the Headlines and Images datasets.

Results and Discussion

Our results on the forums and beliefs datasets were surprisingly much lower than on the other datasets, due to the PairingWords system's poor baseline performance there, as shown in Table 2. We speculate that this drop in performance is caused by the PairingWords system ignoring words that are not nouns, verbs, adjectives, or one of a limited set of adverbs; these include common, meaningful words such as "how" and "why" in both datasets. Our approach of semantically differentiating distributionally similar terms, as shown in Table 2, is a statistically significant improvement at the 95% confidence interval.
Neuromarketing Solutions based on EEG Signal Analysis using Machine Learning

Marketing campaigns that promote and market various consumer products are a well-known strategy for increasing sales and market awareness, and hence the profit of a manufacturing unit. "Neuromarketing" refers to the use of unconscious mechanisms to determine customer preferences for decision-making and behavior prediction. In this work, a predictive modeling method is proposed for recognizing consumer preferences for online (e-commerce) products as "likes" and "dislikes". Volunteers of various ages were exposed to a variety of consumer products, and their EEG signals and product preferences were recorded. Artificial neural networks and other classifiers such as logistic regression, decision tree, k-nearest neighbors, and support vector machine were used to perform product-wise and subject-wise classification using a user-independent testing method. The subject-wise classification results were relatively low, with artificial neural networks (ANN) achieving 50.40 percent and k-nearest neighbors achieving 60.89 percent. The product-wise classification results were considerably higher: 81.23 percent using artificial neural networks and 80.38 percent using support vector machine. Keywords—Electroencephalogram (EEG); brain-computer interface; neuromarketing; machine learning; artificial neural networks

I. INTRODUCTION

E-commerce is a growing field these days. People want to expand their businesses, so they spend money on marketing to learn about their customers' preferences. Neuromarketing is a developing field with enormous application potential in marketing, brand management, and advertising. It emerges from combining relevant concepts from neural science, psychology, human neurophysiology, and even neurochemistry, and it connects consumer-behaviour research with neuroscience [1]. Consumer behaviour quite often undermines the effectiveness of traditional marketing methods, because consumers' reactions vary when they are exposed to advertisements. Neuromarketing is the key to gaining insight into the minds of consumers: it does not require the consumer's conscious participation, but operates on the unconscious state of the brain. Neuromarketing assesses the brain's reaction to advertising stimuli, and thus differs from the self-reports that consumers provide during surveys; the truth can be revealed by studying the EEG signals directly [21]. As several reported studies show, the conscious and subconscious systems can at times provide contradictory interpretations. Individual choices influence the decision-making process not only through individual and cognitive assessments, such as questionnaire responses, but also through objective and implicit signals, such as eye movements and neural activities. Recent findings from functional magnetic resonance imaging (fMRI) and EEG studies have linked activity in the frontal theta and posterior gamma bands to the development of individual choice. These findings show that the physical reaction precedes deliberate decision making and is driven by implicit desires. As a result, neural behaviors associated with attention-related tasks, such as eye movements, can influence the consumer's preferences at an unconscious level.
Despite this, there have been few neurological studies on the relationship between visual attention and subjective interest: the determinants of subjective preference choices, such as the amount of visual perception and attention, are impossible to assess when using attractive faces with a wide range of visual features (e.g., facial contour, eye color, and hair length) [2]. Several commercial efficacy metrics can be calculated using neuromarketing; the factors to consider are emotional commitment, memory retention, purchase intent, novelty, perception, and attention. When customers make decisions, they are influenced by their emotions: a rising level of emotional interest raises the level of emotional commitment. Observing how the brain reacts to advertising stimuli can also help predict when customers will purchase, and when customers decide to buy a product, the level of encoding of the marketing stimuli influences the decision [3]. Neuromarketing provides knowledge that traditional marketing methods cannot. The significant advantage of neuromarketing techniques is that they collect quantitative data that can be used before the launch of a new product, increasing the likelihood of that product's success [3].

Electroencephalography (EEG) was developed to record brain signals; it is used to study brain activity by recording the postsynaptic potentials generated by neurons. With the development of tools, EEG is no longer limited to medical applications and has been extended to other fields: medicine, brain-computer interfaces (BCI), and neuromarketing are examples of EEG applications [4]. In [10][11], the authors proposed a predictive modeling method based on EEG signals to understand customer preferences for e-commerce products in terms of "likes" and "dislikes". EEG signals were recorded while volunteers of various ages and genders browsed various consumer goods, and the tests were performed on a dataset containing a variety of consumer goods. The accuracy of choice prediction was calculated using a user-independent testing approach and a hidden Markov model (HMM) classifier. The prediction results appear promising, and the methodology can be used to create business models [11]. In comparison to that work, this study introduces subject-wise classification as well; the previous study performed only product-wise classification. The goal of this study is to assist marketing researchers in making appropriate decisions for further increasing product sales by developing an EEG-enabled model that can replace the expensive imaging methods of present-day neuromarketing. In addition, by analyzing EEG signals, a neuromarketing system is provided to predict customer choices while viewing e-commerce products. As such, the main objective of this study is to investigate various tunings of artificial neural networks and other classifiers to improve the classification rates of product-wise classification and, for the first time, to perform subject-wise classification. Section II presents the background and related work in the field of neuromarketing. Section III presents our approach to building an EEG-based prediction model. Section IV presents the results of our study, and Section V concludes the paper with possible future recommendations.
II. RELATED WORK

We reviewed recent studies that used EEG signals to predict customers' responses, behavior, and emotions against self-reported ratings. These studies mostly focus on the relationship between brain imaging and customer decision-making. Kumar, Singh, et al. (2015) investigated the current state of neuromarketing, as well as the activities involved, including neuroimaging, EEG, fMRI, and eye-tracking. The paper examines the customer dialectic: "consumers contradict themselves, saying what they want but doing what they feel." The authors focused on four aspects of consumers: physical body, mind, heart, and spirit [5]. W. Anderson, Sijercic et al. (2007) worked on the classification of EEG signals recorded from four subjects while performing five mental tasks. Half-second segments of six-channel EEG data were classified into one of five groups, each corresponding to one of the five cognitive tasks completed by the four subjects. Two- and three-layer feed-forward neural networks were trained using 10-fold cross-validation and early stopping to avoid overfitting, and autoregressive (AR) models were used to represent the EEG signals. The average percentage of correctly classified test segments ranged from 71% for one subject to 38% for another. Clustering of the hidden-unit weight vectors of the resulting neural networks shows which EEG channels were most important in this discrimination problem [6,20]. Solhjoo, Nasrabadi, Golpayegani, et al. (2005) investigated chaotic-signal classification using HMM classifiers and EEG-based mental-task classification. The analysis of mental activities using EEG-based brain signals provides a better understanding of human brain function. The authors also noted that for chaotic EEG signals it is critical to determine whether probabilistic and statistical signal-processing tools (such as HMM-based classifiers) can handle chaotic signals. They examined how well HMMs classify various types of synthetically generated chaotic signals, and then evaluated the performance of such classifiers in classifying EEG-based mental tasks. The results in both cases indicate good performance [7]. Guo et al. (2013) developed a new recommender system for 3D e-commerce using EEG signals, proposing a novel augmented-reality recommender framework. The system makes recommendations based on customer preferences, taking into account both pre-purchase scores and post-purchase ratings. Positive emotions among users are evaluated using EEG signals recorded before interacting with 3D virtual products. Pre-purchase scores work in tandem with post-purchase ratings to address two major challenges that traditional recommender systems face: data sparsity and cold start. By properly utilizing both pre- and post-purchase scores, user preference can be modeled more reliably. The authors claimed that this boosts the effectiveness of modern recommender systems and pushes traditional e-commerce applications to adapt [8]. The authors of [9] conducted an experiment on EEG-signal classification using the wavelet transform, employing an artificial neural network (ANN) together with wavelet-transform feature extraction. The classifier is a three-layer feed-forward network trained with the back-propagation algorithm, and the network was trained on the wavelet coefficients.
Over 66% of the normal class and 71% of the EEGs in the schizophrenia group were correctly classified. Murugappan, Celestin Gerard et al. (2014) aimed to identify the most popular automotive brand in Malaysia using wireless EEG signals. The work considered advertisements for four major vehicle brands: Toyota, Audi, Proton, and Suzuki. The brain-activity responses of the participants (9 male and 3 female, ages 22-24) to the stimuli were acquired using a 14-channel wireless Emotiv headset with a sampling frequency of 128 Hz. The acquired signals were filtered using a surface Laplacian filter and a 4th-order Butterworth band-pass filter with cutoff frequencies of 0.5 Hz-60 Hz, and the alpha band (8 Hz-13 Hz) of the EEG was extracted using the same 4th-order Butterworth filter. The Fast Fourier Transform (FFT) was used to extract three statistical features from the alpha-band spectrum: power spectral density (PSD), spectral energy (SE), and spectral centroid (SC). The feature vector was constructed from the features extracted from all subjects for the four advertising stimuli and fed into two non-linear classifiers, k-nearest neighbor (KNN) and probabilistic neural network (PNN), to classify the subjects' responses to the advertising [10].

III. SYSTEM SETUP

In this study, EEG signals were recorded from 15 healthy people using a Muse 2 headset (a wireless neuro-signal acquisition device) connected to a mobile app called Muse Monitor, as shown in Fig. 1. The device has four EEG channels located at the AF7, AF8, TP9, and TP10 positions of the International 10-20 system. The Muse 2 headband samples internally at a frequency of 256 Hz. The EEG data are stored in a CSV file and then transferred to a computer for further processing. The EEG headset was mounted on each participant's head, and the participants were asked to view shopping products, as shown in Fig. 4. We recorded 450 EEG signals, each lasting 4 seconds. With its four dry sensors, the Muse 2 is a very user-friendly acquisition device, and we obtain raw data from all four sensors. Fig. 2 shows the raw signal from the AF7 channel, Fig. 3 shows the raw signals from the AF7, AF8, TP9, and TP10 sensors, and Fig. 4 shows the products used. EEG signals were collected while the user was viewing an item; after viewing, each consumer was asked to rate the product in one of two categories, like or dislike. The signal then passes through signal pre-processing and feature-extraction steps, and classification models are built, trained, and tested on the users' choices.

A. Data Preprocessing and Feature Extraction

Pre-processing is a necessary step in EEG processing because it converts the signal into a usable format. Initial pre-processing was done in Excel to ensure that each recording was exactly 4 seconds long. Fig. 3 shows the unfiltered raw EEG signals from the different channels: AF7, AF8, TP9, and TP10.

B. S-Golay Filter

Researchers have successfully used the Savitzky-Golay (S-Golay) filter for signal smoothing.
It is implemented using linear least squares with low-degree polynomials, reducing noise and smoothing the signal by fitting successive subsets of neighboring signal points. The S-G filter has two parameters: the polynomial degree and the frame size. PRO and SNR (signal-to-noise ratio) are the output variables used to evaluate EEG denoising with the S-G filter, and the experimental results show which polynomial degree works best [13]. For a signal S_j = f(t_j), j = 1, 2, ..., n, of length n, the smoothed output can be written as

Q_j = sum_{i = -m}^{m} c_i * S_{j+i}, (1)

where m determines the frame span, c_i are the convolution coefficients, and Q_j is the smoothed output signal. Fig. 4 depicts the raw signal from a single channel, while Fig. 5 depicts the smoothed signal after applying the S-Golay filter.

C. Wavelet Transform (DWT)-based Features

The most important part of distinguishing objects of one class from another is feature extraction, the process of converting raw signals into useful features; it is required before proceeding to the next steps. For classification of the EEG signal, we used features based on the discrete wavelet transform (DWT) [14]. The DWT typically produces five bands of varying frequencies: alpha, beta, theta, gamma, and delta. Mathematically, the DWT can be written as

W(j, k) = (1 / sqrt(S^j)) * sum_t x(t) * psi((t - k * S^j) / S^j), (2)

where the scaling factor S is usually set to 2. The DWT is commonly used in biomedical signal processing because it represents a signal in both the time and frequency domains. The basic idea behind the DWT is to transform the input signal into small waves using multistage decomposition. Wavelet analysis of a signal can be performed at different frequency bands by decomposing it into approximation (A) and detail (D) coefficients. At each stage, the signal is processed by two digital filters, a low-pass filter (L) and a high-pass filter (H); the low-pass filter eliminates high-frequency fluctuations while preserving slow trends.

D. Classification

Following the feature-extraction step, we used these features for classification. The classifiers employed include support vector machine (SVM) [15], logistic regression [16], decision tree [17], random forest [18], and artificial neural networks [19]. The features were classified subject-wise as well as product-wise on the different bands. The dataset was divided into training and testing sets, with 80 percent of the data used for model training and 20 percent for model testing; a sketch of this processing chain is given below.
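The sketch strings the described steps together: Savitzky-Golay smoothing, DWT band-energy features, and a cross-validated classifier. The trial count and length follow the text (450 recordings of 4 s at 256 Hz), but the filter frame, wavelet choice, decomposition level, and the random stand-in data are all assumptions.

```python
import numpy as np
import pywt  # PyWavelets
from scipy.signal import savgol_filter
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Stand-in data: 450 trials of one channel, 4 s at 256 Hz; real input
# would be the Muse 2 recordings described in Section III.
X_raw = np.random.randn(450, 1024)
y = np.random.randint(0, 2, size=450)      # like / dislike labels

# 1) Savitzky-Golay smoothing (frame size and polynomial order assumed)
X_smooth = savgol_filter(X_raw, window_length=31, polyorder=3, axis=1)

# 2) DWT decomposition; the coefficient arrays at different levels play
#    the role of the frequency bands, summarized here by their energy
def dwt_features(x, wavelet="db4", level=5):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

X_feat = np.vstack([dwt_features(x) for x in X_smooth])

# 3) 10-fold cross-validated classification (kNN shown; SVM, decision
#    tree, logistic regression, or an ANN can be swapped in)
clf = KNeighborsClassifier(n_neighbors=5)
print(cross_val_score(clf, X_feat, y, cv=10).mean())
```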
IV. RESULTS AND DISCUSSION

The features are fed into an artificial neural network as well as various other classifiers such as SVM, LDA, logistic regression, random forest, and decision tree. The ANN results are as follows.

A. Subject-Wise Classification

We collected EEG data from 14 subjects while they watched and selected the products, and 10-fold cross-validation was used to validate the experimental results. Different models were trained and accuracies obtained using the five bands: alpha, beta, theta, gamma, and delta. These accuracies were evaluated for each product separately to carry out subject-wise classification.

B. Hyper-Parameter Tuning on the Theta Band for Model Selection

The ANN was trained on 14 subjects using the theta band, with columns named Theta AF7, Theta AF8, Theta TP9, and Theta TP10. Table I displays the tuning of the ANN model's parameters. To achieve the best results, the number of hidden layers, the number of neurons, the activation function of each layer, and the optimizer were all varied; the best theta-band result of 50.40 percent was achieved with one hidden layer of two neurons. The ANN model was trained on the theta, alpha, beta, gamma, and delta bands using 10 folds on 14 subjects, with the model parameters tuned for the best overall accuracy on each band. As shown in Table II, subject-wise accuracies of 50.40, 50.02, 50.39, 50.14, and 50.21 percent were obtained with artificial neural networks on the theta, alpha, beta, gamma, and delta bands, respectively. Testing different classifiers for the best subject-wise accuracy, the highest accuracies were obtained on the delta band: 57.30 percent with decision tree, 60.89 percent with k-nearest neighbors, and 51.34 percent with logistic regression. Hence, k-nearest neighbors proved to be the best algorithm for classifying delta-band signals. Fig. 6 shows the accuracy of the theta band over repeated tests. Fig. 7 shows that the maximum accuracy attained on the alpha band is 51.5%. Fig. 8 shows that the accuracy for the beta band ranges from 49.2% to 50.5%. Fig. 9 and Fig. 10 show that the minimum accuracies for the delta and gamma bands are almost the same, about 45.5%, with a maximum accuracy of 52%.

C. Product-Wise Classification

K-fold cross-validation with 10 folds was applied for the product-wise as well as the subject-wise classification. Training and testing accuracies were obtained for various models using the five bands: alpha, beta, theta, gamma, and delta. Fig. 11 depicts the accuracy graph of the ANN model trained using 14 products and 1 subject; the model converged after 15 epochs. As shown in Table III, the average product-wise accuracies obtained with artificial neural networks are 78.73, 76.14, 81.23, 74.12, and 82.19 percent on the alpha, theta, beta, gamma, and delta bands, respectively. A variety of classifiers were tested for product-wise accuracy: the highest accuracies on the delta band were 90.71 percent with decision tree, 92.21 percent with k-nearest neighbors, 82.37 percent with logistic regression, and 83.51 percent with support vector machine (SVM), as shown in Fig. 12. In our study, SVM and ANN performed better than in the previous study [11], and the results obtained are good enough to be used in practical business models.

D. Heat Map

The proposed physiological heat-map tool represents the relative distribution of physiologically inferred emotional or cognitive states of users on a given interface. To make a heat map in the MATLAB-based EEGLAB, the channel locations are first selected, then independent component analysis (ICA) is performed, and finally a 3D component map is plotted. Fig. 13 illustrates the 3D heat maps for a consumer's choices and clearly depicts the difference between the heat maps for "like" and "dislike" products: the EEG signals for "like" are mainly concentrated on the right hemisphere, while those for "dislike" are concentrated on the left hemisphere of the brain.
E. ICA (Independent Component Analysis) Components

Independent component analysis generates a set of weights over all electrodes such that each component is a weighted sum of the activity at all electrodes, with the weights designed to isolate the sources of the brain's electrical signals. Components containing blink artifacts are possibly the easiest to detect. We took a careful approach and deleted components from the data only when confident that they contained artifacts or noise with little or no signal. ICA can be used to clean data, separate sources, exclude certain components from the data, or reduce the data. When independent component analysis is used as a pre-processing method, components can be judged to contain artifacts based on their topographies, time courses, and frequency spectra. ICA also helps remove high-frequency noise from the EEG signal.

F. Discussion

In this work, we used EEG data to predict users' product preferences using neuroscience. The outcome demonstrates the efficacy of the proposed framework and offers an additional option alongside existing methods of predicting product market success. This study investigates and improves the classification accuracies of subject-wise and product-wise choice preferences; the results show that the proposed system achieves a classification accuracy of up to 92.21 percent on the delta band. The classification accuracies for all five bands were calculated both subject-wise and product-wise. Product-wise accuracy is higher than subject-wise accuracy because EEG signals vary more across subjects, whereas EEG signals for the same product are more similar. Another strong point is that our neuromarketing tool is simple, as we used four dry electrode sensors that can easily be placed on the forehead.

V. CONCLUSION

Using EEG signals, we predicted customers' product-selection preferences. The brain activity of 14 participants was recorded while they viewed products, using the four-sensor Muse 2 headset. Filters were applied to smooth the signals, which were then classified using artificial neural networks and other classifiers such as SVM, decision tree, logistic regression, and k-nearest neighbors. Using all of the above classifiers, we obtained subject-level and product-level accuracies. The results demonstrate the effectiveness of the proposed framework, which provides a better solution than traditional methods of predicting product success in the market. By extending the existing models, the framework can aid in the development of market strategies, research, and forecasting of market success. In the future, this work can be extended by analyzing fictitious responses to product preferences in comparison with neutral responses, and more powerful combinations of features and algorithms could be developed to improve prediction results.
Molar masses and molar mass distributions of commercial regenerated cellulose materials and softwood dissolving pulp determined by SEC/MALLS

The molar masses and molar mass distributions of three commercial regenerated cellulose samples, viscose rayon, Tencel, and Bemliese (or cuprammonium nonwoven), have been determined by dissolution in 8% (w/w) lithium chloride/N,N-dimethylacetamide (LiCl/DMAc) and subsequent size-exclusion chromatography with multi-angle laser-light scattering detection (SEC/MALLS). Before dissolution in LiCl/DMAc, the regenerated cellulose samples were pretreated by the following three methods: (1) soaking in ethylene diamine (EDA) and subsequent solvent exchange to N,N-dimethylacetamide (DMAc) through methanol, (2) soaking in water and subsequent solvent exchange to DMAc through ethanol, and (3) soaking in water, subsequent solvent exchange to tert-butyl alcohol through ethanol, and freeze drying. The pretreated samples were dissolved in 8% (w/w) LiCl/DMAc by stirring the cellulose/LiCl/DMAc mixtures for 1-3 weeks, followed by dilution to 1% (w/v) LiCl/DMAc for SEC/MALLS analysis. The EDA- and water-pretreated samples gave almost the same SEC elution patterns and molar mass plots, resulting in similar number- and mass-average molar masses. However, the freeze-dried samples gave 10%-20% lower mass recovery ratios than the EDA- or water-pretreated samples, probably because of incomplete dissolution of the freeze-dried samples in 8% (w/w) LiCl/DMAc. The average mass-average degrees of polymerization of viscose rayon, Tencel, and Bemliese were 340, 530, and 880, respectively. The slopes of the conformation plots were 0.58-0.62, showing that all of the molecules in the three regenerated cellulose samples were dissolved in 1% (w/v) LiCl/DMAc, forming linear random-coil conformations.

Introduction

Regenerated cellulose fiber is important for textiles, engineering filament yarns, and various medical and healthcare applications. Viscose rayon, Tencel (or Lyocell), and Bemliese (or cuprammonium nonwoven) are typical regenerated cellulose materials produced at the industrial level, and they have contributed to our cultural lives and technologies for a long time (Fink et al. 2001; Sayyed et al. 2019; Veit 2022). Although viscose rayon production and wastewater-treatment systems have some environmental issues caused by H2S emission into the atmosphere, viscose rayon is still the main regenerated cellulose fiber, and it is called artificial silk among man-made fibers (Kuchtoá et al. 2023). Tencel has been developed to overcome some of the shortcomings of the viscose rayon production process and to improve the fiber quality (Fink et al. 2001; Kreze and Malei 2003; Abu-Rous et al. 2007; Borbély 2008; Sayyed et al. 2019; Veit 2022). Bemliese belongs to another category of regenerated cellulose materials. Viscose rayon and Tencel are produced by dissolution of wood dissolving pulps with high α-cellulose contents (> 93%) in aqueous CS2/NaOH at room temperature and in thermally melted N-methylmorpholine N-oxide (NMMO) hydrate, respectively, followed by spinning, regeneration in aqueous media, washing with water, and drying. Bemliese is produced from cotton linters cellulose by dissolution in aqueous Cu(NH3)4(OH)2, spinning, fabrication, regeneration in an aqueous medium, washing with water, and drying to form a cuprammonium nonwoven, which is mainly used for medical, healthcare, and agribusiness applications (Veit 2022).
The molar masses and molar mass distributions of these regenerated cellulose materials are fundamental and important factors that influence the mechanical and other key properties of the materials. Size-exclusion chromatography combined with multi-angle laser-light scattering and refractive-index detection (SEC/MALLS/RI) gives molar mass distributions and number- and mass-average molar mass values (Mn and Mw, respectively). In this case, the cellulose samples should be completely dissolved in a solvent at the individual-molecule level, and the solution should be transparent (and free of fluorescence under laser-light irradiation). Furthermore, the cellulose molecules should be stable in the solvent, without depolymerization or side reactions during the dissolution process and storage of the cellulose solution. Lithium chloride/N,N-dimethylacetamide (LiCl/DMAc) is the only solvent system that satisfies the above requirements (Bikova and Treimanis 2002; Potthast et al. 2003; Dupont 2003; Ono and Isogai 2020). Some activation methods or pretreatments of the cellulose sample are required before the dissolution treatment in 8% (w/w) LiCl/DMAc for complete dissolution (Bikova and Treimanis 2002; Potthast et al. 2003; Dupont 2003). However, it has been reported that complete dissolution of regenerated cellulose materials is often difficult, resulting in inaccurate molar mass data by SEC/MALLS (Henninges et al. 2014; Siller et al. 2014; Silbermann et al. 2017).

In previous work, we succeeded in completely dissolving various native cellulose and plant holocellulose samples in 8% (w/w) LiCl/DMAc by pretreatment with 100% ethylene diamine (EDA). The cellulose and holocellulose samples were first soaked in EDA, and the EDA was then solvent exchanged to DMAc through methanol. Complete dissolution was achieved by stirring the EDA-pretreated cellulose and holocellulose samples in 8% (w/w) LiCl/DMAc at room temperature for a few days, weeks, or months, depending on the sample. The key to complete dissolution of all of the native cellulose and holocellulose samples is conversion of the cellulose I crystal structure in the sample to cellulose III or disordered structures by EDA treatment followed by methanol washing. The cellulose and holocellulose solutions in 8% (w/w) LiCl/DMAc are then diluted to 1% (w/v) LiCl/DMAc and subjected to SEC/MALLS analysis. Consequently, important information about native cellulose samples and plant holocelluloses has been obtained by SEC/MALLS (Ono et al. 2016a, b, 2017, 2018, 2021, 2022a; Ono and Isogai 2020). It is possible that low-molar-mass hemicellulose molecules dissolve slightly in EDA during pretreatment and are excluded from the SEC/MALLS data, which should be taken into account (Yamamoto et al. 2011).

In this study, two regenerated cellulose fibers, viscose rayon and Tencel, and one regenerated cellulose nonwoven, Bemliese, were selected, and the following three pretreatments were applied to the regenerated cellulose samples: EDA soaking as used in our previous studies, conventional water soaking, and water soaking followed by freeze drying. With the EDA- and water-soaking pretreatments, some mass losses may be unavoidable during the repeated solvent exchanges and centrifugations, resulting in inaccurate cellulose concentrations in the LiCl/DMAc solutions used for SEC/MALLS.
In contrast, more accurate cellulose concentrations of the LiCl/DMAc solutions are obtained with freeze-dried samples, which is significant for evaluating the completeness of dissolution in LiCl/DMAc from the mass-loss values (caused by, for example, incomplete dissolution) in SEC/MALLS analysis. Soaking cellulose samples, including some regenerated cellulose fibers, in dimethylsulfoxide followed by solvent exchange to DMAc has been reported to achieve complete dissolution in LiCl/DMAc for SEC/MALLS analysis (Silbermann et al. 2017). However, the mass recovery ratios of the starting cellulose materials in the LiCl/DMAc solutions subjected to SEC/MALLS analysis were not taken into account, and no data for water-activated or freeze-dried cellulose samples were provided. In this study, one softwood bleached sulfite pulp (SBSP) sample, which was prepared by acid sulfite pulping and subsequent bleaching and is used as a dissolving pulp for production of viscose rayon and some cellulose derivatives, was also dissolved in 8% (w/w) LiCl/DMAc and subjected to SEC/MALLS analysis as a reference (Mendes et al. 2021).

Samples

The viscose rayon and Tencel fibers were commercial products. Bemliese (or cuprammonium nonwoven) was kindly provided by Asahi Kasei Co., Ltd. (Miyazaki, Japan). The regenerated samples were cut into short lengths of 3-5 mm with scissors. The SBSP was a dissolving pulp produced from softwood chips by acid sulfite pulping and subsequent bleaching (Nippon Paper Co. Ltd., Japan); it contained 96% glucose, 1.5% xylose, and 0.9% mannose as neutral sugars (Ono et al. 2018). The 1 M cupriethylenediamine hydroxide solution (Cu(EDA)2(OH)2) was a commercial product (Sigma Aldrich, USA). LiCl, DMAc, and the other chemicals and solvents were laboratory grade (FUJIFILM Wako Pure Chemical, Co., Tokyo, Japan) and used as received.

Dissolution of the cellulose samples in 8% (w/w) LiCl/DMAc

The three regenerated cellulose samples (viscose rayon, Tencel, and Bemliese) and SBSP were dissolved in 8% (w/w) LiCl/DMAc according to the procedures shown in Fig. 1. The first procedure followed the EDA-activation method commonly used in our laboratory (Ono et al. 2016a, b, 2020). After vacuum drying at 40 °C for 1 day, the cellulose sample (20 mg dry mass) was soaked in 100% EDA (5 mL), and the mixture was stirred with a magnetic stir bar overnight. Solvent exchange from EDA to methanol (MeOH, 35 mL) was then performed by centrifugation, and the cellulose/MeOH mixture was shaken overnight; this treatment was repeated twice with fresh MeOH (35 mL each). The mixture was then solvent exchanged from MeOH to DMAc (35 mL) by centrifugation, and the cellulose/DMAc mixture was shaken overnight; this treatment was repeated once more with fresh DMAc. After centrifugation of the mixture to remove excess DMAc, 8% (w/w) LiCl/DMAc (5 g) was added to the cellulose sample, and the mixture was stirred at ~23 °C for 1 or 2 weeks. The second procedure followed the conventional water-activation method (Bikova and Treimanis 2002; Dupont 2003; Henninges et al. 2014) with slight modification. The vacuum-dried sample (20 mg dry mass) was soaked in water (20 mL), and the mixture was stirred overnight.
Solvent exchange from water to ethanol (EtOH, 35 mL) was then performed by centrifugation, and the mixture was shaken overnight. (Fig. 1: Scheme for dissolution of the three regenerated cellulose samples and one dissolving pulp in 8% (w/w) LiCl/DMAc by three activation methods for SEC/MALLS analysis.) This treatment was repeated twice with fresh EtOH (35 mL each). Solvent exchange from EtOH to DMAc was then performed by centrifugation, and the cellulose/DMAc mixture was stirred overnight. This treatment was repeated again with fresh DMAc. After centrifugation of the mixture to remove excess DMAc, 8% (w/w) LiCl/DMAc (5 g) was added to the cellulose sample, and the mixture was stirred at ~23 °C for 1, 2, or 3 weeks. In the third procedure, the three regenerated cellulose samples were dissolved in 8% (w/w) LiCl/DMAc according to the following freeze-drying method. The vacuum-dried sample (50 mg on dry weight) was soaked in water (20 mL), and the mixture was stirred overnight. Solvent exchange from water to EtOH (35 mL) was then performed, and the mixture was shaken for 3 h. Solvent exchange from EtOH to tert-butyl alcohol (t-BuOH) was then performed, and the mixture was shaken for 3 h. This treatment was repeated again with fresh t-BuOH, followed by freeze drying for 5 days. The freeze-dried sample (8 mg) was dispersed in 8% (w/w) LiCl/DMAc (2 g), and the mixture was stirred at ~23 °C for 1, 2, or 3 weeks.

SEC/MALLS analysis

The cellulose solutions in 8% (w/w) LiCl/DMAc obtained by the processes described in the previous section were diluted with fresh DMAc to prepare cellulose solutions in ~1% (w/v) LiCl/DMAc. Each solution was passed through a 0.45-µm poly(difluoroethylene) disposable filter (Millex, Merck Millipore, Tokyo, Japan) and then subjected to SEC/MALLS analysis with 1% (w/v) LiCl/DMAc as the eluent (Ono et al. 2016b, 2022a). KD-806M and KD-G columns (Shodex, Tokyo, Japan) were used as the SEC and guard columns, respectively. A MALLS detector (DAWN HELEOS-II, λ = 658 nm; Wyatt Technology, USA) and a refractive index detector (RID-10A, Shimadzu, Japan) were set in a high-pressure liquid chromatography system. ASTRA software (version 6.1, Wyatt Technology, USA) was used for data acquisition and the analyses. The number- and mass-average molar masses (Mn and Mw, respectively) of the cellulose samples were calculated using the value of 0.131 mL/g as the specific refractive index increment (dn/dc) (Ono et al. 2016a).

Viscosity-average degrees of polymerization of the cellulose samples

The freeze-dried cellulose sample (0.04 g) was soaked in water (10 mL), and the mixture was stirred for 10 min. The 1 M Cu(EDA)2(OH)2 solution (10 mL) was added to the cellulose/water mixture, and the solution was stirred until the cellulose sample was completely dissolved in 0.5 M Cu(EDA)2(OH)2 (20 mL). The relative viscosity of the solution, η_rel, was measured using a Cannon-Fenske-type capillary viscometer. The limiting viscosity number [η] (or intrinsic viscosity) was calculated from the specific viscosity η_sp using the Schulz-Blaschke equation (Schulz and Dinglinger 1941): [η] = η_sp/[c(1 + 0.28η_sp)], where c is the cellulose concentration (g/mL). The DPv value of the sample was calculated from [η] using the Mark-Houwink-Sakurada equation [η] = 0.909 × DPv^0.9 (Marx 1955; Isogai et al. 1989a, b).
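Because the two viscometry equations above are used in routine data reduction, the arithmetic is compact enough to script. The following is a minimal Python sketch of exactly those two equations; the efflux times are hypothetical placeholders, and the concentration corresponds to the 0.04 g / 20 mL preparation described above. It is an illustration, not the authors' software.

```python
def limiting_viscosity_number(t_solution_s: float, t_solvent_s: float,
                              c_g_per_mL: float) -> float:
    """[eta] in mL/g from the Schulz-Blaschke equation with k' = 0.28."""
    eta_rel = t_solution_s / t_solvent_s        # relative viscosity
    eta_sp = eta_rel - 1.0                      # specific viscosity
    return eta_sp / (c_g_per_mL * (1.0 + 0.28 * eta_sp))

def dp_v(eta_mL_per_g: float) -> float:
    """Viscosity-average DP from [eta] = 0.909 * DPv^0.9."""
    return (eta_mL_per_g / 0.909) ** (1.0 / 0.9)

if __name__ == "__main__":
    # Hypothetical efflux times (s); c = 0.04 g in 20 mL = 0.002 g/mL.
    eta = limiting_viscosity_number(180.0, 100.0, 0.002)
    print(f"[eta] = {eta:.0f} mL/g, DPv = {dp_v(eta):.0f}")
```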
Solid-state 13C-NMR analysis

Each of the air-dried cellulose samples was set in a ZrO2 sample rotor, and the solid-state 13C-NMR spectrum was obtained by an NMR system (JNM-ECA II 500, JEOL, Tokyo, Japan) equipped with a cross-polarization (CP) and magic-angle sample-spinning probe (Funahashi et al. 2017; Ono et al. 2022a, b, c) under the following conditions: a sample spinning rate of 15 kHz, a proton 90° pulse time of 2.5 μs, and a relaxation delay of 5 s. CP transfer was achieved using a ramped amplitude sequence with a CP contact time of 2 ms. Each spectrum was acquired over 12,000 scans for 16 h.

Solid-state 13C-NMR spectra of the cellulose samples

The solid-state 13C-NMR spectra of the three regenerated cellulose samples and SBSP are shown in Fig. 2. Viscose rayon, Tencel, and Bemliese exhibited typical NMR patterns of low-crystallinity cellulose II samples. The crystalline C1 carbon peaks appeared at 105 and 107 ppm, and the crystalline C4 carbon peaks appeared at 88 and 89 ppm (Horii et al. 1982; Zhao et al. 2007; Östlund et al. 2013; Idström et al. 2016). The C6-OH carbon atoms of the three regenerated cellulose samples exhibited broad single peaks at ~63 ppm, showing that they had gauche-trans and gauche-gauche conformations, which correspond to the C6-OH groups of cellulose II and amorphous structures, respectively (Horii et al. 1983; Larsson 2005; Funahashi et al. 2017). The NMR spectrum of SBSP showed the typical pattern of wood chemical pulp (Zhou et al. 2020), and the crystallinity of cellulose I measured from the peak areas of the crystalline and amorphous C4 carbon atoms (C4cry and C4amo, respectively, in Fig. 2) was 56%. The relative peak areas of C6/C1 for the four samples in Fig. 2 were 0.78-0.83 (Zhou et al. 2020; Ono et al. 2021, 2022c). Each of the regenerated cellulose samples had a small peak (Cʹ) at ~97 ppm, which is the same position as that of the C1 carbon atoms of the reducing ends (Dudley et al. 1983; Isogai et al. 1989a, b; Moulthrop et al. 2005; Yuan et al. 2022). However, the peak ratios of (C1 + Cʹ)/Cʹ (or the DPn values calculated from the 13C peak ratios) were 25, 40, and 42 for viscose rayon, Tencel, and Bemliese, respectively, which are not plausible because the DPn values of the regenerated cellulose samples determined by SEC/MALLS were much higher than 170, as described in the following section. Some solid-state 13C-NMR spectra of viscose rayon samples in the literature exhibit the same small peaks at ~97 ppm (Horii et al. 1982; Ibbett et al. 2007; Zhao et al. 2007; Li et al. 2012; Shanshan et al. 2012; Mori et al. 2012; Wei et al. 2018; Zhang et al. 2018; From et al. 2020; Fadavi et al. 2021), whereas other regenerated cellulose samples do not show the corresponding small peaks (Newman and Hemmingson 1994; Ago et al. 2004; Jin et al. 2007; Duchemin et al. 2007; Östlund et al. 2013; Idström et al. 2016). The small peak at ~97 ppm appears in the NMR spectra of some dried regenerated cellulose samples (Nomura et al. 2020), whereas mercerized cellulose does not show such a peak (Maunu et al. 2000). Some TEMPO-oxidized cellulose samples without post-oxidation with NaClO2 or post-reduction with NaBH4 (Shinoda et al. 2012) show the corresponding small peaks at ~97 ppm in their solid-state 13C-NMR spectra (Follain et al. 2010; Biliuta et al. 2010; Cao et al. 2012; Li et al. 2017; Lin et al. 2017). These TEMPO-oxidized cellulose samples contain small amounts of C6-aldehydes formed as intermediates (Isogai 2022).
Thus, it is possible that the Cʹ peak at ~97 ppm for the regenerated cellulose samples can be ascribed to hydrated C6-aldehydes, which are formed by partial oxidation of the C6-OH groups during the dissolution, aging, spinning/regeneration, and/or drying processes in the commercial production system. The extended 13C-NMR spectra of the cellulose samples are shown in Fig. S1 in the Electronic Supplementary Material. Although the three regenerated cellulose samples exhibited no clear C=O peaks owing to carboxy, aldehyde, and/or ketone groups in the region 170-235 ppm, the intensity-magnitude spectra indicated the presence of small C=O peaks at 197-198 ppm. The presence of C=O groups in regenerated cellulose samples has been reported by Potthast et al. (2003).

The three regenerated cellulose samples were pretreated by the three methods (EDA soaking, water soaking, and freeze drying) (Fig. 1), and the pretreated samples were stirred in 8% (w/w) LiCl/DMAc for 1-3 weeks. Freeze-dried wood chemical pulps, such as SBSP, are insoluble in 8% (w/w) LiCl/DMAc, and thus only the EDA- and water-soaking treatments were applied to SBSP before stirring in 8% (w/w) LiCl/DMAc. All of the EDA- and water-pretreated samples visually dissolved in 8% (w/w) LiCl/DMAc within 1 week. The freeze-dried Tencel and Bemliese samples visually dissolved in 8% (w/w) LiCl/DMAc, whereas freeze-dried viscose rayon in 8% (w/w) LiCl/DMAc was slightly cloudy even after stirring the mixture for 2 weeks, probably because of the presence of a small amount of insoluble particles.

The SEC/MALLS results of viscose rayon are shown in Fig. 3a, Figs. S2-S4, and Tables S1-S3. (Fig. 3: Molar mass plots and SEC-elution patterns of a viscose rayon, b Tencel, c Bemliese, and d SBSP dissolved in 8% (w/w) LiCl/DMAc following the three activation methods and stirring for 1 week before dilution to 1% (w/v) LiCl/DMAc.) The molar mass plots for the three pretreatments agreed well (Fig. 3a). The peak areas of the SEC-elution patterns roughly corresponded to the sample masses injected into the SEC/MALLS system. The SEC-elution peak area of the viscose rayon sample pretreated by freeze drying was smaller than those pretreated by EDA and water soaking (Fig. 3a). The lower calculated mass value of the freeze-dried viscose rayon (42 µg, Table S3) compared with those of the EDA-soaked and water-soaked viscose rayon (48 µg, Tables S1 and S2) indicates some mass loss. This was probably caused by filtration of insoluble particles present in the 8% (w/w) LiCl/DMAc solution. However, the other molar mass parameters, such as Mn, Mw, DPw, and Mw/Mn, were similar for the three pretreated viscose rayon samples. Furthermore, no significant differences in the molar mass plots and SEC-elution patterns were observed between the viscose rayon samples stirred in 8% (w/w) LiCl/DMAc for 1 and 2 weeks, showing that stirring the pretreated viscose rayon sample in 8% (w/w) LiCl/DMAc for 1 week is sufficient to obtain constant SEC/MALLS data.

The SEC/MALLS results of Tencel are shown in Fig. 3b, Figs. S5-S7, and Tables S4-S6. The molar mass plots for the three pretreatments agreed well, and the peak area of the SEC-elution pattern of freeze-dried Tencel was smaller than those of Tencel pretreated by the other two methods (Fig. 3b). However, the molar mass parameters of the Tencel samples, except for the calculated mass values (Tables S4-S6), were similar for the samples pretreated by the three methods.
No significant differences in the molar mass plots or SEC-elution patterns were observed for the Tencel samples stirred in 8% (w/w) LiCl/DMAc for different times. Stirring the pretreated Tencel sample in 8% (w/w) LiCl/DMAc for 1 week was sufficient to obtain constant SEC/MALLS data.

The SEC/MALLS results of Bemliese are shown in Fig. 3c, Figs. S8-S10, and Tables S7-S9. The molar mass plot fell on the same line regardless of the pretreatment (Fig. 3c). The SEC-elution patterns of Bemliese pretreated by EDA and water soaking were similar. However, the peak area and SEC-elution pattern for the freeze-dried Bemliese sample were different from those pretreated by EDA and water soaking. Not only the calculated mass values, but also the other molar mass parameters of the freeze-dried Bemliese were different from those of the EDA- and water-pretreated samples (Tables S7-S9). The Mn, Mw, and DPw values of freeze-dried Bemliese were higher than those of EDA- and water-pretreated Bemliese, whereas the calculated mass values were lower. Thus, the SEC/MALLS data obtained for the EDA- and water-pretreated samples were regarded as constant and more accurate than those of the freeze-dried sample.

The SEC/MALLS data of SBSP are shown in Fig. 3d and Table S10. The molar mass plot was the same regardless of the pretreatment and stirring time, and the SEC-elution patterns were almost the same (Fig. 3d). All of the molar mass parameters were similar for the samples pretreated by the two methods (Table S10).

Discussion

As described in the previous section, the molar mass parameters, including the calculated mass values, of EDA- and water-pretreated viscose rayon, Tencel, Bemliese, and SBSP were similar. Consequently, EDA pretreatment resulted in almost no mass loss of the low-molar-mass fractions in the three regenerated cellulose samples and SBSP used in this study. Furthermore, the SEC/MALLS data for the cellulose samples obtained after stirring the EDA- and water-pretreated samples in 8% (w/w) LiCl/DMAc for 1 week can be regarded as reproducible and constant for analytical studies of their molar masses and molar mass distributions. The representative molar mass plots and SEC-elution patterns of the three regenerated cellulose samples and SBSP pretreated by EDA soaking are shown in Fig. 4a, in which the peak-top heights of the SEC-elution patterns were adjusted to be similar. (Fig. 4: a Molar mass plots and SEC-elution patterns of the three regenerated cellulose samples and SBSP and b the corresponding double logarithmic plots, or conformation plots.) All of the SEC-elution patterns showed mostly single peaks without additional low-molar-mass peaks owing to, for instance, hemicelluloses (Ono et al. 2022a). The hemicellulose molecules originally present in the softwood chips were mostly removed by the acid sulfite pulping and subsequent bleaching processes for production of SBSP, differing from softwood and hardwood bleached kraft pulps (Ono et al. 2017, 2018). The peak-top elution volume increased in the order SBSP < Bemliese < Tencel < viscose rayon, showing that the molar masses decreased in the opposite order, SBSP > Bemliese > Tencel > viscose rayon. All of the molar mass plots were roughly on the same line, and the molar mass decreased with increasing SEC-elution volume.
This showed that all of the cellulose molecules in the four samples were dispersed in 1% (w/v) LiCl/DMAc at the individual molecular level without forming any aggregates, and they were suitably separated by the SEC column depending on their sizes. Double logarithmic plots, or conformation plots, of the four cellulose samples shown in Fig. 4a are presented in Fig. 4b. All of the plots were roughly on the same line, and the slopes were 0.58-0.62, showing that all of the cellulose molecules in the four samples had linear random-coil conformations in 1% (w/v) LiCl/DMAc without any branched or compact structures. This is reasonable for pure β-(1→4)-linked glucans. We have reported that softwood holocellulose samples, softwood bleached kraft pulps (SBKPs), 17.5% NaOH-extracted softwood holocellulose and SBKP samples (i.e., α-cellulose samples prepared from softwood holocellulose and SBKP samples, respectively), and dilute acid-hydrolyzed SBKP show conformation plots with slope values of < 0.45. These results indicate that some cellulose molecules in the high-molar-mass fractions of these samples have branched structures with glucomannan through lignin molecules or lignin fragments (Ono et al. 2017, 2018; Ono and Isogai 2020). Differing from the above softwood-originating samples, SBSP did not have branched structures, as in the cases of hardwood kraft pulps and cotton, bacterial, tunicate, and algal cellulose samples (Ono et al. 2016b, 2017, 2018). This is the reason why SBSP is used as dissolving pulp fibers for production of regenerated cellulose materials and cellulose derivatives.

The molar mass parameters of the four cellulose samples are summarized in Table 1. The average values for the viscose rayon and Tencel samples were calculated from those obtained for the EDA-pretreated, water-pretreated, and freeze-dried samples. For Bemliese, the average values were calculated from those of only the EDA- and water-pretreated samples. The values obtained for freeze-dried Bemliese were excluded because they were clearly different from those obtained for the EDA- and water-pretreated samples, as described in the previous section (Tables S7-S9 in the Electronic Supplementary Material). When the Mark-Houwink-Sakurada equation [η] = 0.909 × DPv^0.9 was used, the average DPv value was similar to the average DPw value for each regenerated cellulose sample. However, the average DPv value of SBSP (610) was much lower than the DPw value (1810) determined by SEC/MALLS. The DPv values of SBSP were almost unchanged for the SBSP/Cu(EDA)2(OH)2 solutions stirred for 10, 20, and 40 min. This clear discrepancy between the DPv and DPw values for SBSP may have been caused by the limited stability of SBSP in the alkaline 0.5 M Cu(EDA)2(OH)2 solution. The SBSP sample may have contained some chemical structures susceptible to depolymerization in the alkaline Cu(EDA)2(OH)2 solution. Generally, C=O groups in cellulosic pulps cause depolymerization by β-alkoxy elimination under alkaline conditions, although the presence of such C=O groups could not be detected in the solid-state 13C-NMR spectrum of SBSP (Figs. 2 and S1). Thus, a small number of C=O groups present randomly along each cellulose molecule of SBSP may have caused the low DPv value. Although the solid-state 13C-NMR spectra of the regenerated cellulose samples indicated the presence of C=O groups as small peaks at ~97 ppm (Fig. 2), there were no significant differences between their DPw and DPv values (Table 1).
This is probably because the C=O groups susceptible to the alkaline Cu(EDA)2(OH)2 solution are located close to both ends of each cellulose chain in the regenerated cellulose samples, whereas the C=O groups in SBSP are located more randomly along each cellulose chain. However, this is speculative, and further studies are needed to clarify the reason for the discrepancy between the DPv and DPw values for SBSP.

Conclusions

The three commercial regenerated cellulose samples, viscose rayon, Tencel, and Bemliese, exhibited solid-state 13C-NMR spectra similar to those of low-crystallinity cellulose II. The 13C-NMR spectra of the three samples had small peaks at ~97 ppm, which may be ascribed to hydrated C6-aldehyde groups formed during the commercial fiber production processes. The three regenerated cellulose samples were dissolved in 8% (w/w) LiCl/DMAc by EDA-soaking, water-soaking, and freeze-drying pretreatments and subsequent stirring of the pretreated samples in 8% (w/w) LiCl/DMAc for 1 week. Based on the calculated mass values obtained by SEC/MALLS, almost all of the cellulose molecules in the EDA- and water-pretreated regenerated cellulose samples were dissolved in 8% (w/w) LiCl/DMAc and subjected to SEC/MALLS without significant mass loss. However, 10-20% of the mass of the freeze-dried samples did not reach the SEC/MALLS analysis, probably because of incomplete dissolution in 8% (w/w) LiCl/DMAc. The average Mn and Mw values calculated from the SEC/MALLS data were almost the same for the EDA- and water-pretreated samples. The average DPw values were calculated to be 340, 530, 880, and 1810 for viscose rayon, Tencel, Bemliese, and SBSP, respectively. The conformation plots of the samples had slopes of 0.58-0.62, showing that all of the cellulose molecules in the four samples were dissolved in 1% (w/v) LiCl/DMAc, forming linear random-coil conformations. For the regenerated cellulose samples, the DPw values determined by SEC/MALLS were similar to the corresponding DPv values. However, for SBSP, the DPv values were lower than the DPw values, indicating the inaccuracy of the DPv values of SBSP.
Investigation of a nickel ferrite nanowire device exhibiting negative differential resistance - a first-principles study

The electronic properties of a NiFe2O4 nanowire device are investigated through nonequilibrium Green's functions (NEGF) in combination with density functional theory (DFT). The electronic transport properties of the NiFe2O4 nanowire are studied in terms of the density of states, transmission spectrum, and I-V characteristics. The density of states is modified by the bias voltage applied across the NiFe2O4 nanowire device; the density of charge is observed both in the valence band and in the conduction band on increasing the bias voltage. The transmission spectrum of the NiFe2O4 nanowire device gives insight into the transition of electrons at different energy intervals. The findings of the present work suggest that the NiFe2O4 nanowire device can be used as a negative differential resistance (NDR) device and that its NDR property can be tuned with the bias voltage, which may be exploited in microwave devices, memory devices, and fast switching devices.

Introduction

The spinel ferrites are a class of soft magnetic materials with the general formula MFe2O4, where "M" represents a divalent metal ion such as Mg, Zn, Mn, Cu, Co, or Ni. They are among the most attractive magnetic materials owing to their significant magnetic, magnetoresistive, and magneto-optical properties. Other notable characteristics of MFe2O4 are its low melting point, large expansion coefficient, low magnetic transition temperature, and low saturation magnetic moment [1]. Owing to these properties, the spinel ferrites have been utilized in many technical applications, such as catalysis [2], photoelectric devices [3], nano-devices [4], sensors [5], magnetic pigments [6], and microwave devices [7]. The remarkable magnetic and electronic properties of ferrites mainly depend upon the cations, their charges, and the distribution of the cations over the tetrahedral (A) and octahedral (B) sites [8]. Nickel ferrite (NiFe2O4) is one of the most versatile of these materials due to its soft magnetic property, low eddy current loss, low conductivity, catalytic behaviour, high electrochemical stability, abundance in nature, etc. [7]. NiFe2O4 is a ferrimagnetic oxide with the inverse spinel structure, in which the Fe3+ ions are equally distributed between the octahedral B-sites and the tetrahedral A-sites, whereas the Ni2+ ions occupy only octahedral B-sites [9]. The inverse spinel ferrites are represented by the general formula (Fe3+)_A(Ni2+Fe3+)_B(O4)^2- [10]. NiFe2O4 powders have been used in catalysts [11], ferrofluids [12], biomedicine [13], and gas sensors [14,15]. Various methods have been employed for the synthesis of nanoscale NiFe2O4, including solid-state reaction [16], sol-gel [17], the rheological phase reaction method [18], mechanochemical synthesis [19], pulsed wire discharge [20], electrospinning [21], hydrothermal [22], and sonochemical methods [23]. Nanoscale devices have attracted researchers because such devices may have high packing density and be more efficient than microelectronic devices. Moreover, the junction properties of nanoscale devices play a vital role in the charge transport across semiconductor/metal interfaces [24]. Furthermore, a semiconductor/metal interface may form either a Schottky or an ohmic contact. If a Schottky type of contact is present, rectifying action takes place.
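For concreteness, the rectifying action of an ideal Schottky contact can be illustrated with the standard thermionic-emission diode law I = I_s (exp(qV/nkT) - 1). The sketch below is a minimal Python illustration with placeholder saturation current and ideality factor; it is not a fitted model of the NiFe2O4 device.

```python
import numpy as np

# Ideal Schottky diode I-V: strongly asymmetric (rectifying) response.
q_over_kT = 1.0 / 0.025          # 1/V at room temperature (kT ~ 25 meV)
I_s, n = 1e-9, 1.1               # illustrative saturation current (A), ideality

def schottky_current(V):
    return I_s * (np.exp(q_over_kT * V / n) - 1.0)

for V in (-0.4, -0.2, 0.0, 0.2, 0.4):
    print(f"V = {V:+.1f} V, I = {schottky_current(V):+.3e} A")
```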
The transport characteristics of nanoscale contacts must be investigated before the integration of these structures into nanoscale electronic devices [25]. The transport properties of these nanoscale device contacts are also influenced by the charge carriers and the geometry of the semiconductor/metal interface. Negative differential resistance (NDR) behaviour is a highly significant electronic transport property for various electronic components [26]. The NDR effect can be observed in low-dimensional nanostructures such as nanowires connected between two electrodes [27]. In a negative differential resistance device, the occupied states on one side may become aligned with the gap on the other side when the voltage across the device is increased. Moreover, a reduction in current may also occur due to the position of the resonant states of the molecule, which move within the gap of one of the contacts. In the case of carbon nanotube junctions, the reduction in the current for an increased bias voltage is due to the mismatch in the symmetry of the incoming and outgoing wave functions of the same energy. Besides, the NDR effect observed between gold electrodes and a scattering region is due to the lack of orbital matching between the contacts. The potential barriers in 2D graphene sheets arise from the linear dispersion of the electrons, which shows a gap in their transmission across the barrier [28]. Thus, negative resistance has physical significance in nonlinear electronic components. NDR has attracted the scientific community due to its vast applications in electronics, such as oscillators, memory devices, and fast switching devices [29]. NDR has now been demonstrated in various semiconductor systems, including molecular nanowire junctions [30], organic semiconductors [31], and single-electron devices [32]. The NDR effect is associated with a variety of phenomena, including the Coulomb blockade [33], tunnelling, and charge storage [34]. Ling [35] reported the negative resistance property in triangular graphene p-n junctions induced by vertex B-N mixture doping. Liu and An [36] investigated the negative resistance property in a metal/polythiophene/metal structure. Chen [37] investigated NDR in oxide-based resistance-switching devices. Gupta and Jaiswal [38] reported NDR in a nitrogen-terminated doped zigzag graphene nano-ribbon field effect transistor. Zhao et al. [39] studied the NDR property and electronic transport properties of a gated C60 dimer molecule sandwiched between two gold electrodes. The motivation of the present work is to study the transport properties of the NiFe2O4 nanowire and to investigate its NDR property. In the present work, the transport characteristics of the NiFe2O4 nanowire device and its NDR properties are explored at an atomistic level and the results are reported.

Computational methods

The first-principles calculations on the inverse spinel NiFe2O4 molecular device are performed through nonequilibrium Green's functions (NEGF) in combination with the density functional theory (DFT) method, utilizing the TranSIESTA module in the SIESTA package [40]. The NiFe2O4 nanowire is optimized by reducing the atomic forces on the atoms to less than 0.05 eV/Å. The Brillouin zone of NiFe2O4 is sampled with 1 × 1 × 5 k-points. The generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional is used to describe the electron-electron interaction [41,42].
The negative differential resistance property of NiFe2O4 is also studied with the SIESTA package, in which the core electrons are replaced by Troullier-Martins pseudopotentials for the nickel, iron, and oxygen atoms. Moreover, the electronic wave functions of the nickel, iron, and oxygen atoms are expanded in a basis set of numerical atomic orbitals. The optimization of the band structure and electronic properties of the NiFe2O4 nanowire is implemented using the double zeta polarization (DZP) basis set for the right-hand electrode, the left-hand electrode, and the scattering region in the present study [43]. In order to investigate the electronic properties of NiFe2O4 and to exclude the interaction of the NiFe2O4 nanowire with its periodic images, 10 Å of vacuum padding is modelled along the x and y axes. This simplifies the computation when examining the density matrix Hamiltonian. The atoms in the NiFe2O4 nanowire are relaxed until the residual forces are smaller than 0.05 eV/Å. Sen et al. [44] studied the transport properties of a trimer unit of cis-polyacetylene and a fused furan trimer using DFT in combination with the NEGF ab initio method. They observed NDR over a bias voltage range of +2.1 to +2.45 V. Yu et al. [45] investigated the transport properties of a few-nm-long single-walled carbon nanotube (SWCNT) p-n junction using an ab initio quantum method. The findings reveal that the nm-long SWCNT shows negative differential resistance. Song et al. [46] reported NDR behaviour in an (8,0) carbon/boron nitride nanotube heterojunction. They report that, under positive and negative bias, the variation in the localization of the corresponding molecular orbitals under the applied bias voltage leads to the NDR behaviour. Mahmoud and Lugli [47] studied molecular devices with negative differential resistance. Their molecular device is composed of diphenyl-dimethyl connected to a carbon chain linked to gold electrodes. They observed NDR behaviour only for an odd number of carbon atoms in the chain between the gold electrodes. In the present work, NDR behaviour is observed along the NiFe2O4 nanowire. The methods adopted in the present work resemble those used in the above-mentioned literature, which supports the reliability of the first-principles study of the NiFe2O4 nanowire molecular device. The novel aspect of the present work is the NDR properties of the NiFe2O4 nanowire device, which are discussed in terms of the density of states spectrum, transmission spectrum, and I-V characteristics. Figure 1 represents the schematic diagram of the NiFe2O4 molecular device.

Band structure of NiFe2O4 nanowire

The band structure of the NiFe2O4 nanowire provides insight into the material's properties. The band structure can be described in terms of the conducting channels across the Fermi energy level (E_F) between the conduction band and the valence band [48]. Figure 2 represents the band structure of the NiFe2O4 nanowire. From this, it is found that the NiFe2O4 nanowire has a band gap of 2.65 eV for the whole nanostructure, which matches the reported theoretical work [49]. The experimental direct band gap value of NiFe2O4 is 2.5 eV, which is close to the obtained theoretical value, as shown in figure 2. Thus, SIESTA may be regarded as a reliable computational tool for studying the electronic properties of nanostructured materials with suitable basis sets.
Moreover, the band gap of 2.65 eV for NiFe2O4 is one of the favourable conditions for its application in electronic devices.

Density of states and electron density across NiFe2O4 nanowire device

The density of states (DOS) spectrum provides a clear picture of the density of charge in energy intervals along the NiFe2O4 nanowire [50-52]. Besides, variation of the bias voltage along the NiFe2O4 nanowire changes the density of charge in each energy interval. In the present work, a variation in DOS is observed only beyond a threshold voltage of 2.5 V, which yields a significant change in the density of charge. For this reason, bias voltages from 2.5 V to 7.5 V are examined in the present study. In addition, the Fermi level (E_F) is kept at zero, since the bias window between the right-hand and left-hand electrodes is set as [-V/2, V/2] in the NiFe2O4 nanowire device. Figure 3 illustrates the projected density of states (PDOS) of the NiFe2O4 base material. The base material refers to the basic element for building the molecular device; in the present work, NiFe2O4 is the base material used for the electrodes and the scattering region. The major contribution to the PDOS spectrum arises from the d orbitals of Ni and Fe, whereas for O it is due to the p orbitals, as observed in the total DOS. The peak maxima at different energy levels are governed by the overlap of the d and p orbitals projected in the NiFe2O4 base material. Furthermore, peak maxima are observed near the Fermi level, which, upon applying a bias voltage, results in the transition of electrons from the valence band to the conduction band. Figure 4 shows the device density of states spectra for biases of 0.0 V, 2.5 V, 3.0 V, 3.5 V, 4.0 V, 4.5 V, 5.0 V, 5.5 V, 6.0 V, 6.5 V, 7.0 V, and 7.5 V. For 0 V bias, the DOS across the NiFe2O4 nanowire is observed to be larger in the conduction band than in the valence band. The peak maximum is recorded at around 0.85 eV in the conduction band. Interestingly, at zero bias the peaks arise due to the mismatch of the electronic chemical potential between the electrodes; thus localization of charges is observed in the conduction band. There is no significant peak maximum in the valence band of the NiFe2O4 nanowire device at 0 V. Furthermore, on applying a bias voltage of 2.5 V across the electrodes, localization of charges is recorded near the Fermi level, as shown in figure 4. Increasing the bias voltage to 3.0 V across the NiFe2O4 nanowire device results in a peak maximum at -2.5 eV in the valence band. When the bias voltage is set to 3.5 V, localization of charges is observed in both the valence band and the conduction band, within the energy intervals of -2.4 and 1.75 eV, respectively. This implies that the bias voltage drives the charges from the valence band to the conduction band along the NiFe2O4 scattering region. The same trend is observed at a bias voltage of 4.0 V; the only difference is that the localization of charges shifts towards the conduction band on increasing the bias voltage. When the bias voltage is switched to 4.5 V, localization of charges is noticed in the valence band at -2.1 eV. However, the charge transition takes place for a bias voltage of 5.0 V, and the peak is observed at 1.4 eV. For bias voltages of 5.5 and 6.0 V, peak maxima are observed in both the conduction band and the valence band.
By contrast, the localization of charges is observed only in the conduction band, at different energy intervals, for the 6.5 and 7.0 V bias voltages. Thus, it is inferred that the density of charge along the NiFe2O4 nanowire device can be finely tuned with the bias voltage. The electron density across the NiFe2O4 nanowire is shown in figure 5. The density of electrons is observed to be higher at the oxygen sites than at the iron and nickel sites along the NiFe2O4 nanostructure. Since the atomic number of oxygen is eight and it belongs to group VIA, the electronegativity of oxygen results in the accumulation of more electrons at the oxygen sites in the NiFe2O4 nanowire. One of the most significant chemical properties of the oxygen atom is its electronegativity, defined as the tendency of oxygen to attract electrons towards itself. Moreover, the electron density is larger at the oxygen sites owing to the electronic configuration of the oxygen atom when bonding with the nickel and iron atoms in the NiFe2O4 nanowire. The electronegativity of the oxygen atom is also influenced by the distance between the nucleus and the valence electrons. The electron density provides insight into the chemical and electronic properties of the NiFe2O4 nanowire.

Transport properties of NiFe2O4 nanowire device

The electronic transport of the NiFe2O4 molecular device can be described in terms of the transmission spectrum [53-55]. The transport characteristics of the NiFe2O4 nanowire device are investigated using the TranSIESTA module in the SIESTA package. The transmission function T(E, V) of the NiFe2O4 molecular device can be expressed as the sum of the transmission probabilities of all the channels at energy E under external bias voltage V:

T(E, V) = Tr[Γ_L(E, V) G^R(E, V) Γ_R(E, V) G^A(E, V)],   (3.1)

where Γ_L,R are the coupling functions of the left-hand and right-hand self-energies, respectively, and G^A and G^R are the advanced and retarded Green's functions. Furthermore, the molecular orbitals near the Fermi energy level (E_F) facilitate the electronic transport across the NiFe2O4 nanowire even for low bias voltages. The general relation between the conductance and the transmission probability under the zero bias condition is

G = G_0 T(E_F),

where G_0 is the quantum unit of conductance, equal to 2e^2/h, h is Planck's constant, and e is the electronic charge. Potentials of -V/2 and +V/2 are maintained at the right-hand and left-hand electrodes of the NiFe2O4 molecular device, respectively. The current through the NiFe2O4 nanowire device can be calculated from the Landauer-Büttiker formula [56]

I(V) = (2e/h) ∫_{μ_R}^{μ_L} T(E, V) dE,

where e is the elementary charge, 2e^2/h is the quantum conductance, and μ_L,R are the electrochemical potentials of the left-hand and right-hand electrodes, respectively. When zero bias is set across the NiFe2O4 nanowire device, the Fermi levels of the left-hand and right-hand electrodes are aligned and the electronic transmission between the electrodes is equal in both directions; hence the Fermi level is taken as zero. Figure 6 depicts the transmission spectrum of the NiFe2O4 nanowire for different bias voltages. (The transmission spectrum is drawn in a three-dimensional multi-curve fashion; the magnitude is plotted along the y axis.)
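Before turning to the features of Figure 6, the bias-window integration just quoted can be illustrated numerically. The following minimal Python sketch uses a hypothetical Lorentzian T(E, V) as a stand-in for the TranSIESTA output; the finite-temperature Fermi-window weighting used here reduces to the sharp [-V/2, +V/2] integral quoted above at low temperature.

```python
import numpy as np

def fermi(E, mu, kT=0.025):
    """Fermi-Dirac occupation (energies in eV)."""
    return 1.0 / (1.0 + np.exp(np.clip((E - mu) / kT, -60, 60)))

def transmission(E, V):
    """Hypothetical bias-dependent resonance, for illustration only."""
    return 1.0 / (1.0 + ((E - 0.5 + 0.1 * V) / 0.2) ** 2)

def current(V, n=4001):
    """I(V) = (2e/h) * integral over the bias window of T(E, V)."""
    mu_L, mu_R = V / 2.0, -V / 2.0
    E = np.linspace(-5.0, 5.0, n)
    integrand = transmission(E, V) * (fermi(E, mu_L) - fermi(E, mu_R))
    G0 = 7.748e-5                 # conductance quantum 2e^2/h in siemens
    return G0 * np.trapz(integrand, E)   # E in eV -> current in amperes

for V in np.linspace(0.0, 2.0, 5):
    print(f"V = {V:.1f} V, I = {current(V) * 1e6:.2f} uA")
```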
The transmission peaks recorded at zero bias voltage are due to the mismatch in the electronic chemical potential between the right-hand and left-hand electrodes of the NiFe2O4 nanowire device. By contrast, low peak amplitude is recorded in the conduction band. On applying a bias voltage above zero, the molecular orbitals in the NiFe2O4 nanowire become delocalized. In that case, higher mobility is recorded in these energy intervals in the transmission spectrum [57]. This gives rise to certain peak maxima in the transmission spectrum of the NiFe2O4 nanowire device [58]. However, on increasing the bias voltage across the NiFe2O4 scattering region, the transmission pathways increase along the NiFe2O4 nanowire, which gives rise to a shift in the peak maximum [59]. When a bias voltage of 2.5 V is applied between the electrodes, the peak maximum is observed around 2.6 eV. The increase of the bias voltage leads to the flow of electrons along the scattering region, and the peak maximum moves towards the conduction band for the potential difference of 2.5 V. In the case of 3.0 V, the peak maximum is observed at -2.5 eV in the valence band, and the peak shifts to the conduction band on applying a bias voltage of 3.5 V, as shown in figure 6. Furthermore, due to the transition of electrons across the NiFe2O4 scattering region, the peak maximum shifts to a different energy interval on varying the bias voltage. For an applied bias of 4.0 V, peak maxima are observed in both the valence band and the conduction band, at -1.65 and 2.75 eV, respectively. On further increasing the bias voltage from 4.5 to 7.5 V, the peak maximum shifts along the valence band and the conduction band, and the transmission spectrum has peak maxima at different energy levels. The change in the current for different voltages should not be correlated directly with the transmission spectrum at a single energy. The transmission spectrum indicates that the transmission of charge is larger in a particular energy interval for the applied bias voltage, whereas the net current flowing through the molecular device depends on the overall transmission across the energy interval. This clearly suggests that the bias voltage is adequate for the transition of electrons along the NiFe2O4 nanowire device and that the transmission is governed by the applied bias voltage. Thus, it can be concluded that the transport properties of the NiFe2O4 nanowire device can be finely tuned by applying a proper bias voltage, and the device may find use as a chemical sensor or in microwave devices.

I-V characteristics of NiFe2O4 nanowire device

Negative differential resistance behaviour is a highly significant electronic transport property for various electronic components [26]. In the present study, NDR behaviour is observed in the I-V characteristics in the bias range of about 5.0-6.0 V, where the current decreases with increasing bias voltage. On further increasing the bias voltage beyond 6.0 V along the NiFe2O4 nanowire device, the NDR behaviour vanishes and the device obeys Ohm's law. In the present work, N-shaped NDR is observed for the NiFe2O4 molecular device. The NDR behaviour in the NiFe2O4 nanowire device originates from the inhibition of the conduction channels at certain bias conditions [60].
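Operationally, an NDR window can be located in tabulated I-V data as the bias range where the numerical differential conductance dI/dV is negative. The sketch below uses hypothetical current values shaped like the behaviour described above (current dropping between roughly 5 and 6 V); it is not the computed NiFe2O4 data.

```python
import numpy as np

V = np.arange(0.0, 8.0, 0.5)                      # bias points (V)
I = np.array([0.0, 0.1, 0.2, 0.4, 0.7, 1.1, 1.6, 2.3, 3.2, 4.6,
              6.5, 5.4, 4.3, 4.9, 5.8, 7.0])      # illustrative current (uA)

dIdV = np.gradient(I, V)                          # differential conductance
ndr = dIdV < 0                                    # NDR where dI/dV < 0
if ndr.any():
    print("NDR window: %.1f V to %.1f V" % (V[ndr].min(), V[ndr].max()))
```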
In this picture, frontier orbitals localized in any one part of the scattering region will not contribute to the transmission spectra, and the current conduction may be suppressed. By contrast, a completely delocalized molecular orbital may contribute more to the transmission probability than a localized one in the NiFe2O4 nanowire device. Figure 8 illustrates the schematic diagram of the NiFe2O4 nanowire device, which can be used as an NDR device. Li et al. [61] observed N-shaped NDR in a GaAs-based modulation-doped FET with InAs quantum dots. Xu et al. [62] reported a similar N-shaped negative differential resistance in a GaAs-based modulation-doped FET with InAs quantum dots. The NDR effect observed in a device is not related to a single physical mechanism; many phenomena give rise to the NDR property, namely tunnelling, the Coulomb blockade, the Gunn effect [63], metal-semiconductor contacts, charge storage, and the geometry of the nanodevice. Furthermore, the cylindrical geometry and high surface-to-volume ratio of the nanowire result in deep penetration of the surface charge, which strongly affects the conduction property of the nanowire. From the Landauer-Büttiker relation, it is well known that the current through the device depends on T(E, V). The current in the NiFe2O4 device is the integral of the transmission coefficient over the bias window [-V/2, V/2]. In the present work, the NDR effect is observed for bias voltages of around 5 V to 6 V. Moreover, the device DOS (figure 4) indicates a peak in the conduction band for 5 V at the energy level of 1.4 eV, whereas for 5.5 V and 6 V bias, peaks are observed both in the conduction band and in the valence band. Thus, for the applied bias voltage of 5 V, the current increases drastically, and a further increase in the bias voltage gives rise to a decrease in the current due to the Coulomb blockade arising from the geometry of the device. Furthermore, for biases of 5 V to 6 V, the bias window reduces the transition of electrons between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO). The decrease in the transmission (figure 6) takes place because the degree of coupling between the molecular orbitals and the electrodes becomes weaker with an increase in the bias voltage beyond 5 V, despite the larger wave function overlap between the scattering region and the electrodes. Moreover, such a decrease may not be compensated by the increase in the bias voltage, and thus the integral area becomes smaller. However, on further increasing the bias voltage beyond 6 V, the weakened coupling between the electrodes and the scattering region is overcome by the bias voltage, and the current increases again with the applied bias voltage. Negative differential resistance properties have been observed in various materials with different morphologies, such as ZnO nanorods, porous silicon devices, and graphene nanoribbon FETs [38,64,65]. The NDR property of the NiFe2O4 nanowire device is similar to that in the reported works, which further strengthens the present work. Thus, the negative differential resistance property of the NiFe2O4 nanowire can be finely tuned by applying a proper bias voltage.

Conclusions

In the present study, a NiFe2O4 nanowire based molecular device is studied using the DFT method. The electronic transport properties of the inverse spinel NiFe2O4 nanowire device are investigated under various bias voltages.
The density of charges in different energy intervals of the NiFe2O4 nanowire is studied with the help of the projected density of states spectrum. Peak maxima are observed in both the valence band and the conduction band, influenced by the applied bias voltage. The electron density is observed to be higher at the oxygen sites along the NiFe2O4 nanowire. The transmission spectrum of the NiFe2O4 nanowire device shows a larger peak maximum in the valence band at the zero bias condition. However, on increasing the bias voltage, a larger peak maximum is observed in the conduction band, which clearly suggests that the bias voltage drives the charges towards the conduction band. The NDR properties of the NiFe2O4 nanowire are investigated using the I-V characteristics. The NDR property of the NiFe2O4 nanowire device depends on the applied bias voltage and can thus be finely tuned with the bias voltage. The findings of the present work suggest that the NiFe2O4 nanowire device can be used as an NDR device, which may find potential applications in microwave devices, memory devices, and fast switching devices.
Direct experimental determination of the topological winding number of skyrmions in Cu2OSeO3

The mathematical concept of topology has brought about significant advantages that allow for a fundamental understanding of the underlying physics of a system. In magnetism, the topology of spin order manifests itself in the topological winding number, which plays a pivotal role in determining the emergent properties of a system. However, the direct experimental determination of the topological winding number of a magnetically ordered system remains elusive. Here, we present a direct relationship between the topological winding number of the spin texture and the polarized resonant X-ray scattering process. This relationship provides a one-to-one correspondence between the measured scattering signal and the winding number. We demonstrate that the exact topological quantities of the skyrmion material Cu2OSeO3 can be directly experimentally determined this way. This technique has the potential to be applicable to a wide range of materials, allowing for a direct determination of their topological properties.

(Supplementary Figure: Robustness of the measurement principle for a varying radial profile. a, Three different Θ(ρ) profiles that govern different radial spin distributions, labelled (i), (ii), and (iii), are used for the subsequent numerical calculations. Note that profile (i) represents a linear relationship, which is equivalent to the one-dimensional helix modulation case. b-d, Circular dichroism (CD) profiles, and, e-g, polarisation-azimuthal maps (PAMs) calculated based on the three different radial functions. Both the CD and the PAM are independent of the radial profile, confirming the robustness of the measurement principle.)

On the other hand, we use the one-dimensional helix approximation method to perform the numerical calculations for the same object, in order to confirm the equivalence of both methods. In summary, three theoretical methods are used for calculating the CD and PAM as a function of the topological winding number, in order to demonstrate the consistency of the results: (I) the analytical solution based on Eq. (3) for the PAM and Eq. (4) for the CD in the main text; (II) construction of the skyrmion configuration using the one-dimensional helix approximation model, in which, for the azimuthal angle Ψ, the diffracted X-rays are sensitive to the structure factor of the spin helix obtained by rotating the helix NΨ from the base position, with the CD and PAM subsequently calculated numerically; and (III) generation of a two-dimensional skyrmion lattice using the rigorous solution given by Eq. (2) in the main text, with the CD and PAM then obtained numerically.

We first demonstrate the consistency between the three calculation methods, which further supports the measurement principle. Second, we discuss the influence of χ and λ on the CD and PAM patterns. Supplementary Figure 4a shows another N = 3 topological object, which is essentially a continuous transformation of the object shown in Fig. 2c (see main text). This homotopic transformation can be achieved by adjusting χ. As shown in Supplementary Figures 4b and 4c, compared to Fig. 2g and 2k in the main text, the CD and PAM patterns have identical periodicities, and the only difference is a linear phase shift. This is valid for all cases in our numerical studies. Moreover, as shown in Supplementary Figures 4d-f, flipping the polarity of the topological object does not alter the PAM; however, it imposes a phase shift on the CD profile.
Therefore, the use of the phase parameters Φ1 and Φ2 in Eqs. (3) and (4) (see main text) generalises the principle to all homotopies arising from variations in χ and λ. To briefly summarise, our polarisation-dependent REXS method, represented by the circular dichroism plots and the polarisation-azimuthal maps, is only sensitive to the winding number and has a one-to-one correspondence with this topological quantity. Any homotopy change will not affect the outcome of the measurement. In other words, the method itself can be seen as 'topologically protected'.

Supplementary Note 2. NON-INTEGER WINDING NUMBERS

Here we discuss the case of non-integer winding numbers. Note that non-integer winding numbers correspond to energetically unstable states, due to the appearance of singularities within their spin structures. As shown in Supplementary Figures 5a and 5d, the abrupt change of the spins across the red lines costs an extremely high energy, leading to unstable states. However, we calculate the corresponding CD and PAM in order to demonstrate that our new technique is only sensitive to spin configurations with integer topological winding numbers. Supplementary Figures 5a-c show the magnetisation distribution, CD, and PAM for an N = 1.7 motif lattice. First, the CD shape is largely distorted from a well-defined sinusoidal curve shape. Second, in the PAM, the humps are no longer of equal height due to the non-integer topology. These features can also be found for the N = 3.3 case, shown in Supplementary Figures 5d-f. The asymmetry is even more pronounced in the CD, in which the periodically modulated peaks do not have equal height. This is also clearly shown in their PAM relationship. On the other hand, as expressed by Eq. (7) (see main text), polarisation-dependent REXS is sensitive to all three magnetisation components. As a consequence, the calculated CD signal shown in Supplementary Figure 6d is suppressed for N = 0 type vortices, while the PAM shows two humps. Combining the CD and PAM results, one can unambiguously conclude that the motif is an N = 0 vortex. This is in stark contrast to the CD and PAM results for an N = 1 skyrmion, as shown in Supplementary Figure 6i. Therefore, our method is a direct experimental technique that can accurately measure N.
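As a numerical illustration of the quantity being measured, the winding number of a planar spin texture can be evaluated as the lattice integral N = (1/4π) ∬ m · (∂x m × ∂y m) dx dy. The Python sketch below builds a model texture from a radial profile Θ(ρ) and an in-plane angle NΨ + χ, echoing the parametrization used above; the profile shape, grid, and sizes are illustrative assumptions, not the Cu2OSeO3 texture.

```python
import numpy as np

def texture(N_w=1, chi=0.0, L=4.0, n=512, R=1.0):
    """Skyrmion-like unit-vector field m(x, y) with winding number N_w."""
    x = np.linspace(-L, L, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    rho, psi = np.hypot(X, Y), np.arctan2(Y, X)
    theta = np.pi * np.exp(-rho / R)   # Theta(rho): pi at the core, 0 far away
    phi = N_w * psi + chi              # in-plane angle winds N_w times
    m = np.stack([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    return m, x[1] - x[0]

def winding_number(m, h):
    """(1/4 pi) * sum of m . (dm/dx x dm/dy); sign follows the core polarity."""
    dmx = np.gradient(m, h, axis=1)
    dmy = np.gradient(m, h, axis=2)
    density = np.einsum("ixy,ixy->xy", m, np.cross(dmx, dmy, axis=0))
    return density.sum() * h * h / (4.0 * np.pi)

for N_w in (1, 2, 3):
    m, h = texture(N_w=N_w)
    print(N_w, round(float(winding_number(m, h)), 2))   # magnitude ~ N_w
```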
Scrambling in Two-Dimensional Conformal Field Theories with Light and Smeared Operators

We study quantum chaos in two-dimensional conformal field theories, building on the work analyzing out-of-time-order thermal correlation functions using large-c Virasoro blocks. Our work investigates the contribution of light intermediate channels and smearing length scales to the four-point function and scrambling. Precise relations for how light intermediate channels increase the scrambling time and how smearing length scales smaller than the thermal length scale decrease the scrambling time are derived.

Introduction

Unitary quantum mechanical systems do not exhibit information loss in that they never forget their initial state; however, observers without access to all degrees of freedom may not be able to distinguish microstates. If an observer with access to less than half the system's degrees of freedom cannot distinguish a perturbed state from an unperturbed one, then the information contained in the perturbation is said to be scrambled. The time required for this dynamical chaotic mixing to occur is the scrambling time. In classical physics, chaos is understood as sensitivity to initial conditions. If one considers the phase space coordinate x(t) in a classical chaotic system and perturbs x(0) an infinitesimal amount, one can diagnose the sensitivity to initial conditions through the Poisson bracket

{x(t), p(0)} = ∂x(t)/∂x(0) .   (1.1)

This quantity initially grows as a sum of exponentials in t, with the exponents called Lyapunov exponents. An analogous quantity to consider for a quantum system in state ρ is the squared commutator

C(t) = ⟨[W(t), V(0)]† [W(t), V(0)]⟩_ρ .   (1.2)

At the onset of scrambling, C(t) also grows exponentially with t, which from the perspective of operator growth arises from how the unitarily evolved W(t) = e^{iHt} W(0) e^{-iHt} grows with time to become a larger sum of longer operator products, due to non-trivial commutations with the Hamiltonian. Loosely speaking, the commutator squared is determined by the fraction of operator products in W(t) which contain V(0). The systems we are especially interested in are many-body systems in thermal equilibrium, partly due to the AdS/CFT correspondence [2] and the holographic dual description of black holes as thermofield double (TFD) states in double copies of large-c CFTs [3]. For holographic field theories, one can understand scrambling in thermal states as a disruption of the special TFD-state entanglement between the left and right CFTs, as diagnosed by the mutual information between subregions in the two copies, which on the bulk side corresponds to the lengthening of the wormhole connecting the two asymptotic regions [4-7]. The wormhole lengthens because low-energy quanta produced far in the past become highly boosted near the black hole horizon, giving a large shockwave backreaction to the geometry. Scrambling is also relevant to the black hole information paradox, where a remarkable result [8] shows that, in certain cases, information absorbed by a black hole can be emitted almost immediately after it has been scrambled amongst the black hole's degrees of freedom. It has been conjectured [9,10] that black holes scramble information faster than any other quantum system in nature, and some evidence for this conjecture has been found [11]. Returning to our tool for studying quantum chaos, the squared commutator C(t), expanding out gives four terms,

C(t) = ⟨V W(t) W(t) V⟩ + ⟨W(t) V V W(t)⟩ − ⟨V W(t) V W(t)⟩ − ⟨W(t) V W(t) V⟩ .   (1.3)

This simplifies after the thermal relaxation time, as V acting on the thermal state becomes indistinguishable from the thermal state to local operators.
The term ⟨V W(t) W(t) V⟩_β can be understood as the expectation value of the operator W(t)W(t) in the state obtained by V acting on the thermal ensemble (and vice versa for ⟨W(t) V V W(t)⟩_β). If the energy inserted by the operator V is small, then after the dissipation time ⟨V W(t) W(t) V⟩_β is given by the thermal expectation value ⟨W(t) W(t)⟩_β multiplied by the norm of the state, ⟨V V⟩_β. Thus, Eq. (1.3) becomes

C(t) ≈ 2 ⟨V V⟩_β ⟨W(t) W(t)⟩_β − 2 Re OTOC(t),   (1.4)

with the out-of-time-ordered correlator (OTOC) defined by

OTOC(t) = ⟨W(t) V W(t) V⟩_β .   (1.5)

OTOCs and C(t) are thus equivalent ways of diagnosing chaos. At early times, the disconnected product and the OTOC terms in Eq. (1.4) cancel and the commutator squared is zero. At the onset of scrambling, the OTOC decays at a bounded rate λ_L ≤ 2π/β [12-14]. We define the scrambling time as the operator time separation t at which the OTOC has exponentially decayed to zero, or equivalently when C(t) asymptotically approaches the disconnected product 2⟨V V⟩⟨W W⟩.

There is a significant body of literature on understanding scrambling from the holographic bulk perspective. In contrast, we will do a purely field-theoretic study, primarily building on the work in [15], though we will give some holographic interpretation in the Discussion. The authors of [15] explicitly calculate the OTOC in two-dimensional conformal field theories using the known form of the semiclassical Virasoro blocks [16]. Specifically, they consider the contribution of the identity block to the OTOC and, from it, extract the scrambling time

t_* = (β/2π) log(c/h_w),   (1.6)

with h_w the holomorphic weight of the W operator. Ideally, one would analyze scrambling for a pair of light operators with both h_v, h_w ≪ c; however, the semiclassical conformal block is only valid for fixed h_w/c, corresponding to a heavy W operator. In [17], by matching with a bulk shockwave calculation, the authors conjecture the validity of the semiclassical formula in the light-light operator limit.

In this paper, we investigate the effect of non-identity Virasoro blocks and of smearing length scales on the scrambling time (1.6). We examine the contribution of higher primaries and demonstrate that the scrambling time depends on the spectrum of the CFT, and that the existence of a light primary operator with O(1) OPE coefficients with the V and W operators, and conformal weights h_p, h̄_p ≪ c, bounds the scrambling time from below as in Eq. (1.7), with h_v the holomorphic weight of the V operator. The scrambling time is determined by the primary operator for which the prefactor in (1.7) is the largest. Light operators with h_p > h̄_p increase the scrambling time. Note that primary operators with h̄_p > h_p do not violate the fast scrambling conjecture, as the existence of the identity operator bounds the scrambling time from below by (1.6). We also consider the scrambling of operator-valued distributions, smearing the V and W operators over spatial scales L_V, L_W, in order to better understand the relation between the energy scale of perturbations and the scrambling time. We calculate the scrambling time for smearing length scales much smaller than the thermal length scale (Eq. (1.8)), corresponding to a reduction in the scrambling time due to high-energy modes. For smearing length scales greater than β, we argue that the scrambling time increases without limit. The scrambling time (1.6) is the intermediate regime for perturbations with thermal-scale energy.
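For intuition, the squared commutator and OTOC of Eqs. (1.2)-(1.5) can be evaluated exactly in a small non-integrable spin chain. The Python sketch below is a toy numerical analogue of the CFT setup (the couplings, chain length, and β are illustrative), not a CFT computation.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    mats = [id2] * n
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Mixed-field Ising chain: non-integrable, hence chaotic at late times.
n = 6
H = sum(-site_op(sz, i, n) @ site_op(sz, i + 1, n) for i in range(n - 1))
H += sum(-1.05 * site_op(sx, i, n) - 0.5 * site_op(sz, i, n) for i in range(n))

beta = 1.0
rho = expm(-beta * H)
rho /= np.trace(rho)                       # thermal state at temperature 1/beta

W0, V = site_op(sz, 0, n), site_op(sz, n - 1, n)   # operators at opposite ends
for t in np.linspace(0.0, 6.0, 7):
    U = expm(1j * H * t)
    Wt = U @ W0 @ U.conj().T               # Heisenberg-evolved W(t)
    comm = Wt @ V - V @ Wt
    C = np.trace(rho @ comm.conj().T @ comm).real   # Eq. (1.2)
    print(f"t = {t:3.1f}, C(t) = {C:.4f}")
```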
The plan of this paper is as follows: in Section 2 we set up the necessary CFT conventions and formulae to analyze the OTOC; in Section 3 we use the semiclassical Virasoro block to calculate the scrambling time in large-c two-dimensional CFTs with light higher-spin primaries. In Section 4 we calculate how the spatial smearing of the operators V and W changes the scrambling time, and in Section 5 we discuss the holographic interpretation of our results and future directions. Note: This paper has some overlap in scope with the work [18], as the two sets of authors worked in collaboration for much of the project until deciding to publish separately. A discussion of differences in analysis, results and interpretation is given in Appendix A. CFT background and conventions We are interested in the thermal four-point correlation function involving two operators V and W separated by Lorentzian time t and spatial distance x. Correlation functions in the thermal state can be mapped to expectation values in the vacuum using the exponential conformal transformation from the thermal cylinder to the plane, with correlators of primary operators related by the standard transformation law. The specific correlation function we wish to work with is the four-point function normalised by the product of two-point functions, Eq. (2.3). Following the canonical choice, we use the global conformal transformations SL(2,C) to take z_1, z̄_1 → ∞; z_2, z̄_2 → 1; z_4, z̄_4 → 0 [19]. With this choice, the holomorphic cross ratio z := z_12 z_34/(z_13 z_24) becomes z = z_3, and similarly z̄ = z̄_3, and the ratio (2.3) becomes Eq. (2.5), where, in the second line, we have expanded in the z → 0 channel to write the four-point function in terms of the Virasoro conformal blocks. From Eq. (2.5) we see that the quantity of interest when comparing the OTOC to the disconnected product is z^{2h_v} F_p(z) and its anti-holomorphic counterpart. Light intermediate channels In this section we investigate the importance of intermediate channels to the OTOC, their dominance over the identity block, and the subsequent effect on the scrambling time. In putative bulk theories, this corresponds to the exchange of massive particles. Following [15], we will work in the semiclassical limit c ≫ 1, where the conformal blocks F_p(z) exponentiate, and use a result from [16] for the semiclassical conformal block, valid for c ≫ 1; h_v, h_p ≪ c; h_w/c fixed but arbitrary, with α_w := √(1 − 24h_w/c). Following the procedure detailed in [15], we consider the analytic continuation of the Euclidean four-point function to Lorentzian time which gives the OTOC's ordering of operators. This analytic continuation causes z to pass around its branch point at z = 1, and hence z^{2h_v} F_p(z) passes to the second Riemann sheet; z̄ does not pass around its branch point at z̄ = 1. The conformal block on the second sheet is obtained by continuing (1 − z) → e^{2πi}(1 − z). We are interested in the behaviour of the conformal block in three different time regimes: before the fast scrambling time (1.6) but after the dissipation time t ∼ β, around the fast scrambling time, and after the fast scrambling time, corresponding to h_w/c ≪ z ≪ 1, z ∼ h_w/c, and z ≪ h_w/c respectively. In these three regimes the conformal block takes the forms given in (3.5). To see how this function depends on t, recall that z ∼ e^{−(2π/β)t}. Figure 1 illustrates how the conformal block depends on the operator time separation for a few values of h_p. For the identity block with h_p = 0 it is constant at early times, when z ≫ h_w/c, then starts to exponentially decay around z ∼ h_w/c.
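The plateau-then-decay behaviour of the identity block just described can be seen numerically. The sketch below assumes the standard second-sheet small-z approximation used in the worked step after the Introduction; the parameter values are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Numeric sketch of the second-sheet identity Virasoro block, using the
# standard small-z approximation |z^{2h_v} F_0(z)| ~ |1 - 24*pi*i*h_w/(c z)|^{-2h_v}.
# All parameter values are illustrative assumptions.
beta, c, h_v, h_w = 1.0, 1.0e4, 0.5, 1.0

def block_magnitude(t):
    z = np.exp(-2.0 * np.pi * t / beta)   # cross ratio shrinks exponentially in t
    return abs((1.0 - 24j * np.pi * h_w / (c * z)) ** (-2.0 * h_v))

t_star = beta / (2.0 * np.pi) * np.log(c / h_w)   # fast scrambling time (1.6)
for t in [0.5, 1.0, 1.5, 2.0, 2.5]:
    print(f"t = {t:3.1f}  |block| = {block_magnitude(t):.4f}")
print(f"t_* = {t_star:.2f}")
# The printed values show |block| ~ 1 for t well below t_*, with rapid decay
# setting in around t ~ t_*, i.e. when z ~ h_w/c.
```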
The light intermediate channels have significantly different behaviour: they initially grow exponentially with t and then start to decay roughly around the same time as the identity block. For the OTOC, one also needs to consider the effect of the antiholomorphic conformal block factor z̄^{2h_v} F̄_p(z̄). The key difference from the holomorphic block is that during the analytic continuation of the Euclidean four-point function, the holomorphic cross ratio z passes around its branch point at z = 1 and goes to the second Riemann sheet, while the antiholomorphic cross ratio does not and stays on the principal sheet. The principal-sheet antiholomorphic block acts to suppress the exponential growth of the second-sheet holomorphic block, as seen by replacing holomorphic variables in (3.3) with their antiholomorphic counterparts and then taking z̄ ≪ 1. For the identity block with h̄_p = 0 this factor is trivial and irrelevant to the scrambling time derivation in [17]. However, for h̄_p > 0, this factor acts to suppress the contribution of light primary operators: it is an exponentially decaying function in t. We will discuss the case h_p − h̄_p < 0 after combining this antiholomorphic conformal block with its second-sheet holomorphic counterpart, to find the behaviour of the OTOC with t. At early times, though with t ≫ β, the product of conformal blocks grows exponentially with t, as in (3.7), and surprisingly this contribution to the OTOC dominates over the identity block at the t given by (1.6), assuming h_p > h̄_p and generic OPE coefficients. Around this time, the second expression in (3.5) shows that all the holomorphic conformal blocks switch from exponential growth in t to exponential decay. The exception is the identity block, as |z^{2h_v} F_0^{II}(z)| is a monotonically decreasing function in time, starting at 1 before starting its exponential decay as scrambling takes effect. The holomorphic conformal blocks for the other primary operators reach their maximum magnitude at a value of z, and a corresponding time separation, set by the operator weights. At late times, all holomorphic blocks decay at the same rate, determined by the third equation in (3.5), exp(−(4π/β) h_v t), and so the light intermediate states dominate over the identity block for all times, with no crossover. Including the contribution of the antiholomorphic block gives the late-time behaviour, when z ≪ h_w/c. What we are most interested in is the value of t for which the commutator squared C(t) approaches 2⟨W(t)W(t)⟩_β⟨V(0)V(0)⟩_β, or equivalently when |OTOC| ≪ 1. The time taken for z^{2h_v} F_p^{II}(z) z̄^{2h_v} F̄_p(z̄) to decay to an O(1) value is given by (3.11). Assuming OPE coefficients that are not parametrically small in powers of h_w/c (which would suppress the contribution of light primary operators), the scrambling time is determined by the primary operator for which the prefactor in (3.11) is largest, as the OTOC is dominated by the conformal block for that operator. Roughly speaking, the larger the spin, the longer the scrambling time. As explained in the Introduction, the existence of primary operators with h̄_p > h_p does not lead to a shorter scrambling time; the scrambling time in the 2D CFTs we are considering is bounded from below by (1.6). The apparent asymmetry between h_p and h̄_p occurs because we are looking at Lorentzian correlators, and this breaks the symmetry between the holomorphic and antiholomorphic sectors of the CFT. In parity-invariant CFTs, where for each primary operator with weights (h_p, h̄_p) there is a conjugate operator with weights (h̄_p, h_p), the scrambling time is parity invariant.
While the decay time for a given block, given by (3.11), is not invariant under h_p ↔ h̄_p, the OTOC from which the scrambling time is derived is a sum over the full spectrum, which will be parity invariant. Assuming the parity-invariant CFT has light operators and generic OPE coefficients, the scrambling time is increased. Spatially smeared operators In this section we will change gears and consider a different computation, in which we smear the operators over finite length scales. We would like to understand the effect of these scales on the scrambling time, to see how scrambling depends on the perturbations' energies. Let us introduce our set-up. The first step in the smearing procedure is to consider point operators that are not spatially coincident and then integrate against smearing functions. After the smearing procedure, each operator in a given pair will have finite spatial support and be centred about the same spatial position. Our labelling of the four operator positions is shown in Figure 2. After the conformal mapping, the holomorphic and antiholomorphic cross ratios z = z_12 z_34/(z_13 z_24) and z̄ = z̄_12 z̄_34/(z̄_13 z̄_24) are given by (4.3). We take t to be much larger than any x_i, otherwise the V and W operators have support on spacelike-separated regions and by causality those components of the operators commute. In this limit the cross ratio becomes (4.4). From (3.5) we recall the form of the second-sheet identity block. This has a dependence on the four operator spatial positions x_i through the dependence of z on x_i; this is what we integrate against our smearing functions. We integrate the point operators V and W against Gaussian smearing functions with spatial widths L_V and L_W. The V operator will be centred around x = 0 while the W has a spatial offset: it is smeared about x = x_W and defined with the appropriate prefactor, as in (4.6). This is an operator at Lorentzian time t, smeared over a length scale L_W about a central position x_W. The conformal block for the four smeared operators is then (4.7). In terms of the integral, the smearing length scale affects the scrambling time because the parts of the integration region which contribute significantly depend on L_V and L_W. Similar to the unsmeared identity block, we consider the perturbation to have been scrambled once the conformal block switches from being constant to exponentially decaying with t. The non-Gaussian part of the integrand, given by (4.5), has two asymptotic z regimes: for early times with z ≫ h_w/c it is approximately 1, while for late times with z ≪ h_w/c it is given by (4.8). Note that from (4.4), z is small for late times, or when the separation of operators within a pair is much less than the thermal length scale. The smaller the smearing length scale, the more the Gaussian smearing functions suppress large x_i, and so the dominant contribution to the integral is for z ≪ h_w/c, with the conformal block given by (4.8). For L_V, L_W ≪ β, separations of operators within a pair of order the thermal length scale and larger are suppressed, and we can approximate the cross ratio (4.4) by (4.9). Combining (4.9) and (4.8) we can exactly evaluate the integral (4.7), giving the smeared conformal block (4.10). Note that if h_v is half-integer then this expression is exactly zero; however, in discussing scrambling we wish to consider generic operators V, for which h_v will not be half-integer. The time separation at which (4.10) becomes O(1), marking the scrambling time, is given by (4.11), valid for L_V, L_W ≪ β.
The smaller the smearing length scale, the larger the energy scale of the operator and the faster the perturbation is scrambled. When L_V and L_W are sufficiently small they can decrease the scrambling time at leading order; however, as these high-energy perturbations are not small perturbations to the thermal state, this does not violate the fast scrambling conjecture. For operators smeared over length scales much larger than the thermal length scale, the region of {x_i} ∈ R^4 for which the Gaussians in (4.7) give support grows beyond the region of size β^4 centred around (x_1, x_2, x_3, x_4) = (0, 0, x_W, x_W), and the z ≫ h_w/c limit, for which the conformal block is approximately 1, becomes the important z limit. The larger one takes L_V and L_W, the larger the region in R^4 which is both not suppressed by the Gaussians and has z ≫ h_w/c, and so the closer the smeared conformal block is to one for fixed t. By increasing the smearing length scale one increases the scrambling time. This is valid until one reaches smearing length scales as wide as the lightcone. Interpolating between the result (4.11) for L_V, L_W ≪ β and the L_V, L_W ≫ β behaviour just described, to L_V, L_W ∼ β, one concludes that operators smeared over the thermal length scale exhibit fast scrambling, at least when considering only the contribution of the identity block. Discussion In this section we discuss the holographic interpretation of our results, the relation to the chaos bound, and possible future work. Let us first give a holographic interpretation for the dependence of the scrambling time on the smearing length scale. Perturbing the thermal state of a holographic 2D CFT with a single-trace operator smeared over spatial length L is dual to releasing a particle of energy E ∼ L^{−1} from the asymptotic boundary of a BTZ black hole. As we increase the smearing length scale, we reduce the energy of the bulk particle. Near the horizon, time translation corresponds to a boost, such that on the t = 0 slice a particle of energy E released at the boundary at time t = −t_w has proper energy E_p ∼ E e^{(2π/β) t_w}. (5.1) Scrambling occurs when t_w is large enough that the proper energy becomes of order G_N^{−1} ∼ c; then the backreaction on the BTZ geometry becomes significant, leading to the lengthening of the wormhole and destruction of entanglement between the two CFT copies. If we increase L, or equivalently decrease E, then it takes a longer time to scramble. Equation (5.1) suggests that the dependence of the scrambling time on the smearing length of the W operator is (β/2π) log(β/L_W); this is consistent with our result (4.11). The dominating contribution of higher-dimension and higher-spin primaries to the scrambling time in the CFT corresponds to bulk-to-bulk propagation of massive spinning fields between the V and W fields. Assuming that the two-dimensional CFTs we have been studying that seem to have large scrambling times do in fact exist, and that they have a semiclassical quantum gravity dual, it is puzzling from the bulk perspective why the massive fields dual to the light intermediaries should increase the scrambling time. The arbitrarily rapid exponential growth of the conformal block seen in Eq. (3.7) for β ≪ t ≪ (β/2π) log c does not violate the chaos bound.
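As a heuristic check of the smearing dependence just described, one can combine the boost relation (5.1) with the backreaction criterion. The identification of the backreaction scale with c/β and all O(1) factors below are our assumptions, so this is a sketch rather than a derivation from the paper:

```latex
E\sim\frac{1}{L_W},\qquad
E_p\sim\frac{1}{L_W}\,e^{\frac{2\pi}{\beta}t_*}\sim\frac{c}{\beta}
\;\Longrightarrow\;
t_*\sim\frac{\beta}{2\pi}\left[\log c-\log\frac{\beta}{L_W}\right],
```

which exhibits the (β/2π) log(β/L_W) dependence on the smearing length quoted above.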
The chaos bound on Lyapunov exponents is derived by showing that functions f(t) that are analytic and bounded, |f| ≤ 1, on the half-strip given by Re{t} > 0, |Im{t}| ≤ β/4 satisfy the inequality |df/dt| ≤ (2π/β)(1 − f), up to corrections that decay at late times; then, assuming the ratio of the OTOC to the disconnected product is of the form f(t) ≈ 1 − ε e^{λ_L t}, such functions satisfy the above requirements for f, and one finds λ_L ≤ 2π/β. The conformal block for light primaries we study in Section 3 is not of this form and does not obey the assumptions made in deriving the chaos bound. It is unusual in that it grows rather than decays with t, at an arbitrarily fast rate and to a value exponentially larger than the disconnected product. The chaos bound does not constrain the growth of the OTOC, so there is no inconsistency. That said, the conformal block is still a bounded function in t and, assuming it is analytic and bounded on the whole half-strip, its rate of decay is bounded after reaching its maximum. The early growth of the OTOC seems strange, as one usually interprets the decay of the OTOC as the onset of chaos; growth seems to imply increasing order in a chaotic system. Moreover, there is an argument that in large-N systems one expects the OTOC to be parametrically close to the disconnected product for all t ≫ β, as detailed in Section 4.3.1 of [12]. In contrast, we found that each conformal block with h_p > h̄_p is exponentially larger than the disconnected product until long after the fast scrambling time. We are unclear as to the resolution of this apparent tension. It would be interesting to investigate whether including heavy primaries makes the sum over Virasoro blocks divergent, and so necessitates a resummation. In Section 3 we considered only light primaries with ∆ ≪ c, but it is interesting to consider what effect primary operators of dimension ∆ ∼ c and heavier have on the OTOC and scrambling time. This question can partially be answered using the known semiclassical conformal block formula in the limit h_p → ∞ with c/h_p, h_w/c and h_v/c all small and fixed [20]; however, we are not aware of formulae for intermediate-weight intermediates, h_p ∼ c. A natural extension of our analysis is the study of the OTOC in higher-dimensional CFTs. Although some results are available for a class of interesting models [21], a general analysis is rendered difficult by the absence of semiclassical results for the conformal blocks (which can be attributed to the presence of only global descendants in higher dimensions). We leave this interesting question for future work. Acknowledgments This work was supported in part by the U.S. Department of Energy under grant DE-SC0009987, and by the Simons Foundation through the It from Qubit Simons Collaboration on Quantum Fields, Gravity and Information. AR is also supported by a KITP Graduate Fellowship, and is thankful to the Kavli Institute of Theoretical Physics for hospitality, and to Adam Levine for useful discussions. HRH would like to thank Pawel Caputa, Sunny Guha, Sunil Mukhi and Pranjal Nayak for useful discussions. BS thanks Jennifer Lin for discussions. A Remarks on reference [18] Here we note some differences in our paper compared to [18]. Our papers overlap in scope in that they both study the effect of smeared operators and higher primaries on the scrambling time. Let us highlight some differences in analysis and results to aid readers wishing to understand the results in the two papers. • With regard to the light primaries, our scrambling time t*_p is not related to the t*_p given in equation (12) of [18].
Our t*_p, for the dominating primary operator, corresponds to the time at which the commutator squared will approach the disconnected product 2⟨V V⟩⟨W(t)W(t)⟩. In [18] the t*_p computed is the time at which the ratio of the light primary operator block to the identity block saturates to a constant, which is not directly related to the scrambling time. The authors of [18] also did not consider the contribution of the antiholomorphic conformal block to the OTOC. • Our smearing analysis also differs from that in [18]. We smear each operator over its own Gaussian wavepacket. Reference [18] smears one V operator over a half-Gaussian on the positive x-axis, one over a mirror-image half-Gaussian on the negative x-axis, and similarly for the W operators. Varying L then conflates two separate effects on the scrambling time: the operator smearing length scale and the spatial offset.
Structural differences of amyloid-β fibrils revealed by antibodies from phage display Background Beside neurofibrillary tangles, amyloid plaques are the major histological hallmarks of Alzheimer's disease (AD), being composed of aggregated fibrils of β-amyloid (Aβ). During the underlying fibrillogenic pathway, starting from a surplus of soluble Aβ and leading to mature fibrils, multiple conformations of this peptide appear, including oligomers of various shapes and sizes. To further investigate the fibrillization of β-amyloid and to have tools at hand to monitor the distribution of aggregates in the brain or even act as disease modulators, it is essential to develop highly sensitive antibodies that can discriminate between diverse aggregates of Aβ. Results Here we report the generation and characterization of a variety of amyloid-β specific human and human-like antibodies. Distinct fractions of monomers and oligomers of various sizes were separated by size exclusion chromatography (SEC) from Aβ42 peptides. These antigens were used for the generation of two Aβ42-specific immune scFv phage display libraries from macaque (Macaca fascicularis). Screening of these libraries as well as two naïve human phage display libraries resulted in multiple unique binders specific for amyloid-β. Three of the obtained antibodies target the N-terminal part of Aβ42, although with varying epitopes, while another scFv binds to the α-helical central region of the peptide. The affinities of the antibodies to various Aβ42 aggregates, as well as their ability to interfere with fibril formation and disaggregation of preformed fibrils, were determined. Most significantly, one of the scFvs is fibril-specific and can discriminate between two different fibril forms resulting from variations in the acidity of the milieu during fibrillogenesis. Conclusion We demonstrated that the approach of animal immunization and subsequent phage display based antibody selection is applicable to generate highly specific anti β-amyloid scFvs that are capable of accurately discriminating between minute conformational differences. Background Alzheimer's disease is a slowly progressing, irreversible neurodegenerative disorder and the most prevalent cause of dementia in the elderly. With 7.7 million new cases every year and a survival time after diagnosis of 7.1 years [1], the number of people affected, over 35 million as of 2012, is expected to triple by the year 2050 according to the World Health Organization (WHO). Accompanied by this, the annual cost generated by dementia, currently exceeding $600 billion, will most likely rise to more than $1,100 billion within the next 15 years. It is this socioeconomic impact which lays the foundation for the urgent need for diagnostic and therapeutic tools in AD that target the disease and its progression at an early stage. Histological hallmarks of AD are neurofibrillary tangles, comprised of hyperphosphorylated tau protein [2,3], and amyloid plaques that are composed of aggregated amyloid-β peptides [4][5][6]. Amyloid-β is regarded as the main culprit causing the neuropathology in AD and is released from the amyloid precursor protein by sequential cleavage with β- and γ-secretases. This processing results in peptides of various amino acid (aa) lengths, with the majority being 40 aa (90%) and 42 aa (10%) long [7], hence the terminology Aβ40/42.
Changes in the metabolism of Aβ lead to an imbalance between elevated peptide production and decreased clearance from the brain, shifting the concentration and facilitating self-aggregation of β-amyloid. Once a critical concentration is surpassed, the aggregation follows a nucleation-dependent polymerization process to form mature fibrils, with various oligomeric intermediates along the way [8,9]. A multitude of diverse Aβ aggregates has been identified, such as dimers [10,11], heteromorphous oligomers [12][13][14][15][16], or protofibrils [17], which represent the last stage before the final transition into the fibril forms. Oligomers and protofibrils are widely regarded as the main toxic species in AD, although the exact nature of the toxic entity (if such a form even exists [18]) has yet to be elucidated [19][20][21][22][23][24][25]. While researchers continue to investigate how Aβ contributes to the toxicity in AD, other problems are close at hand: to date it is neither possible to diagnose the disease at an early, presymptomatic stage nor to treat patients beyond symptomatic relief, e.g. alleviating behavioral problems. The first symptoms emerge decades after neuronal changes occur [26]. Therefore the current diagnoses target progressed characteristics of the disease and are composed of various imaging methods such as x-ray computed tomography (CT), succeeded by magnetic resonance imaging (MRI) [27,28] or positron emission tomography (PET) [29], in addition to cognitive tests and the assessment of the patient's history regarding the worsening of cognition. Still, the combination of these tools does not result in absolute accuracy of the diagnosis [30]. Additionally, to modify the progression of Alzheimer's disease it is essential to apply potential therapies at an early stage, long before amyloid plaques are formed [31]. Current treatment of AD involves acetylcholinesterase inhibitors (e.g. Donepezil) [32,33] and N-methyl-D-aspartate (NMDA) antagonists [34] to improve cognitive functionality, so far with only modest success. For an early and accurate diagnosis of the disease as well as for a better treatment hypothesis, it is essential to gain deeper insight into the aggregation of amyloid-β. During the transition from Aβ monomer to fibrils, different conformational epitopes are expected to form, which may be used to differentiate between diverse aggregation forms of Aβ using antibodies specifically recognizing these conformational epitopes. Phage display and immune libraries from macaque have proven in the past to be an effective instrument for the generation of conformation-specific antibodies, already providing a source of binders against targets like ricin [35], anthrax [36,37], surface proteins such as Crf2 from Aspergillus fumigatus [38], the Venezuelan equine encephalitis virus (VEEV) [39] and the western equine encephalitis virus (WEEV) [40], or botulinum neurotoxin A [41]. A further distinct advantage of NHP (non-human primate) derived immune libraries is the very high degree of identity of the antibodies to human antibodies [42], allowing for a very easy transition of the scFvs from diagnostic to therapeutic tools. Phage display antibody generation further allows control of the conditions and conformations during the very moment of binder selection, offering additional chances to steer antibody specificity towards conformational epitopes [43].
Antigen preparation (Aβ42) Fractions of Aβ42 monomers, protofibrils and mature fibrils were prepared from synthetic Aβ42 peptide to serve as antigens. Depending on the purification method, the separation via SEC with one column resulted either solely in pure monomers (Figure 1A) or in a monomer fraction and a second peak representing a heterogeneous mixture of different-sized oligomers, namely protofibrils (Figure 1B). These protofibrils range between 15 kDa and 500 kDa and display various forms and morphologies, with diameters of 8-10 nm and lengths of up to 200 nm. Protofibrils were further separated on two SEC columns connected in series to obtain smaller or larger oligomers (Figure 1C). Earlier-eluting fractions include filaments significantly larger than 200 nm (LO = large oligomers), while later-eluting fractions consist predominantly of short fibrils (MO = medium oligomers) of up to 100 nm and small, circular aggregates (SO = small oligomers) that can be smaller than 10 nm. Mature fibrils were generated from monomers by incubation at 37°C and 300 rpm for 24 h. We observed the same distribution of aggregates among the fractions with two different running buffers: 10 mM Tris-HCl, pH 7.4, or 100 mM Na-Borate, pH 8.6. These running buffers were chosen depending on the later purpose of the antigen: Aβ42 in 10 mM Tris buffer cannot be used for amine coupling of the antigen (e.g. in SPR experiments), while Aβ42 in 10 mM Tris-HCl, pH 7.4 is more suitable for immunization. Immunization and antibody phage display library construction Late fractions of SEC-purified Aβ42 oligomers (SO) were used for the immunization as well as for measuring the immune response by enzyme linked immunosorbent assay (ELISA). Ten days after the fifth boost, the antibody titer was determined to be 1:80,000. Nine weeks later a sixth boost was administered. PCR products of antibody genes were collected six and nine days after the last boost. The DNA fragments were pooled and subcloned into pGemT, resulting in a total of 2.7×10^6 and 4.4×10^5 individual clones for V_H and V_L, respectively. pHAL35, a modified version of the pHAL14 phage display vector, was used for phage display library construction in two consecutive cloning steps. First, V_L gene fragments for the κ (kappa) and λ (lambda) chains were inserted using the restriction sites MluI and NotI, followed by cloning of the V_H gene fragments via SfiI and HindIII. The final libraries comprised a total of 2.9×10^7 individual clones. The insert rates were determined by colony PCR to be 60% for the kappa library and 80% for the lambda library. Both libraries were packaged using M13K07 as helper phage. Isolation of amyloid-β specific scFvs Multi-step pannings, with or without competition with unwanted forms of Aβ42 antigen (e.g. panning on immobilized fibrils with soluble monomers added for competition), were carried out to generate antibodies with diverse specificities against amyloid-β. In addition to the two macaque IgG-derived immune phage display libraries, two IgM-derived naïve human phage display libraries, HAL7/8 [44], were employed. From a total of 54 pannings, 6088 antibody clones were analyzed by ELISA and 612 hits were identified.
Eight unique monoclonal antibodies with individual sequences, named PaD97-D6 from the naïve human libraries and PaD172-F8, PaD172-F12, PaD213-A5, PaD218-E6, PaD233-E5, PaD235-D2 and PaD236-H2 from the immune libraries, were selected (Table 1) based on their specificity, their above-average absorption, or their high signal-to-noise ratio in the screening ELISAs. All eight antibodies were produced as scFvs and scFv-Fc fusions (Yumabs) [45] in mammalian cell culture. PaD172-F8, PaD218-E6 and PaD235-D2 could not be produced properly and were disregarded for the following experiments. Yumabs consist of a human IgG1 Fc part that is linked with two scFvs instead of Fab fragments. The specificity of PaD97-D6, PaD172-F12, PaD213-A5, PaD233-E5 and PaD236-H2 was initially verified on different forms of Aβ42, i.e. monomers, small, medium and large oligomers, and mature fibrils, by ELISA (Figure 2). Here, all antibodies except PaD213-A5 showed no predominant binding to any distinct form; only PaD213-A5 exhibited specificity towards Aβ42 fibrils. Additionally, binding to fibrils of other amyloidogenic peptides was evaluated in the same manner. These peptides included mature fibrils of Aβ40, α-synuclein, Huntingtin (Htt (aa105-138)) and fibrils of Tau (isoform F), the K18 domain and the PHF6 domain of Tau. PaD97-D6 exhibited some cross-reactivity with Tau fibrils of isoform F (data not shown). PaD213-A5 differentiates between various Aβ42 fibrils Aβ42 peptide was purified in two different running buffers, 10 mM Tris-HCl, pH 7.4, or 100 mM Na-Borate, pH 8.6, depending on its later application. Repeated immunological assays elucidated the selectivity of PaD213-A5 towards a distinct form of Aβ42 fibrils. This antibody exhibited no affinity to mature fibrils produced in Tris-HCl buffer while, on the other hand, binding to fibrils generated in Na-Borate buffer (Figure 3A). TEM investigation revealed major differences in the composition of the fibrils: Na-Borate-derived fibrils exhibited a compact bundle of 4-8 individual fibrils twisted helically every 130-150 nm (Figure 3B), while Tris-derived fibrils consisted of one discrete fibril with a helical twist around its axis about every 50 nm (Figure 3C). All antibodies detect different epitopes The determination of the epitopes of the amyloid-β specific antibodies was performed using a peptide spot membrane (Figure 4). Each spot on the membrane consisted of 15 aa of the Aβ42 peptide with an offset of 1 aa. Epitope mapping was performed with all antibodies to verify binding to linear epitopes. No binding was detected with PaD213-A5, since it is fibril-specific, i.e. it detects a conformational epitope. PaD97-D6, PaD172-F12 and PaD236-H2 bound to the N-terminus of Aβ42, albeit differing in the exact epitope, with positions 1 to 13 for PaD97-D6 ("DAEFRHDSGYEVH"), positions 4 to 13 for PaD172-F12 ("FRHDSGYEVH") and positions 5 to 13 for PaD236-H2 ("RHDSGYEVH"). A more precise determination of the epitopes for these three antibodies was impeded by the spot sizes of 15 aa in length. PaD233-E5 bound to the central region of Aβ42; here, the exact epitope was more narrowly determined as amino acids 17 to 22 ("LVFFAE") (Figure 5). Affinity determination of the scFvs by surface plasmon resonance (SPR) Affinity determination was carried out on various amyloid-β monomers, protofibrils and fibrils via BIAcore™ with different antibody concentrations and resulted in K_D values in the micro- to nanomolar range (Table 2).
The antibodies targeting the amino-terminal end of Aβ42, PaD97-D6, PaD172-F12 and PaD236-H2, each exhibited similar affinities towards all three forms of antigen. In contrast, PaD233-E5, which binds to the core region of Aβ42, shows a 100-fold elevated affinity to Aβ42 monomers, with a K_D of 10 nM, when compared to protofibrils and fibrils. PaD213-A5 bound solely to Aβ42 fibrils, with a K_D of 36 μM. Yumabs inhibit Aβ42 fibrillogenesis in a concentration-dependent manner Since the antibodies bind to Aβ42 monomers, an inhibitory effect on fibril formation could be possible. We tested the effect of all antibodies on the formation of mature Aβ42 fibrils from pure monomers by visualizing potential fibrils using transmission electron microscopy (TEM) and measuring Thioflavin T (ThT) fluorescence. ThT is a dye that, upon binding to amyloid fibrils, exhibits fluorescence; thus it allows for the assessment of fibril formation, which was investigated in this study by combining part of the sample with ThT stock solution every six hours during the first 24 h, every 12 h during the next 24 h, and at a final checkpoint after 96 h (Figure 6). Bivalent scFv-Fc antibodies (Yumabs) were able to interfere with fibril formation at a substoichiometric level for PaD97-D6, PaD233-E5 and PaD236-H2 (Figure 6). The influence is most notable for PaD233-E5, the antibody targeting the central α-helical region of Aβ42. Addition of 4 μM scFv-Fc antibody to 5 μM Aβ42 monomers resulted in a reduction in ThT fluorescence after 96 h of incubation of about 25% for PaD97-D6, nearly 50% for PaD236-H2, and even more for PaD233-E5 (Figure 6). Comparison with PaD213-A5 or the negative-control scFv-Fc antibody indicates that this effect is not attributable to antibody concentration or design. Interestingly, PaD172-F12, also directed against the N-terminal end of Aβ42 like PaD97-D6 and PaD236-H2, did not show an inhibitory effect. The reverse mechanism, a disintegration of preformed fibrils upon antibody addition, was evaluated by ThT reading and TEM analysis as well; none of the antibodies mediated disintegration of mature fibrils (data not shown). Discussion Aβ42 oligomers were chosen for the immunization of the NHP due to their reported elevated toxicity, making them a potential target for immunotherapy. Using the immune libraries and two previously established human naïve libraries [32] in multi-step pannings, we created numerous antibody fragments specific for β-amyloid with an interesting spectrum of different binding properties. The initial validation utilizing titration ELISAs demonstrated antibody specificity towards all forms of Aβ42 aggregates but no predominant preference for PaD97-D6, PaD172-F12, PaD233-E5 or PaD236-H2. Epitope mapping further revealed that three of these four antibodies detect the N-terminal part of Aβ42, whereas PaD233-E5 binds to the central region. This is consistent with previous findings that the amino-terminal region of Aβ42 is immunodominant in human [46], NHP [47] as well as in dog [48], mouse [49] and rabbit [50], explaining the quantity of antibodies and antibody fragments directed against this part of the peptide in this work and previous studies [49,[51][52][53][54][55], with Bapineuzumab being the most prominent one. Solely PaD213-A5 demonstrated a high selectivity towards Aβ42 fibrils and did not bind to any other form of Aβ42.
Remarkably, PaD213-A5 was able to discriminate even between two different Aβ42 fibril preparations, depending on whether the amyloid-β peptide was purified in 10 mM Tris-HCl/pH 7.4 or in 100 mM Na-Borate/pH 8.6. Meinhardt et al. [56] have already described other preparation-dependent polymorphisms in Aβ40 fibrils. Based on their findings, it seems likely that the difference in the acidity of the buffers contributes to a morphological change in the fibril structure, a hypothesis that is supported by our TEM analysis. It can be hypothesized that PaD213-A5 distinguishes between both types of fibrils through the detection of a conformational epitope, which may well depend on the helical twist angle or the interspace distances between two single strands that make up the mature β-amyloid fibril. While there are antibodies and polyclonal sera that are fibril-specific [57][58][59], the specificity observed here has not been reported for any other known antibody. It remains to be evaluated whether these structural differences have any significance in vivo. To investigate the antibodies for a potential application as disease modulators, we assessed their impact on the fibrillization of Aβ42 monomers in vitro. The fibrillogenesis of Aβ42 is a nucleation-dependent polymerization process [8]. When a certain concentration threshold of monomers is surpassed, small aggregates termed "nuclei" accrue and polymerization starts. These nuclei are elongated by addition of monomers, forming larger aggregates and ultimately fibrils. It has been previously shown that antibodies targeting the N-terminal end of amyloid-β exhibit an inhibitory effect on fibrillogenesis [60][61][62][63]. With the majority of our antibodies recognizing Aβ42 monomers, this gives rise to the idea that they can intervene in the initial aggregation by preventing interactions of β-amyloid peptides, thus retarding or even inhibiting fibril formation [64]. PaD97-D6 and PaD236-H2 demonstrate a concentration-dependent retardation of fibril formation, resulting in shorter fibrils and an overall stronger appearance of unstructured aggregates. They do not prevent fibrillization entirely, which suggests a steric hindrance during monomer-monomer attachment [61]. Albeit also binding to the amino-terminal region of Aβ42, PaD172-F12 exhibited no substantial effect on fibril formation. With no major discrepancies in affinity compared to PaD97-D6 or PaD236-H2, this result is most likely accounted for by the minute differences in epitopes. It is plausible that PaD172-F12 attaches to monomers in such a way that no steric hindrance is imposed towards the core region of Aβ42. A partial masking of that area by an antibody would minimize monomer-monomer interaction and impede fibril formation. Epitope mapping demonstrates that PaD97-D6 binds Aβ 1-13 while PaD172-F12 and PaD236-H2 bind further downstream (Aβ 4-13 for PaD172-F12 and Aβ 5-13 for PaD236-H2). Obviously, PaD97-D6, PaD172-F12 and PaD236-H2 attach to the monomer with different spatial arrangements. Further, the location of the epitope on the Aβ42 peptide may contribute to the similar K_D values for different aggregates measured for these antibodies: PaD97-D6, PaD172-F12 and PaD236-H2 bind to the amino-terminal end of the β-amyloid peptide, an epitope that is exposed in monomers and aggregates throughout fibrillogenesis [54]. This may allow the nearly equal affinities of the before-mentioned antibodies to all three forms.
PaD233-E5 impacts fibril formation, which is not surprising as it targets the central region of Aβ42 at Aβ 17-22 (LVFFAE), part of the hydrophobic core element (LVFF) that is essential for β-sheet formation during fibrillization [65]. Together with the elevated affinity towards Aβ42 monomers, this effect can be attributed to two probable modes of action, or a mixture of both. PaD233-E5 may mask the LVFF motif, thus directly preventing monomer-monomer interaction. This effect was postulated by Legleiter et al. for the antibody m266, the murine progenitor of Solanezumab [61]; m266 targets the same epitope as PaD233-E5, binding to Aβ 16-24 (KLVFFAEDV) [66], and prevents the formation of fibrils and even protofibrils. The other possible explanation is the attachment of PaD233-E5 to Aβ42 monomers, thus shifting the concentration of soluble β-amyloid beneath the critical threshold necessary for the polymerization process. Interestingly, PaD233-E5 has a much more pronounced influence on amyloid-β fibrillogenesis than any other antibody, as visualized by TEM. Yet the ThT signal after 96 h is similar to that of PaD236-H2, which might be an indication of the formation of smaller aggregates with a β-sheet-rich content. This would suggest the latter mode of action described for PaD233-E5 to be more dominant in the inhibition process. The impact of the antibodies presented in this work on AD immunotherapy has to be further validated. Recently, Bapineuzumab (directed against the N-terminus of Aβ) and Solanezumab (directed against the central region of Aβ), both non-conformation-specific antibodies, failed to meet the expected endpoints in clinical phase 3 studies, albeit having shown positive results in preceding studies (reviewed in [67]). The results of the initial characterization of the Yumabs in this work are promising. Especially PaD213-A5 exhibits a highly interesting property, not yet described in the literature, of differentiating between Aβ42 fibrils based on their conformation, and its implications for AD diagnosis and therapy have to be further validated with in vivo data. Conclusion Among the investigated antibody fragments we found three scFvs exhibiting a general specificity towards β-amyloid, while two scFvs, PaD213-A5 and PaD233-E5, presented a tendency to bind better to certain forms of Aβ42. PaD213-A5 is highly specific for mature Aβ42 fibrils and identified a novel structural variation in fibrillar structures. PaD233-E5, albeit also binding oligomers and fibrils, showed a 100-fold increased affinity towards monomers. It is also one of the three antibodies exhibiting an inhibitory effect on the fibrillization of Aβ42 monomers. While the in vivo relevance of these differences is still to be established, the study confirms that the approach of animal immunization and subsequent phage display based antibody selection is applicable to generate highly specific anti β-amyloid scFvs that are capable of accurately discriminating between minute conformational differences. Antigen preparation Aβ42 peptides were synthesized by Dr. James I. Elliott at Yale University (New Haven) [68]. All Aβ42 antigens, including monomers, protofibrils and different-size oligomers derived thereof by further fractionation, as well as fibrils, were prepared according to [18,69]. TEM sample grid preparation and image acquisition 5-10 μL of sample was deposited on a formvar-coated 200 mesh TEM grid (EM Science, Hatfield) and incubated for 1 min. Excess fluid was wicked away with a piece of filter paper.
The grid was washed twice by applying 10 μL of dH2O before incubating the sample twice with 10 μL of 2% (w/v) uranyl acetate for 1 minute each. The grid was dried with a vacuum pump, incubated for 5 min at room temperature to dry off completely, and stored in the designated container. Imaging was carried out on a Tecnai G2 Spirit microscope at an acceleration voltage of 80 kV. Ethics statement and animal care All animal studies presented were given specific approval from the Institut de Recherche Biomédicale des Armées and carried out under the applicable French regulations, among them the provisions of octobre 1990 "relatif aux conditions de l'expérimentation animale pour le Ministère de la Défense" and (iv) "instruction 844/DEF/DCSSA/AST/VET du 9 avril 1991 relative aux conditions de réalisation de l'expérimentation animale". Animal care procedures complied with the regulations detailed under the Animal Welfare Act [70] and in the Guide for the Care and Use of Laboratory Animals [71]. Animals were kept at a constant temperature (22°C ± 2°C) and relative humidity (50%), with 12 hours of artificial light per day. They were housed in individual cages (6 per room), each of which contained a perch. Animals were fed twice daily, once with dried food and once with fresh fruits and vegetables, and water was provided at the same time. Food intake and general behavior were observed by animal technicians during feeding times, and veterinary surgeons were available for consultation if necessary. Veterinary surgeons also carried out systematic visits to each NHP room twice weekly. The environmental enrichment program for the non-human primates was limited to games with animal care staff and access to approved toys. The well-being of the animals was monitored by the attending veterinary surgeon. Animals were anesthetized before the collection of blood or bone marrow by an intramuscular injection of 10 mg/kg ketamine (Imalgene®, Merial, Lyon, France). Analgesics were subsequently administered through a single intramuscular injection of 5 mg/kg flunixine (Finadyne®, Schering Plough, Courbevoie, France) in the days after interventions if the animal technicians suspected that the animal was in pain, on the basis of their observations of animal behavior. None of the non-human primates were killed during this study. Animal immunization A male macaque (Macaca fascicularis) was immunized with a total of 6 subcutaneous injections of purified and sterile-filtered small oligomers of Aβ42. Injections were carried out with 50 μg antigen (inj. 1-3) and 50 μg antigen (inj. 4-6) at one-month intervals, except for the sixth injection, which was given 2 months after the fifth. Construction of the anti-Aβ42 scFv phage display library Six and nine days, respectively, after the last boost, RNA was isolated using Tri Reagent (Molecular Research Center Inc, Cincinnati, USA) from the bone marrow of the immunized macaque and transcribed into cDNA by reverse transcription. DNA was amplified by PCR using seven different oligonucleotide primers for the coding regions of the light chain and nine different primers for the heavy chain [72]. After amplification, PCR products were pooled and subcloned into pGemT (Promega, Madison, Wisconsin). Antibody inserts in pGemT were re-amplified with individual primer sets for the kappa (κ) and lambda (λ) sublibraries, introducing specific restriction sites for the cloning of the final library as described [44]. Library packaging was carried out using M13K07 as helper phage.
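As context for the SPOT epitope mapping described in the Methods below (15-aa windows with a 1-aa offset), here is a hypothetical sketch of the window design. The Aβ42 sequence and the PaD233-E5 epitope string come from the Results above; the helper function itself is an illustrative assumption, not software from the study.

```python
# Hypothetical sketch of the overlapping-peptide (SPOT) window design:
# 15-residue windows with a 1-residue offset across Abeta42.
ABETA42 = "DAEFRHDSGYEVHHQKLVFFAEDVGSNKGAIIGLMVGGVVIA"  # Abeta 1-42

def spot_windows(seq, length=15, offset=1):
    """Yield (start_position, peptide) pairs; positions are 1-based as in the text."""
    for i in range(0, len(seq) - length + 1, offset):
        yield i + 1, seq[i:i + length]

# A spot is expected to react with PaD233-E5 if it contains the mapped
# linear epitope LVFFAE (Abeta 17-22).
for pos, pep in spot_windows(ABETA42):
    if "LVFFAE" in pep:
        print(f"spot starting at Abeta residue {pos:2d}: {pep}")
```

Running this lists the ten spots (starting at residues 8 through 17) that fully contain the LVFFAE epitope, mirroring how a contiguous run of reactive spots localizes a linear epitope on the membrane.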
Selection of recombinant antibodies against Aβ42 ScFvs were isolated in vitro by panning the macaque-derived immune libraries as well as the human naïve libraries HAL7/8 [44] as described previously [73]. Antigen coating was carried out at 4°C overnight in 100 mM Na-Borate buffer, and constant amounts (1 μg) of antigen were used as bait during the three panning rounds. To increase the possibility of obtaining antibodies specific for one Aβ42 conformation, competition with unwanted conformations of Aβ42 was performed using 3 μg of antigen, or 5 μg for Aβ42 fibrils. Individual colonies of bacteria infected with eluted antibody phage were isolated and inoculated in MTP (microtiter plate) wells to produce soluble antibody fragments as described previously [74]. The produced scFvs were analyzed for specific binding by ELISA on diverse aggregates of Aβ42, corresponding to the panning. Enzyme linked immunosorbent assay (ELISA) Two kinds of ELISA (screening ELISA, antigen titration ELISA) were performed as described before [74]. In both cases a total of 100 ng of antigen per cavity was coated in 96-well MTPs (High Binding, Costar) at 4°C overnight. All following steps were carried out at room temperature on a rocker. For screening, scFvs were detected by mAb 9E10, recognizing the c-myc tag, and a goat anti-mouse antibody conjugated to horseradish peroxidase (A0168, Sigma-Aldrich). For titration, scFvs were detected by a mouse anti-penta-His antibody (34660, Qiagen), recognizing the His tag, and a goat anti-mouse antibody conjugated to horseradish peroxidase (A0168, Sigma-Aldrich). Bound scFv-Fc antibodies were detected using a peroxidase-labeled goat anti-human antibody recognizing the Fc fragment (A0170, Sigma-Aldrich). Thioflavin T (ThT) measurements To assess the state of fibrillogenesis by Thioflavin T (ThT) measurement, 20 μL of sample was mixed with 10 μL of ThT (100 μM) and 70 μL of glycine-NaOH, pH 8.5 (500 mM) in a well of a black 384-well Nunc plate (Sigma-Aldrich). Fluorescence was measured in triplicate on an Analyst™ AD fluorometer (Molecular Devices Corporation) at an excitation wavelength of λ = 450 nm and an emission wavelength of λ = 485 nm. Epitope mapping The peptide sequence of Aβ42 was divided into overlapping peptide fragments of 15 aa length with an offset of 1 aa. The N-terminus was acetylated and two additional glycines were added to the sequence to allow for proper binding of the antibodies to the aspartic acid, the first aa of Aβ42. The peptides were synthesized by the SPOT technique [76,77] and covalently bound to a continuous cellulose membrane via their carboxy-terminus (JPT Peptide Technologies GmbH). After an initial incubation for 5 min in methanol to prevent the precipitation of hydrophobic peptides, the membrane was rinsed with 1x TBS (50 mM TRIS, 137 mM NaCl, 2.7 mM KCl, pH adjusted to 8.0 with HCl) and blocked in 2% (w/v) skim milk powder in 1x TBS (2% M-TBS) for 1 h at room temperature on a rocker. ScFv-Fc antibodies (10 μg/mL in 2% M-TBS) were incubated on the membranes for 1.5 h at room temperature. Bound antibodies were detected using a peroxidase-labeled goat anti-human antibody recognizing the Fc fragment (A0170, Sigma-Aldrich). Development was carried out with SuperSignal West Pico Chemiluminescent Substrate (Thermo Scientific) according to the manufacturer's protocol on a ChemiDoc™ MP system (BioRad). Affinity measurement Antibody affinities were analyzed by surface plasmon resonance (SPR) using a BIAcore2000™.
Aβ42 monomers and protofibrils were immobilized on separate CM5 chips (General Electric-Biacore); fibrils were immobilized on a CMD50m chip (Xantec) via amine coupling according to the manufacturer's protocols. ScFvs were diluted to 100 nM - 10,000 nM (additionally 15,000 nM for PaD97-D6, and 15,000 nM plus 20,000 nM for PaD213-A5) and added to the chips in HBS-EP buffer according to the manufacturer's protocol at a flow rate of 25 μL/min. Timeframes were 200 s for association and 600 s for dissociation. After each dilution, the chip was regenerated with NaOH according to the manufacturer's protocol. Data fitting was performed using the "1:1 binding with drifting baseline" algorithm of the BIAevaluation™ software.
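For orientation, the "1:1 binding" model underlying such fits is the simple Langmuir kinetic scheme sketched below. This is a minimal sketch: the rate constants, Rmax and analyte concentration are invented for illustration (only the 36 μM K_D echoes PaD213-A5), and the drifting-baseline term of BIAevaluation is omitted.

```python
import numpy as np

# Minimal sketch of the 1:1 (Langmuir) SPR binding model: exponential
# association towards a steady state, followed by exponential dissociation.
ka, kd = 1.0e4, 0.36        # assumed association (1/(M*s)) and dissociation (1/s) rates
KD = kd / ka                # equilibrium dissociation constant -> 36 uM
Rmax, conc = 100.0, 20e-6   # assumed maximal response (RU) and analyte concentration (M)
T_ASSOC = 200.0             # association phase length used in the study (s)

def response(t):
    """SPR response (RU) at time t for a single injection."""
    kobs = ka * conc + kd                           # observed association rate
    Req = Rmax * conc / (conc + KD)                 # steady-state response
    t_assoc = min(t, T_ASSOC)                       # time spent in the association phase
    R = Req * (1.0 - np.exp(-kobs * t_assoc))
    return R * np.exp(-kd * max(t - T_ASSOC, 0.0))  # exponential dissociation afterwards

for t in (10.0, 100.0, 200.0, 400.0, 800.0):
    print(f"t = {t:5.0f} s  R = {response(t):6.2f} RU")
```

Fitting ka and kd to measured sensorgrams at several analyte concentrations, and reporting K_D = kd/ka, is the essence of what the instrument software does.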
Unravelling the Epigenome of Myelodysplastic Syndrome: Diagnosis, Prognosis, and Response to Therapy Simple Summary Myelodysplastic syndrome (MDS) is a type of blood cancer that mostly affects older individuals. Invasive tests to obtain bone samples are used to diagnose MDS, and many patients do not respond to therapy or stop responding to therapy in the short term. Less invasive tests to help diagnose, prognosticate, and predict the response of patients are a felt need. Factors that influence gene expression without changing the DNA sequence (epigenetic modifiers), such as DNA methylation, micro-RNAs and long non-coding RNAs, play an important role in MDS, are potential biomarkers, and may also serve as targets for therapy. Abstract Myelodysplastic syndrome (MDS) is a malignancy that disrupts normal blood cell production and commonly affects our ageing population. MDS patients are diagnosed using an invasive bone marrow biopsy, and high-risk MDS patients are treated with hypomethylating agents (HMAs) such as decitabine and azacytidine. However, these therapies are only effective in 50% of patients, and many develop resistance to therapy, often resulting in bone marrow failure or leukemic transformation. Therefore, there is a strong need for less invasive diagnostic tests for MDS, novel markers that can predict response to therapy and/or patient prognosis to aid treatment stratification, as well as new and effective therapeutics to enhance patient quality of life and survival. Epigenetic modifiers such as DNA methylation, long non-coding RNAs (lncRNAs) and micro-RNAs (miRNAs) are perturbed in MDS blasts and the bone marrow micro-environment, influencing disease progression and response to therapy. This review focusses on the potential utility of epigenetic modifiers in aiding diagnosis, prognosis, and predicting treatment response in MDS, and touches on the need for extensive and collaborative research using single-cell technologies and multi-omics to test the clinical utility of epigenetic markers for MDS patients in the future. Introduction Myelodysplastic syndrome (MDS) is a malignant disease characterised by inefficient haematopoiesis and cytopenias [1]. It commonly affects the ageing population (>65 yrs) and is predicted to rise in incidence. There is a high economic burden associated with MDS due to high costs of care. Patient outcomes also differ markedly with TP53 allelic state: outcomes are worse with multiple TP53 mutations than with wild-type or mono-allelic mutations of TP53, and multiple mutations in TP53 were able to predict outcomes independent of the revised international prognostic scoring system. MDS patients carry a median of 9 somatic mutations within the exome, including both driver and passenger mutations, which is considerably fewer than in most solid cancers [30]. More than 30 driver mutations have been identified in MDS; typically patients harbour 2 or 3 driver mutations, with the number increasing with risk severity [20,31,32]. These driver genes can be categorised into distinct functional pathways involving DNA methylation, RNA splicing, chromatin modification, transcription, signal transduction and others. Some of the most frequently mutated genes in MDS belong to pathways such as RNA splicing (SF3B1, SRSF2, U2AF1, U2AF2, ZRSR2, SF1, and SF3A1) or epigenetic regulation [20], the latter being involved in DNA methylation (DNMT3A, TET2, IDH1/IDH2) or chromatin/histone modification (MLL2, EZH2 and other PRC2 components, ARID2 and ASXL1) [20]. This highlights the importance of epigenetic changes such as DNA methylation and histone modifications in the pathogenesis of MDS.
Epigenetic Modifiers Cancer is typically defined by the accumulation of genetic mutations that lead to uncontrolled cell division. However, other factors such as epigenetics are known to also play a pivotal role in cancer initiation and progression [33]. Epigenetics, which translates as the study of factors "on top of" (epi) genes, describes mechanisms that can modify gene expression without changing the DNA sequence itself [34]. Therefore, epigenetic factors act as a master switch, having the capability to regulate gene expression. While genetic modifications consist of mutations in tumor suppressor genes and oncogenes, epigenetic modifications are typically more complex and comprise changes in DNA methylation, chromatin structure, histone modifications, nucleosome remodelling, and non-coding RNAs [33]. During the development and progression of MDS, a myriad of epigenetic changes has the propensity to affect gene expression and cellular function, many of which have untapped potential in aiding clinical decision making throughout the course of a patient's journey with MDS. DNA Methylation DNA methylation is the addition of a methyl group (-CH3) to carbon 5 of cytosines that are followed by a guanine (CpG sites), which results in 5-methylcytosine (5mC) (Figure 1A) [35]. This reaction is catalysed by a family of enzymes known as DNA methyltransferases (DNMTs), which include DNMT1, DNMT3A and DNMT3B [35]. DNMT3 isoforms are responsible for adding new methylation marks to DNA (de novo methylation) at loci which were previously unmethylated, whereas DNMT1 is known primarily as the maintenance enzyme, since it is responsible for maintaining methylation marks on the newly synthesised strand after DNA replication (Figure 1B) [35,36]. However, all three function together to maintain methylation marks during DNA replication, particularly in CpG-dense regions [37]. The removal of methylation marks is initiated by ten-eleven translocation (TET) enzymes, namely TET1 and TET2, by oxidising 5mC to 5-hydroxymethylcytosine (5hmC), which can then undergo base-excision repair (BER), converting back to an unmodified cytosine (Figure 1) [35]. Methylation predominantly occurs at CpG-poor regions and at repetitive elements, whereas CpG-dense regions (termed CpG islands) usually lack methylation in normal somatic cells [35,38]. DNA methylation in gene promoters influences transcription factor binding and chromatin structure [39,40], leading to transcriptional repression, as methylation blocks interactions between transcription factors and the DNA or facilitates binding of repressive factors, resulting in decreased gene expression [41,42]. In contrast, DNA methylation in gene bodies influences transcriptional activation [40] and RNA splicing [43,44], leading to increased gene expression. Therefore, changes in DNA methylation can impact a multitude of genes and thus cellular functions. It is not surprising that mutations in DNMTs and TETs are observed in cancers, particularly MDS and AML. These mutations in the DNA methylation machinery are known to influence the global DNA methylation changes observed in cancers; e.g., DNMT3A mutations in AML are associated with genome-wide hypomethylation [45,46]. Most solid malignancies display global hypomethylation with hypermethylation present at specific sites in the genome [47]. Interestingly, MDS is typically characterised by global hypermethylation, and this may explain why MDS patients respond well to HMAs [48].
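As a concrete handle on how per-site methylation is quantified in such studies, here is a toy sketch. The counts are invented, and real pipelines (arrays or bisulfite sequencing) involve normalisation steps omitted here.

```python
# Toy sketch: per-CpG methylation "beta" values from methylated/unmethylated
# read counts (invented numbers). beta = 0 means unmethylated, 1 fully methylated.
sites = {  # CpG site -> (methylated reads, unmethylated reads)
    "cg_promoter_1": (90, 10),
    "cg_promoter_2": (75, 25),
    "cg_genebody_1": (20, 80),
}

betas = {site: m / (m + u) for site, (m, u) in sites.items()}
for site, beta in betas.items():
    print(f"{site}: beta = {beta:.2f}")

# A crude global methylation level is the mean beta across sites; the global
# hypermethylation described for MDS above would appear as an upward shift
# of this mean relative to healthy controls.
print(f"mean beta = {sum(betas.values()) / len(betas):.2f}")
```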
Figure 1. (A) A methyl group is added to carbon 5 of a cytosine ring by DNMT enzymes, giving rise to 5mC. This methyl group is oxidised by TET enzymes, resulting in 5hmC, which undergoes further oxidation and base-excision repair (BER) to convert back to an unmodified cytosine. (B) De novo methylation is predominantly carried out by the DNMT3A/B enzymes; during DNA replication, methylation marks present on the template strand ("old" DNA) are copied onto the daughter strand ("new" DNA), mainly by the DNMT1 enzyme. TET1 and TET2 enzymes instigate DNA demethylation via oxidation. Non-Coding RNAs Up until a couple of decades ago, 98% of the genome within each cell was considered "junk" DNA due to its non-coding nature, i.e., it does not code for any proteins [49][50][51]. Since then, it was discovered that these areas of the genome harbour non-coding RNAs (ncRNAs) that act like a switch to turn genes on or off, hence regulating gene expression. There are different classes of non-coding RNAs, typically grouped by size, with small ncRNAs such as micro-RNAs (miRNAs) and piwi-interacting RNAs (piRNAs), and larger ncRNAs such as long non-coding RNAs (lncRNAs) [51]. miRNAs and lncRNAs, in particular, have been shown to play functional roles in diseases such as cancer [51]. Micro-RNAs miRNAs are small ncRNAs (~22 nucleotides) that are found in plants and animals [51]. They contain a "seed" region (~6-8 nucleotides) that binds to the 3′ UTR of target mRNA transcripts via sequence complementarity, resulting in mRNA decay or inhibition of translation [52,53]; a toy sketch of this seed rule is given at the end of this section. Therefore, miRNAs function in post-transcriptional gene regulation, which results in decreased protein expression of the target mRNA. Each miRNA can potentially target hundreds of mRNAs, some of which may belong to the same pathways or pathways with similar functions [52]. Many miRNAs have been shown to play a pivotal role in cancer and cancer progression, in which changes to the expression of specific miRNAs have led to the disruption of key pathways or proteins that are important in cancer biology [54]. For example, miRNAs that target tumour suppressor genes are typically upregulated in cancers, as this prevents expression of tumour suppressors and supports the growth of cancers [53,55]. Conversely, many miRNAs that target oncogenes are commonly downregulated to allow the expression of oncoproteins that drive cancer initiation and progression [53,55]. The key miRNAs which have been described to have a role in the pathophysiology of MDS are discussed below. Long Non-Coding RNAs LncRNAs are long non-coding transcripts (>200 nucleotides) that do not encode proteins [51]. There are potentially more than 15,000 lncRNAs expressed in the human genome, and they have been shown to function in many ways [56]. LncRNAs can recruit different components of the chromatin remodelling complex to change chromatin organisation [57,58]. They can act as a sponge by binding to miRNA via base complementarity and thereby reduce the effects of the miRNA, and they can enhance or inhibit transcription [57,58]. LncRNAs can affect cellular functions via a range of mechanisms, and it is no surprise that these molecules are exploited in different types of cancers. They have been shown to modulate cancer cell proliferation, migration, immune escape and apoptosis, among other common features of cancer progression [51,59]. For example, a lncRNA that acts as a sponge for an anti-tumour miRNA (one targeting oncogenes) would result in upregulated expression of oncogenes, which promotes tumour initiation and/or progression. Indeed, this has been shown recently in gastric cancer with the lncRNA UCA1 [60].
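Here is the promised toy sketch of the seed-matching rule: a canonical target site in a 3′UTR is, roughly, the reverse complement of miRNA nucleotides 2-8. The let-7a sequence is a well-known public one; the UTR fragment is invented for illustration.

```python
# Toy sketch of miRNA seed matching: find the reverse complement of the
# miRNA seed (positions 2-8, 1-based) in a 3'UTR. Sequences are RNA (A/U/G/C).
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_site(mirna):
    """Reverse complement of the seed region, i.e. the sequence a target site contains."""
    seed = mirna[1:8]                  # nucleotides 2-8
    return "".join(COMPLEMENT[b] for b in reversed(seed))

mirna = "UGAGGUAGUAGGUUGUAUAGUU"   # hsa-let-7a
utr = "AAGCUACCUCAGGUU"            # invented 3'UTR fragment

site = seed_site(mirna)            # for let-7a this is "CUACCUC"
print(f"seed site {site} found in UTR: {site in utr}")
```

Real target prediction tools layer conservation, site context and pairing energy on top of this basic complementarity rule.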
Epigenetic Modifiers That Aid in the Diagnosis of MDS

Given the importance of DNA methylation and ncRNAs in cancer biology, the sections below discuss epigenetic modifiers in MDS (changes in DNA methylation, miRNAs and lncRNAs) and how they may aid in MDS diagnosis, prognosis and the prediction of treatment response.

DNA Methylation as a Diagnostic Tool for MDS

Diagnostic testing is usually initiated once patients have become symptomatic and cytopenias are prominent. Some of the most frequently mutated genes in MDS are members of the DNA methylation machinery, such as DNMT3A, TET2, IDH1 and IDH2 [61]. Mutations in DNMT3A and TET2 have been observed in clonal haematopoiesis and early in MDS [62,63]. These mutations often lead to global changes in DNA methylation or pronounced changes at specific genomic sites. Mild cytopenias without overt features of myelodysplasia within the bone marrow are now increasingly recognised, such as clonal cytopenias of uncertain significance (CCUS) [64]. Whether DNA methylation signatures have the potential to aid in the recognition of pre-MDS states such as CCUS or clonal haematopoiesis of indeterminate potential (CHIP) needs to be determined by prospective studies [64]. Analysis of 5mC in bone marrow mononuclear cells from MDS patients using immunocytochemistry showed that ~85% of cases displayed significantly higher levels of 5mC compared to control patients with anaemia of chronic disease [65]. This suggests that detection of elevated 5mC levels, indicative of global hypermethylation, may be a useful tool in diagnosing MDS. Indeed, DNA hypermethylation (especially hypermethylation at enhancers) is commonly observed in MDS, particularly in cases involving TET2 loss-of-function mutations [66,67]. DNA methylation changes at specific sites in the genome have also been observed in MDS (Table 1). It was recently shown that CpG island methylation associated with six genes (ABAT, DAPP1, FADD, LRRFIP1, PLBD1, and SMPD3) in bone marrow cells is a marker of MDS, and could diagnose MDS with 95% specificity and 91% sensitivity [68]. Another group has also shown significantly increased ABAT methylation and decreased ABAT gene expression in MDS compared to controls [69]. Significantly higher gene-specific promoter methylation of SOX7 (55% of patients) [70], ID4 [71], SOX17 [72], DLX4 [73], GPX3 [74], DLC-1 [75], CDKN2A/B [76], and WNT antagonists (sFRP1/2/4/5, DKK-1/3) [77] has also been found in MDS. Moreover, significantly higher ID4 gene promoter methylation could distinguish MDS from aplastic anaemia, a distinction which can be challenging, particularly for MDS with a low blast count, hypoplasia and/or a normal karyotype [71]. Hypomethylation of the let-7a-3 promoter has also been observed in MDS patients compared to controls [78]. Overall, global DNA methylation levels and methylation at specific sites show promise as biomarkers for the diagnosis of MDS. However, for DNA methylation markers to be utilised in MDS diagnosis, they would need to be validated in patient cohorts and ideally in peripheral blood mononuclear cells. The latter would provide a less invasive test to diagnose MDS using peripheral blood markers, without the need for frequent, invasive bone marrow aspirates.

Table 1. Epigenetic modifiers that may aid in the diagnosis of myelodysplastic syndrome (MDS).
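To make sensitivity/specificity figures like those quoted for the six-gene panel concrete, the sketch below fits a logistic classifier to synthetic promoter methylation (beta) values and evaluates it the same way; the data and effect sizes are invented for illustration, and the six features merely stand in for ABAT, DAPP1, FADD, LRRFIP1, PLBD1 and SMPD3.

```python
# Sketch of evaluating a multi-gene methylation signature as a diagnostic
# classifier. Beta values are synthetic; this is not the published model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
n = 200
labels = rng.integers(0, 2, n)                      # 0 = control, 1 = MDS
# Assumed effect: MDS cases carry higher promoter methylation (beta values).
betas = np.clip(rng.normal(0.3 + 0.25 * labels[:, None], 0.12, (n, 6)), 0, 1)

X_tr, X_te, y_tr, y_te = train_test_split(betas, labels, random_state=1)
clf = LogisticRegression().fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```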
miRNA and lncRNA Signatures for the Diagnosis of MDS

The expression levels of ncRNAs such as miRNAs and lncRNAs are dysregulated in MDS, and may therefore also aid in diagnosis (Table 1). Many of the genes listed in the above-mentioned 6-gene methylation signature are targets of miRNAs and lncRNAs whose expression changes in MDS. One such study found 72 miRNAs and 214 lncRNAs with significant differential expression in MDS, together with gene expression and methylation changes compared to healthy controls, forming an integrative network that may aid in the diagnosis of MDS [103]. In addition, overexpression of the DLK1-DIO3 region, which harbours a large miRNA cluster and the MEG3 (lncRNA) gene promoter, was observed in 50% of patients before treatment with AZA, in conjunction with the diagnosis of AML with myelodysplasia-related changes [79]. Therefore, overexpression of this miRNA cluster before treatment may aid in the diagnosis of AML with myelodysplasia-related changes in higher-risk MDS patients. Another group found a co-expression signature containing 6 differentially expressed lncRNAs that were co-expressed with ABAT in MDS patients [96]. The expression of one of these lncRNAs (lncENST00000444102) and of ABAT was significantly downregulated in MDS [96].

miRNAs

Studies over the last decade have started to provide evidence for the potential clinical utility of miRNA expression profiling in the diagnosis of MDS (Table 1). Early studies found miRNA signatures that discriminated MDS from healthy controls, such as miR-378 [80], miR-632 [80], miR-636 [80] and let-7 family members [81]. miRNA expression profiling has also been able to discriminate between risk groups [81,84] and between MDS with chromosomal alterations and MDS with a normal karyotype [104]. A higher percentage of miRNAs has also been observed in low-risk MDS, compared to controls and high-grade MDS [82]. More recently, increased expression of haematopoiesis-related miRNAs (miR-34a, miR-125a and miR-150) was observed in MDS, and higher expression of miRNAs clustered on 14q32 was found in early MDS [83]. Another area of interest involving miRNAs is their presence in extracellular vesicles (EVs) in the plasma of MDS patients. EVs, such as exosomes, contain cargo that consists of small RNAs and miRNAs that can be delivered to cells via intercellular communication [105]. Two recent studies have explored the expression of miRNAs in EVs or exosomes in MDS patients. Enjeti et al. 2019 [94] observed significantly higher numbers of small RNAs and miRNAs in EVs from the plasma of red-cell transfusion-dependent MDS patients, with upregulated expression of miR-548j and miR-4485, and downregulation of miR-28 and let-7d. Another group found 21 exosomal miRNAs strongly associated with MDS [95]. They also found 7 miRNAs that were strongly associated with both MDS and severe aplastic anaemia, such as miR-378i (AUC 0.99), miR-574-3p (AUC 0.87), miR-196a-5p (AUC 0.85), miR-3200-3p (AUC 0.83) and miR-196b-5p (AUC 0.79) [95]. Therefore, although not yet routinely utilised in the clinic, exosomal miRNAs may prove to be a useful tool in the diagnosis of MDS.

lncRNAs

Hypermethylation of the MEG3 gene promoter was observed in 35% of MDS cases in 2010, in what was the first study implicating a lncRNA in MDS [106]. Since then, more studies have analysed the expression of specific lncRNAs, and there has also been a shift towards exploring the global profile of lncRNA expression in MDS (Table 1).
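Before moving to global lncRNA profiles, it may help to make the AUC values quoted above for the exosomal miRNAs concrete: an AUC is the probability that a randomly chosen case ranks above a randomly chosen control on the marker. A minimal sketch on synthetic expression values (not patient data) follows; the effect size is an arbitrary assumption.

```python
# Sketch of the AUC metric quoted for exosomal miRNAs: how well a marker's
# expression separates cases from controls. Values are synthetic stand-ins
# for, e.g., exosomal miR-378i levels.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
is_mds = np.repeat([0, 1], 50)                 # 50 controls, 50 MDS cases
# Assumed effect size: higher exosomal miRNA levels in MDS.
expression = rng.normal(1.0 + 1.5 * is_mds, 0.8)
print(f"AUC = {roc_auc_score(is_mds, expression):.2f}")  # 1.0 = perfect marker
```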
Knowledge of global changes in lncRNAs is important to better understand how they influence cancer cell functions, given their complex modes of action and their potential to interact with multiple targets. An interesting finding from a 2013 study was that conditional deletion in mouse hematopoietic cells of the lncRNA XIST, which is required for X chromosome inactivation during embryogenesis, led to a highly aggressive mixed MDS and MPN phenotype with complete penetrance [107]. This suggests that the lncRNA XIST protects hematopoietic cells from malignancy. Since these early studies, more lncRNAs have been found to display deregulated expression in MDS using global profiling of CD34+ BM cells from MDS patients, including linc-BDH1-1, linc-FAM75A7-7, linc-HHLA2-2, linc-JMJD1C-3, linc-PRKD1-2 and linc-RPIA [98], as well as TC07000551.hg.1, TC08000489.hg.1, TC02004770.hg.1, and TC03000701 [99]. Overexpression of CCAT2 was also observed in MDS patient CD34+ BM cells and mononuclear PB cells compared to healthy age-matched controls [100]. In addition, increased expression of a novel lncRNA, LOC101928834, was found in MDS and AML, and could discriminate MDS-RAEB patients from controls (AUC 0.9048) [101]. Significantly decreased expression of LEF1-AS1 has also been shown in MDS compared to healthy controls [97]. Lastly, with the recent advent of single-cell technologies, gene expression profiling of lncRNAs in single cells from MDS patients (CD34+ aneuploid cells) has started to highlight deregulated lncRNAs and the pathways they are involved in. This study found 590 downregulated lncRNAs, which are involved in immune response, cellular response and gene expression, and DNA damage response [102]. Conversely, the 372 upregulated lncRNAs were associated with cell metabolism and cell signalling [102]. The functional roles of lncRNAs and their utility as diagnostic biomarkers in MDS have yet to be thoroughly tested and confirmed.

DNA Methylation Signatures That Predict Prognosis

DNA methylation changes have also been associated with prognosis in MDS patients of various sub-groups, particularly with regard to overall survival (OS) (Table 2). High global methylation levels across the genome have been associated with significantly lower OS and increased progression to AML, although on multivariate analysis global methylation was not an independent variable for OS or progression [108,109]. A recent publication grouping MDS patients into DNA methylation clusters has identified subtypes that are genetically distinct and correlate with OS [110]. In addition, hypomethylation of CD93 in MDS patients was associated with shorter OS [110], and MDS patients with let-7a-3 promoter hypomethylation (23.2% of patients) had significantly shorter OS than those without hypomethylation [78]; the latter was an independent prognostic risk factor for low-risk MDS patients [78]. Interestingly, hypomethylation of DNMT3A was associated with shorter OS, and this was confirmed to be an independent prognostic factor in MDS [111].

Table 2. Epigenetic modifiers associated with MDS patient prognosis.

DNA methylation levels also correlate with MDS prognostic risk groups. A high methylation index, which summarises global methylation levels in promoters and gene bodies, was significantly increased in higher-risk IPSS-R MDS patients [118]. FOXO3 and CHEK2 promoter methylation was also associated with high-risk parameters, with no methylation at these sites in healthy controls [119].
Moreover, SHP-1 [120], DLC-1 [75], HRK [121] and SOX17 [72] promoter hypermethylation has also been shown to associate with high-risk MDS. Methylation at a specific site in the genome has also been linked to a better prognosis in MDS: hypermethylation in a region preceding the MEG3 gene before the commencement of AZA therapy, seen in 50% of MDS patients, was associated with longer PFS [79]. Therefore, DNA methylation changes in the regulatory regions of specific genes may hold promise for predicting patient prognosis in MDS.

miRNAs That Predict Prognosis

Associations between miRNA expression and patient risk groups, progression, and survival at different stages of disease have also been described in MDS (Table 2). The expression of a 10-miRNA signature, and the expression of miR-15a and miR-16, have been shown to associate closely with prognostic scoring, permitting discrimination between lower- and higher-risk MDS cases [81,84]. Increased expression of miR-181 family members was also observed in higher-risk MDS patients, and this overlapped with AML [81]. Moreover, the expression of 5 miRNAs, including three members of the miR-181 family, was able to identify MDS patients at higher risk of progression [122]. Differences in the expression of miRNAs between risk groups have also been observed. Higher-risk MDS patients displayed decreased expression of miR-17-5p and miR-20a compared to low-risk patients, and let-7a was underexpressed in patients with intermediate- or high-risk MDS [123]. Lower expression of miR-21, miR-126, miR-146b-5p and miR-155 was found in IPSS low-intermediate risk MDS compared to higher-risk patients [93]. In addition, the circulating levels of miR-27a-3p, miR-150-5p, miR-199a-5p, miR-223-3p and miR-451a were decreased in higher-risk MDS, and this was linked to prognosis. The expression of circulating miRNAs has also been linked to PFS and OS in MDS. Recently, a small ncRNA signature in EVs containing low levels of miR-1237-3p and high levels of miR-548av-5p was associated with improved OS in MDS [83]. Moreover, lower expression of let-7a and miR-16 was significantly associated with PFS and OS [129], although only let-7a was a strong independent predictor of OS [129]. A 7-miRNA signature is also an independent predictor of survival in MDS, with 75% accuracy, and performs better than traditional risk models [130]. More recently, miR-451a expression was shown to be an independent predictor of PFS, and miR-223-3p expression was associated with significantly better OS [128].

lncRNAs That Predict Prognosis

There are very few reports to date investigating the link between lncRNA expression and prognosis in MDS (Table 2). Overexpression of the MEG3 lncRNA was associated with poor prognosis in 50% of MDS cases, and after AZA therapy MEG3 expression levels decreased, becoming closer to those of healthy controls [79]. Moreover, AML and MDS patients with higher HOXB-AS3 expression displayed significantly shorter OS [131]. In MDS patients this equated to an adverse prognosis, with a median OS of 14.6 months with high HOXB-AS3 expression compared to 42.4 months [131]. Subgroup analysis showed that high HOXB-AS3 expression could only predict poor prognosis in the lower-risk MDS group [131]. High serum expression of the lncRNA KCNQ1OT1 [132] and high expression of LOC101928834 [101] have also been shown to associate with poor survival in MDS. Lastly, MDS patients with a modelled high lncRNA score displayed shorter OS and were more likely to progress to leukemia [99]. Therefore, increased expression of lncRNAs appears to negatively influence patient prognosis in MDS.
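Survival comparisons like the HOXB-AS3 result above (median OS of 14.6 vs 42.4 months) are typically made with Kaplan-Meier estimates and a log-rank test. The sketch below reproduces that workflow on synthetic exponential survival times calibrated to those medians; it uses the third-party lifelines package, and the "patients" are simulated, not the published cohort.

```python
# Kaplan-Meier estimates plus a log-rank test for high vs low expression of a
# marker. Survival times are synthetic exponentials, censored at 60 months.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(3)
t_high = rng.exponential(14.6 / np.log(2), 60)   # median ~14.6 months
t_low = rng.exponential(42.4 / np.log(2), 60)    # median ~42.4 months

FOLLOW_UP = 60.0                                 # administrative censoring
d_high, e_high = np.minimum(t_high, FOLLOW_UP), t_high < FOLLOW_UP
d_low, e_low = np.minimum(t_low, FOLLOW_UP), t_low < FOLLOW_UP

for name, d, e in [("high expression", d_high, e_high),
                   ("low expression", d_low, e_low)]:
    km = KaplanMeierFitter().fit(d, e, label=name)
    print(f"{name}: median OS ~ {km.median_survival_time_:.1f} months")
print("log-rank p =", logrank_test(d_high, d_low, e_high, e_low).p_value)
```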
Epigenetic Modifiers as Biomarkers for Response to HMAs in MDS

HMAs such as DAC and AZA are used for the treatment of high-risk MDS patients. Although the use of HMAs has tripled survival rates for MDS patients, fewer than 50% of patients respond. Biomarkers that can accurately predict response to HMAs are therefore important, and given the roles that DNA methylation, miRNAs and lncRNAs play in MDS pathogenesis, these are potential candidates.

DNA Methylation as a Biomarker for Treatment Response in MDS

MDS patients with mutations in epigenetic machinery such as DNMT3A, TET2, IDH1 and IDH2 tend to respond well to HMA therapy [133,134]. These mutations tend to occur with other mutations, and typically remain stable during treatment with AZA, irrespective of treatment response [135]. MDS patients with a TET2 mutation appear to respond better to HMAs, particularly if they do not have ASXL1 clonal mutations [136,137]. In terms of global DNA methylation levels, a decrease in genome-wide methylation during HMA therapy, relative to baseline levels, has been shown to predict a better response to HMAs [112]. In contrast, another study observed stable global methylation levels, as assessed by LINE-1 methylation, before and after AZA treatment in MDS patients who responded to AZA [138]. This discrepancy could be due to differences in the method of DNA methylation analysis, length of treatment, and patient cohort. Although global DNA methylation levels do not always appear to predict response to therapy, DNA methylation levels at specific genomic sites have been linked to treatment response (Table 3). A significant reduction in CpG methylation of EZH2 (promoter) and NOTCH1 (intragenic) was shown at best haematologic response in MDS patients who responded to AZA [138]. Therefore, hypermethylation at these sites before treatment and subsequent hypomethylation during treatment may predict response to AZA therapy. High methylation, and hence low expression, of cytidine deaminase (CDA; detoxification of AZA) [139], PLCB1 (cell signalling transduction) [140][141][142][143], or CDKN2B (cell cycle regulator) [144] before treatment, coupled with decreased methylation and increased gene expression following AZA, may predict a better clinical or hematologic response. However, another study found that lower baseline levels of CDKN2B methylation occurred in AZA responders, and although AZA reduced methylation, this did not correlate with treatment response [145]. Methylation of BCL2L10 (apoptotic regulator) [146] may also predict response to HMAs; however, its predictive value is unclear. More recently, a reduction of DLC-1 (Rho GTPase activator) methylation following AZA treatment was also associated with a better response to AZA in MDS patients [147]. Increased accumulation of the deoxyribonucleoside form of AZA (5-AZA-CdR) in DNA [148] and less incorporation of AZA into RNA [149] have been associated with better treatment response. Some of the non-responders to AZA failed to incorporate adequate levels of 5-AZA-CdR into DNA, whereas others showed incorporation and DNA hypomethylation but no clinical benefit [148]. Therefore, response appears to depend not exclusively on incorporation into DNA and the extent of DNA demethylation, but also on the regions of the genome that have undergone demethylation.
Moreover, no significant differences in methylation (promoter and gene body) were observed before AZA treatment in MDS patients, regardless of subsequent treatment response [118]. Sequential assessment of whole-blood DNA methylation levels in MDS patients treated with AZA found that AZA responders showed significantly higher recovery of hypomethylated DNA by the time of the next course of AZA compared to non-responders, who did not display normalised methylation levels [118].

Table 3. Epigenetic modifiers that can predict response to hypomethylating agents (HMAs) in MDS.

In summary, methylation studies have shown global and gene-specific promoter hypermethylation in MDS (Table 3), but there is conflicting evidence regarding the relationship between the degree of global demethylation following hypomethylating treatment and hematologic response. Research is starting to focus on assessing methylation changes not just in promoter regions but also in other genomic regions (gene bodies, intergenic and enhancer regions). Therefore, DNA methylation changes at several specific genomic sites may in future help predict response to HMAs in MDS patients.

ncRNAs as Biomarkers That Predict Response to HMA Therapy in MDS

The expression of miRNAs in serum or in blasts extracted from MDS patients may also provide useful biomarkers for predicting response to HMA therapy (Table 3). Serum levels of miR-21 have been shown to predict response to HMAs (ROC AUC 0.648), with low baseline expression observed in responders, and this was associated with improved overall response rate (ORR) and PFS [150]. Decreased expression of miR-100-5p and miR-133b and increased expression of miR-17-3p have also been found to predict a better ORR [124]. Moreover, a plasma miRNA signature (miR-423-5p, miR-126-3p, miR-151a-3p, miR-125a-5p, miR-199a-3p) was recently shown to predict response to AZA [83]. In contrast, MDS patients with low expression of miRNAs that regulate DNMT1, such as miR-126*, displayed significantly lower response rates, higher relapse rates, and shorter PFS and OS [127]. Decreased expression of miR-126* over time was also associated with an increased risk of secondary resistance to AZA [127]. Therefore, the expression of specific miRNAs at diagnosis may aid in stratifying patients into treatment groups, and miRNA profiling throughout treatment may also predict response and resistance to HMAs. Similar to miRNAs, lncRNAs have the potential to be used as biomarkers for treatment response in MDS (Table 3). However, there is only one study to date that has found lncRNAs associated with response to HMAs: increased expression of the lncRNAs PU.1 and JPD2 was associated with a favourable clinical response to AZA [151]. More studies focusing on ncRNAs are needed to determine which of them may help predict response to HMAs and patient outcomes such as PFS and OS.

Epigenetic Modifiers in the MDS Bone Marrow Micro-Environment (BMME)

The bone marrow microenvironment (BMME) consists of an array of cell types such as mesenchymal stromal cells (MSCs), bone progenitor cells, endothelial cells, neurons and immune cells [152,153]. Many of these cell types play a supportive role in normal haematopoiesis and show abnormal function in disease states such as MDS [154][155][156][157]. While most of the research in MDS has focused on myeloid blasts, other cell types in the BMME may not only be dysfunctional in MDS but may also be targets for novel therapies and/or contain biomarkers for diagnosis, prognosis, and prediction of response to therapy.
The BMME has recently started to gain more attention in terms of its role in the pathogenesis of MDS. DICER1 gene deletion (which inhibits DICER-mediated miRNA processing) in bone marrow osteoprogenitor cells in mice induced MDS- and AML-like haematological characteristics [23], highlighting the importance of the BMME, and specifically of miRNAs in the BMME, in MDS.

DNA Methylation in BMME

Widespread changes have been observed in MSCs from the bone marrow of MDS patients, including chromosomal abnormalities [158,159], dysfunction [158,159], high levels of inflammatory cytokines [159,160] and aberrant DNA hypermethylation [161][162][163] compared to healthy controls, with hypermethylation occurring preferentially outside of CpG islands [162]. Following AZA treatment, MSCs from MDS patients (including high-risk patients) display significantly decreased DNA methylation, regardless of haematological response [161,162]. This is interesting because it shows that AZA can decrease methylation in cell types other than blasts, particularly MSCs, which have a low proliferative rate. Moreover, only MSCs from MDS patients who reach complete remission seem to restore a normal phenotype and function comparable to healthy donor MSCs [161]. MSCs that fail to respond to HMAs are associated with MDS patients with rapid and adverse disease progression [163]. Hypermethylation of FRZB has also been shown to decrease its expression in MDS stroma, leading to activation of WNT/β-catenin signalling in CD34+ cells from advanced cases of MDS, and is associated with adverse prognosis (Figure 2) [162]. Methylation of the SPINT2/HAI-2 gene in stromal cells was shown to cause low expression, leading to enhanced adhesion and survival of CD34+ cells, potentially via interactions with specific integrins (Figure 2) [164]. Treatment with AZA increased SPINT2/HAI-2 gene expression in MDS stromal cells but not in stromal cells from healthy donors (Figure 2) [164]. Therefore, DNA methylation in stromal cells plays an important role in the crosstalk between MDS blasts and their BMME, influencing cancer cell survival and progression. Methylation levels of the PD-1 promoter in CD8+ T-cells have been shown to influence response to HMAs. Demethylation of PD-1, and subsequent PD-1 expression, was observed in peripheral blood T-cells during AZA treatment (Figure 2) [165]. This correlated significantly with a worse ORR and a trend towards shorter OS. In addition, patients who did not respond to AZA displayed significantly higher baseline PD-1 methylation levels compared to healthy controls [165]. Therefore, HMAs influence PD-1 expression in T-cells and the associated immune response against MDS blasts, and these patients may benefit from a PD-1 pathway inhibitor to help reactivate the immune system.

miRNA and lncRNA in BMME

There are limited studies examining the role of miRNAs and lncRNAs in MDS-MSCs in diagnosis, prognosis, and treatment response; this field is still in its infancy. However, there are some reports of differential expression in MDS-MSCs and altered functions. Global downregulation of miRNA expression was observed in MDS-MSCs from patients compared to healthy controls [166]. Three miRNAs (miR-155, miR-181a and miR-222) had significantly decreased expression in MDS-MSCs compared to healthy donors; these are known to target DICER1 and DROSHA, members of the canonical miRNA biogenesis pathway [166]. Interestingly, DICER1 and DROSHA expression was itself decreased in MDS-MSCs [166].
Therefore, changes in miRNA expression in MSCs may influence hematopoietic cell functions, as these cell types interact directly and via microvesicles. MSCs from MDS patients have also shown impaired proliferation and differentiation, together with differential miRNA expression, compared to healthy controls [168]. DICER1, miR-30d-5p, miR-222-3p and miR-30a-3p displayed significantly decreased expression, and miR-4462 was overexpressed, in MDS-MSCs [168]. Exosomes and microvesicles are involved in intercellular communication via the release of their cargo once cell uptake has occurred [168]. MSCs from MDS patients showed overexpression of miR-10a and miR-15a within their exosomes, and these miRNAs were incorporated into CD34+ cells, modifying the expression of MDM2 and p53 and leading to increased CD34+ cell viability and clonogenic capacity (Figure 2) [168]. Therefore, exosomes containing miRNAs released from MDS-MSCs are capable of being incorporated into hematopoietic progenitor cells and influencing cellular functions. This provides another mechanism of crosstalk between the BMME and MDS blasts/progenitor cells, and suggests that the BMME may be a useful source of markers for diagnosis and prognosis, as well as of novel therapeutic targets for MDS.

Epigenetic Modifiers in MDS: Conclusions and Future Directions

There are several DNA methylation, miRNA and lncRNA changes in MDS that may provide benefit in the diagnosis, prognosis and selection of therapies for MDS patients. These markers may also serve as therapeutic targets, leading to the development of novel targeted therapies, and may additionally provide benefit as markers of response to new targeted therapeutics currently being tested in the clinic, such as BCL2 inhibitors and checkpoint inhibitors. Although the markers mentioned in this review show promise as biomarkers for MDS, their applicability in the clinic still warrants further investigation. More importance should be placed on studies with data from large and multiple patient cohorts, on the use of non-invasive methods (serum/serum EVs), and on markers that have displayed a high level of sensitivity and specificity in MDS. Ideally, a clinical test would consist of panels of serum markers: one panel of multiple miRNA and lncRNA markers to assess expression in peripheral blasts and/or EVs, and another panel assessing multiple DNA methylation markers on DNA extracted from peripheral blasts. This would allow non-invasive testing using blood samples for diagnosis, prognosis, and tracking of treatment response. From a simple blood sample, DNA from peripheral blasts and RNA from peripheral blasts and/or EVs could be extracted and applied to next-generation sequencing (targeted amplicon bisulphite sequencing for DNA methylation) or targeted real-time PCR panels (bisulphite PCR for DNA methylation), technologies that are already routinely used in clinical testing of MDS patient samples. Finding robust and reproducible markers for diagnostics and prognostics will ultimately improve clinical management and the appropriate use of resources. Large prospective cohort studies will be needed to establish epigenetic modifiers as clinically useful biomarkers. With the advent of single-cell technologies and multi-omics over the last decade, there is now the opportunity not only to delve deeper into how epigenetic processes collectively contribute to MDS pathogenesis but also to examine the heterogeneity that exists within different cell types in a single patient [169][170][171][172].
This would also involve investigating epigenetic changes in the BMME (MSCs, T-cells) and peripheral blood (exosomes), instead of mainly focusing on MDS blasts within the bone marrow. We envision for the future clinically ...

Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Temperature and Thickness Dependence of the Thermal Conductivity in 2D Ferromagnet Fe3GeTe2

The emergence of symmetry-breaking orders such as ferromagnetism and the weak interlayer bonding in van der Waals materials offers a unique platform to engineer novel heterostructures and tune transport properties like thermal conductivity. Here, we report the experimental and theoretical study of the cross-plane thermal conductivity, κ⊥, of the van der Waals two-dimensional (2D) ferromagnet Fe3GeTe2. We observe an increase in κ⊥ with thickness, indicating a diffusive transport regime with ballistic contributions. These results are supported by theoretical analyses of the accumulated thermal conductivity, which show an important contribution of phonons with mean free paths between 10 and 200 nm. Moreover, our experiments show a reduction of κ⊥ in the low-temperature ferromagnetic phase occurring at the magnetic transition. The calculations show that this reduction in κ⊥ is associated with a decrease in the group velocities of the acoustic phonons and an increase in the phonon-phonon scattering of the Raman modes that couple to the magnetic phase. These results demonstrate the potential of van der Waals ferromagnets for thermal transport engineering.

■ INTRODUCTION

The electric-field control of the conductivity of atomically thin graphene,1,2 shortly afterward extended to NbSe2 and MoS2,3 opened up new possibilities for manipulating material properties in the novel world of two-dimensional (2D) van der Waals (vdW) materials and heterostructures.4 In vdW materials in particular, the extreme bonding anisotropy translates into a giant anisotropy in the thermal transport as well, where the in-plane thermal conductivity κ∥ is much larger than the cross-plane one κ⊥,5 despite the prediction of phonon mean-free paths (mfp) of the order of several tens of nanometers across the weakly bonded planes.6,7 Defects and imperfect layer stacking result in a mixed contribution of ballistic transport (large mfp, coherent phonons) and diffusive transport (small mfp), which greatly reduces the thermal conductivity across the 2D planes.6,8 Thermal transport is a crucial aspect for developing functional devices, which rely on efficient heat dissipation to the base substrate, in a process determined by the thermal conductivity of the material itself and the thermal boundary conductance (TBC) of the interface with the substrate.9,10 A particularly interesting 2D material regarding heat dissipation is the itinerant ferromagnet Fe3GeTe2 (FGT): charge doping through Li+ intercalation modulates its magnetic anisotropy and increases TC up to room temperature,11 while a strong spin-phonon coupling12 produces a significant effect of magnetic ordering on the thermal conductivity, opening the door to gate-tunable 2D thermal devices. First-principles calculations in other 2D magnetic materials, like 2H-VSe2, CrI3, FeX3, and RuX3 (X = Cl, Br, and I),13−16 have also predicted a large change of the thermal conductivity in their magnetically ordered phase, although an experimental confirmation of such a large switching of the thermal conductivity associated with magnetic ordering in 2D materials has been lacking.
In this work, we report experimental measurements combined with a theoretical analysis of the thickness and temperature dependence of the thermal conductivity in FGT. We have observed an increase in the cross-plane thermal conductivity with thickness, characteristic of a mixed ballistic propagation of long-mfp phonons with diffusive transport, as well as a large drop in the thermal conductivity in the magnetically ordered phase. Both effects can be understood through our ab initio analysis of the thermal conductivity based on density functional theory (DFT) calculations.

■ RESULTS AND DISCUSSION

FGT is a 2D itinerant ferromagnet with TC ≈ 200 K; TC decreases with the number of layers, but the magnetic order is retained down to the single-layer limit.17 Neutron diffraction data support a ferromagnetic (FM) order also along the c-axis,18 although theoretical calculations and analysis of experimental magnetic susceptibility suggested an antiferromagnetic (AF) stacking below TC ≈ 152 K.19 From the structural point of view, the material is weakly bonded out of the plane via van der Waals interactions, which facilitates its mechanical exfoliation and the transfer of few-layer-thick flakes to a substrate. The unit cell consists of two vdW planes with hexagonal symmetry, each formed by 3 Fe atomic planes (see Figure 1).

The crystals for this study were exfoliated from larger pieces obtained from HQ graphene (see Supporting Information for further details of the structural and chemical characterization of the samples). DC magnetization data of bulk crystals show that TC ≈ 200 K, as expected for fully stoichiometric crystals (Figure 2a shows how the zero-field-cooled and field-cooled magnetization curves separate from each other at the transition temperature). Temperature-dependent X-ray analysis shows a change in the slope of the c-axis parameter at TC (Figure 2b), but no change in the space group of the crystal accompanies the transition.

Few-layer-thick flakes of FGT were prepared by mechanical exfoliation and transferred to (0001) sapphire substrates using PDMS stamping.20 Transferred FGT flakes have lateral dimensions of tens of microns and thicknesses ranging from 15 to 250 nm (Figure 2c,d; see also Supporting Information). The flakes are always thicker than 5 layers, considered to be the border between 2D and 3D magnetism,21 so that the comparison with the bulk calculations is justified.

Thermal conductivity was measured by Frequency Domain Thermoreflectance (FDTR), using a ≈60 nm thick layer of Au as a transducer22 (see Figure S4 in the Supporting Information). To extract κ and the TBC from the FDTR phase-shift curves, we fitted the most common model, wherein total energy conservation and energy transfer between layers are imposed by a transfer matrix, as explained elsewhere23 and in the Supporting Information. The multiparameter fitting can lead to unrealistic results if several parameters are kept free and the initial guess is distant from the global minimum. To reduce the number of fitting parameters, the thickness of the Au layer was measured by X-ray reflectivity, and its thermal conductivity was estimated from the sheet electrical resistance measured by the van der Pauw method and the Wiedemann-Franz law. The thickness of the FGT flakes was measured by atomic force microscopy (AFM). Heat capacities were taken from the literature,24 confirmed by differential scanning calorimetry (DSC), and kept fixed for each temperature in all fittings.
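The Wiedemann-Franz estimate of the Au transducer conductivity mentioned above is a one-line calculation; the sketch below shows it with an assumed sheet-resistance value, since the paper does not quote one.

```python
# Estimate the Au transducer's thermal conductivity from its measured sheet
# resistance via the Wiedemann-Franz law: kappa = L0 * sigma * T, with the
# film's electrical conductivity sigma = 1 / (R_sheet * t).
# R_SHEET is an illustrative value, not a number reported in the paper.
L0 = 2.44e-8     # Sommerfeld value of the Lorenz number, W Ohm K^-2
T = 300.0        # temperature, K
T_AU = 60e-9     # Au film thickness from X-ray reflectivity, m
R_SHEET = 0.45   # assumed van der Pauw sheet resistance, Ohm/square

sigma = 1.0 / (R_SHEET * T_AU)   # electrical conductivity, S/m
kappa = L0 * sigma * T           # thermal conductivity, W m^-1 K^-1
print(f"kappa_Au ~ {kappa:.0f} W/(m K)")
```

With these assumed numbers the estimate lands near 270 W/(m K), i.e., somewhat below bulk gold, as expected for a sputtered film.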
The thermal conductivity of the substrate was measured and found to agree with values from the literature.25 Since vdW materials present high anisotropy between the in-plane conductivity κ∥ (κxx, κyy) and the cross-plane conductivity κ⊥ (κzz), their values could be treated separately in the model. However, the sensitivity to κ∥ is very low, and it has a negligible influence on the κ⊥ value (see Supporting Information Figure S4 for the sensitivity analysis). Thus, κ∥ = κ⊥ was assumed. In this way, the free parameters in the fittings are reduced to κ⊥ of FGT and the TBCs between Au/FGT, G1, and between FGT/Al2O3, G2. We considered initial values of G1 ≈ 30−40 MW m−2 K−1, similar to Au/MoS2,6 and G2 ≈ 25 MW m−2 K−1, as reported for MoS2/Al2O3 26 and MoS2/SiO2 6 interfaces. On the other hand, the variability of G2 between mechanically transferred flakes may be an important source of error. For that reason, multilayer flakes like those shown in Figure 2c are important to reduce problems associated with the variability of G2, as they allow the measurement of κ⊥ for different thicknesses with the same FGT/sapphire interface (see also Figure S3 in the Supporting Information).

Initial values of κ⊥ of 1 W m−1 K−1, typical for other 2D materials, were used for an estimation of the sensitivity to the different parameters in different frequency ranges. The spot size was varied between a 1/e² diameter of ≈4 and 11 μm to achieve better sensitivity to the TBCs and κ⊥. The fittings shown in Figure 3c to obtain the κ⊥ and TBCs reported in this work were performed from 1 to 50 MHz, where the sensitivity to these parameters is maximum (see Figure S4 in the Supporting Information).

Figure 3a shows the AFM topography of two partially overlapping flakes of thicknesses 22 and 41 nm, respectively. The 30 × 30 μm phase-shift map at 20 MHz shows variations in contrast due to the differences in κ⊥ and TBC. The full frequency-dependent phase-shift spectra for each point marked in (a,b) are presented in Figure 3c,d at different temperatures, demonstrating good sensitivity to thickness and temperature.

The thickness dependence of κ⊥ at room temperature is shown in Figure 4a. To reduce errors from sample preparation and defects, several flakes were measured, and each flake was measured several times; the error bars represented in the figure are thus obtained from the statistical variance. An increase of κ⊥ with sample thickness is observed in this figure, of the order of ≈0.5 W/(m K) over a range of ≈200 nm. Although small, this is of the same order of magnitude as that reported for other van der Waals materials, like MoS2 6 or SnSe2,27 and it is consistent with our DFT calculations for the FM and nonmagnetic (NM) phases (Figure S11 of Supporting Information). The calculated accumulated κ⊥ (Figure 4c) shows that more than 50% of the heat at 300 K is carried by phonons with a mean-free path larger than ≈200 nm, suggesting an important contribution from ballistic phonons along the c-axis, as in other vdW structures.6,28 In the case of pure ballistic transport, phonons propagate without thermal resistance inside the material, so that the total cross-plane resistance R⊥ = Rint + t/κ⊥ reduces to a constant, independent of thickness. However, the measured experimental cross-plane thermal resistance, R⊥, also increases with the thickness (Figure 4b). On the other hand, in a purely diffusive regime, R⊥(t) is linear with a constant slope of 1/κ⊥.8
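Before applying this to the data, note that the accumulated κ⊥ of Figure 4c is, conceptually, just a cumulative sum of per-mode contributions ordered by mean free path. A minimal sketch with placeholder mode data is given below; the real per-mode list would come from the BTE solution (e.g., ShengBTE output), not from this script.

```python
# Accumulated cross-plane thermal conductivity vs phonon mean free path (mfp):
# kappa_acc(L) sums the contributions of all modes with mfp below L. Mode
# contributions and mfps here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
mfp_nm = rng.lognormal(mean=4.0, sigma=1.2, size=5000)   # mode mfps, nm
kappa_mode = rng.random(5000)
kappa_mode *= 2.0 / kappa_mode.sum()                     # normalize to 2 W/(m K)

order = np.argsort(mfp_nm)
mfp_sorted = mfp_nm[order]
kappa_acc = np.cumsum(kappa_mode[order])

for L in (10, 100, 200, 1000):                           # nm
    idx = np.searchsorted(mfp_sorted, L)
    frac = kappa_acc[idx - 1] / kappa_acc[-1] if idx > 0 else 0.0
    print(f"fraction of kappa from modes with mfp < {L} nm: {frac:.2f}")
```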
For FGT, R⊥ increases linearly with thickness above ≈60 nm, giving κ⊥ ≈ 1.9(1) W m−1 K−1 and Rint ≈ 46 m² K/GW, but it deviates from this behavior for thinner samples, with a vanishing resistance as t → 0. The change in slope suggests some thickness-dependent contribution, and although the data in Figure 4b seem to extrapolate to zero, the thinnest samples measured in this work are t ≈ 25 nm, so we cannot exclude a small finite value of R⊥ close to the monolayer limit (note that a residual value as small as ≈10 m² K/GW has been reported for a few monolayers of MoS2).6 We have also measured κ⊥ in the superposition region of the two crystals, point 3 in Figure 3a: κ⊥ is substantially reduced in the overlapping region of total thickness 63 nm (green triangle in Figure 4a). In fact, the phase-shift curve of point 3 can be fitted with two layers, of 22 and 41 nm each, with their corresponding κ⊥, and a high interlayer thermal resistance between both flakes of ≈180 m² K/GW (TBC ≈ 10−12 MW m−2 K−1). The value of the TBC between the two FGT flakes is of the same order of magnitude as reported for interfaces between dissimilar 2D materials, like graphene/MoS2 or MoS2/WSe2,9 although in this case the large interfacial resistance occurs between films of the same composition, without any mass density or compositional mismatch.

Finally, the experimental temperature dependence of κ⊥ is shown in Figure 5a for two different thicknesses (points 1 and 2 in Figure 3a). A reduction of κ⊥ between 25 and 65% occurs below TC in the transition to the magnetic phase. Note that the jump in κ⊥ is clearly observed in the raw phase-shift curves (Figure 3d) and, therefore, cannot be attributed to fitting artifacts.

It is common in magnetic and ferroelectric materials for the formation of domain walls to cause a reduction of thermal conductivity due to phonon scattering at domain boundaries.29,30 However, the κ⊥ obtained is robust to external magnetic fields up to 50 mT, applied with a strong toroidal permanent magnet (see Supporting Information Figures S7 and S8). Based on previous reports,31 this applied field should be enough to switch between stripe domains and uniformly magnetized states; the negligible effect of the magnetic field on κ⊥ indicates that the in-plane magnetic domains are not the cause of the sudden change of κ⊥ at TC. We have also discarded changes in the crystal structure as a cause of the jump in κ⊥, since the powder X-ray diffraction of the original bulk crystal revealed only a small change in the c-axis lattice parameter and thermal expansion, without any crystallographic transformation (Figure 2b). Below TC, magnons could be an additional source of heat flow, providing an increase in the thermal conductivity; however, the opposite trend is found experimentally, suggesting that heat transport by phonons is the dominant effect in this system, at least for κ⊥.
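The κ⊥ ≈ 1.9 W m−1 K−1 and Rint ≈ 46 m² K/GW quoted above follow from a straight-line fit of R⊥(t) = Rint + t/κ⊥ in the linear regime; a sketch of that extraction on synthetic points (not the measured data) follows.

```python
# Extract kappa_perp and the interfacial resistance R_int from the linear part
# of R_perp(t) = R_int + t / kappa_perp. The (t, R) points are synthetic values
# loosely mimicking Figure 4b.
import numpy as np

t_nm = np.array([60.0, 90.0, 120.0, 160.0, 200.0, 250.0])  # thickness, nm
# Synthetic resistances for kappa = 1.9 W/(m K), R_int = 46 m^2 K/GW, + noise.
# Unit check: t[nm] / kappa[W m^-1 K^-1] = 1e-9 m^2 K/W = 1 m^2 K/GW.
r_m2K_per_GW = 46.0 + t_nm / 1.9 + np.random.default_rng(0).normal(0, 2, 6)

slope, intercept = np.polyfit(t_nm, r_m2K_per_GW, 1)
print(f"kappa_perp ~ {1.0 / slope:.2f} W/(m K), R_int ~ {intercept:.0f} m^2 K/GW")
```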
In order to shed light on the observed experimental behavior, we carried out DFT-based calculations on the system. We have studied several magnetic orderings and found that the ground state is the solution where FM layers couple antiferromagnetically. We have computed the temperature dependence of κ⊥ in the small-grain limit32 for the different experimental cases (Figure 5b). The thickness of the flakes was used as the boundary length for the 41 and 22 nm flakes. To identify possible changes at the transition, we modeled the system differently for temperatures above and below TC: the ground-state FM solution was considered below TC, while above TC we considered a nonmagnetic (NM) solution as a possible proxy for the disordered paramagnetic phase above the Curie temperature. Our calculations show an ≈30% drop in the thermal conductivity between the NM and FM phases, in good agreement with the experimental observations. Note that the theoretical underestimation of the thermal conductivity is related to the limitations of the small-grain limit used in the calculations, in which the boundary scattering is overestimated, especially at higher temperatures and for larger samples. However, all of the qualitative features are well captured (the change at the transition and also the thickness dependence). Further details about the thickness dependence of the calculations can be found in the Supporting Information.

For understanding the reduction of κ⊥ in the magnetic phase, we have analyzed the phonon band structures and the weighted phase space (WPS) available for three-phonon processes for the NM and FM configurations (Figure 6). The WPS gives us an idea of the frequencies involved in the phonon scattering processes that differ between the FM and NM states. From the phonon band structures, we can observe that the acoustic phonons undergo a shift toward lower frequencies, especially along the A−Γ path, related to a decrease in the group velocities in the out-of-plane direction and hence in the thermal conductivity33 of the FM phase compared to the NM one, as discussed above.

Moreover, the FM ordering shows an increase in the WPS, specifically a peak around 2 THz that is substantially different from the NM calculation. This peak is related to the phonon modes highlighted in Figure 6a,b and corresponds to two E1g Raman-active modes. In the magnetic phase, these modes are about 0.3 THz lower in energy, showing more crossings with the acoustic modes. In the NM calculation, two additional modes appear (an infrared-active A2u mode and a higher-lying B1g mode); these move up to about 3 THz in the FM calculation. The frequency lowering of the modes in the FM calculation leads to the observed additional scattering and causes a reduction in the thermal conductivity.

Strong coupling between Raman-active modes and a particular magnetic order has been reported in other two-dimensional magnets.34 Here, we observe that, together with the decrease in the group velocities of the acoustic modes, this coupling produces a considerable reduction of the lattice thermal conductivity in the magnetic phase of FGT.
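For reference, the quantities entering such calculations can be written compactly. The expressions below are the standard single-mode relaxation-time forms with a Casimir-type boundary term for the small-grain limit; they are textbook forms, not equations reproduced from this paper, and normalization and boundary-term prefactor conventions vary between codes.

```latex
% Mode-resolved lattice thermal conductivity (relaxation-time form) and the
% boundary-scattering rate used in small-grain-limit estimates:
\begin{align}
  \kappa_{zz} &= \frac{1}{V} \sum_{\lambda} C_{\lambda}\, v_{z,\lambda}^{2}\, \tau_{\lambda}, \\
  \frac{1}{\tau_{\lambda}} &= \frac{1}{\tau_{\lambda}^{\mathrm{anh}}} + \frac{|v_{z,\lambda}|}{L},
\end{align}
% where C_lambda is the mode heat capacity, v_{z,lambda} the cross-plane group
% velocity, tau^anh the anharmonic (three-phonon) lifetime, and L the boundary
% length (here, the flake thickness).
```

Lower acoustic group velocities in the FM phase reduce the v² factor, and the extra three-phonon scattering of the downshifted Raman modes shortens τ, which is how the two mechanisms identified above both suppress κ⊥.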
■ CONCLUSIONS

To summarize, we have combined experimental FDTR and ab initio calculations to demonstrate that the cross-plane thermal conductivity of the 2D ferromagnet FGT presents a mixed contribution of diffusive and ballistic phonons. We have also shown that κ⊥ presents an abrupt reduction below the Curie temperature due to additional phonon scattering produced by a downshift in the frequency of acoustic and Raman-active optical phonons in the magnetic phase. Also, artificial stacking of few-layer-thick FGT is a useful way of reducing the cross-plane thermal conductivity in this material.

■ EXPERIMENTAL AND COMPUTATIONAL DETAILS

Thermal conductivity was measured by a commercial FDTR from Fourier Inc. using a sinusoidally modulated pump laser (λ = 405 nm, f = 2 kHz to 50 MHz, 1 mW) and a continuous-wave 532 nm probe laser (3 mW). Both lasers have Gaussian spots with a 1/e² radius of 3.7 or 10.5 μm. The probe beam is split before reaching the sample to work as a reference signal, improving the signal-to-noise ratio at low frequencies and compensating phase-shift offsets from beam paths and electronics. The same setup is described in detail in ref 22. A 60 nm gold thin film deposited by Ar plasma sputtering works as a reflective transducer. The fitting model considers Fourier heat conduction: the heat flux q = −κ∇T, where κ is a tensor to account for the thermal conductivity anisotropy of the material. The sample temperature is controlled inside a cold-finger optical cryostat, down to 80 K. The whole stage is mounted on a piezoelectric table, which allows μm-precision location of the laser spots on the sample. To promote the adhesion of FGT to the sapphire substrate, the samples were annealed under vacuum at 100 °C before the experiments.

For all calculations, we performed a full relaxation of the structure (both atomic positions and lattice parameters were optimized) with a mesh of 16 × 16 × 3 k-points in the irreducible wedge of the Brillouin zone. The exchange-correlation potential chosen was the generalized gradient approximation in the Perdew−Burke−Ernzerhof scheme.40 The second-order interatomic force constants (IFCs) were determined using the Phonopy code41,42 in a 2 × 2 × 2 supercell with a k-mesh of 8 × 8 × 2, with no further relaxation of cell shape or volume. Third-order anharmonic IFCs were computed using the machinery of the ShengBTE code,32 considering interactions up to third neighbors in a 2 × 2 × 2 supercell. The lattice thermal conductivity was calculated by solving the Boltzmann Transport Equation (BTE) for phonons by the iterative self-consistent method implemented in the ShengBTE code, within a mesh of 36 × 36 × 8 q-points and a scalebroad parameter of 0.1.

Supporting Information: additional figures with the X-ray powder diffraction pattern of an exfoliated crystal, the FDTR sensitivity analysis, and details of the FDTR measurements with magnetic field (PDF).

Figure 1. Crystal structure of Fe3GeTe2. Left: lateral view of the structure. The cell is formed by two layers shifted with respect to each other. In the low-temperature magnetic phase, the layers become ferromagnetic (FM) with an antiferromagnetic interlayer coupling (as schematically shown by the depicted arrows). Right: top view of the structure showing the hexagonal symmetry of the ab plane. Fe, Ge, and Te atoms are shown in gold, purple, and green, respectively.
Figure 2. (a) Temperature dependence of the magnetization zero-field-cooled (ZFC) and field-cooled (FC) curves measured at H = 100 Oe, and (b) lattice parameters measured for a bulk crystal of FGT. (c) Atomic force microscopy (AFM) image of one flake transferred to the surface of a (0001) sapphire substrate. The corresponding height profile along the line in panel (c) is shown in panel (d).

Figure 3. (a) 30 × 30 μm AFM topography of two partially overlapping flakes. (b) Phase-shift map at 20 MHz of the same region (enclosed within the square) observed in (a), with the corresponding points marked. In this image, the flakes are already covered with 60 nm of Au for the FDTR measurements. (c) Phase-shift vs frequency curves for the three points marked in (a,b), along with the fitting to the thermal model. The curve of the substrate is also shown as a reference. (d) Phase-shift vs frequency curves of point 2 at different temperatures. There is a large change around 200 K associated with the magnetic ordering temperature (see text).

Figure 4. Measured thermal conductivity (a) and thermal resistance R⊥ = t/κ⊥ (b) at room temperature for different flakes with varying thicknesses. Circles and squares correspond to different sets of crystals transferred to different substrates. The green solid triangle in (a) corresponds to κ⊥ at point 3 in Figure 3a, the region of superposition of the two flakes. The dotted lines are linear fittings. (c) Accumulated κ⊥ as a function of the phonon mean free path at 300 K. The shaded area shows the thickness range of the flakes studied by FDTR in this work.

Figure 5. Experimental (a) and theoretical (b) temperature dependence of the thermal conductivity of two flakes with thicknesses 22 and 41 nm, corresponding to points 1 and 2 in Figure 3a, respectively. (b) Calculated temperature dependence of κ⊥ for the NM and FM phases of bulk FGT, considering the NM (FM) phase above (below) the experimental value of the magnetic transition temperature.

Figure 6. Phonon band diagrams for the magnetic (FM) (a) and non-magnetic (NM) (b) ordering, and weighted phase space (WPS) available for three-phonon processes as a function of the frequency (c). The shaded area in (c) corresponds to the highlighted region in both band diagrams. The FM ordering shows an enhanced peak in the WPS around 2.1 THz compared to the NM ordering, associated with Raman modes, leading to more scattering processes and a reduction in the cross-plane thermal conductivity.
Analyzing the Effects of Capacitances-to-Shield in Sample Probes on AC Quantized Hall Resistance Measurements

We analyze the effects of the large capacitances-to-shield existing in all sample probes on measurements of the ac quantized Hall resistance RH. The object of this analysis is to investigate how these capacitances affect the observed frequency dependence of RH. Our goal is to see if there is some way to eliminate or minimize this significant frequency dependence, and thereby realize an intrinsic ac quantized Hall resistance standard. Equivalent electrical circuits are used in this analysis, with circuit components consisting of: capacitances and leakage resistances to the sample probe shields; inductances and resistances of the sample probe leads; quantized Hall resistances, longitudinal resistances, and voltage generators within the quantum Hall effect device; and multiple connections to the device. We derive exact algebraic equations for the measured RH values expressed in terms of the circuit components. Only two circuits (with single-series "offset" and quadruple-series connections) appear to meet our desired goals of measuring both RH and the longitudinal resistance Rx in the same cool-down for both ac and dc currents with a one-standard-deviation uncertainty of 10−8 RH or less. These two circuits will be further considered in a future paper in which the effects of wire-to-wire capacitances are also included in the analysis.

Introduction

Many laboratories are now attempting to employ the integer quantum Hall effect (QHE) [1][2][3] to realize an intrinsic ac resistance standard by using ac bridges to compare ac quantized Hall resistances RH with ac reference standards. In the experiments reported to date [4][5][6][7][8][9], the measured values of the ac quantized Hall resistances RH unfortunately varied with the applied frequency f of the current, and differed from the dc value of RH by at least 10−7 RH at a frequency f of 1592 Hz (where the angular frequency ω = 2πf is 10⁴ rad/s). Furthermore, some sample probe leads had to be removed at the device in order to reduce the frequency dependence to this still significant amount. Lead removal creates two problems: (1) parasitic impedances within the QHE resistance standard (which arise from capacitances, inductances, lead resistances, and leakage resistances) become more difficult to measure or estimate, making it harder to apply corrections to the measured values of RH; and (2) measurements of both RH and the longitudinal resistance Rx cannot be made during the same cool-down, which has been found to be necessary [10] in order to obtain reliable values of RH with direct (dc) currents. Our desired goal at NIST is to measure both RH and the longitudinal resistance Rx in the same cool-down for both ac and dc currents with all sample probe leads attached, and to do this with a one-standard-deviation uncertainty equal to or less than 10−8 RH in order to verify and replace parts of the calculable capacitor chain [11] that provides the International System of Units (SI) value of RH at NIST. The one-standard-deviation uncertainty of the entire NIST calculable capacitor chain is 2.4 × 10−8 RH, so we need to achieve uncertainties of 10−8 RH or less in the ac RH measurements. The frequency dependence of RH is therefore a serious problem that must be addressed.
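As a rough scale for the problem, consider the dimensionless loading product ωRHC for representative coaxial-lead capacitances. The sketch below computes it, together with the magnitude deviation of a simple parallel RC, purely as an order-of-magnitude guide: the capacitance values are assumed, and the actual corrections depend on where each capacitance sits in the circuit, as derived later in the paper.

```python
# Order-of-magnitude look at why capacitances-to-shield matter at 1592 Hz:
# compute w*R_H*C and the fractional change in |Z| for R_H shunted by C.
# Capacitance values are illustrative, not measured probe values.
import math

R_H = 25812.807 / 2          # i = 2 plateau, ohms
f = 1592.0                   # Hz, so w = 2*pi*f ~ 1e4 rad/s
w = 2 * math.pi * f

for C_pF in (50, 100, 400):  # assumed coaxial-lead capacitances-to-shield
    C = C_pF * 1e-12
    wRC = w * R_H * C
    # |Z| of R parallel with C is R / sqrt(1 + (wRC)^2); fractional deviation:
    dZ = 1 - 1 / math.sqrt(1 + wRC**2)
    print(f"C = {C_pF:4d} pF: wRC = {wRC:.2e}, |dZ|/R ~ {dZ:.1e}")
```

Even 100 pF gives a naive deviation of order 10⁻⁴, several orders of magnitude above the 10⁻⁸ target, which is why the circuit analysis must track these capacitances explicitly rather than ignore them.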
This paper investigates the effects of the capacitances-to-shield, and of the series inductances and series resistances of the sample probe leads, on measurements of the ac RH. It also identifies ways to eliminate or minimize the frequency dependences resulting from these parasitic impedances. Most of the capacitances-to-shield arise from the capacitances between the inner and outer conductors of the coaxial leads and connectors within the ac quantized Hall resistance standard; a smaller amount arises from the capacitances between the quantum Hall effect device plus sample holder and the surrounding conducting surfaces of the sample probe.

Strategy

We investigate the effects of capacitances-to-shield on measurements of RH by using equivalent electrical circuits and multiple connections to the quantum Hall effect device. The multiple connections will be defined in Secs. 7-9. We derive exact algebraic equations for the currents and quantum Hall voltages of the standard. The discrete circuit components consist of: (a) capacitances and leakage resistances to the shields of the ac quantized Hall resistance standard; (b) inductances and series resistances of the internal and external sample probe leads and connectors; and (c) quantized Hall resistances, longitudinal resistances, and voltage generators within the quantum Hall effect device itself. These circuit components include everything within the standard except the wire-to-wire capacitances between pairs of the inner conductors. Significant wire-to-wire capacitances can exist between pairs of conducting surfaces of the quantum Hall effect device, the sample holder, and the bonding wires between them. The wire-to-wire capacitances may be important, but their inclusion makes the circuit analyses extremely difficult, so they are excluded at this intermediate stage, where we are trying to find viable circuit candidates for the final analysis of a complete equivalent circuit representation of an ac quantized Hall resistance standard. We give a brief explanation of the dc quantum Hall effect in Sec. 3. Section 4 describes our equivalent electrical circuit model of an ac quantized Hall resistance standard. Single-series "normal", single-series "offset", double-series, and quadruple-series circuits are explained and analyzed in Secs. 5-7 and Sec. 9. We find that two of these circuits (those with single-series "offset" and quadruple-series connections) appear to meet our desired goals of measuring both RH and the longitudinal resistance Rx in the same cool-down for both ac and dc currents with an uncertainty of 10−8 RH or less. These two circuits will be analyzed in more detail in a future paper in which the effects of wire-to-wire capacitances are also included in the analysis.

DC Quantum Hall Effect

The quantum Hall effect (QHE) has been successfully used as an intrinsic dc resistance standard. In the integer dc QHE [1][2][3], the Hall resistance RH of the i-th plateau of a fully quantized two-dimensional electron gas (2DEG) is RH(i) = VH(i)/IT, where VH(i) is the quantum Hall voltage measured between potential probes located on opposite sides of the device, and IT is the total current flowing between the source and drain current contacts at the ends of the device. Under ideal conditions, the values of RH(i) in standards-quality devices satisfy the relationships RH(i) = h/(e²i) = RK/i, where h is the Planck constant, e is the elementary charge, i is an integer, and RK is the von Klitzing constant, RK ≈ 25 812.807 Ω [12].
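The quoted plateau values are easy to verify numerically from fundamental constants; a quick check using scipy's bundled CODATA values follows.

```python
# Evaluate R_H(i) = h / (e^2 * i) = R_K / i for the first few plateaus.
from scipy.constants import h, e

R_K = h / e**2                       # von Klitzing constant, ohms
print(f"R_K = {R_K:.3f} ohm")        # ~25812.807 ohm
for i in (1, 2, 4):
    print(f"R_H(i={i}) = {R_K / i:.3f} ohm")
```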
However, the conditions are not always ideal. The values of R_H(i) can vary with the device temperature T [13] and with the applied current I_T [14]. Thus the measured dc values of R_H(i) are not necessarily equal to h/(e²i). The current flow within the 2DEG is nearly dissipationless in the quantum Hall plateau regions of high-quality devices, and the longitudinal resistances R_x(i) of this standard become very small over the ranges of magnetic field in which quantized Hall resistance plateaus are observed. The dc longitudinal resistance is defined to be R_x(i) = V_x(i)/I_T, where V_x(i) is the measured longitudinal voltage drop between potential probes located on the same side of the device. The dc values of R_x(i) can also be temperature [13] and current [14] dependent.

Equivalent Electrical Circuit of an AC QHE Standard

The quantized Hall resistance R_H(i) of an ac QHE resistance standard (ac QHRS) can be experimentally compared with the impedances of ac reference standards using ac measurement systems. NIST initially plans to use ac resistors as reference standards, and an ac ratio bridge measurement system for the comparisons. Figure 1 shows an equivalent electrical circuit representation of an ac QHRS in which the QHRS is being measured with an ac bridge using four-terminal-pair [15,16] techniques. (Neither the ac reference standard nor the ac ratio bridge is shown in the figure.) This circuit of an ac QHRS is rather detailed, so we explain it one step at a time, starting with the periphery of the standard, then proceeding to the QHE device within the central region of the figure, and finally discussing properties of the sample probe leads within the standard. The ac QHRS of Fig. 1 is bounded by an electrical shield indicated schematically by thick lines. Actual shields have complicated surface geometries. They consist of: (a) conductive surfaces surrounding the QHE device and its sample holder at liquid helium temperatures; (b) the outer conductors of eight coaxial leads within the sample probe; and (c) the outer conductors of eight coaxial leads extending from the top of the sample probe to room temperature access points S, 1 through 6, and D. The electrical shields will also be referred to in the text as "outer conductors". To simplify the figure, we label only currents in the inner conductors. The ac QHRS has electrical access at room temperature via four coaxial measurement ports labeled Inner/Outer, Detector, Potential, and Drive. These four ports are used in the four-terminal-pair measurements, where each coaxial port is referred to as a "terminal-pair". The four coaxial ports are connected to room temperature access points S, 4, 3, and D in the figure. The ideal four-terminal-pair measurement definition [15,16] of R_H(i) is satisfied by the following three simultaneous conditions: (1) the current I_Dr at the Drive coaxial port is adjusted so that there are no currents in the inner or outer conductors of the Potential coaxial port, i.e., I_Pt = 0; (2) the potential difference is zero across the inner and outer conductors of the Detector coaxial port; and (3) there are no currents in the inner or outer conductors of the Detector coaxial port, i.e., I_Dt = 0. It is implicit in the four-terminal-pair definition that each coaxial port is treated as a terminal-pair, and that the current in the inner conductor of every port is equal and opposite to the current in the outer conductor (the shield).
Coaxial chokes [17] (located outside the ac quantized Hall resistance standard and therefore not shown in the figure) assure that this equal and opposite current condition is satisfied for each of the four terminal-pairs in the circuit. The current I_Ot exits the ac QHRS at the Inner/Outer port and enters the ac reference standard (not shown). A "virtual" short has been drawn in Fig. 1 as a line between the shield and inner conductor at the Detector coaxial port to indicate four-terminal-pair condition number (2). We let the Detector potential be zero, i.e., V_Dt = 0. At bridge balance the ac quantized Hall voltage V_H(i) = V_H(3,4) = V_Pt is defined as

V_H(3,4) = [1 + Δ_H] R_H(i) I_Ot,  (1)

where Δ_H is the correction factor to R_H(i) to be determined in this analysis. Next we describe the equivalent circuit model of the QHE device located in the central dashed-line region of Fig. 1. This model is based on that of Ricketts and Kemeny [18]. The device has contact pads that provide electrical access to the 2DEG at the source S', the drain D', and the potential pads 1' through 6'. Each contact pad is located at the end of an arm of the QHE device. Every arm in the equivalent circuit has an intrinsic resistor whose value is R_H(i)/2. We assume that the device is homogeneous, i.e., that the quantized Hall resistances R_H(i) are all measured on plateau regions, that their values are the same on all the Hall potential probe sets, and that they are all measured at the same magnetic flux density value. R_H(i) can, however, vary with temperature [13] and current [14]. While V_Pt has been observed to vary with frequency [4-9], it is not clear whether this is due to a frequency dependence of R_H(i), of Δ_H, or of both R_H(i) and Δ_H. Calculations of the intrinsic impedance of the 2DEG due to the internal Hall capacitance across the QHE device [19], however, predict a negligible frequency dependence of R_H(i) itself, implying a frequency dependence of Δ_H arising from parasitic impedances in the ac QHRS. We therefore simplify the model, and assume that the dc values are appropriate for the R_H(i)/2 resistances in the figure. The symbols r_a, r_b, r_c, and r_d in Fig. 1 represent real (in-phase) longitudinal resistances within the QHE device. Their measured dc values can vary with temperature [13] and current [14]. Sample probes normally used in dc QHE measurements have ten leads, with a pair of leads to the source contact pad S' and another pair to the drain contact pad D'. Only one lead of each pair carries the current, so the dc values of all four longitudinal resistances r_a, r_b, r_c, and r_d can be obtained using four-terminal measurements. In order to reduce the heat load on the liquid helium, sample probes for the ac QHE usually have a single coaxial lead to each of the eight contact pads. Therefore only r_b and r_c can be determined directly via four-terminal-pair ac measurements. For example, a four-terminal-pair ac longitudinal resistance measurement of r_b could be made by moving the Potential coaxial port from access position 3 to position 2 in Fig. 1, and measuring the ac longitudinal voltage

V_x(2,4) = [1 + Δ_24] r_b I_Ot,  (2)

where Δ_24 is the correction factor to r_b to be determined in this analysis.
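The role of the correction factors can be made concrete by inverting the defining relations (a hypothetical sketch; the "measured" values below are invented solely to illustrate Eq. (1)):

    # Hypothetical illustration of Eq. (1): given a measured complex Hall
    # voltage and the current I_Ot, the correction is Delta = V/(R*I) - 1.
    R_H = 12906.4    # nominal i = 2 plateau value, ohm
    I_Ot = 25e-6     # drive current magnitude, A (hypothetical)

    # Invented measurement with small in-phase and out-of-phase deviations:
    V_H = R_H * I_Ot * (1 + 5e-8 + 1j * 1e-2)

    Delta_H = V_H / (R_H * I_Ot) - 1
    print(f"in-phase part:     {Delta_H.real:.1e}")   # -> 5.0e-08
    print(f"out-of-phase part: {Delta_H.imag:.1e}")   # -> 1.0e-02

The two deviation scales used here (5 × 10^-8 in phase, 1 % out of phase) anticipate the single-series "normal" numerical example of Sec. 5.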
Values for r_a and r_d could be estimated from their dc r_a/r_b and r_d/r_c ratios if the measured r_b/r_c ratio happens to be the same for both ac and dc measurements using the same sample probe during the same cool-down. With one exception [20], the reported ac longitudinal resistances obtained from the real, in-phase components of the ac longitudinal voltage measurements are significantly larger than the dc longitudinal resistances in the same device under the same temperature and magnetic field conditions. The ac longitudinal resistances increase with increasing frequency of the applied current, and are of order 1 mΩ at 1592 Hz [4,5,21]. The large ac longitudinal voltages might be due to intrinsic frequency dependences of r_a, r_b, r_c, and r_d within the device, to Δ_24, Δ_46, etc. corrections caused by parasitic impedances of the QHRS, or to both. Calculations of the kinetic inductance of the 2DEG and the magnetic inductance of the device [20] provide no plausible explanation via intrinsic impedance for significant frequency dependences of r_a, r_b, r_c, and r_d, suggesting that the frequency dependence of the ac longitudinal resistance is due to parasitic impedances of the QHRS, and therefore to the correction factors Δ_24, Δ_46, etc. However, we will assume the worst-case scenario in our numerical calculations, namely that r_a, r_b, r_c, and r_d are themselves frequency dependent and have 1 mΩ values at 1592 Hz. At some moment in time, a positive current I_a enters the 2DEG via device drain contact pad D' in Fig. 1, and current I_d exits the 2DEG via source contact pad S'. The magnetic flux density B is directed into the figure from above. Under these current and magnetic field conditions, the drain contact pad D' and the potential probe contact pads 1', 3', and 5' at the device periphery are at higher potentials than contact pads S', 2', 4', and 6'. These current and flux density directions are chosen to be consistent with those we have used in earlier calculations [19,22,23]. Potentials at the contact pads S', 1' through 6', and D' are produced by arrays of voltage generators, where each voltage generator V_AB is located between a pair of arms A and B of the equivalent circuit. The voltages are defined as

V_AB = [R_H(i)/2] |I_A ± I_B|,  (3)

where I_A and I_B are the magnitudes of the currents flowing in arms A and B. The currents I_A and I_B within the absolute-value sign of Eq. (3) are added if they both enter or both leave the voltage generator, and are subtracted if one current enters and the other current leaves the generator. For example, V_1D = [R_H(i)/2]|I_a − I_C1|. The voltages generated are functions of R_H(i); therefore their values can vary with temperature [13] and current [14] (and also possibly with frequency). Diamond-shaped voltage generator arrays of Ricketts and Kemeny [18] are employed in the equivalent circuit of the QHE device, rather than the ring-shaped voltage generator arrays introduced later by Delahaye [24] and subsequently used by Jeffery, Elmquist, and Cage [25]. Although both arrays give essentially identical results [22], the calculations are much simpler with the diamond arrays when longitudinal resistances are included in the circuits [22]. We therefore use diamond arrays. For clarity, the voltage generators are indicated in the figure as batteries, with positive terminals oriented to give the correct potentials along each arm at the instant considered.
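The sign rule of Eq. (3) is easy to encode (a sketch; the enter/leave bookkeeping is represented by two Boolean flags of our own choosing):

    # Ricketts-Kemeny generator rule, Eq. (3): V_AB = (R_H/2)|I_A +/- I_B|,
    # with the currents added when both enter (or both leave) the generator
    # and subtracted when one enters and the other leaves.
    def generator_voltage(R_H, I_A, I_B, A_enters, B_enters):
        sign = 1 if A_enters == B_enters else -1
        return 0.5 * R_H * abs(I_A + sign * I_B)

    # Example from the text: V_1D = (R_H/2)|I_a - I_C1|, with one current
    # entering and one leaving (magnitudes below are hypothetical).
    print(generator_voltage(12906.4, 25e-6, 0.3e-6,
                            A_enters=True, B_enters=False))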
The ac currents alternate direction, so the voltage generators reverse sign each half cycle. Thus, for the part of the period in which the currents flow in the directions indicated in Fig. 1, the voltage generators have the polarities shown. Half a period later the currents change direction, and all the voltage generators reverse polarities. The QHE device is mounted on a sample holder at the bottom of the sample probe. The QHE device and the sample holder are located within the shaded region of Fig. 1. Thin wires connect the device contact pads S', 1' through 6', and D' to coaxial leads which extend to room temperature access points S, 1 through 6, and D located outside the sample probe (but still within the ac QHRS). Each arm of the equivalent circuit has a resistance r_S, r_1 through r_6, or r_D. This resistance includes the contact resistance to the 2DEG, the resistance of the wire connecting a contact pad on the device to a coaxial lead, and the inner conductor resistance of that coaxial lead. The inner conductor lead resistances vary with the liquid helium level in the sample probe. They can be measured pair-wise (using access points S, 1 through 6, and D) as a function of liquid helium level via two-terminal dc resistance measurements by temporarily replacing the QHE device with electrical shorts at positions S', 1' through 6', and D'. The cooled inner conductor coaxial lead resistances are typically each about 1 Ω in ac quantized Hall resistance standards. The outer conductor coaxial lead resistances depend on the type of coaxial cable, and their values also vary with liquid helium level. Typical values range between about 0.1 Ω and 1 Ω in ac quantized Hall resistance experiments. Each sample probe lead has an inductance L_S, L_1 through L_6, or L_D that is electrically connected in series with the lead resistance r_S, r_1 through r_6, or r_D, producing lead impedances z_S, z_1 through z_6, or z_D, where z_S = r_S + jωL_S. Due to severe space limitations in the figure, these impedances are unconventionally drawn as resistors within rectangles. The inductance of each coaxial lead of a typical ac QHE sample probe is about 1 × 10^-6 H. We assume that the bonding pad wires are thick enough not to vibrate in the magnetic field when applied ac currents flow through them [4], but the out-of-phase "inductance" generated by this vibration [4] could be included in the lead inductances if necessary. The eight coaxial leads, labeled S, 1 through 6, and D, each have an inner and an outer conductor. The outer conductors of the coaxial leads are connected together outside the sample probe to help satisfy the four-terminal-pair measurement conditions. As mentioned earlier, the outer conductors of these leads act as electrical shields, and are represented schematically as thick lines in Fig. 1. (Other outer conductors of the ac QHRS also contribute to the thick lines.) Large capacitances-to-shield, labeled C_S, C_1 through C_6, and C_D, exist between the inner and outer conductors of these coaxial leads. The open-circuit capacitances can be individually measured at access points S, 1 through 6, and D as a function of liquid helium level by temporarily removing the QHE device from the sample probe at the points S', 1' through 6', and D'. The capacitance-to-shield of each coaxial lead in typical ac QHE sample probes is about 250 pF, but it should be reduced to about 100 pF (1 × 10^-10 F) in a short sample probe being designed at NIST.
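With these representative values, the relative sizes of the series and shunt impedances of one lead at ω = 10^4 rad/s can be checked directly (a sketch using the numbers quoted above):

    w = 1.0e4            # angular frequency, rad/s (f ~ 1592 Hz)
    r, L = 1.0, 1.0e-6   # cooled inner-conductor resistance (ohm), inductance (H)
    C = 250e-12          # typical capacitance-to-shield of one coaxial lead, F

    z_lead = complex(r, w * L)           # z = r + j*w*L
    z_shunt = 1.0 / complex(0.0, w * C)  # 1/(j*w*C)

    print(f"|z_lead|  = {abs(z_lead):.5f} ohm")   # ~1.00005 ohm
    print(f"|z_shunt| = {abs(z_shunt):.3g} ohm")  # ~4e+05 ohm

The shunt impedance of a single 250 pF lead (about 0.4 MΩ) is only roughly 30 times larger than R_H(2), which is why the capacitances-to-shield, rather than the ~1 Ω series lead impedances, dominate the frequency dependence analyzed below.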
A predominantly 90° out-of-phase current I_CS, I_C1 through I_C6, or I_CD flows through each coaxial lead. These currents, and all the other currents in Fig. 1, have the correct signs for their dominant phase components in the half-cycle under consideration. This is verified in Sec. 5, where it is found that all currents shown in the figure have positive signs for their major components. The coaxial leads are not the only sources of capacitances-to-shield. There are also additional contributions from the QHE device-sample holder combination and the electrical shielding surrounding them. These additional capacitances-to-shield are labeled C_A and C_B in Fig. 1, where they are placed at either end of the QHE device. (Note that rather than explicitly using C_A and C_B, one-eighth of the additional capacitance C_A + C_B could instead be added to each of the eight coaxial lead capacitances C_S, C_1 through C_6, and C_D, but that would make the coaxial lead capacitance notation very confusing.) The additional capacitances C_A and C_B can be determined by two methods. In the first method the magnetic field is adjusted so that the QHE device is on a QHE plateau. The external coaxial leads from the bridge are removed from the Drive and Inner/Outer ports of the ac QHRS, and an applied voltage signal is placed across the inner and outer conductors of the Drive port. A measured voltage signal appears across the inner and outer conductors of coaxial leads S, D, 1, 3, and 5 for the magnetic field direction assumed in Fig. 1, so these particular coaxial leads draw most of the 90° out-of-phase current. Therefore the measured total capacitance-to-shield C_T is approximately C_T(B) ≈ C_1 + C_3 + C_5 + C_D + C_A, and the value of C_A can be obtained by subtracting the value of C_1 + C_3 + C_5 + C_D from C_T(B). The magnetic field is then reversed. Then C_T(−B) ≈ C_2 + C_4 + C_6 + C_S + C_B when the voltage signal is placed across the inner and outer conductors at the Inner/Outer port, thus yielding the value of C_B. In the second method the magnetic flux density B is reduced to zero. The quantum Hall voltages disappear, so the voltage generators can be replaced in the circuit by electrical shorts. The QHE device now behaves like a two-dimensional sheet resistance, and the R_H(i)/2 resistances located at the source and drain ends of the QHE device in Fig. 1 are zero. The longitudinal resistances r_a, r_b, r_c, and r_d become much larger than they were on a QHE plateau. Their values can be obtained by four-terminal resistance measurements in a dc sample probe. The R_H(i)/2 resistances of the six QHE side arms are replaced by much smaller resistances whose values can be obtained from two-terminal measurements via room temperature access points S, 1 through 6, and D once the appropriate lead and longitudinal resistances are subtracted. An applied voltage signal placed across the inner and outer conductors of the Drive port would then cause a voltage signal to appear across the inner and outer conductors of all capacitances-to-shield. Thus the total capacitance-to-shield is given by the expression C_T = C_S + C_1 + C_2 + C_3 + C_4 + C_5 + C_6 + C_D + C_A + C_B, where C_A ≈ C_B if the QHE device, the sample holder, and the bonding wires between them are all symmetrically arranged. We expect both C_A and C_B to be about 1 pF or smaller in the NIST sample probes.
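The bookkeeping of the first method amounts to one subtraction per field direction (a sketch; all capacitance values here are hypothetical stand-ins for measured ones):

    # First method: with field +B, leads 1, 3, 5, D (plus C_A) draw most of
    # the out-of-phase current, so C_A = C_T(B) - (C_1 + C_3 + C_5 + C_D).
    C_1 = C_3 = C_5 = C_D = 250e-12   # individually measured lead values, F
    C_T_plusB = 1.0011e-9             # hypothetical measured total, F

    C_A = C_T_plusB - (C_1 + C_3 + C_5 + C_D)
    print(f"C_A = {C_A * 1e12:.1f} pF")   # ~1.1 pF, the expected ~1 pF scale

Reversing the field and repeating the measurement at the Inner/Outer port gives C_B in the same way.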
The equivalent circuit accounts for leakage currents between the ac QHRS's inner conductors and the shields via resistances r_KA and r_KB located on either side of the QHE device. Rather large voltages are used when measuring leakage resistances, so it would be safest to temporarily replace the device with shorts when measuring the total open-circuit leakage resistance r_Lk at access points S, 1 through 6, or D. If the leakage resistances are symmetrically distributed, then r_KA ≈ r_KB ≈ 2 r_Lk. (Their values are large compared with the lead resistances, so they are essentially connected in parallel within the circuit.) The NIST sample probes will be constructed so that these leakage resistances are very large; r_KA and r_KB should be at least 10^14 Ω, but in the numerical examples of this paper we will assume 10^12 Ω arising from dirty coaxial connectors. The capacitances, inductances, and leakage resistances of Fig. 1 contribute parasitic impedances to measurements of the ac QHRS. These capacitances, inductances, and leakage resistances are drawn as discrete circuit elements. In reality they are distributed within the standard. They could, in principle, be better represented. For example, we could replace capacitance-to-shield C_1 with a capacitor of value C_1/2, and place a second capacitor of value C_1/2 and a series-connected outer shield impedance z'_1 between the other side of circuit element z_1 at point 1' and the first C_1/2 capacitor. This distributed impedance would, however, greatly complicate the circuit analyses, with little gain in accuracy. (Our discrete-element circuit over-emphasizes the capacitance-to-shield currents if z_1 > z'_1 and gives the same capacitance-to-shield currents if z_1 = z'_1.) This completes the description of the equivalent circuit. The next section analyzes the circuit.

Analysis of the Single-Series "Normal" Circuit

Kirchhoff's rules are used to sum the currents at branch points and the voltages around loops to obtain exact algebraic equations for the equivalent electrical circuit shown in Fig. 1. We refer to this circuit as single-series "normal": single-series because there is just one current lead connected to the source contact pad S' and another current lead connected to the drain pad D' of the QHE device; and "normal" because the Hall voltage leads are connected to the central arms 3 and 4 of the device.

Exact Single-Series "Normal" Equations

Finding the exact algebraic equations for all the currents, and for the correction factor Δ_H to the quantum Hall voltage as defined by Eq. (1), is rather difficult because there are many coupled equations, especially for the multi-series circuits [24] examined later in this paper. All the solutions of this paper were independently derived by each author, and shown to be identical. Each author then independently used computer software to obtain identical numerical results for several test cases. It is important to obtain the exact solutions, rather than initially guessing approximate solutions, because the frequency-dependent effects we are trying to minimize or eliminate are small, but significant. The results are presented here in order to spare others the task of deriving them. To simplify the final algebraic expressions, we define some substitutions of variables, and substitutions of substitutions. The particular substitutions depend on the choice of loops.
For example, the variables A and B result from a voltage loop around the path C_S, S', C_B, and back through the shield to C_S; this gives the substitutions of Eqs. (4). We express all currents, and the quantum Hall voltage, in terms of I_Ot, because that is the current that enters the ac reference standard (not shown in Fig. 1). Three of the current solutions are trivial because of the four-terminal-pair definition [15,16] listed in Sec. 4. The remaining exact equations for the single-series "normal" circuit currents are given in Eqs. (5). The exact equation for the quantum Hall voltage, Eq. (6a), is obtained by summing the voltages between the inner conductors of the Detector coaxial port and the Potential coaxial port, taking the path through arm 4, voltage generators V_c4 and V_c3, and arm 3. By using Eqs. (4) and (5) it can also be expressed in the form V_H(3,4) = [1 + Δ_34] R_H I_Ot, Eq. (6b). An approximate solution in this form will be given in Eq. (11).

A Numerical Example

Contributions of the parasitic impedances within the ac QHRS to the measured value of V_H(3,4) can be investigated by using numerical examples in Eqs. (5) and (6). We assign cardinal numbers to the circuit element values to emphasize that the results are not intended to provide corrections to existing experimental data, because the effects of wire-to-wire capacitances are not included at this stage of the analysis. The 90° out-of-phase (j) parts of shunt currents I_C5, I_C3, I_C1, I_CD, and I_CA are much larger than those of shunt currents I_C2, I_C6, I_CS, and I_CB, because contact pads 5', 3', 1', and D' are all near the quantum Hall potential, rather than near the shield potential. A 1 % out-of-phase current passes through each of the coaxial cable capacitances C_5, C_3, C_1, and C_D in this example. That is not necessarily a problem if the bridge Drive can provide this extra 4 % of out-of-phase current to I_Dr. Expressing Eq. (6a) in the form of Eq. (6b), we find that the 5 × 10^-8 in-phase correction to R_H is too large compared with the desired 1 × 10^-8 R_H absolute accuracy, but even worse, there is a 1 % contribution to V_H(3,4) in the 90° out-of-phase j term. Auxiliary balances in the NIST high-precision ac bridges are not capable of providing out-of-phase adjustment signals larger than 5 × 10^-4 R_H, so the 1 % out-of-phase signal is unacceptable. We next list the approximate solutions to show the source of this problem.

Approximate Single-Series "Normal" Solutions

Many of the terms in the following approximate solutions were obtained by algebraically finding the dominant contributions to the exact equations. Other terms were found by "educated guesses" and "trial-and-error". We verified all the terms by changing the values of the relevant circuit element components in the computer programs. The approximate solutions give numerical results that agree with the results from the exact solutions to within at least two significant figures for both the real and imaginary parts. Other terms may need to be added to these approximate equations if the circuit components have values significantly different from those listed in Eqs. (7). One example is

I_a ≈ I_Ot + I_C5 + I_C3 + I_C1.  (10m)

Expressing Eq. (6a) in the form of Eq. (6b) yields the approximate correction, Eq. (11). We see from Eq. (11) that sample probe lead 5 is the dominant source of the 1 % out-of-phase component of the quantum Hall voltage signal. The next subsection investigates the effect of removing this lead.
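The scale of these shunt-current effects can be reproduced by applying the same Kirchhoff approach to a deliberately simplified network (a minimal sketch, not the Fig. 1 circuit: one drive lead z_a feeding a single R_H element, with one potential lead z_5 shunted to the shield through C_5; the component values are the representative ones quoted earlier):

    import numpy as np

    w, RH = 1.0e4, 12906.4             # rad/s; i = 2 plateau value, ohm
    z_a = z_5 = 1.0 + 1j * w * 1.0e-6  # lead impedances: 1 ohm + 1 uH each

    def shunt_fraction(C5):
        """|I_C5| / |I_drive| for a 3-node toy circuit, solved exactly by
        nodal analysis: drive node -> z_a -> device node (R_H to shield),
        and device node -> z_5 -> C5 -> shield."""
        yA, y5, yR, yC = 1/z_a, 1/z_5, 1/RH, 1j * w * C5
        Y = np.array([[ yA,      -yA,          0.0],
                      [-yA, yA + yR + y5,     -y5 ],
                      [0.0,     -y5,       y5 + yC]])
        I = np.array([1.0, 0.0, 0.0])  # 1 A injected at the drive node
        v = np.linalg.solve(Y, I)
        return abs(v[2] * yC)          # current diverted through C5

    for C5 in (250e-12, 100e-12, 1e-12):
        print(f"C5 = {C5 * 1e12:5.0f} pF: |I_C5|/I = {shunt_fraction(C5):.2e}")

For a 100 pF lead the diverted current is about 1.3 × 10^-2 of the drive current, the "about 1 %" per-lead figure quoted above, and it scales as ωC_5 R_H; replacing the lead capacitance by a picofarad-scale device capacitance drops it to roughly 10^-4, the scale quoted in the next subsection for a disconnected lead.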
Disconnecting Sample Probe Lead 5

Equation (11) predicts that the out-of-phase term j[ωC_5 R_H] in the expression for Δ_34 can be reduced by disconnecting coaxial cable 5 at position 5', where 5' is either located at the potential contact pad on the QHE device or at an intermediate contact point in the sample holder. There is then a capacitance C_5' between the QHE device and the shield that replaces capacitance C_5 in Fig. 1. Also, a shield impedance z_5' replaces the lead impedance z_5. The most significant terms of Eq. (11) are modified accordingly, giving Eq. (12); in the numerical examples the out-of-phase term is greatly reduced because C_5' is much smaller than the 250 pF capacitance-to-shield of the full coaxial lead. All experiments that have measured ac values of V_H(3,4) have had to remove coaxial lead 5 because of the effects due to the large capacitance-to-shield C_5 presented above. Equations (11) or (12) might be used to apply corrections to the experimental data in order to reduce the 5 × 10^-8 in-phase error in R_H I_Ot. However, there are several points of concern: (a) our approximate and exact equations do not include the effects of wire-to-wire capacitances, and these may be significant; (b) the out-of-phase component of V_H(3,4) has been reduced to about 1 × 10^-4 R_H I_Ot by removing lead 5, but great care must be taken to correct for the in-phase (phase defect) contributions of the bridge components used to null the out-of-phase signal, because these in-phase (phase defect) signals can be unintentionally added to the real, second-order terms of the in-phase component of V_H(3,4) in Eqs. (11) or (12) that vary with ω²; and (c) it is not trivial to measure the value of C_5' in order to apply the correction with lead 5 disconnected. We will not consider the single-series "normal" circuit further as a viable ac QHRS candidate, because lead 5 must be disconnected, and that violates one of our desired goals.

Analysis of the Single-Series "Offset" Circuit

Figure 2 shows an equivalent electrical circuit representation of an ac QHRS using single-series "offset" connections to the QHE device. It is single-series because there is just one current lead connected to the source contact pad S' and another current lead connected to the drain pad D' of the QHE device, and "offset" because the Hall voltage leads are connected to the off-center arms 5 and 6 of the device. Arms 5 and 6 are closest to the low-potential end of the device at S', and nearest to the ac reference resistor (not shown in the figure). Those arms were chosen in an attempt to reduce the effects of the shunt currents through C_5 that we found in Sec. 5.

Exact Single-Series "Offset" Equations

To simplify the final algebraic expressions, we again define some intermediate substitutions of variables, and substitutions of substitutions. Three of the current solutions are trivial because of the four-terminal-pair definition [15,16]: I_Dt = I_Pt = I_C6 = 0. The remaining exact equations for the single-series "offset" circuit currents are given in Eqs. (18). The exact equation for the quantum Hall voltage, Eq. (19a), is obtained by summing the voltages between the inner conductors of the Detector coaxial port and the Potential coaxial port, taking the path through arm 6, voltage generators V_S6 and V_S5, and arm 5. It can also be expressed in the form

V_H(5,6) = [1 + Δ_56] R_H I_Ot.  (19b)

A Numerical Example

We investigate the parasitic impedance contributions of the ac QHRS to the measured value of V_H(5,6) by using the cardinal numbers listed in Eqs. (7) in Eqs. (18) and (19).
The numerical results for the currents show that the 90° out-of-phase parts of shunt currents I_C5, I_C3, I_C1, I_CD, and I_CA are again much larger than those of shunt currents I_C2, I_C4, I_CS, and I_CB, because contact pads 5', 3', 1', and D' are all near the quantum Hall potential rather than near the shield potential. A 1 % out-of-phase current once again passes through each of the coaxial cable capacitances C_5, C_3, C_1, and C_D in this example, which is not necessarily a problem if the bridge Drive can provide this extra 4 % of out-of-phase current to I_Dr. Expressing Eq. (19a) in the form of Eq. (19b), we obtain the correction Δ_56 for 250 pF coaxial leads. The 2 × 10^-8 in-phase correction to R_H for 100 pF leads is larger than our desired 1 × 10^-8 R_H total uncertainty, but a correction could be made to the measurements via the approximate equation that might provide sufficient accuracy. We will therefore consider the single-series "offset" circuit as a possible ac QHRS in a future paper which includes the effects of wire-to-wire capacitances. The approximate equations for the currents will be given in that paper.

Analysis of the Double-Series Circuit

Figure 3 shows an equivalent electrical circuit representation of an ac QHRS using two double-series connections to the QHE device. It is called double-series because there are two current paths to the device, provided by a short coaxial lead outside the sample probe that connects room temperature access points 3 and D at point Y. Another short coaxial lead connects access points 4 and S at point Z. Short coaxial leads connect point Y with the Drive and Potential ports, and point Z with the Inner/Outer and Detector ports. For simplicity, we have placed all the parasitic impedances of the short coaxial cables in the cables and coaxial connectors labeled Ot, Dt, Pt, and Dr. These connections were first used by Delahaye [24] in ac quantized Hall resistance measurements (but with points Y and Z at the sample holder rather than outside the cryostat). Most subsequent ac experiments have used double-series or triple-series connections.

Exact Double-Series Equations

To simplify the final algebraic expressions, we again define substitutions of variables, and substitutions of substitutions. Six of the current solutions are trivial because of the four-terminal-pair definition [15,16]:

I_Dt = I_Pt = I_CS = I_CDt = I_C4 = I_rDt = 0.  (26a)

The remaining exact equations for the double-series circuit currents are given in Eqs. (26); for example,

I_c = I_d + I_C5 + I_C6.  (26k)

The exact equation for the quantum Hall voltage, Eq. (27a), is obtained by summing the voltages between the inner conductors of the Detector coaxial port and the Potential coaxial port, taking the path through point Z, arm 4, voltage generators V_c4 and V_c3, arm 3, and point Y. It can also be expressed as

V_H(Y,Z) = [1 + Δ_YZ] R_H I_Ot.  (27b)

A Numerical Example

We investigate the parasitic impedance contributions of the ac QHRS to the measured value of V_H(Y,Z) for a particular example of the double-series circuit by using the cardinal numbers listed in Eqs. (7), plus the following cardinal numbers for the additional circuit elements:

r_Ot = r_Dt = r_Pt = r_Dr = 10^-3 Ω  (28a)
C_Ot = C_Dt = C_Pt = C_Dr = 10^-12 F.  (28b)

The 90° out-of-phase parts of shunt currents I_C5, I_C3, I_C1, I_CD, I_CA, I_CPt, and I_CDr are again much larger than those of shunt currents I_C2, I_C6, I_CB, and I_COt, because contact pads 5', 3', 1', and D' are all near the quantum Hall potential, rather than near the shield potential.
A 1 % out-of-phase current passes through each of the coaxial cable capacitances C_5, C_3, C_1, and C_D in this example, which once again is not necessarily a problem if the bridge Drive can provide this extra 4 % of out-of-phase current to I_Dr. Expressing Eq. (27a) in the form of Eq. (27b), we obtain the correction Δ_YZ for 250 pF coaxial leads. The 1 × 10^-8 R_H in-phase correction to R_H for 100 pF leads meets our desired 10^-8 R_H absolute accuracy, but there is a 1 % contribution to V_H(Y,Z) in the 90° out-of-phase j term. Auxiliary balances in the NIST high-precision ac bridges are not capable of providing out-of-phase adjustment signals larger than 5 × 10^-4 R_H, so the 1 % out-of-phase signal is unacceptable. The approximate solutions are listed in the next subsection to show the source of this out-of-phase problem.

Approximate Double-Series Solutions

Some of the terms in the following approximate solutions were obtained using the results of the dc double-series analysis of [22]. Most terms were found in a tedious process by changing the individual values of circuit element components by an order of magnitude in the computer program, observing the calculated results, and then finding the algebraic expressions that produced these results. The approximate solutions yield numerical results that agree with the exact numerical results listed in Eqs. (29) and (30) to within at least two significant figures for both the real and imaginary parts, but other terms may need to be added to these approximate equations if the circuit components have values significantly different from those listed in Eqs. (7) and (28). Representative examples (where the superscript a denotes an approximate solution) are

I_c ≈ I_c^a = I_d^a + I_C5^a + I_C6^a  (32i)
I_a ≈ I_a^a = I_b^a + I_C1^a + I_C2^a  (32q)
I_rDr ≈ I_rDr^a = I_D^a + I_3^a + I_CD^a + I_CPt^a  (32w)
I_Dr ≈ I_Dr^a = I_rDr^a + I_CDr^a.

As expected, Eqs. (32i) and (32h) suggest that the current I_C5 in Fig. 3 enters the Drive, goes to point Y, to point D', through longitudinal resistances r_a, r_b, and r_c, through arm 5, and then exits through capacitance-to-shield C_5. We would likewise have assumed that the current I_C1 enters the Drive, goes to point Y, to point D', through r_a, through arm 1, and then exits through C_1. However, the approximate Eqs. (32p) and (32k), where I_C1 appears in I_3', suggest that I_C1 enters the Drive, goes to point Y, to point 3, through arm 3, travels "upstream" through r_b, through arm 1, and then exits through C_1. The current I_C3, on the other hand, enters the Drive, goes to point Y, to point 3, and then exits through C_3, bypassing the device altogether; this latter effect provides an advantage of double-series connections by reducing the shunt currents within the device. Expressing Eq. (27a) in the form of Eq. (27b), Eq. (33) gives the approximate quantum Hall voltage correction terms. We see from Eq. (33) that sample probe lead 5, just as in the single-series "normal" case, is the dominant source of the 1 % out-of-phase component of the quantum Hall voltage signal in the numerical example for this double-series connection to the QHE device. The next subsection investigates the effect of removing this lead, which was effective before in the single-series "normal" case of Sec. 5.

Disconnecting Sample Probe Lead 5

Equation (33) predicts that the out-of-phase term j[ωC_5 R_H] in the expression for Δ_YZ can be reduced by disconnecting coaxial cable 5 at position 5', where 5' is either located at the potential contact pad on the QHE device or at an intermediate contact point in the sample holder.
There is then a capacitance C_5' between the QHE device and the shield that replaces capacitance C_5 in Fig. 3, and a shield impedance z_5' replaces the lead impedance z_5. The out-of-phase term is again greatly reduced in the numerical examples, because C_5' is much smaller than the 250 pF capacitance-to-shield of the full coaxial lead. All experiments that have measured ac values of V_H(Y,Z) for double-series connections have had to remove coaxial lead 5 because of the effects due to the large capacitance-to-shield C_5 presented above. Equation (33) could be used to apply corrections to the experimental data in order to reduce the 9.4 × 10^-8 in-phase error in R_H I_Ot. However, our approximate and exact equations do not include the effects of wire-to-wire capacitances; the bridge auxiliary balance could introduce unintentional in-phase contributions because of the large out-of-phase component of V_H(Y,Z); and it is not trivial to measure the value of C_5' in order to apply the correction with lead 5 disconnected.

Double-Series Connections at the QHE Device

Many experiments have made double-series connections to the QHE device at the bottom of the sample probe by using short bonding wires to form the circuit. Points Y and Z of Fig. 3 are thus moved from outside the sample probe down to the sample holder. There are no coaxial leads connected to points 1, 2, 5, and 6, so their capacitances-to-shield become much smaller. Four coaxial leads labeled Ot, Dt, Pt, and Dr connect the QHE device to the outside world. The double-series circuit shown in Fig. 3 remains exactly the same for this case, as do Eqs. (24) through (27). The values of some circuit components, however, change, and we use correspondingly modified cardinal values in our numerical example for 250 pF coaxial leads. Equation (40) then implies a very small in-phase error in R_H I_Ot. This is not supported by measurements, which have observed errors in R_H I_Ot of order 10^-7. The discrepancy could be due either to unintentional in-phase contributions from the bridge auxiliary balances arising from the large out-of-phase component of V_H(Y,Z), or to the fact that our equations do not include the effects of wire-to-wire capacitances. To assist laboratories that are making double-series connection measurements at the QHE device, we list the additional terms that should be added to the approximate current and quantum Hall voltage solutions given by Eqs. (32) and (33); among them is the quantum Hall voltage contribution −j[ωC_Ot r_Ot + ωC_Pt r_Pt]. We once again caution the reader that these approximate equations do not include the effects of wire-to-wire capacitances. This circuit is not a good candidate for further analysis, because the quantized Hall and longitudinal resistances could not be measured in the same cool-down.

Triple-Series Circuit

The double-series circuit of Fig. 3 could be converted to a triple-series circuit by adding short coaxial leads between points Y and 1, and between points Z and 6. We do not consider this triple-series circuit, since the analysis would involve several additional months of effort, and the problems found for double-series circuits in Sec. 7 due to large shunt currents through C_5 also occur in triple-series circuits. Either coaxial lead 5 would have to be disconnected at position 5' at the QHE device end of the sample probe, or the triple-series connections would have to be made at the device. Neither choice satisfies our goal of measuring the ac and dc quantized Hall and longitudinal resistances in the same cool-down. We therefore proceed to quadruple-series connections, which turn out to satisfy our requirements at this stage of analysis.
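The reason more series connections help can be seen from the standard dc scaling argument for multiple-series connections (due to Delahaye [24]): an n-fold series connection suppresses the relative error from a lead impedance z to order (z/R_H)^n. The sketch below is heuristic only; as the ac analysis above shows, the capacitances-to-shield do not follow this simple dc scaling:

    # Heuristic dc multiple-series scaling (Delahaye [24]): relative error
    # of order (|z|/R_H)**n for an n-fold series connection, |z| ~ 1 ohm.
    z, RH = 1.0, 12906.4
    for n, name in [(1, "single"), (2, "double"),
                    (3, "triple"), (4, "quadruple")]:
        print(f"{name:>9s}-series: (z/R_H)^{n} ~ {(z / RH) ** n:.1e}")

With |z| ≈ 1 Ω this gives roughly 8 × 10^-5, 6 × 10^-9, 5 × 10^-13, and 4 × 10^-17 for single- through quadruple-series connections, which is why quadruple-series connections are attractive at the 10^-8 level.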
Analysis of the Quadruple-Series Circuit

Figure 4 shows an equivalent electrical circuit representation of an ac QHRS using two quadruple-series connections to the QHE device. It is quadruple-series because short coaxial leads outside the sample probe connect room temperature access points 5, 3, 1, and D at point Y, providing four current paths to the device. Other short coaxial leads connect access points 2, 4, 6, and S at point Z. Short coaxial leads outside the sample probe connect point Y with the Drive and Potential ports, and point Z with the Inner/Outer and Detector ports.

Exact Quadruple-Series Equations

To simplify the final algebraic expressions, we once again define substitutions of variables, and substitutions of substitutions.

Summary

We have used equivalent electrical circuits to analyze four-terminal-pair [15,16] measurements of ac quantized Hall resistance standards. The discrete circuit components include all of the parasitic capacitances, inductances, and leakage resistances of the standard except the wire-to-wire capacitances. Exact algebraic equations have been derived for the currents and quantum Hall voltages for single-series "normal", single-series "offset", double-series, and quadruple-series circuit connections to the device. We find that the single-series "offset" and quadruple-series connections appear to meet our desired goals of measuring both the quantized Hall resistance R_H and the longitudinal resistance R_x in the same cool-down for both ac and dc currents with an absolute accuracy of 10^-8 R_H or better. These two circuits will be further considered in a future paper in which the effects of wire-to-wire capacitances are also included in the analysis.
Evaluation of Motion Standard Based on Kinect Human Bone and Joint Data Acquisition

In order to make better use of human bone and joint data, we propose a method to collect the data and judge whether a motion is performed to standard. Kinect is a 3D somatosensory camera released by Microsoft. It has three cameras in total: in the middle is a color camera, which captures color images at 30 images per second; on the left is an infrared projector, which irradiates the object to form a speckle pattern; and on the right is the depth camera, which analyzes the infrared speckle. Together, the two outer sensors act as a depth sensor that detects the relative position of people. Kinect also carries a four-element linear microphone array for speech recognition and for filtering background noise, which can locate the sound source, and a base with a built-in motor below, which can adjust the elevation angle. Kinect can therefore not only collect color images but also measure the depth information of objects. In the experiments we use the MSRAction3D data set and, under the same cross-validation protocol, compare our method with other recent research methods in the figures. The highest recognition rate of our method (algorithm 10) is the second best, while its lowest and average recognition rates are the highest of all the methods compared. The improvement in the lowest recognition rate is pronounced, which shows that this method has good recognition performance and better stability than the other research methods. Kinect thus plays an important role in the acquisition of human bone and joint movement data.

Introduction

Body motion recognition has always been a topic of strong interest. However, there are still many basic problems in the field of computer vision that have not been fully or reasonably solved. We live in a three-dimensional world, yet the images obtained by ordinary cameras are two-dimensional, which leads to a lack of information and inaccurate recognition. Therefore, we use Kinect human bone data to identify human actions, which reduces the information lost during data collection. According to the knowledge of three views and projection introduced earlier, if we can use two-dimensional plane projections of the human body in three different directions to express a human action, we can represent human motion in the three-dimensional world more faithfully [1]. This chapter uses the two-dimensional plane projection features of human bone data. The human action recognition method projects the three-dimensional joint points into two-dimensional space, constructs feature vectors composed of the joint angles in the three views, represents combinations of translation and rotation through the changes of 17 joint angles of the human body, and selects a multi-classification support vector machine to classify the 20 human actions in the MSRAction3D data set [2]. The Kinect somatosensory device developed by Microsoft can directly capture the body movements of patients without requiring patients to wear or operate any peripheral devices. It offers a more natural and convenient human-computer interaction mode and is well suited to the development and adoption of community and family medical rehabilitation platforms [3]. Therefore, this paper selects Kinect somatosensory equipment as the human motion sensing carrier and designs a somatosensory rehabilitation training platform based on the Kinect sensor.
As shown in Figure 1, the platform integrates the functions of basic motion acquisition and rehabilitation evaluation, presets typical rehabilitation training movements for the shoulder and elbow joints of the left and right limbs, and collects template motion-flow data and training motion-flow data through the somatosensory sensors. Once the data were processed by the algorithm logic, the similarity of the data flows between the two groups was calculated and examined using a dynamic time warping algorithm combined with a Hausdorff distance measure. In addition, this paper examines the kinematic measurement process of rehabilitation treatment in order to measure the effectiveness of the treatment and to perform interventions, so as to clarify the algorithm and its theoretical basis [4].

Related Works

Since the 1980s, Internet information technology has developed greatly. A series of human-computer interaction technologies, such as human-computer interaction and intelligent gesture recognition, have emerged one after another. Human-computer interaction means that humans and a computer user interface interact in some way to produce information input and output. In real life, people can use gestures and language to provide input to a PC, and the computer can produce output through pictures, videos, and other means, so as to realize the information interaction between people and computers. Human-computer somatosensory interaction is based on the acquisition of depth images. Depth images are mainly acquired in the following three ways: structured light, time of flight, and multiple cameras [5]. The Kinect device selected in this design mainly applies structured-light detection technology. PrimeSense names this depth measurement technology "light coding". Light coding is a kind of structured-light technology, but its depth calculation method differs from that of other structured-light approaches. Light coding is a depth detection algorithm described by Cristache, C. M.; it is a structured-light technology using a special depth calculation method. Compared with conventional structured-light algorithms, the light source of the light coding algorithm is the diffraction speckle randomly generated by a laser passing through ground glass, which has high randomness and varies with the pattern and distance [6]. Of course, the prior light-source calibration process cannot be omitted. Compared with the traditional structured-light algorithm, the light coding algorithm is a kind of three-dimensional space coding, controlled by the PS1080 chip described by Kulczyk, T. when calibrating the light source [7]. Moreover, the measurement accuracy is affected only by the density of the calibrated reference planes and has nothing to do with the spatial geometric position of the reference object. The first generation of Kinect somatosensory devices adopts the light coding algorithm to collect the depth information of three-dimensional space and compare it with the previously saved speckle reference images, so as to obtain the distance between a target object in the Kinect field of view and the Kinect camera.
However, according to Ma et al.'s paper on Kinect depth data accuracy, the random error of depth measurement also increases with the distance between the object and the sensor, growing from a few millimeters near the sensor to several centimeters as the distance approaches 4 meters (the maximum range of the sensor) [8,9]. To improve this accuracy, Naufal, A., Anam, C., Widodo, C. E., and Dougherty, G. put forward a theoretical error analysis, which clarifies what factors affect the accuracy of the data. Vignesh et al. studied and developed a new somatosensory rehabilitation system, which aims to improve patients' enthusiasm for rehabilitation training and the efficiency of that training [10]. In this system, the motion data of patients can be recorded in real time, and the obtained motion data can be compared with the corresponding standard motion data in the database, so as to judge the recovery of the patient's condition. In the Kinerehab system, patients are represented on the interface by a virtual animated figure, which creates a natural human-computer interaction atmosphere, enhances the interest of rehabilitation training, makes patients interested in the training, and stimulates patients to train independently. Fajri et al. studied and developed a rehabilitation training system based on human tracking technology, aimed mainly at the recovery of the shoulder and elbow [11]. The system takes the position information of the patient's key joint points from the sensor, corrects the position information of the key joint points against the standard action position information, and then imports the corrected standard action data into the system model. The system can effectively reduce patients' wrong actions in rehabilitation training and avoid the damage that wrong actions can cause to the human body. At the same time, the system also has a scoring mechanism to evaluate the recovery of patients. Kim designed and studied a rehabilitation training system aimed at improving patients' balance ability. In this system, sensors are used to obtain depth image information and reconstruct a 3D model of the human body, and human-computer interaction takes place in the form of simple games in virtual reality scenes, so as to achieve the purpose of sports rehabilitation [12]. As a new rehabilitation training platform, the robot is an effective clinical intervention means to assist doctors in rebuilding patients' motor function [13]. In the early stage, motion rehabilitation robots provided resistance and active force through spring supports with free inertia balance. Wang et al. also used the human bone detection ability of Kinect somatosensory equipment to realize imitation control of a 16-joint humanoid robot following human actions [14]. Their experimental results show that the time lag of the developed control system is only a short 200 ms. In addition, Afrieda, N. has developed humanoid robots that can recognize the joints of the whole human body; they also use Kinect somatosensory equipment and solve the joint angles of the robot using analytical geometry, which solves the kinematics of the robot more quickly [15].

Method

With the support of the Kinect for Windows SDK, it is possible to monitor one or two human skeletons moving into the Kinect field of view by tracking the skeletons, receiving the human skeleton data, and then obtaining the coordinates of all the joints [16].
The human skeleton frame design in this article is based on the first generation of Kinect. The numbering of each joint used to describe the human skeleton is shown in Figure 2. Following the sequence of Kinect bone data acquisition, the 20 human bone joint points are numbered from a to t, as shown in Figure 2. Each number corresponds to the position coordinates of one human bone joint point, representing a different part of the human body. Table 1 gives the names of the joint points corresponding to the numbers. The 20 joint points represent the complete structure of the human skeleton. Through the analysis of human motion, and using the coordinate data of the human joint points, every two adjacent joint points form a joint vector, which contains the motion information of the joint [17]. It is also easy to construct the two-dimensional joint vectors of the human body. Assuming that the two-dimensional coordinates of two adjacent joint points of the human body are U(x_1, y_1) and V(x_2, y_2), respectively, the joint vector of the two-dimensional space composed of them is

UV = (x_2 − x_1, y_2 − y_1).  (1)

According to formula (1), if point U(x_1, y_1) and point V(x_2, y_2) represent the joint points of the left ankle and left foot, respectively, then the vector in the formula represents the activity state of the end of the left foot in two-dimensional space, covering the details of the movement of the left leg. A total of 19 two-dimensional joint vectors are composed from the 20 human bone joint points. Viewing a human action in three-dimensional space from a single angle cannot reveal the full course of the motion, because self-occlusion blocks parts of the body and hides some of the characteristics of the movement. Therefore, we use the three-view projection method to map objects in three-dimensional space onto two-dimensional planes, extract the motion features at different plane angles, and thus compensate for the defects of considering the motion features in only a single plane [18]. Because changes in the angles of the human joints capture both translation and rotation, angle data can be used to characterize various movements. We consider the joint angles of the human body projected onto three planes, viewed from three different directions: the front-view joint angles are the projections of the joint angles onto the frontal plane of the human body from front to back; the left-view joint angles are the projections of the joint angles onto the side plane of the human body from left to right; and the top-view joint angles are the projections of the joint angles onto the horizontal plane of the human body from top to bottom [19]. The selected human joint angles are shown in Figure 3, which shows the 17 human joint angles selected by us. From the 19 two-dimensional human joint vectors constructed above, the size of each human bone joint angle can be calculated using the cosine similarity formula

cos θ(t) = (U(t) · V(t)) / (|U(t)| |V(t)|),  (2)

where θ is the joint angle at frame t, and U(t) and V(t) are the two joint vectors at frame t.
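A short sketch of this construction (our own function and variable names; the joint coordinates are hypothetical values standing in for projected Kinect data):

    import math

    def joint_vector(u, v):
        """2D joint vector from joint u = (x1, y1) to joint v = (x2, y2),
        as in Eq. (1): UV = (x2 - x1, y2 - y1)."""
        return (v[0] - u[0], v[1] - u[1])

    def joint_angle_deg(a, b):
        """Angle between two joint vectors via the cosine formula, Eq. (2)."""
        dot = a[0] * b[0] + a[1] * b[1]
        norm = math.hypot(*a) * math.hypot(*b)
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

    # Hypothetical projected coordinates of three left-leg joints:
    knee, ankle, foot = (0.10, 0.50), (0.12, 0.10), (0.25, 0.05)
    shin = joint_vector(knee, ankle)
    toes = joint_vector(ankle, foot)
    print(f"left-ankle joint angle: {joint_angle_deg(shin, toes):.1f} deg")

Clamping the cosine into [-1, 1] guards against round-off before calling acos; per frame, the same computation is repeated for each of the 17 selected angles in each of the three projection planes.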
Since the collected Kinect bone data are in the form of three-dimensional coordinates (x, y, z), the three-dimensional bone data are first reduced in dimension to become bone data on the two-dimensional XOY projection plane, and the two-dimensional coordinates are then used with Eq. (2) to calculate the size of each joint angle.

Experimental Results and Discussion

On this basis, aiming at the low accuracy and stability of human action recognition in complex, high-noise environments, a human action recognition method based on hierarchical feature fusion was proposed, which divides the different parts of the human body according to the composition of the human body structure. The layered strategy is conducive to the decomposition of complex human movements. First, according to the bone joint coordinates obtained by Kinect, the features of the human joint angles in two-dimensional space are extracted, and the actions are roughly classified by the support vector machine (SVM) method. Then, the body vectors, angular velocities, and accelerations in 3D space are extracted, and the movements are classified by an HMM. Using knowledge of human body models, the human body can be divided into five parts. Part I: the trunk, which includes the head, neck, back, and hips. Part II: the left arm, which includes the left forearm, left wrist, left elbow, and left shoulder. Part III: the right arm, which includes the right forearm, right wrist, right elbow, and right shoulder. Part IV: the left leg, which includes the left foot, left ankle, left knee, and left thigh. Part V: the right leg, which includes the right foot, right ankle, right knee, and right thigh. The trunk is an essential part of the human body: the motion feature information of the waist comes from the joint points of this part, while the motion feature information of the hands and feet comes from the joint points of the limbs [20]. By dividing the structure of the human body in this way, these five components can be combined to represent the most important motions of the human body. Therefore, this paper adopts a hierarchical concept. In the first layer, the five parts are grouped into categories with the same type of combination; for example, the two arms are a combination of Parts II and III. This is the rough classification. The second layer re-divides the actions within the same combination type and filters out the specific action; this is the fine classification of the process. In accordance with the characteristics used for the first rough classification of human activities, we take the joint-angle vector, comprising the projections of the 17 joint angles onto the three planes described earlier. When distinguishing human actions with the same combination mode, we extract features from kinematic theory. A complete human action can be divided into a main action and auxiliary actions. The main action reflects the global state of the motion pattern, and the auxiliary actions reflect its local state. Only by combining the characteristics of the main and auxiliary actions can an action be expressed more accurately [21]. For the five parts of the human body divided above, we construct their limb vectors in three-dimensional space, where the superscript {3} denotes the three-dimensional space, t denotes a certain time, and the joint points at the end points of the hands and feet, which are prone to drift, are temporarily discarded.
Finally, the limb vectors in three-dimensional space at all times for the five parts are represented by GT^{3}, AJ^{3}, BK^{3}, EP^{3}, and FQ^{3}, respectively. According to their different contributions to the expression of a human action, two joint angles are selected from each part, called the main-action joint angles. The trunk selects angles θ_4 and θ_9, the left arm selects angles θ_3 and θ_2, the right arm selects angles θ_6 and θ_7, the left leg selects angles θ_12 and θ_13, and the right leg selects angles θ_15 and θ_16. The human action sequence is continuous and changes with time; the change of a joint angle from one frame to the next forms the value of the angular velocity ω. The limb vectors and the angular velocities of the main-action joint angles are the characteristics of the main action, representing the overall movement of the human limbs and trunk [22]. The bending of the human limbs and trunk is reflected by the change of the distances between joint points. The human body is projected onto the YOZ side plane as seen from the left view direction, and the distances between the joint points of the five parts are computed as Euclidean distances, where d(y, z) represents the Euclidean distance between two joints in the lateral plane; at a given time t these are the distances between the head and end joint points of the five parts of the human body, reflecting the bending of the limbs and trunk during motion. The change of this distance from frame to frame forms the speed v, and the acceleration is a physical quantity describing how quickly the speed of human motion changes. The distances between the five pairs of joints in the lateral plane and the accelerations of motion are regarded as the characteristics of the auxiliary action. The accelerations of the five parts are ∂_1, ∂_2, ∂_3, ∂_4, and ∂_5. The characteristics of the main action and the auxiliary action together constitute the features of the second-level fine classification of human actions. Because everyone's height and arm length are different, even if two people make the same action posture, there will be some discrepancies. In order to eliminate individual differences, the terms in the formula are divided by the shoulder width d_AB and by the mean value d of the Euclidean distances between joints in the YOZ side plane, where d_AB represents the width of each person's shoulders and d represents the average distance between the five major joint pairs of the human body over all times in the YOZ side plane. The feature vector of the final rough classification is expressed as [θ_1^{2}, θ_2^{2}, θ_3^{2}, ..., θ_17^{2}]. The matrix composed of the feature vectors of the fine classification is given in Eq. (17); each row in the matrix represents a set of feature vectors. In this system, we compare the similarity between the standard action data sequence template and the action data sequence collected in real time, so as to achieve the effect of rehabilitation evaluation. In reality, the time taken by a patient to complete a set of rehabilitation exercises is usually inconsistent with the time taken by the standard action template; some patients with serious injuries take several times as long as the standard action template. For two sets of motion data sequences with different lengths, comparing corresponding time points alone cannot meet the accuracy requirements of the current system. DTW, the dynamic time warping algorithm, is a widely used speech recognition algorithm; a minimal implementation sketch is given below.
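This sketch is consistent with the path constraints spelled out in the next paragraphs (our own names; in the platform it would be run on the per-frame feature sequences defined above, with a vector distance in place of the scalar one used here):

    def dtw_distance(R, T, dist=lambda a, b: abs(a - b)):
        """Minimum cumulative (distortion) distance between sequences R and T.
        The warping path starts at (1, 1), ends at (m, n), and each step moves
        by (i+1, j), (i, j+1), or (i+1, j+1)."""
        m, n = len(R), len(T)
        INF = float("inf")
        D = [[INF] * (n + 1) for _ in range(m + 1)]
        D[0][0] = 0.0
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                D[i][j] = dist(R[i - 1], T[j - 1]) + min(
                    D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
        return D[m][n]

    # Hypothetical joint-angle sequences (degrees) of different lengths:
    template = [10, 20, 40, 80, 40, 20, 10]          # standard action, m = 7
    trial    = [10, 12, 22, 41, 79, 78, 39, 18, 11]  # patient action, n = 9
    print(f"DTW distortion distance: {dtw_distance(template, trial):.1f}")

A small distance means the trial follows the template closely even though it takes longer; this tolerance to differing execution speeds is exactly what motivates using DTW here.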
DTW was originally introduced in speech recognition to handle the mismatch that arises when utterances of the same word have different durations. Under suitable conditions, the dynamic time warping algorithm expresses the relationship between two sequences of unequal length through a time warping function, and it is now widely used in gesture recognition, speech recognition, and other fields. The central idea of DTW is to stretch or compress two data sequences of different lengths so that they can be aligned, and to select a path through the constructed distance matrix that minimizes the total distance between the two sequences. For human action recognition, this amounts to finding the minimum distortion distance between the current sequence and the standard template.

Assume the standard template sequence and the test sequence are R and T, with lengths of m frames and n frames, respectively [23]. For two sequences of different lengths, an m × n grid matrix is constructed, in which element (i, j) holds the distance d(R_i, T_j) between R_i and T_j (Eq. (19)). The DTW algorithm seeks an optimal path through this grid; the grid points on the path are the pairs of points that must be compared between the two sequences. Let this path be the warping path W, whose k-th element is W_k = (i, j)_k. By the continuity and monotonicity constraints, from any grid point the path can move in only three directions: if the current grid point is (i, j), the next grid point can only be (i + 1, j + 1), (i, j + 1), or (i + 1, j).

Many warping paths satisfy these conditions, and the path realizing the minimum distortion distance must meet three requirements: (1) the path runs from the start W_1 = (1, 1) of the sequences to the end W_K = (m, n); (2) the path respects the time ordering, i.e., the indices i and j never decrease along the path; (3) the path is monotone with unit steps, i.e., i and j each increase by 0 or 1 at every step, so that if W_{k−1} = (i, j), then W_k is one of (i + 1, j + 1), (i + 1, j), and (i, j + 1). The path with the smallest sum of cumulative distances over adjacent elements is the optimal warping path. From the above, the DTW cumulative distance can be computed recursively as D(i, j) = d(R_i, T_j) + min{D(i − 1, j − 1), D(i − 1, j), D(i, j − 1)}.

A Kionix KXSD9 three-axis accelerometer is built into the Kinect to compensate for errors caused by placing the device on an uneven surface and to improve the stability of depth image acquisition. The camera in the Kinect can be adjusted to the user's needs: a tilt drive motor adjusts the elevation of the camera to follow changes in the user's position, and the Kinect also has a focusing system. If the user moves out of the field of view, the Kinect can automatically drive the base motor to tilt vertically by ±28° [24]. The field of view of the Kinect is 43.5° vertically and 57.5° horizontally. The maximum distance over which the Kinect sensor can track and recognize a subject is 0.8 m to 4 m, but in practice, to ensure accurate data, the recognition distance is restricted to 1.2 m to 3.5 m, as shown in Table 2. Kinect uses a "pipeline" functional architecture, as shown in Figure 4.
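The following Python sketch is a minimal implementation of the DTW recursion just described, using Euclidean distance between per-frame feature vectors. It assumes both sequences are NumPy arrays of shape (frames, features) and is meant only to illustrate the cumulative-distance computation, not the exact implementation used in the paper.

```python
import numpy as np

def dtw_distance(R, T):
    """Minimum distortion distance between template R (m x d) and test T (n x d)."""
    m, n = len(R), len(T)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = np.linalg.norm(R[i - 1] - T[j - 1])  # d(R_i, T_j)
            # Cumulative distance: current cost plus the best of the three
            # admissible predecessors (diagonal, vertical, horizontal).
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[m, n]

# Illustrative usage with random feature sequences of unequal length.
template = np.random.rand(40, 5)
patient = np.random.rand(65, 5)
print(dtw_distance(template, patient))
```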
The raw sensor data streams include the depth data stream, the color image data stream, and the audio data stream, and researchers can develop applications directly on the raw stream data exposed by the Kinect SDK. To acquire depth images, Kinect uses a depth measurement technique known as light coding, in which the space to be measured is marked and encoded with a light source. Light coding is a form of structured-light technology, although its depth calculation differs from that of other structured-light approaches. The light source used for light coding produces so-called "laser speckle": the diffraction pattern formed when the light strikes a non-smooth object or passes through ground glass. These speckle patterns change continuously with distance from the light source and are highly random; in general, the speckle patterns at any two locations in space differ. The speckle patterns of the entire space are first recorded, and the structured light is then projected into the space as a marker. When a new object is placed in the space, its position can be determined simply by examining the speckle pattern it produces [25].

With the rapid development of science and technology, motion-sensing (somatosensory) technology has gradually matured. Besides its main use in entertainment and games, it underlies various human-computer interaction systems developed by researchers, and many companies at home and abroad now design and manufacture their own motion-sensing devices. Three of the most popular products on the market are the following.

Xtion, a motion-sensing device designed and produced by ASUS, uses structured-light detection to obtain depth images. The structured light is used to obtain spatial data and to compute derived data such as the depth image and the object skeleton. During operation, an infrared laser emitter projects an encoded near-infrared light source; after the infrared light is reflected, it is captured by an infrared camera, and the correspondence between the feature codes is computed to determine the depth information.

Leap Motion, a motion-sensing device developed by Leap (USA), tracks a person's hands by detecting their movements. It uses multi-camera detection: two infrared cameras together with powerful processing chips rapidly process the image data to detect the target's hand movements. The sensor's field of view is shaped like an inverted quadrilateral pyramid, with an effective detection range of 25 mm to 600 mm. The product also offers an open SDK (Software Development Kit), which allows developers to work on Windows, Linux, and Mac, the three mainstream operating system platforms.

Kinect for Xbox 360 is Microsoft's external motion-sensing device, officially unveiled on June 2, 2009. It uses structured-light detection to obtain depth images, based on the same principle as Xtion. For open development, Microsoft also provides SDK tools with rich API interfaces, so developers can program the Kinect in a variety of languages.
Comparing the three motion-sensing devices above shows that Kinect for Xbox 360 offers the most convenient and precise body tracking. Moreover, Microsoft provides a large number of API interfaces in the Kinect SDK, so that Kinect can obtain not only the raw depth data but also the skeletal joint data of the target. Kinect for Xbox 360 is therefore selected as the motion-sensing device in this design.

To obtain the coordinates of the key human joints in real time for the similarity algorithm in the next step, the Kinect sensor is used to track the joint points and acquire the motion data sequence in real time. The human skeleton is composed of skeletal joint points, each with a relative position and orientation. The Kinect bone tracking module first detects the human body using the depth image information and then calibrates the body posture. When calibration succeeds, the measured body can be tracked in real time and the relevant bone information obtained continuously; when calibration fails, the module enters a loop and attempts calibration again.

The examples provided with the Kinect include separate programs for depth image acquisition and for bone tracking, but no program for data extraction. After studying the relevant application software, the two techniques were combined so that the data of the key skeletal joints could be recorded and extracted successfully. The basic procedure is as follows: first, initialize the device environment and create the objects and user generators that store the relevant data for later use; second, register the relevant callback functions and calibrate the skeleton, where the functions to be called include new-user generation and skeletal posture detection; finally, perform bone tracking, updating and reading the relevant bone information in real time. The flow chart for obtaining the human bone information is shown in Figure 5.

The hardware used for the simulation experiments in this chapter is an Intel(R) Core(TM) i5-4210M CPU @ 2.60 GHz with 8 GB RAM; the software environment is Windows 10 (64-bit) and MATLAB R2017b. The MSRAction3D public data set is used. The data set contains 567 samples, and each action category is performed 2~3 times by each of 10 male and female subjects. (Figure 4: Interaction between application and Kinect sensor.) We take ... samples.

In the previous chapter, the 20 different human actions were classified directly in a single step. This chapter first coarsely groups the 20 actions into seven categories and then subdivides them: category 1: single-arm actions; category 2: actions of both arms; category 3: single-leg actions; category 4: trunk actions; category 5: actions of both arms and legs; category 6: trunk plus arms; and category 7: trunk plus legs plus one arm. Because the data set does not record the subjects' heights and weights, only the proportion of male and female subjects is considered. Using the MSRAction3D data set and the same cross-validation protocol, the method is compared with other recent approaches in Figures 6, 7, and 8. The highest recognition rate of this method (algorithm 10) ranks second, while its lowest and average recognition rates are the highest.
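As a sketch of the two-layer (coarse-to-fine) recognition strategy described above, the following Python code trains one SVM to predict the coarse category of an action and then a separate per-category classifier for the fine label. Per-category SVMs are used here as simple stand-ins for the HMM-based fine classifiers in the paper, and the feature arrays and label encodings are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

class HierarchicalActionClassifier:
    """Coarse SVM over category groups, then one fine classifier per group."""

    def fit(self, X, coarse_labels, fine_labels):
        self.coarse_clf = SVC(kernel="rbf").fit(X, coarse_labels)
        self.fine_clfs = {}
        for group in np.unique(coarse_labels):
            idx = coarse_labels == group
            # Stand-in for the per-group HMM stage described in the text.
            self.fine_clfs[group] = SVC(kernel="rbf").fit(X[idx], fine_labels[idx])
        return self

    def predict(self, X):
        groups = self.coarse_clf.predict(X)
        return np.array([self.fine_clfs[g].predict(x.reshape(1, -1))[0]
                         for g, x in zip(groups, X)])

# Illustrative usage with random coarse-level and fine-level feature vectors.
X = np.random.rand(200, 17)                       # e.g., 17 projected joint angles
coarse = np.random.randint(0, 7, 200)             # seven coarse categories
fine = coarse * 3 + np.random.randint(0, 3, 200)  # fine labels nested in groups
model = HierarchicalActionClassifier().fit(X, coarse, fine)
print(model.predict(X[:5]))
```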
The improvement in the lowest recognition rate is particularly marked, indicating that this method achieves good recognition performance and better stability than the other research methods.

Conclusion

Based on the human skeleton and joint data collected by Kinect, human motion is expressed through the limbs, and all of the bones together constitute the limbs of the body. Rehabilitation exercise training aims to help people with movement disorders gradually recover limb motor function through existing medical technologies and methods. Traditional training methods are unsuitable for rehabilitation training in home or community settings, whether in terms of cost or complexity of operation, and the complicated, cumbersome process of motion data acquisition often hinders rehabilitation doctors' assessment of patients' limb recovery. This paper therefore studies and designs a Kinect-based auxiliary evaluation and training system for motor impairment. The system combines the Kinect somatosensory sensor with virtual reality technology; by capturing and collecting human motion data in real time and applying the relevant algorithms, patients can carry out rehabilitation training and complete the evaluation of their training actions independently, thereby improving rehabilitation efficiency.

Data Availability

The data sets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The author declares that there is no conflict of interest.
7,508.2
2022-08-31T00:00:00.000
[ "Engineering", "Computer Science" ]
FeynMG: a FeynRules extension for scalar-tensor theories of gravity The ability to represent perturbative expansions of interacting quantum field theories in terms of simple diagrammatic rules has revolutionized calculations in particle physics (and elsewhere). Moreover, these rules are readily automated, a process that has catalysed the rise of symbolic algebra packages. However, in the case of extended theories of gravity, such as scalar-tensor theories, it is necessary to precondition the Lagrangian to apply this automation or, at the very least, to take advantage of existing software pipelines. We present a Mathematica code FeynMG, which works in conjunction with the well-known package FeynRules, to do just that: FeynMG takes as inputs the FeynRules model file for a non-gravitational theory and a user-supplied gravitational Lagrangian. FeynMG provides functionality that inserts the minimal gravitational couplings of the degrees of freedom specified in the model file, determines the couplings of the additional tensor and scalar degrees of freedom (the metric and the scalar field from the gravitational sector), and preconditions the resulting Lagrangian so that it can be passed to FeynRules, either directly or by outputting an updated FeynRules model file. The Feynman rules can then be determined and output through FeynRules, using existing universal output formats and interfaces to other analysis packages. Introduction The increasing complementarity of high precision data from cosmological observations and high energy physics experiments makes it necessary to consider non-minimal gravitational couplings or the impact of additional degrees of freedom that are coupled through the gravitational sector with strengths that need not be Planck-suppressed.Examples include scalar-tensor theories of gravity [2], such as the Brans-Dicke theory [3] or, more generally, the Horndeski theories [4,5] (including beyond Horndeski [6,7] and DHOST [8,9] theories), in which the gravitational sector includes both the metric and an additional scalar degree of freedom.Other relevant examples include those in which the Higgs is non-minimally coupled to gravity, as is required in Higgs inflation [10][11][12][13][14][15][16][17] or so-called Higgs-Dilaton models [18][19][20][21].Indeed, such non-minimal couplings of the Higgs field to the scalar curvature are readily motivated by considering the renormalization group evolution of the operators of the Standard Model of particle physics plus gravity [22][23][24].Moreover, the ability to make Weyl rescalings of the metric and so-called disformal transformations [25][26][27] allows us to make connections between scalar-tensor theories of gravity and gauge-singlet, scalar extensions of the Standard Model of particle physics, such as Higgs-or neutrino-portal theories [28][29][30][31][32][33][34][35]. The challenge, however, is the proliferation of operators that non-minimal gravitational couplings provide, alongside degeneracies with operators that directly couple new degrees of freedom to the Standard Model.Dealing with this requires linearization of the extended gravitational sector, transformations of the metric, expansion around non-trivial vacuum configurations, the diagonalization of kinetic and mass mixings, and the truncation of infinite series of operators [36,37].This is usually done on a model-by-model basis, and it is a tedious and time-consuming process, which is ripe for automating, and doing so is the focus of this article. 
We present a Mathematica package FeynMG, which is designed to work alongside the well-known FeynRules package [1].FeynRules is an extensive Mathematica package that enables the user to output the Feynman rules for a given Lagrangian in formats that can be read in by a range of high energy physics analysis software, including CalcHep/CompHEP [38,39], FeynArts [40], FeynCalc [41], FormCalc [42], MadGraph [43], Sherpa [44], Whizard/Omega [45] and ASperge [46].Symbolic algebra packages have also been developed to deal with the complex tensor algebra that arises in General Relativity.A recent example is FeynGrav [47], a package that introduces gravity in its canonical form (the Einstein-Hilbert action) to FeynRules.xAct [48] is perhaps the most wellknown package, having already been followed by multiple compatible packages that allow the study of gravity in different cosmological scenarios.In particular, the package xIST/COPPER [49] extends xAct for general scalar-tensor theories, and it was used in Ref. [50] to calculate the effect of modified gravity on cosmological perturbations.In this sense, FeynMG extends FeynRules as xIST/COPPER extends xAct. FeynMG is intended as a 'preconditioner'.It takes as inputs a FeynRules model file and the Lagrangian of an extended gravitational sector.FeynMG then provides the functionality to implement the minimal gravitational couplings to the Lagrangian from the original model file and cast the complete theory in a form that can be further processed using the existing FeynRules package and its interfaces.However, we emphasise that FeynMG contains functionality that may be useful for theories that are being analysed independent of the couplings to gravitational sectors, and this will be highlighted throughout this article. The remainder of this article is structured as follows.In Section 2, we describe the general form of the problem of coupling the Standard Model to extended gravitational sectors.We then present the package FeynMG, summarizing the implementation in Section 3 and describing its usage in Section 4. Finally, our conclusions are presented in Section 5, and additional technical details are provided in the Appendices. Throughout this work, while it is a convention that is uncommon in the gravitation and cosmology literature, we use the "mostly minus" metric signature convention (+, −, −, −), in which timelike four-momenta p µ have p 2 > 0, since this is the convention commonly used by existing particle physics software packages.We use lower-case Greek characters for the Lorentz indices of the curved spacetime and lower-case Roman characters for the Lorentz indices of the flat, tangent space necessary for writing the Dirac Lagrangian in a generally covariant form.D denotes gauge covariant derivatives, general (i.e., gravitational) and gauge covariant derivatives are denoted by ∇, and an update to the general and gauge covariant derivative that is useful for scalar-tensor theories of Brans-Dicke type is represented by D. We work in natural units, but do not set Newton's gravitational constant to unity. Method We begin by reviewing how a Minkowski quantum field theory is minimally coupled to gravity and how additional scalar fields that are non-minimally coupled to the scalar curvature of the gravity sector can give rise to new interactions in that quantum field theory. 
For simplicity, we work with a toy model of QED plus a real scalar prototype of the Higgs sector.Generalizing to a complex scalar field that is charged under U (1) would be a technical complication that does not add to the main points that we wish to illustrate below.The action of this model in Minkowski spacetime is given by where we have introduced a would-be Higgs field φ, a Dirac fermion ψ, which will later be chosen as a proxy for the electron, and the U (1) gauge field A µ , which corresponds to the photon, with its usual field-strength tensor Note that the Dirac fermion is charged under U (1), and it is minimally coupled to the photon field via the gauge covariant derivative where q is the electromagnetic coupling.Before analysing the interactions induced by extending the gravitational sector beyond the usual Einstein-Hilbert action, we first need to insert all the minimal gravitational couplings that have so far been ignored by working in Minkowski spacetime.This means that, for every pair of contracted Lorentz indices, we must include a factor of the metric g µν .Additionally, for every γ matrix appearing in the Dirac Lagrangian, we must include a vierbein e µ a , which satisfies η ab e µ a e ν b = g µν , where η ab is the flat spacetime metric.(We remind the reader that the flat-space indices of the vierbein are raised and lowered with the flat-space metric.)The latter is necessary since the algebra of the γ matrices is defined with respect to the Minkowski metric, i.e., {γ a , γ b } = 2η ab ; the vierbeins relate the curved and flat, tangent spaces.By this means, we obtain the minimally coupled action where we have also included a factor of √ −g in the spacetime volume element.Herein, the Minkowski gauge covariant derivative has been promoted to the general covariant derivative. For scalar fields, the gravitational covariant derivative just trivially reduces to a partial derivative, such that ∇ µ φ → ∂ µ φ.However, when acting on a vector Y ρ , the covariant derivative takes the form where Γ ρ µν = 1 2 g ρλ (∂ µ g λν + ∂ ν g µλ − ∂ λ g µν ) are the usual Christoffel symbols.This definition for the covariant derivative is chosen such that ∇ ρ g µν = 0, but it can take many other forms.For instance, we will later define and work with a different choice that will be more convenient for the specific case of Brans-Dicke theories [37].However, it does not matter which definition one uses in this action, given that the following property will always hold since the curvature-dependent terms are symmetric under the permutation of µ and ν.Finally, the covariant derivative acting on a fermion field, including the dependence on the gauge field from QED, is given by where is the spin connection.The latter is defined by where With these minimal couplings now included, the action takes the form We can now proceed to append the gravitational sector.The minimal choice for the gravitational sector is the Einstein-Hilbert action, giving the full action where R is the Ricci scalar, and M Pl is the Planck mass, which determines the strength of the gravitational force.We can, however, also consider extended gravitational sectors, and one of the simplest examples is the Brans-Dicke scalartensor theory [3], in which a dynamical scalar field replaces the Planck mass.Such theories are described by an action with the following generic form: Herein, X is a real scalar field, subject to the self-interaction potential U (X) and coupled non-minimally to the Ricci scalar R through the function F (X). 
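For concreteness, a generic action of the kind described here can be written as follows, under the assumption of a Brans-Dicke-type form built from the functions F(X), Z(X), and U(X) named in the text; the signs and normalizations are illustrative and need not match the paper's exact conventions.

```latex
\begin{equation}
  S \;=\; \int \mathrm{d}^4 x \, \sqrt{-g}
  \left[ \frac{F(X)}{2}\, R
  \;+\; \frac{Z(X)}{2}\, g^{\mu\nu}\, \partial_\mu X \, \partial_\nu X
  \;-\; U(X) \right]
  \;+\; S_{\mathrm{matter}}\!\left[ g_{\mu\nu}, \phi, \psi, A_\mu \right].
\end{equation}
```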
From a phenomenological perspective, there are tight constraints on the latetime evolution of Newton's gravitational "constant", e.g., from observations of the Moon's orbit [51].We must therefore choose the functions F (X), Z(X) and U (X), such that F (X) = M 2 Pl is approximately constant, e.g., by X obtaining an approximately constant vacuum expectation value (vev).Notice that the field X is not or, at least, does not appear to be canonically normalized, by virtue of the function Z(X) included in its kinetic term.In fact, additional contributions to the kinetic energy of the field X arise through the coupling to the scalar curvature.Moreover, while the matter sector does not contain any direct couplings to the field X, these couplings may be hidden in the mixing between the tensor and scalar degrees of freedom of the extended gravitational sector.The interactions between the field X and the would-be Standard Model fields become manifest once we have dealt with these mixings, and doing so is the main purpose of the package FeynMG. For the Brans-Dicke example above, there are two ways that we can proceed, as will be described in the next subsections: 1. We can make a Weyl rescaling of the metric to remove the non-minimal gravitational coupling of the field X to the Ricci scalar, taking us to the so-called Einstein frame.2. We can continue in the Jordan frame (where the curvature couplings are manifest), by analysing how the metric degrees of freedom mediate interactions between the field X and our would-be Standard Model fields. Before describing these two cases, however, it is important to note that the presence of additional non-minimal gravitational couplings, e.g., R µν ∇ µ ∇ ν X (as arises in the Horndeski class of scalar-tensor theories, where R µν is the Ricci tensor), the Weyl rescaling of the metric (or more generally a disformal transformation [25][26][27] of the metric) may not be able to remove all non-minimal couplings simultaneously.In these cases, we may not be able to transform into an Einstein frame and will have little choice but to continue working with nonminimal interactions with gravity. Weyl transforming into the Einstein frame Our aim is to isolate the new interactions between the matter fields that arise because of the modifications to the gravitational sector.The most common way of doing this is to transform to the Einstein frame.This amounts to a redefinition of the curvature-dependent objects (called a Weyl transformation) such that the resulting gravitational action does not present any non-minimal couplings. For the Lagrangian defined in Eq. ( 9), this transformation will take the following form where gµν , ẽµ a and MPl are the metric, vierbein and Planck mass in the Einstein frame, respectively.To get through the algebra, the following transformations will be useful: where F (X) = ∂F (X)/∂X and all the curvature-dependent quantities with a tilde are built with the Einstein-frame metric gµν or vierbein ẽµ a .Applying these transformations to the Jordan-frame action, we obtain wherein we have recovered a canonical Einstein-Hilbert term for the gravitational action.However, all the couplings of the Brans-Dicke scalar arising from the modification of gravity now appear explicitly in the matter Lagrangian.Notice, in particular, that most of the kinetic energies of the fields are not canonically normalised due to these new couplings. 
To canonically normalise the field X, we must solve the integral where X 0 is taken to be zero for simplicity.For the rest of the fields, we rescale them according to their classical scaling dimension, i.e., where F ( X) ≡ F (X).With this, the Lagrangian takes the following form: where Ũ ( X) ≡ U (X) and F ( X) = ∂ F ( X)/∂ X.Thus, one of the main inconveniences of working in the Einstein frame is that it loses the simplicity of the Lagrangian defined in the Jordan frame.This is because the Weyl transformation and the redefinition of the fields introduces factors of F ( X) throughout the Lagrangian, which, on making a series expansion of F ( X), will introduce infinite towers of operators that involve the SM fields and increasing powers of the scalar field X. At this point, we can already make an important observation: The couplings between the SM fields and the scalar field X arise only through the scalar kinetic terms and terms with dimensionful parameters, i.e., those terms that are not invariant under Weyl transformations.Thus, for the Standard Model (illustrated already by the toy model described here), the modifications to the dynamics from the new scalar field X are, in the Einstein frame, communicated by the Higgs sector, with the squared mass parameter µ 2 of the tree-level Higgs potential playing the dominant role at low momentum exchange.In this way, there are strong parallels between the Brans-Dicke-type scalar-tensor theories and Higgs portal theories (see Ref. [36]). Expanding the fields around their vacuum expectation values will give rise to kinetic and mass mixings between φ and X.Thus, when two fermions interact via their Yukawa coupling and exchange a would-be Higgs boson ( φ) in the t channel, there are two contributions to the central potential: a short-range interaction due to the heavy mode (the Higgs boson) and a long-range interaction due to the light mode (the light, additional scalar boson), see Ref. [36].Such long-range forces arising from the additional scalar fields of extended gravity sectors are often referred to as "fifth forces".In this way, even if the original matter Lagrangian is only minimally coupled to gravity in the Jordan frame, there can be experimentally testable modifications to the force laws that depend on the dynamics of the new scalar field. Given how these new interactions manifest in the Einstein frame, it is instructive to consider how the same modifications to the dynamics manifest in the Jordan frame, without making the Weyl transformation (at least at first).This is the focus of the next subsection. Staying in the Jordan frame We can determine the modifications to the dynamics without performing a Weyl transformation to the Einstein frame and work directly in the Jordan frame.In this frame, new interactions between the fields of the matter sector arise through the gravity sector itself, and we proceed by perturbing the metric around a flat spacetime [52][53][54] in the gravitational weak-field limit. 
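A minimal sketch of the weak-field expansion used in what follows is given below, assuming an unnormalized perturbation h_{\mu\nu} (the paper's own normalization convention is not assumed here), with h ≡ η^{\mu\nu} h_{\mu\nu}.

```latex
\begin{align}
  g_{\mu\nu} &= \eta_{\mu\nu} + h_{\mu\nu}, &
  g^{\mu\nu} &= \eta^{\mu\nu} - h^{\mu\nu} + h^{\mu}{}_{\rho}\, h^{\rho\nu} + \dots, &
  \sqrt{-g} &= 1 + \tfrac{1}{2}\, h + \dots
\end{align}
```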
Expanding the metric up to leading order in perturbations corresponds to where η µν is the usual flat spacetime metric and h µν is the perturbation in the metric, which, once quantized, corresponds to the graviton.The higher order terms in the expansion of g µν are necessary to satisfy g µν g νρ = δ ρ µ to all orders.For the gravitational sector of the Brans-Dicke-like theory [Eq.( 11)], with action we obtain the following expansion up to second order in the fields: It still remains to fix a gauge, and one choice is the harmonic gauge, which satisfies the following condition: This can be introduced into the Lagrangian through the term where Γ µ = g αβ Γ µ αβ .With this gauge choice, linearization of Einstein-Hilbert gravity leads to the familiar Fierz-Pauli Lagrangian [54], given by When working with Brans-Dicke theories in the Jordan frame, it is convenient to use a different gauge: one that maps to the harmonic gauge when performing the Weyl transformation to the Einstein frame. 1 This can be achieved by redefining the covariant derivative such that its action on a vector Y ν is as follows: where This modified covariant derivative will map to ∇ µ when going to the Einstein frame and satisfies the identity D ρ (F (X)g µν ) = 0 while preserving diffeomorphism invariance in the action, as shown in Ref. [37,[55][56][57].We can then define a scalar-harmonic gauge condition in terms of the new covariant derivative, namely This can be introduced into the Lagrangian as Expanding this gauge fixing term around a Minkowski background and adding it to the linearized gravitational sector from Eq. ( 20), we obtain Herein, we have recovered the usual kinetic energy terms of the graviton, as appear in the Fierz-Pauli Lagrangian (23), with the exception that non-minimal couplings to the field X appear through the overall factor of F (X). Notice that the Lagrangian (28) contains two additional terms relative to the Fierz-Pauli Lagrangian (23).The first contributes to the kinetic energy of the field X, which will have to be canonically normalized, and the second is a kinetic interaction between X and the trace of the graviton h.As we will show later, it this kinetic mixing that leads to additional interactions between the matter fields.On including the matter sector from the original action from Eq. ( 9), we get to the following Lagrangian after the linearization up to first order in 1/ F (X) where graviton self-interactions have been ignored and T µν is the energy-momentum tensor of the matter sector.The kinetic energy of the X field can be canonically normalized by defining where X 0 is taken to be zero for simplicity.Doing so leads to the Lagrangian where F (χ) ≡ F (X), F (χ) = ∂ F (χ)/∂χ and Û (χ) ≡ U (X).Now, we have only the graviton left to canonically normalise, since it is still non-minimally coupled to the function F (χ).However, as noted previously, the potential Û (χ) must lead to a non-vanishing vacuum expectation value for χ at late times so that the theory mimics Einstein gravity. 2 With this in mind, we shift χ → χ + v χ to obtain Figure 1: Series of diagrams contributing to the fifth force, and arising from the kinetic mixing between the graviton hµν and the scalar field χ.The ellipsis represents the series summing over all insertions of the kinetic mixing. where higher-order terms in the interactions between χ and h µν have been omitted in the ellipsis. 
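For orientation, the graviton-scalar de-mixing invoked below (Eq. (33)) can be sketched schematically as follows; the coefficient α is left symbolic and is an assumption fixed only by the requirement that the kinetic mixing cancel, not a value taken from the paper.

```latex
\begin{equation}
  h_{\mu\nu} \;=\; \tilde{h}_{\mu\nu} \;+\; \alpha\, \eta_{\mu\nu}\, \sigma ,
  \qquad
  \alpha \;\sim\; \frac{F'(v_\chi)}{M_{\mathrm{Pl}}} ,
\end{equation}
```

with α chosen so that the cross term between derivatives of the trace of the graviton and of σ vanishes, after which σ couples to matter through the trace of the energy-momentum tensor, as in Eq. (34).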
The modification of gravity leads to a kinetic mixing between the trace of the graviton h and the χ field; the last term in the first line of Eq. (32).The example of the fifth force exchange described in the previous section then manifests in the Jordan frame through this mixing, as shown in Figure 1 (see Ref. [37]). We can remove this mixing by the following transformation of the graviton and χ field: where F (v χ ) = M 2 Pl has been substituted and σ corresponds to the canonically normalized scalar field.This amounts to a perturbative implementation of the Weyl transformation, as is clear when one considers the resulting Lagrangian where T µ µ is the trace of the energy-momentum tensor.The fifth force arising from the final term in Eq. (34) will depend on the trace of the energy-momentum tensor of the interacting particles, leading to at most derivative interactions with σ for scale-invariant sectors [58]. We have seen that working in the Jordan frame requires us to linearize the gravitational sector and to diagonalize the fields, while in the Einstein frame, we had to perform the Weyl transformation and various rescalings of the matter fields, losing the simplicity of the potentials in the process.Whichever approach we take, the overall message of this section is not a discussion on which frame is best for calculations, as it is a matter of preference, but the fact that deriving Feynman rules for scalar-tensor theories is a tedious and time-consuming task, even for the simplest models. This begs for a tool that helps us automate this process.In the rest of this paper, we will introduce the Mathematica package FeynMG within the FeynRules environment, which can efficiently perform manipulations on scalar-tensor theories of the types described in this section. Implementation FeynMG implements calculations of the type described in Section 2. The only necessary input is a model file compatible with FeynRules containing the matter Lagrangian and the description of all the existing fields and parameters.The user can then supplement this Lagrangian with their chosen scalar-tensor theory. Scalar-tensor theories will generally give rise to both mass and kinetic mixings between fields.While FeynRules can deal with mass mixing if pre-defined in the model file, it cannot deal with kinetic mixing or cases where the form of the mass mixing is not known a priori.This is because FeynRules will ignore terms higher than quadratic order and will assume that all fields are canonically normalized.The scope of FeynMG is to linearize gravity and perform the necessary redefinitions to the fields such that it can be consistently used by FeynRules and all compatible packages. We aim to make the code as easy as possible to use without losing the generality in the model files and desired gravitational actions.For example, for the input Lagrangian, it is possible to use an action defined in flat spacetime (i.e., reuse a FeynRules model file without modifying it).This is possible thanks to the function InsertCurv, which for every pair of contracted indices will insert a metric g µν or vierbein e µa , as appropriate, and promote partial derivatives to covariant derivatives. 
Once all the minimal curvature dependencies are inserted into the Lagrangian, we need to append a gravitational action, wherein, e.g., the Ricci scalar can be specified using RScalar (see Appendix C.1 for the list of defined curvature objects).As is the case for FeynRules, it is necessary to identify fields and parameters.These attributes can be assigned to variables by using the functions AddScalar[] and AddParameter[], respectively, allowing complete freedom when creating the gravitational sector, and any number of new scalar degrees of freedom and parameters to be defined.In principle, the package should be able to deal with any gravitational sector, but it becomes more complicated the further away we go from Brans-Dicke theories.The effective Planck mass can be extracted at any point in the calculation by using the function GiveMpl.Moreover, using InsertMpl will calculate the effective M Pl in the action and substitute it into the expression. As shown previously, in the particular case of Brans-Dicke gravity, we can perform a Weyl transformation such that the gravitational sector is of Einstein-Hilbert form and the matter action is instead dressed with additional scalar interactions.This is implemented in FeynMG by the function ToEinsteinFrame.However, more general scalar-tensor theories may not have an Einstein frame, forcing us to stay in the Jordan frame and proceed by linearizing gravity.The latter is implemented by the function LinearizeGravity, where the gravitational sector will be expanded up to second order, generating the kinetic energy for the graviton, and the matter sector will be expanded up to linear order in the interactions with the metric perturbation h µν .Moreover, the Jacobian √ −g will be automatically inserted, unless the option {Jacobian->0ff} is provided. As described in the previous section, in the case of Brans-Dicke-like theories, it can be convenient to use the scalar-harmonic gauge from Eq. (27).By specifying the option {SHGauge->0n}, LinearizeGravity will determine the scalar-harmonic gauge fixing term and append it to the Lagrangian, depending on the specific coupling function F (X). 4 .This gauge choice will likely leave CMod terms in the linearized Lagrangian, corresponding to the modification of the Christoffel symbols Notice that the F (X) 2F (X) prefactor will have to be expanded in terms of X.Once this expansion is truncated at some order in X, we can no longer make a nonlinear redefinition of the X field (such as X → X 2 ), since the ignored higherorder terms will give contributions at lower orders.To avoid this problem, CMod won't be expanded until all the kinetic energies of the scalar fields have been canonicalized. When dealing with tensor algebra, we are used to working with Einstein's index notation, for which the following holds: A µ A µ = A ρ A ρ .However, Mathematica will treat both terms A µ A µ and A ρ A ρ as distinct, since their indices are not represented by the same variable, leading to an overly complicated and long expression filled with repeated terms.The function IndexSimplify deals with this problem by replacing indices term by term from a user-supplied set of indices, so that the expression can be simplified using Mathematica's native functionality. 
From here, which frame we use is unimportant, since the package has all the tools to leave the Lagrangian ready to be readable by FeynRules.If we stay in the Jordan frame (as may be necessary for theories that do not have an Einstein frame), one first needs to normalize the fields canonically.For scalar fields, the canonical normalization is implemented by the function CanonScalar, which will find and normalize the lowest-order derivative term of every field.In the case where the lowest order is already very complicated, one can use the in-built Mathematica function Series to perform a series expansion. Similarly, we also need to normalize the graviton kinetic energy canonically.For that, depending upon the gravitational action, we might need to expand the fields around their vacuum expectation values, using VevExpand, which first calculates all the possible values for the vevs, and then shifts all the fields around the user's chosen branch of solutions.Once the graviton kinetic energy has a constant prefactor, we can then use CanonGravity, leaving all the fields canonically normalized with derivative interactions.As mentioned before, it will be at this point where all the CMod terms arising from the modified covariant derivatives will be expanded to make manifest their dependence on the additional scalar degree of freedom arising from the extended gravitational sector. The only thing left to do is to deal with any mass or kinetic mixings that have arisen between any of the metric and scalar degrees of freedom.As mentioned previously, FeynRules assumes that all fields are canonical and only works with terms higher than quadratic order, so any mixing terms in the Lagrangian would be ignored in the outcome.To deal with this, MassDiagMG or KineticDiagMG diagonalizes the scalar field masses or kinetic energies, respectively.When proceeding in the Jordan frame, as we saw in the last section, the dominant modifications to the dynamics arise through kinetic mixing between the additional scalar field and the trace of the graviton (cf., e.g., Figure 1).The function GravKinMixing will calculate and substitute into the Lagrangian the field redefinitions that diagonalizes this kinetic mixing the equivalent of Eq. (33).With this, the Lagrangian should be in a form ready to be used by FeynRules. Linearizing gravity and manipulating the Lagrangian into a form amenable to FeynRules can take significant computing time for extensive or complicated models.So that this process does not need to be repeated each time, the user can use the function OutputModelMG to create a new model file from the final form of the Lagrangian produced by FeynMG, which includes all the information about the redefined fields, the parameters of the extended model and the effective Lagrangian itself.This model file can then be used directly in FeynRules without the need to rerun the manipulations implemented by FeynMG. To summarize, the package FeynMG provides a set of tools to help the user to upgrade the original FeynRules model file to one that includes the degrees of freedom of a canonical or extended gravitational sector. Usage In this section, we provide the instructions for loading FeynMG and using it to perform the manipulations described in the preceding sections.We will work in the Jordan Frame, given that the same tools can be used for the Einstein frame.In Appendix C, we provide a summary of the tools provided by FeynMG. 
Installation FeynMG has dependencies on FeynRules, so both packages need to be loaded into Mathematica to make use of FeynMG.This can be done by running The next step is to load a model file that is compatible with FeynRules using their function LoadModel[] (for an extensive description on how to build these files, see Ref. [1]).As mentioned previously, this model file does not need to include gravity in the defined fields or Lagrangians; these can be appended through FeynMG, as described earlier in Section 3. Defining a gravitational action and transforming to the Einstein Frame Throughout this section, we will work with the same Lagrangian from Eq. ( 9), whose matter sector is defined via Note that the last term of the first line corresponds to a generic covariant gauge for the U(1) gauge field.The first thing to do is to introduce the minimal gravitational couplings of this matter Lagrangian.This amounts to inserting metrics or vierbeins, as appropriate, for each pair of contracted indies, and promoting all partial derivatives to covariant ones.To implement this in FeynMG, we run VUp[mu,v3] γ c2 .γd1 .γv3 i1,j1 + [13→19] ..... Since the expressions can be long, we will show only the main sections of the output that motivate the next step in the calculation and represent the rest of the terms in ellipsis.To allow the reader to connect the output presented explicitly with the fill output of the code, the positions of the first and last terms omitted are specified over each ellipsis; the number in brackets at the end of the output represents the total number of terms in the full expression, i.e., 19 terms in Out [2]. For this example, we will introduce a Brans-Dicke gravitational sector of the form of Eq. ( 11), such that where the χ field should not be confused with the one defined in Eq. (30).Before defining the gravitational part of the Lagrangian within FeynMG, we need to give appropriate attributes to the additional field χ and the additional parameters ({ω, µ χ , λ χ }).In principle, these can be directly added by updating the model file itself (which should be done before loading it into FeynRules).Alternatively, the FeynMG functions AddScalar[] and AddParameter[], 5 allow the new scalar fields and parameters to be defined after the model file has been loaded into FeynRules.For the specific case of Eq. 37, we need to execute the following: AddScalar[chi]; AddParameter[muC]; AddParameter[lamC]; AddParameter[w]; The full Lagrangian can then be defined via We note that the metric indices are raised by virtue of the specification Index[LorentzUp,mu] (for more information see Appendix C.1).Notice that we have not included the √ −g factor in the Lagrangian; this is because, for simplicity, FeynMG always assumes this term to be present. In the case of Brans-Dicke-type scalar-tensor theories, it may be convenient to transform to the Einstein frame (see Section 2.1).This is achieved in FeynMG by executing The output agrees with the result from Eq. ( 14), including the last term, which comes from the fermion spin-connection [Eq.( 13)].As mentioned before, the Jacobian factor √ −g is assumed in the calculation (although it can be omitted by specifying the option {Jacobian→ → →Off} (see Appendix C.2 for further details).The gravitational sector is now of canonical Einstein-Hilbert form, and we can take the flat spacetime (Minkowski) limit by calling 11→20] ..... 
[20] wherein the couplings of the additional scalar field to the matter fields are manifest.The remaining fields are, however, not canonically normalized, and further manipulations are needed in order to pass this Lagrangian back to FeynRules.These are the focus of the next subsection. Brans-Dicke theory for FeynRules in the Jordan frame The calculation in the Jordan frame repeats the same steps as in the last subsection up to and including In [4]: We first need to load a model file.We then insert the curvature dependence using InsertCurv[] with the Lagrangian as the argument and provide a gravitational sector for the theory.The next step is to expand the metric about a flat spacetime background.This can be done by using where LJordan was defined previously in In [4], and the provided options specify that the scalar-harmonic gauge from Eq. ( 27) is used and all covariant derivatives are updated to the modified form from Eq. ( 24).As mentioned previously, the Jacobian √ −g has been included when linearizing gravity by default, but it can be omitted using {Jacobian→ → →Off} (see Appendix C.3). As we can see in the second line, many of the terms are repeated, since Mathematica does not use Einstein's index notation, for which two repeated indices are summed over.As a result, various terms in the output will be equivalent, differing only in their index labels (e.g., A µ A µ = A ρ A ρ ).In order to force Mathematica to combine these terms, we have to use the same set of indices for all the terms.This problem is solved by the function IndexSimplify: The optional argument {mu,nu,rho} allows the user to choose a set of n indices from which the first n replacements will be chosen.The output of E2 contains significantly fewer terms than E1.Moreover, E2 already contains the expected graviton kinetic energy and its kinetic mixing with the scalar field chi, as in Eq. ( 29), thanks to the specification of the option {SHGauge->0n} in LinearizeGravity that implements the scalar-harmonic gauge and associated covariant derivatives from Eq. (35), which are convenient for the case of pure Brans-Dicke-type theories.This choice has led to the CMod[] terms in the Lagrangian, which needs to be series expanded around the chi field.However, the truncation to first order in chi does not commute with non-linear field redefinitions, so the CMod[] term will only be expanded once all the fields have their canonical kinetic energy. We can check that the kinetic energies appearing in E2 are not canonically normalized by running There are one or more non-canonical kinetic energies.Use CanonScalar. As the output indicates, we can execute In [10] The kinetic energies of the scalar fields are now canonically normalized, leading to the expansion of every CMod[] (where present).This expansion is performed in terms of the scalar field χ. At this stage, the kinetic energy of the graviton is composed of multiple terms.These could be simplified by means of Mathematica's FullSimplify command, but this will often prove time-consuming, and it is not necessary, except for aesthetic reasons.From here, the only thing left to do is to canonically normalize the graviton kinetic energy.To this end, we need to shift the fields around their vevs, so the graviton kinetic energy acquires a constant prefactor.This can be achieved by running In [11] , [8→10] ..... 
[10] Out Note that this function shows all the extrema of the potential.Since there may be multiple minima, the function allows the user to choose which vev (or set of vevs) will be used by a dialogue window prompt.(In this case, we choose option 7.) Notice that the v chi dependence already present from the expansion of the CMod functions have also been replaced by the user-selected vev in E3. Once we have a constant prefactor to the graviton kinetic energy, we can canonically normalize it using In [13] We have recovered the usual canonically normalized Fierz-Pauli kinetic energy terms from Eq. (23).We also see the expected kinetic mixing between the scalar field and the graviton, which can be identified by executing Notice that a Yukawa coupling between the fermion fields and the chi field has appeared in the fourth line, as expected.However, a closer look at this term shows that the coupling constant is four times larger than the result m ψ / 2M 2 Pl (2 + 3w) from Refs.[36,37].This is because of the last term in the expression, which will also contribute to the tree-level interactions between the fermion and the scalar field, leading then to the same results as in Refs.[36,37]. At this point, all the interactions up to second order in the fields have been canonically normalized and diagonalized, so there are no kinetic or mass mixings.Therefore, the updated Lagrangian for the matter fields with the additional scalar field couplings is now in a form that can be processed further by FeynRules and compatible packages for phenomenological studies. Outputting a model file FeynMG allows the user to create a new model file with the Lagrangian of their choice, in which all the introduced particles (such as the graviton and additional scalar field) and new parameters (such as M Pl ) will be incorporated and properly defined. 6This can be done by running In [18] where OldModelFile is the name of the original FeynRules model file that the user loaded, NewModelFile is the chosen name of the new model file, and Lagrangian is the final Lagrangian, as prepared with FeynMG. The upgraded model file can be read directly into FeynRules without needing to load or rerun FeynMG. Conclusions Modifying the gravitational sector of a Lagrangian can lead to new interactions between matter fields that need not be Planck-suppressed, but making these interactions manifest by hand on a model-by-model basis is tedious and time-consuming.In this paper, we have presented the Mathematica package FeynMG, which can manipulate scalar-tensor theories of gravity into a format that can be processed by FeynRules. Even for the the simplest toy models, it is necessary to perform transformations of the metric or linearize the gravitational action, redefine multiple fields, expand around the vacuum expectation values of the scalar fields and diagonalize mass and/or kinetic mixings, in particular those between additional scalar field and the trace of the graviton.FeynMG provides a set of functions that allow the user to recycle existing FeynRules model files that does not contain gravity and to implement these various steps. 
Once the user arrives at a canonically normalized Lagrangian, in which all kinetic and mass mixings have been diagonalized, it can be further processed by FeynRules and compatible packages to allow phenomenological studies of scalar-tensor theories of gravity. Moreover, instead of deriving the same Lagrangian every time one uses Mathematica, FeynMG allows the output of a new model file with all the updated fields, parameters and chosen Lagrangian. A summary list of functions can be found in Appendix C. In this paper, we have described the implementation of a minimal example in FeynMG: Brans-Dicke theory coupled to QED plus a real scalar prototype of the Standard Model Higgs. The inbuilt functions, however, may be used to manipulate more complicated gravitational sectors, such as multi-field scalar-tensor theories or Horndeski theories, and additional functionality is being developed for future release.

where we have defined F(χ) ≡ F(X) and F′(χ) ≡ ∂F(χ)/∂χ. As described in Section 2.2, we now expand the scalar field around its vev, so that the graviton can also be canonically normalized [see Eq. (32), where the kinetic mixing between the graviton and χ is manifest]. At this point, ∆λµν takes the corresponding modified form. The kinetic mixing between the graviton and the scalar can be removed by means of the transformations in Eq. (33). With this, we obtain Eq. (34), where F(vχ) = M_Pl^2 has been substituted and σ corresponds to the canonically normalized additional scalar field. We can now expand the denominator in the third line up to first order in 1/M_Pl, which shows a perfect cancellation of the couplings to the additional scalar. Thus, after diagonalizing, the covariant derivative reduces to nothing but the standard covariant derivative ∇µAν from Einstein gravity. This is as we would expect, since the diagonalization is essentially a perturbative implementation of the Weyl transformation to the Einstein frame.

We can obtain the same result without diagonalizing, by instead summing over all insertions of the graviton-scalar kinetic mixing. Our calculations have shown that two series of diagrams cancel against each other, where the ellipsis contains the sum over the infinite series of insertions of mixings (with zero kinetic mixing also included for the diagram on the right). Similarly, from the diagrams above, we can calculate the incoming graviton amplitude by inserting an additional kinetic mixing to the left of the χ propagators. We thus find that all the diagrams containing kinetic mixings cancel one another, leaving just the diagram with no kinetic mixings. Diagrammatically, the sum of the two series collapses to a single diagram, which corresponds to the Feynman diagram for the coupling between the gauge field and gravity through the usual Christoffel symbols.

In either case, we see that the role of the additional terms arising from C^ρ_µν in the updated covariant derivatives is to maintain the Weyl invariance of the Maxwell Lagrangian (at dimension four) once gauge fixing terms are included in the Jordan frame.

Appendix B.
Diagonalizing graviton-scalar kinetic mixing

A convenient way to eliminate all of the kinetic mixings is to find the matrix transformation that diagonalizes the kinetic terms. However, constructing a kinetic mixing matrix between 2-forms (the graviton) and scalar fields is not straightforward. In this appendix, we describe a method for determining the transformation and diagonalizing the kinetic terms, which is implemented in FeynMG in the function GravKinMixing[].

The main obstacle is that the graviton kinetic term contains both h_µν and its trace h. For example, we might have a Lagrangian of the form (B.1), in which both the graviton and the scalar field have already been canonically normalized but there remains a kinetic mixing proportional to C (which, for the calculation of Section 2.2, corresponds to C = F(v_χ)/4). Since the graviton has two kinetic terms, it is unclear how to construct a matrix that encapsulates all the kinetic couplings between distinct fields. We proceed by redefining h_µν so that its kinetic energy contains only one term. To do so, we perform an analytic continuation of the graviton into the complex plane, redefining the graviton by subtracting a complex multiple of its trace times η_µν; the resulting Lagrangian (B.3) consists of a single graviton kinetic term ∂_ρ h̃_µν ∂^ρ h̃^µν, the mixing term C i ∂_ρ h̃ ∂^ρ χ, and the χ kinetic term.

The kinetic matrix K, Eq. (B.4), is then defined straightforwardly in terms of the fields collected into the vector

F_ρµν = ( ∂_ρ h̃_µν , η_µν ∂_ρ χ )^T , (B.5)

such that the Lagrangian (B.1) can be written in the form L = (F_ρµν)^T K F_ρµν, where T denotes matrix transposition. We want a transformation W of the matrix K such that W^T K W is diagonal. The transformations of the fields then follow, since

(F_ρµν)^T K F_ρµν = (W W^{-1} F_ρµν)^T K (W W^{-1} F_ρµν) = (F̃_ρµν)^T W^T K W F̃_ρµν , (B.7)

so that, by defining F̃_ρµν = W^{-1} F_ρµν, we obtain a Lagrangian free of kinetic mixings. For the generic kinetic mixing, with K defined by Eq. (B.4), the transformation matrix W can be written explicitly. The fields transform through F_ρµν = W F̃_ρµν, and therefore h̃_µν → h̃_µν − (iC/√(1 + 4C)) ⋯

The new model file will contain the same defined fields and parameters as the original file, with the addition of all the new particles and parameters created using AddScalar and AddParameter, together with the Lagrangian (L), the graviton (h_µν) and the Planck mass (M_Pl). By specifying the option {UpdateMass→True}, the masses of all scalar fields will be updated. Herein, gUp[a,b] and gDown[a,b] are upper- and lower-indexed metrics, respectively, VUp[a,b] and VDown[a,b] are upper- and lower-indexed vierbeins, respectively, and D Grav a [] is the gravitational covariant derivative.

- Adds a new parameter named P into the loaded set of parameters, such that it can be recognized by FeynRules. Within the options (Opts), the user can choose its value by including {Value→X}.
- Creates a new model file named NewF from an original FeynRules model file OldF.
10,435.6
2022-11-25T00:00:00.000
[ "Physics" ]
Host-specialized fibrinogen-binding by a bacterial surface protein promotes biofilm formation and innate immune evasion

Fibrinogen is an essential part of the blood coagulation cascade and a major component of the extracellular matrix in mammals. The interface between fibrinogen and bacterial pathogens is an important determinant of the outcome of infection. Here, we demonstrate that a canine host-restricted skin pathogen, Staphylococcus pseudintermedius, produces a cell wall-associated protein (SpsL) that has evolved the capacity for high strength binding to canine fibrinogen, with reduced binding to fibrinogen of other mammalian species including humans. Binding occurs via the surface-expressed N2N3 subdomains of the SpsL A-domain to multiple sites in the fibrinogen α-chain C-domain by a mechanism analogous to the classical dock, lock, and latch binding model. Host-specific binding is dependent on a tandem repeat region of the fibrinogen α-chain, a region highly divergent between mammals. Of note, we discovered that the tandem repeat region is also polymorphic in different canine breeds, suggesting a potential influence on canine host susceptibility to S. pseudintermedius infection. Importantly, the strong host-specific binding of SpsL to canine fibrinogen is essential for bacterial aggregation and biofilm formation, and promotes resistance to neutrophil phagocytosis, suggesting a key role for the interaction during pathogenesis. Taken together, we have dissected a bacterial surface protein-ligand interaction resulting from the co-evolution of host and pathogen that promotes host-specific innate immune evasion and may contribute to the host-restricted ecology of S. pseudintermedius.

Introduction

Many bacteria evolve strict mutualistic relationships with their host species, with limited capacity to colonize and cause disease in other hosts. In contrast, other bacteria have the ability to expand into new host species, leading to the emergence of new pathogenic clones. Our understanding of the bacterial and host factors that underpin pathogen-host ecology is very limited. However, bacterial surface proteins are central mediators of host colonization and tissue tropism and, as such, are likely to play a critical role in determining host ecology [1,2]. For example, the choline-binding protein A of the major human pathogen Streptococcus pneumoniae binds to the polymeric immunoglobulin receptor, secretory component, secretory IgA, and factor H of complement from humans but not from other animal species tested [1]. In addition, the human host-restricted Streptococcus pyogenes expresses surface-anchored M protein that binds exclusively to human CD46, mediating binding and invasion of epithelial cells. Adaptive diversification of bacterial surface proteins can also have a major impact on tissue tropism and disease manifestation. For example, uropathogenic Escherichia coli virulence has arisen due to mutations in the fimbrial adhesin FimH, promoting high affinity binding to the urinary epithelium [3]. Similarly, a single non-synonymous mutation in a fibronectin-binding autolysin of Staphylococcus saprophyticus, associated with a selective sweep, has been linked to the pathogenesis of urinary tract infection in humans [4]. Additionally, single amino acid substitutions in the fibronectin-binding protein A (FnBPA) of Staphylococcus aureus are associated with cardiac device infections and bacteremia in humans due to increased binding affinity for fibronectin [5][6][7].
Fibrinogen is a highly abundant protein in blood and is required for blood coagulation, thrombosis and host immune defense [8]. This large glycoprotein is composed of three chains, termed the α-, β-, and γ-chains, that form a dimer of trimers [8]. During coagulation, thrombin cleaves the fibrinogen α- and β-chains allowing fibrin formation, with the γ-chain binding directly to platelets to produce the blood clot [8]. Bacterial pathogens have evolved many mechanisms to bind to host fibrinogen to disrupt blood coagulation as well as promote host cell adherence, immune evasion and abscess formation [9,10]. The importance of this interaction is highlighted by the large number of fibrinogen-binding proteins of bacteria that have been identified, with S. aureus encoding at least 9 fibrinogen-binding proteins [9][10][11][12]. It is unclear if each of these proteins confers an exclusive function via distinct fibrinogen-binding sites, or if convergent evolution is driving a high redundancy for fibrinogen-binding. In S. aureus there are fibrinogen-binding proteins that exhibit host-specificity and those that exhibit a broader host tropism. In the case of clumping factor B (ClfB), a host-restrictive fibrinogen-binding phenotype is observed due to the interaction with a sequence unique to human fibrinogen, with very limited binding to bovine and ovine fibrinogen (Fig 1A).

A S. pseudintermedius ED99 mutant deficient in expression of the fibrinogen-binding protein SpsD (ED99ΔspsD), cultured to early-exponential growth phase, demonstrated binding to canine, ovine and human fibrinogen that was equivalent to that of wild-type ED99 but reduced binding to bovine fibrinogen (p<0.001) (Fig 1B). In contrast, a mutant deficient in SpsL (ED99ΔspsL), cultured to mid-exponential growth phase, exhibited highly reduced binding to fibrinogen from all host species (p<0.001) (Fig 1C), with complete loss of fibrinogen-binding by a mutant deficient in both SpsL and SpsD (ED99ΔspsLΔspsD) at both early-exponential (Fig 1B) and mid-exponential growth phases (Fig 1C). Re-introduction of the deleted spsL gene restored fibrinogen-binding, as did complementation of ED99ΔspsLΔspsD with a plasmid (pALC2073::spsL) encoding SpsL (Fig 1C and 1D). In summary, these data indicate that S. pseudintermedius ED99 has host-specific interactions with fibrinogen that are primarily mediated by SpsL. However, these adherence assays do not allow quantification of the strength of the binding interaction between SpsL and fibrinogen.

SpsL demonstrates an enhanced binding strength for canine fibrinogen

In order to compare the molecular forces driving the binding of SpsL to canine and human fibrinogen, we used atomic force microscopy (AFM) [24,25].

Fig 1. SpsL exhibits host-specific binding to fibrinogen. Bacterial adherence assays using crystal violet staining were used to quantify binding to fibrinogen from 4 host species: bovine, canine, human, and ovine. (A) Adherence of S. pseudintermedius ED99 wild type (WT) when cultured to mid-exponential growth phase. Data points represent the mean ± SD (n = 3). Differences in binding to fibrinogen were analyzed at 10 μg ml-1 fibrinogen (p ≤ 0.001, two-way ANOVA). (B) Adherence of WT, ED99ΔspsD, and ED99ΔspsLΔspsD to 20 μg ml-1 fibrinogen when cultured to early-exponential growth phase. (C) Adherence of WT, ED99ΔspsL, ED99ΔspsLΔspsD, and ED99ΔspsL Rep to 20 μg ml-1 fibrinogen when cultured to mid-exponential growth phase. Error bars represent SD (n = 9).
Differences in fibrinogen-binding were analyzed (p ≤ 0.001, t-test). (D) Adherence of ED99ΔspsLΔspsD expressing SpsL. Data points represent the mean ± SD (n = 9). Differences in binding to fibrinogen from multiple hosts were analyzed at 10 μg ml-1 (p ≤ 0.001, two-way ANOVA).

Firstly, for single-cell force spectroscopy (SCFS, Fig 2A), single bacteria were attached onto AFM cantilevers, and force-distance curves were collected between the cell probes and fibrinogen-coated surfaces. The adhesion forces obtained for three representative cells interacting with either human or canine fibrinogen are presented (Fig 2A). While there was substantial variation between cells, the binding probability was always higher for canine than for human fibrinogen (85% ± 12 vs 56% ± 28, from a total of n = 1,139 and 1,176 curves). Also, binding forces were stronger for canine fibrinogen (355 ± 354 pN from n = 228 adhesive curves, 2,077 ± 1,157 pN (n = 388), and 1,024 ± 427 pN (n = 352) for cell #1, cell #2 and cell #3, respectively) than for human fibrinogen (149 ± 84 pN (n = 85), 744 ± 467 pN (n = 362), and 541 ± 266 pN (n = 216)). Next, we used single-molecule force spectroscopy (SMFS) with fibrinogen-coated AFM tips to quantify the strength of single bonds (Fig 2B and 2C). Canine fibrinogen (Fig 2B) always showed very large forces (1,237 ± 754 pN from n = 258 adhesive curves, 1,554 ± 828 pN (n = 308), and 2,630 ± 1,393 pN (n = 137) for cell #1, cell #2 and cell #3, respectively). Of note, these high forces are in the range of values reported previously for the high-affinity "dock, lock and latch" binding of SdrG to fibrinogen [26]. For human fibrinogen, these strong forces were also observed but much less frequently (Fig 2C). Taken together, these data are consistent with a high-affinity dock, lock and latch-based mechanism for the binding of SpsL to canine fibrinogen.

Fig 2. Atomic force microscopy analysis demonstrates increased binding strength of SpsL for canine fibrinogen. (A) Single-cell force spectroscopy (SCFS) measured the forces between single bacterial cells expressing SpsL A+SD and surfaces coated with either canine (blue) or human (purple) fibrinogen. Shown here are the adhesion force histograms with representative force curves obtained by recording force-distance curves in PBS for 3 different cells. (B, C) Single-molecule force spectroscopy (SMFS) captured the localization and binding strength of single adhesins on living bacteria. Adhesion force histograms were obtained by recording force curves in PBS across the surface of single bacteria with either canine (B) or human (C) fibrinogen tips. The insets show adhesion force maps (scale bars: 100 nm; color scales: 3,000 pN) and representative force curves. Each bright pixel represents the detection of a single protein; no binding is represented by a black pixel and strong binding by a white pixel. All curves were obtained using a contact time of 100 ms, a maximum applied force of 250 pN, and approach and retraction speeds of 1,000 nm s-1.
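To illustrate how summary statistics like those above (binding probability, mean ± SD of adhesive forces) are computed from a set of retraction curves, here is a minimal numpy sketch; the force values and the zero-force convention for non-adhesive curves are assumptions for illustration, not the authors' analysis pipeline:

```python
import numpy as np

# Hypothetical peak adhesion forces (pN) from retraction curves; 0 = no binding.
forces = np.array([0.0, 310.0, 0.0, 2150.0, 980.0, 0.0, 1460.0, 520.0])

adhesive = forces[forces > 0]                      # curves with a rupture event
binding_probability = adhesive.size / forces.size
print(f"binding probability: {100 * binding_probability:.0f}%")
print(f"adhesion force: {adhesive.mean():.0f} +/- {adhesive.std(ddof=1):.0f} pN "
      f"(n = {adhesive.size} adhesive curves)")
```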
The N2N3 subdomains of SpsL expressed on the bacterial cell surface are required for fibrinogen-binding

To investigate the region of SpsL required for fibrinogen-binding, an array of recombinant truncates of SpsL was generated and purified from E. coli as described in the Supplemental Methods section (S1A Fig). From structural and functional studies of related staphylococcal surface proteins, we predicted that the N2N3 subdomains of SpsL would be sufficient for fibrinogen-binding [27][28][29]. However, in ELISA-like assays, none of the purified recombinant truncates of SpsL exhibited binding to fibrinogen, with all peptides tested demonstrating binding equivalent to the negative control (the fibronectin-binding domain of SpsD) (S1B-S1D Fig). In contrast, fibronectin-binding could be detected for the SpsL recombinant protein construct containing a single fibronectin-binding repeat (S1E Fig) [23]. Similarly, full-length or truncated SpsL A-domain fragments expressed in and purified from S. pseudintermedius ED99ΔspsLΔspsD supernatant did not adhere to canine fibrinogen, in contrast to a positive control of recombinant SpsD N2N3 purified from E. coli (S1F Fig). However, heterologous overexpression of SpsL on the surface of a fibrinogen-binding deficient S. aureus strain (SH1000ΔclfAΔclfBΔfnbAΔfnbB) [30] promoted high levels of adherence to canine fibrinogen (S1G Fig). Taken together, these data suggest that SpsL requires bacterial cell surface attachment to mediate fibrinogen-binding. Accordingly, subsequent experiments employed SpsL constructs expressed on the surface of S. pseudintermedius ED99.

SpsL fragments representing the A-domain and N2N3 subdomains were expressed on the surface of the S. pseudintermedius fibrinogen-binding deficient mutant ED99ΔspsLΔspsD (Fig 3A). As reported for other bacterial cell wall-associated proteins, we considered that the C-terminal repeat region may be required to project the fibrinogen-binding domain from the cell surface, and that a small region of the N1 subdomain may be required for secretion and cell surface expression [31,32]. To address this issue, chimeric proteins were generated that replace the SpsL fibronectin-binding repeats with the ClfA SD repeats (which do not exhibit any known ligand-binding activity) [33] and that contain a 21 amino acid region of the N1 subdomain (N1 21; residues 181-201, VSKEENTQVMQSPQDVEQHVG) (Fig 3A). Analysis of the binding of these constructs to immobilized fibrinogen and expression analysis by Western blot indicated the requirement for the N1 21 peptide for cell surface expression, with the N2N3+SD and N2N3 constructs not expressed on the cell surface (Fig 3B and 3C). Importantly, both the chimeric A-domain (A+SD) and N2N3-subdomain (N1 21+N2N3+SD) proteins exhibited binding to canine fibrinogen equivalent to that of the full-length SpsL protein (Fig 3B). The binding of the chimeric N1 21+N2N3+SD protein to fibrinogen from multiple host species indicates that the SpsL N2N3 subdomains are sufficient for host-specific fibrinogen-binding, with the 21 amino acids of the N1 subdomain required for cell surface expression (Fig 3D), suggesting that SpsL mediates ligand-binding in a manner analogous to the dock, lock and latch binding mechanism described for other staphylococcal cell wall-associated proteins [34]. To investigate this further, we modelled the structure of the SpsL N2N3 subdomains, based on the crystal structure of ClfA (pdb 1N67) [35]. The structural model predicted classical DE-variant IgG folds made up of β-sheets typical of staphylococcal fibrinogen-binding proteins (S2A Fig). From this model we identified a putative latch region, 502 NSASGSG 508, required for the dock, lock and latch binding mechanism (S2A Fig). Deletion of this putative latch region in a surface-expressed SpsL construct had no effect on surface expression or fibronectin-binding but abrogated adherence to both canine and human fibrinogen (p<0.001) (Fig 3E and S2B Fig). Together with the AFM data, these results suggest that the SpsL N2N3 subdomains expressed on the bacterial surface mediate fibrinogen-binding via a mechanism analogous to the dock, lock, and latch binding model.

Fig 3. (D) Adherence of the chimeric N1 21+N2N3+SD construct to fibrinogen from bovine, canine, human, and ovine hosts using crystal violet staining. Data points represent the mean ± SD (n = 9). (E) Adherence of SpsLΔlatch in comparison to full length SpsL to fibronectin, canine fibrinogen, and human fibrinogen coated at 20 μg ml-1, using crystal violet staining. Error bars represent SD (n = 9) (p ≤ 0.001, t-test). https://doi.org/10.1371/journal.ppat.1007816.g003

SpsL mediates enhanced binding to canine fibrinogen via a tandem repeat region of the fibrinogen α-chain

Staphylococcal proteins have evolved the ability to bind fibrinogen through multiple distinct interactions with different regions of host fibrinogen [13,14,36]. Previously, it was identified that S. pseudintermedius strain 326 is capable of binding to the fibrinogen α-chain, with binding to the β- and γ-chains not investigated [37]. To identify the binding site of SpsL, recombinant versions of the α-, β-, and γ-chains of human fibrinogen were expressed in and purified from E. coli and employed in bacterial binding assays. Both S. pseudintermedius ED99 and ED99ΔspsD demonstrated specific adherence to the human α-chain, but not to the β- or γ-chains, revealing the α-chain as the receptor for SpsL binding (Fig 4A). To further refine the location of the SpsL binding site, 6 overlapping fragments of the canine fibrinogen α-chain were synthesized (NCBI reference sequence: XP_532697.2), purified from E. coli, and analyzed for adherence to ED99ΔspsLΔspsD expressing full length SpsL (Fig 4B and 4C). SpsL demonstrated binding to two of the overlapping fragments (amino acids 250-450 and 400-600) that span the α-connector region of fibrinogen containing unordered tandem repeats (residues P283-S419) (Fig 4B). Purification of equivalent fragments derived from the human α-chain revealed equivalent binding to the 400-600 fragment but reduced binding to the human 250-450 fragment (Fig 4D). These data indicate that the canine fibrinogen α-chain contains strong (250-450) and weaker (400-600) SpsL binding sites, while human fibrinogen contains just the weaker binding site (400-600). To confirm that the canine tandem repeat region is responsible for the host-specific interaction of SpsL with fibrinogen, we generated chimeric full-length proteins in which the tandem repeat regions from the human and canine fibrinogen α-chains, respectively (P283-G421), were exchanged. The addition of the canine α-chain tandem repeats conferred stronger binding to the human fibrinogen α-chain and, in contrast, the addition of the human α-chain tandem repeats conferred weaker binding to the canine fibrinogen α-chain (Fig 4E). As a control, we examined the binding of ClfB expressed on the surface of SH1000ΔclfAΔclfBΔfnbAΔfnbB. ClfB is a S. aureus fibrinogen-binding surface protein that binds specifically to repeat 5 of the human α-chain tandem repeats [13]. The host-specificity of ClfB was confirmed, with specific binding observed to the human 250-450 fragment (S3A Fig) and the canine chimeric protein containing the human tandem repeat sequence (S3B Fig), but not to the canine 250-450 fragment or the canine α-chain.
These data demonstrate that the tandem repeat region of the fibrinogen α-chain is responsible for the host-specific interaction of SpsL. The canine tandem repeat region of the fibrinogen α-chain contains 7 repeats of 18 amino acids and a partial repeat of 11 amino acids (S3C Fig) [38]. The generation of recombinant fragments spanning the tandem repeats of the fibrinogen α-chain (S3D Fig) revealed that SpsL is capable of binding to multiple regions in the canine tandem repeat region (S3E Fig). In addition, deletion of the whole tandem repeat region in the canine α-chain confirmed the presence of a weaker binding site (Fig 4F), which we localized to the region adjacent to the tandem repeats (S423-E474) in both canine and human fibrinogen (S3F and S3G Fig). Overall, these data demonstrate that SpsL mediates binding to multiple locations in the fibrinogen α-chain, and that the strong canine-specific interaction is dependent on the unique tandem repeat sequence present in the canine fibrinogen α-chain.

The fibrinogen α-chain tandem repeat region exhibits canine breed-specific variation

The number of tandem repeats in the bovine fibrinogen α-chain has been reported to differ between cattle breeds [39]. To investigate if this is also the case for dogs, we examined publicly available canine sequences of the fibrinogen α-chain but, due to the repetitive nature of the tandem repeat region, the paired-end short sequence reads were not sufficient to support assembly and robust analysis. To overcome this, we isolated genomic DNA from 11 different canine breeds and PCR-amplified DNA specific for the P283-E474 region of the fibrinogen α-chain, followed by DNA sequencing. Sequence analysis, in comparison to NCBI reference sequence XP_532697.2, revealed that the region of weaker binding (S423-E474) is conserved among the canine breeds examined (S4A Fig). In contrast, the French bulldog and Labrador retriever exhibited heterozygous alleles that contain an additional repeat unit in the stronger binding site (Fig 4G). This heterozygous allele, common to both breeds, contains a duplication of repeat 4 and amino acid substitutions that result in the replacement of repeats 6 and 7 with repeat 8 (XP_532697.2:p.[S347_S348insTRPGSTGPGSAGTWS;S373N;L394P]) (Fig 4G). The unique French bulldog allele contains substitutions that convert repeat 5 to repeat 4 and repeat 7 to repeat 8 (XP_532697.2:p.[S347_T351del;G352R;T361A;L394P]), with a unique bulldog allele replacing repeat 6 with repeat 8 (XP_532697.2:p.S373N) (Fig 4G). Overall, these analyses demonstrate that the canine-specific binding site of SpsL in the tandem repeat region of the canine fibrinogen α-chain has undergone genetic diversification during the evolution of different breeds of dog.
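To illustrate the kind of repeat accounting used above (7 full 18-residue repeats plus an 11-residue partial repeat), the following sketch counts near-exact copies of a repeat unit in a protein segment. Both the consensus unit and the segment are hypothetical placeholders, not the XP_532697.2 sequence, and the in-register scan and 80% identity threshold are simplifying assumptions:

```python
# Count approximate copies of an 18-residue repeat unit in a protein segment.
def identity(a: str, b: str) -> float:
    """Fraction of matching positions between two equal-length strings."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

unit = "GSTGPGSTGNRNPGSSGT"                     # assumed 18-aa consensus unit
segment = unit * 7 + unit[:11] + "SSTSYNRGDS"    # 7 full repeats + 11-aa partial

copies = sum(
    identity(segment[i:i + 18], unit) >= 0.8     # assumed identity threshold
    for i in range(0, len(segment) - 17, 18)     # in-register 18-aa windows
)
print(f"approximate full repeat copies: {copies}")  # -> 7
```

A real analysis of breed alleles would align the imperfect repeats (e.g., with a dot-plot or tandem-repeat finder) rather than match a single exact motif.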
SpsL promotes bacterial aggregation and biofilm formation in a host-restricted manner

The evolution of a strong canine-specific fibrinogen-binding interaction for SpsL suggests an important role in canine host-pathogen interactions. Accordingly, we investigated the impact of the interaction on phenotypes relevant to pathogenesis. Firstly, we considered if the interaction could promote inhibition of opsonophagocytosis, as reported previously for the fibrinogen-binding proteins ClfA and Efb of S. aureus [40,41]. To test this, FITC-labelled bacteria were opsonized with bovine, canine, human, or ovine fibrinogen or bovine fibronectin and analyzed for phagocytosis by human neutrophils. As expected, full length SpsL, but not the chimeric A-domain protein (A+SD), inhibited phagocytosis in the presence of fibronectin (p<0.001) (Fig 5A). However, the ability of SpsL to inhibit neutrophil phagocytosis in the presence of fibrinogen was demonstrated to be host-specific, with opsonophagocytosis inhibited in the presence of canine and human fibrinogen (p<0.001) but not in the presence of bovine or ovine fibrinogen (Fig 5A).

We next examined the role of SpsL-canine fibrinogen-binding in S. pseudintermedius aggregation and biofilm formation. The aggregation of S. aureus has been demonstrated to be important for the development of bloodstream infections [42,43] and catheter-related infections [44]. In particular, it has been reported that fibrinogen-dependent S. aureus aggregation can stimulate the activation of virulence through a quorum-sensing dependent mechanism [45]. In order to examine the potential role of the canine-specific fibrinogen-binding in S. pseudintermedius aggregation, we attempted to block binding of S. pseudintermedius to the canine fibrinogen α-chain by including soluble fibrinogen in a bacterial adherence assay. Instead of blocking adherence, we found that the soluble canine fibrinogen α-chain, but not the human fibrinogen α-chain, supported the formation of surface-bound aggregates (Fig 5B). Deletion of the weaker binding site (S423-E474) in the canine fibrinogen α-chain had no effect on bacterial aggregation (Fig 5B). However, deletion of the stronger binding site (the tandem repeat region) resulted in complete abrogation of bacterial aggregation (Fig 5B). This demonstrates that SpsL promotes surface-bound bacterial aggregation in a host-restricted manner. To further investigate the impact of fibrinogen on the aggregation of S. pseudintermedius, we performed static biofilm assays in the presence or absence of fibrinogen from different host species (Fig 5C). Wells coated with canine fibrinogen supported greater biofilm formation by bacterial cells expressing SpsL than wells coated with either human or bovine fibrinogen, demonstrating that the strong interaction of SpsL with canine fibrinogen promotes the initial attachment stage of biofilm formation (Fig 5C). Overall, these data demonstrate that the high strength interaction of SpsL with canine fibrinogen promotes bacterial aggregation and biofilm formation.

In order to investigate if other staphylococcal fibrinogen-binding proteins exhibit similar host-specificity, we generated constructs expressing chimeric SpsL proteins that contain the fibrinogen-binding N2N3 subdomains of ClfB or FnBPA but maintain the SpsL promoter, signal peptide, fibronectin-binding repeats, and cell wall anchor (Fig 6A). The generation of chimeric proteins was favored over the expression of native ClfB or FnBPA proteins to limit variation in cell surface expression. The N2N3 subdomains of these proteins were selected because of the host-restriction of ClfB to the human fibrinogen α-chain and the similar domain architecture of SpsL and FnBPA. As expected from our previous analysis, SpsL showed similar binding to both canine and human fibrinogen (Fig 6B), but the high strength interaction of SpsL with canine fibrinogen was essential for bacterial aggregation (Fig 6C) and biofilm formation (Fig 6H). SpsL-ClfB N2N3 demonstrated a similar binding curve to SpsL but exhibited specific binding to human fibrinogen (Fig 6D), as previously predicted [13].
This human fibrinogen-binding was not sufficient to mediate bacterial aggregation (Fig 6E) or biofilm formation (Fig 6H), demonstrating that not all staphylococcal fibrinogen-binding proteins are capable of mediating these infection-related phenotypes. In contrast, SpsL-FnBPA N2N3 exhibited a binding pattern predicted from an interaction with the fibrinogen γ-chain, with equivalent binding to bovine, canine, and human fibrinogen and reduced ovine fibrinogen-binding (Fig 6F). SpsL-FnBPA N2N3 was capable of mediating bacterial aggregation (Fig 6G) and biofilm formation (Fig 6H) in the presence of fibrinogen from all hosts tested, suggesting that FnBPA does not have a host-restricted tropism. From this comparative analysis we can conclude that SpsL is unique in promoting bacterial aggregation and biofilm formation in a manner that corresponds to the host-restricted ecology of S. pseudintermedius.

Discussion

The factors underpinning bacterial host-tropism are not well understood but often involve surface proteins mediating interactions with host cells and the extracellular matrix [1]. The genus Staphylococcus includes species such as S. aureus that have a multi-host tropism with the capacity to switch between different host species. In contrast, some species such as S. pseudintermedius are highly host-restricted, and although S. pseudintermedius can occasionally cause zoonotic infections of humans (typically through dog bite wounds), the capacity to spread in human populations has not been reported. The bacterial factors underpinning the host-restricted ecology of S. pseudintermedius are unknown. Previously, we demonstrated that SpsL contributed to abscess formation in a murine model of subcutaneous infection, indicating that it is a virulence factor during the pathogenesis of skin infection [22]. The poor binding of SpsL to murine fibrinogen suggests that this effect is not mediated by the interaction of SpsL with murine fibrinogen [22]. Here we demonstrate that SpsL mediates high strength binding to canine fibrinogen in a host-specific manner and that this host-adaptation confers the ability to mediate bacterial aggregation and biofilm formation. The role of SpsL-fibrinogen binding in canine pathogenesis cannot be formally tested in vivo by experimental infections of dogs in the UK due to ethical constraints. However, our ex vivo binding and cellular infection data reveal multiple pathogenic traits that depend on the host-specific interaction of SpsL and canine fibrinogen, suggesting a key role in the host ecology of S. pseudintermedius.

Cell surface proteins of S. aureus have been reported to contribute to tissue or disease tropism in humans. For example, the fibrinogen- and loricrin-binding protein ClfB exhibits greater adherence to skin corneocytes taken from atopic dermatitis patients with low levels of natural moisturizing factor, suggesting a role in niche adaptation [46,47]. ClfB interacts with the human tandem repeat region of the fibrinogen α-chain [13] but, unlike SpsL, binds to a single site, namely repeat unit 5, and exclusively binds to human fibrinogen [13]. In addition to ClfB, the bone sialoprotein-binding protein (Bbp) and the extracellular fibrinogen-binding protein (Efb) also bind to the fibrinogen α-chain via distinct RGD-integrin-binding sites, inhibiting thrombin-induced coagulation and platelet aggregation, respectively [36,48].
In contrast, SpsL interacts with multiple sites in the canine fibrinogen α-chain, namely within the tandem repeats and their flanking regions (Fig 4). Similarly, the serine-rich repeat glycoproteins of Streptococcus agalactiae, Srr1 and Srr2, bind to repeat units 6, 7, and 8 of the tandem repeat region of the human fibrinogen α-chain via a variation of the dock, lock, and latch binding mechanism, with Srr2 displaying a stronger binding affinity than Srr1 [49]. The enhanced binding affinity of Srr2 was linked with increased adherence to endothelial cells, which may be important for Group B Streptococcus-associated meningitis [49]. The ability of the Srr and SpsL proteins to adhere to more than one site in the tandem repeat region of the fibrinogen α-chain may have evolved as a mechanism for overcoming extant genetic diversity in this region between individuals within a host species, as observed in the current study for SpsL (Fig 4G) [39,50].

We were unable to detect binding of soluble SpsL proteins to canine fibrinogen by ELISA, suggesting that immobilization and surface presentation are essential for SpsL functionality, even when full length SpsL is expressed as a recombinant protein (S1 Fig). To address this, we utilized AFM, demonstrating that bacterial surface-associated SpsL binds to fibrinogen via extremely strong binding forces (around 2,000 pN) that are in the range of the strength measured for the dock, lock and latch interaction between fibrinogen and the structurally related SdrG and ClfA [26,51]. Dock, lock and latch forces have been shown to originate from hydrogen bonds between the ligand peptide backbone and the adhesin [52,53], and are activated by mechanical tension, as observed with catch bonds [54]. Of note, ClfB has much greater affinity for loricrin when expressed on the bacterial cell surface rather than as a recombinant protein, with the C-terminal stalk enhancing binding affinity [25]. A similar mechanism may be required for SpsL adherence to fibrinogen, with the C-terminal repeat domain enhancing the ligand-binding affinity of the N2N3 subdomains. It is increasingly recognized that analysis of protein-protein interactions on the bacterial cell surface is more physiologically relevant than testing the interaction of recombinant polypeptides [25,55].

Our data reveal that the high strength canine-specific binding of SpsL facilitates several virulence phenotypes not previously reported for S. pseudintermedius, including surface-bound bacterial aggregation. When S. aureus forms fibrinogen-dependent aggregates, agr-mediated quorum sensing is activated, leading to the up-regulation of virulence gene expression [45]. Consequently, the inhibition of S. aureus aggregation in vivo has been linked with decreases in mortality from sepsis and protection from lethal lung injury [43,56]. We also discovered that SpsL facilitates fibrinogen-dependent biofilm formation, a phenotype not previously reported for S. pseudintermedius. Such fibrinogen-dependent biofilms are observed in S. aureus strains isolated from skin infections [57], a phenomenon implicated in indwelling medical device infections [58]. In this regard, inhibition of fibrin formation reduced the development of S. aureus biofilms in a murine catheter infection model [58], and molecules targeting SpsL could be beneficial in preventing canine indwelling device infections caused by S. pseudintermedius. Finally, we have demonstrated that SpsL binding to soluble fibrinogen inhibits neutrophil phagocytosis, suggesting a role for SpsL in innate immune evasion. Taken together, we have dissected the host-dependent binding of a bacterial surface protein and demonstrated its importance for multiple pathogenic traits, providing new insights into the host-specific ecology of a major bacterial pathogen.

Fig 6. Adherence of full length SpsL (B), chimeric SpsL-ClfB N2N3 (D), or chimeric SpsL-FnBPA N2N3 (F) to fibrinogen from bovine, canine, human, and ovine hosts. Data points represent the mean ± SD (n = 9). Adherence of full length SpsL (C), chimeric SpsL-ClfB N2N3 (E), or chimeric SpsL-FnBPA N2N3 (G) to canine fibrinogen or human fibrinogen in the presence of soluble canine or human fibrinogen. Data points represent the mean ± SD (n = 9). (H) Biofilm formation by bacteria expressing SpsL, chimeric SpsL-ClfB N2N3, or chimeric SpsL-FnBPA N2N3 on surfaces coated with bovine, canine, or human fibrinogen. Error bars represent SD (p ≤ 0.001, one-way ANOVA) (n = 3). https://doi.org/10.1371/journal.ppat.1007816.g006

Ethics statement

Chicken immunization was performed using unembryonated hen's eggs at the Scottish National Blood Transfusion Service (Pentland Science Park, Midlothian, UK). The procedures performed were carried out under the authority of UK Home Office Project License PPL 60/4165 and Animals (Scientific Procedures) Act 1986 regulations. Human venous blood was taken from consenting adult healthy volunteers in accordance with a human subject protocol approved by the National Research Ethics Service (NRES) Committee London City and East under research ethics committee reference 13/LO/1537. Passive volunteer recruitment was conducted at the Roslin Institute (University of Edinburgh). Written consent was taken from each volunteer before blood collection and after an outline of the risks was provided. All blood collection samples were anonymized.

Bacterial strains and culture conditions

The bacterial strains and plasmids used in this study are listed in S1 Table. S. pseudintermedius and S. aureus strains were routinely cultured in Brain Heart Infusion broth at 37˚C with shaking, supplemented with 10 μg ml-1 chloramphenicol as required. E. coli strains were cultured in Luria broth at 37˚C with shaking, supplemented with 100 μg ml-1 ampicillin, 15 μg ml-1 tetracycline, or 25 μg ml-1 kanamycin as required.

Source of extracellular matrix proteins

Fibrinogen isolated from bovine, human, and ovine plasma (Sigma-Aldrich) and bovine fibronectin (EMD Millipore) were sourced commercially. Canine fibrinogen was purified from Beagle sodium citrate whole blood (Lampire Biological Products) using a previously described method [59]. All fibrinogen samples were further purified to remove contaminating fibronectin using Gelatin-Sepharose 4B (GE Healthcare). Depletion of fibronectin was confirmed by Western blot analysis using 1 μg ml-1 rabbit anti-fibronectin IgG (Abcam) and 0.2 μg ml-1 goat anti-rabbit IgG-HRP (Abcam).

Bacterial adherence assay

Solid phase adherence assays were performed using S. pseudintermedius and S. aureus strains expressing pALC2073 or pCU1 constructs, cultured to an OD600nm of 0.6 and induced for protein expression with 3 μg ml-1 anhydrotetracycline for 2 h. Cells were washed and suspended in PBS to an OD600nm of 1.0. Wells of a 96-well MaxiSorp plate (Nunc) were coated overnight at 4˚C with fibrinogen from multiple hosts or recombinant α-chain fragments. After blocking with 8% (w/v) milk-PBS, bacteria were applied to the wells for 2 h at 37˚C.
After washing, adherent cells were fixed with 25% (v/v) formaldehyde (Sigma) for 30 min and stained with 0.5% (v/v) crystal violet (Sigma) for 3 min. The cell-associated stain was solubilized with 5% (v/v) acetic acid and analyzed using a Synergy HT plate reader (BioTek) at 590 nm wavelength. For aggregation experiments, the same procedure was followed as stated above, with the addition of either soluble fibrinogen or recombinant fibrinogen α-chain to the bacteria in two-fold serial dilutions, followed by incubation for 2 h at 37˚C.

Cloning of expression constructs

The primers used in this study are listed in S2 Table. Initial expression constructs of the human and canine fibrinogen α-chains were synthesized by Integrated DNA Technologies (IDT) using the DNA sequence of a female Boxer (NCBI reference sequence: XP_532697.2), as highlighted in S1 Table. For typical restriction-ligation cloning procedures, the region of interest was amplified (PfuUltra II Fusion HS Polymerase; Agilent) and blunt cloned into pSC-B using the StrataClone Blunt PCR Cloning Kit (Agilent). Restriction digestion of the plasmid of interest (pQE30, pT7, or pALC2073) and the blunt-cloned PCR product was performed at 37˚C for at least 2 h, and products were purified using the Monarch Gel Extraction Kit (NEB). All digested plasmids were treated with Antarctic Phosphatase (NEB) before overnight ligation with T4 DNA Ligase (NEB) at a 3:1 molar ratio of insert:plasmid. Dialysis of the 20 μl ligation reactions was performed using 0.025 μm filter discs (Millipore) before electroporation into the appropriate E. coli strain: DC10B [60], DH5α (Invitrogen), or XL-1 Blue (Agilent). All plasmid constructs were verified by Sanger sequencing (Edinburgh Genomics, University of Edinburgh) before transformation into E. coli BL21 DE3 (Invitrogen) or the appropriate S. pseudintermedius strain. Some expression constructs were also produced using sequence- and ligation-independent cloning (SLIC) as described previously [61]. Briefly, primers were designed to amplify the gene of interest with sequence complementary to the expression plasmid. Primers were also designed to amplify the plasmid of interest (pQE30, pALC2073 or pCT), using Platinum PCR Supermix (Invitrogen) or PfuUltra II Fusion HS Polymerase (Agilent). All PCR products were purified using the Monarch PCR & DNA Cleanup Kit or Monarch Gel Extraction Kit (NEB). T4 DNA Polymerase (NEB) was used to generate DNA overhangs on both the insert and plasmid PCR products, with step-wise temperature increments used to anneal the complementary DNA sequences. The heat-annealed constructs were electro-transformed into E. coli DC10B [60] or DH5α (Invitrogen) and verified using Sanger sequencing (Edinburgh Genomics, University of Edinburgh).
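The 3:1 insert:plasmid molar ratio used above translates into DNA masses via a standard cloning calculation; the sketch below encodes it (the example vector and insert sizes are assumed, not those of the study's plasmids):

```python
# Standard insert:vector mass calculation for a 3:1 molar ligation ratio
# (generic cloning arithmetic, not code from the study).
def insert_ng(vector_ng: float, vector_kb: float, insert_kb: float,
              ratio: float = 3.0) -> float:
    """ng of insert giving a `ratio`:1 insert:vector molar ratio."""
    return vector_ng * (insert_kb / vector_kb) * ratio

# e.g., 50 ng of a 4.9-kb vector (sizes assumed) with a 1.2-kb insert:
print(f"{insert_ng(50, 4.9, 1.2):.1f} ng insert")  # ~36.7 ng
```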
Preparation of Staphylococcus competent cells and electro-transformation

S. pseudintermedius and S. aureus competent cells were produced using a method outlined previously [60]. Plasmids for electroporation were concentrated to 1 μg μl-1 using Pellet Paint co-precipitant (Novagen), and 5 μg was used for electro-transformation as previously described [23].

Recombinant protein induction and purification

Recombinant hexa-histidine-tagged proteins were expressed in E. coli cultured to an OD600nm of 0.6 and induced using 1 mM IPTG at either 37˚C for 4 h or 16˚C overnight. Recombinant α-chain proteins were purified under denaturing conditions (8 M urea, 100 mM monosodium phosphate, 10 mM Tris-HCl) using Ni-NTA agarose (Invitrogen) and gravity flow columns (Bio-Rad). Bacterial lysis was performed in pH 8.0 binding buffer at room temperature with tilting for at least 1 h. Lysates were pelleted at 16,000 x g for 20 min and the supernatant filter sterilized. Lysates were tilted at room temperature with conditioned Ni-NTA agarose for 1 h. The column was washed with pH 6.3 wash buffer and the protein eluted with pH 4.5 elution buffer. After analysis on 4-20% Mini-PROTEAN TGX precast gels (Bio-Rad), protein quantification was performed using a BCA assay (Novagen).

Release of surface proteins from S. pseudintermedius

S. pseudintermedius cells were cultured to exponential phase (OD600nm of 0.4-0.6). Cells were washed with PBS and suspended in lysis buffer (50 mM Tris-HCl, 20 mM MgCl2, pH 7.5) supplemented with 30% (w/v) raffinose and cOmplete protease inhibitor (Roche). Cell wall proteins were solubilized by incubation with 400 μg ml-1 lysostaphin at 37˚C for 20 min. Supernatant samples were collected after protoplast recovery by centrifugation at 6,000 x g for 20 min. Cell lysate samples were generated by lysing cell pellets in PBS using a One-Shot cell disruptor (Constant Systems) with 2 passes at 40 kpsi.

Generation of anti-SpsL N2N3 IgY antibody

Recombinant His-tagged SpsL N2N3 protein was used as the antigen for chicken immunization and antibody generation at the Scottish National Blood Transfusion Service (Pentland Science Park). The Eggspress IgY Purification Kit (Gallus Immunotech) was used to purify antibody from egg yolk. Further purification of the antibody was performed using CNBr-activated Sepharose 4B (GE Healthcare). This antibody was used in Western blot analysis to detect the expression of SpsL, using 1 μg ml-1 chicken anti-SpsL N2N3 IgY and 0.5 μg ml-1 F(ab')2 rabbit anti-chicken IgG-HRP (Bethyl Laboratories).

Functionalization of cantilevers with fibrinogen

Functionalized tips were obtained using PEG-benzaldehyde linkers [62]. Prior to functionalization, cantilevers were washed with chloroform and ethanol, placed in a UV-ozone cleaner for 30 min, immersed overnight in an ethanolamine solution (3.3 g ethanolamine in 6 ml dimethyl sulfoxide [DMSO]), and then washed 3 times with DMSO and 2 times with ethanol and dried with N2. The ethanolamine-coated cantilevers were immersed for 2 h in a solution prepared by mixing 1 mg acetal-PEG-NHS dissolved in 0.5 ml of chloroform with 10 μl triethylamine, and then washed with chloroform and dried with N2. Cantilevers were further immersed for 5 min in a 1% citric acid solution, washed in ultrapure water (ELGA LabWater), and then covered with a 200 μl droplet of PBS containing 200 μg ml-1 of the relevant fibrinogen, to which 2 μl of a 1 M NaCNBH3 solution was added. After 50 min, cantilevers were incubated with 5 μl of a 1 M ethanolamine solution in order to passivate unreacted aldehyde groups and then washed with and stored in buffer.

Single-cell force spectroscopy

For all atomic force microscopy (AFM) experiments, cells expressing chimeric SpsL A+SD were harvested after overnight incubation, washed in PBS, and diluted 1:100 in PBS. For SCFS, bacterial cell probes were obtained as previously described [63,64]. Briefly, colloidal probes were obtained by attaching a single silica microsphere (6.1 μm diameter; Bangs Laboratories) with a thin layer of UV-curable glue (NOA 63; Norland Edmund Optics) to triangle-shaped tipless cantilevers (NP-O10; Bruker) with a NanoWizard III AFM (JPK Instruments).
The cantilevers were then immersed for 1 h in Tris-buffered saline (TBS; 50 mM Tris, 150 mM NaCl, pH 8.5) containing 4 mg ml-1 dopamine hydrochloride (Sigma), rinsed in TBS, and used directly for cell probe preparation. The nominal spring constant of the colloidal probe was determined by the thermal noise method. 50 μl of diluted bacterial suspension was deposited into a glass Petri dish containing fibrinogen (human and canine) coated substrates at a distinct location within the Petri dish, and 3 ml of PBS was added to the system. The colloidal probe was brought into contact with an isolated bacterium and retracted to attach the bacterial cell; proper attachment of the cell on the colloidal probe was checked using optical microscopy. Cell probes were used to measure cell-substrate interaction forces at room temperature, using an applied force of 250 pN, a constant approach-retraction speed of 1 μm s-1, and a contact time of 0 ms. Data were analyzed using the Data Processing software from JPK Instruments. Adhesion force and rupture distance histograms were obtained by calculating the maximum adhesion force and the rupture distance of the last peak for each curve.

Single-molecule force spectroscopy

For SMFS, measurements were performed at room temperature in PBS buffer using a NanoWizard III AFM (JPK Instruments) and oxide-sharpened microfabricated Si3N4 cantilevers with a nominal spring constant of ~0.01 N m-1 (MSCT; Microlevers, Bruker Corporation). The spring constants of the cantilevers were measured using the thermal noise method. For the experiments, bacteria expressing chimeric SpsL A+SD were immobilized on polystyrene substrates. Adhesion maps were obtained by recording 16 x 16 force-distance curves on areas of 500 x 500 nm2 with an applied force of 250 pN, a constant approach and retraction speed of 1 μm s-1, and a contact time of 0 ms. Adhesion force and rupture distance histograms were obtained by calculating the force and rupture distance of the last peak for each curve. Data were analyzed with the Data Processing software from JPK Instruments.

Canine fibrinogen α-chain sequence analysis

Genomic DNA was isolated from whole canine blood using the method described previously [65]. The region of interest in the fibrinogen α-chain was amplified using Q5 Hot Start High-Fidelity DNA Polymerase (NEB) and purified using the Monarch PCR & DNA Cleanup Kit (NEB). Purified PCR products were analyzed by Sanger sequencing (Eurofins) and DNAStar SeqMan Pro 14 (Lasergene). Sequence alignment was performed using MegAlign (Lasergene) and PRALINE [66].

Biofilm assay

Biofilm assays were performed using S. pseudintermedius strains expressing pALC2073 constructs of full length SpsL or A-domain+SD. Strains were grown in TSB supplemented with 0.5% glucose and 3% (w/v) NaCl. 96-well tissue culture plates were coated overnight at 4˚C with 100 nM bovine, canine, human, or ovine fibrinogen, with some wells left uncoated. Overnight cultures were diluted to an OD600nm of 0.05, and 100 μl was applied to the plate and incubated at 37˚C for 24 h. The plates were washed three times with PBS and the bacteria fixed with 25% (v/v) formaldehyde (Sigma) for 30 min. After washing, the plates were stained with 0.5% (v/v) crystal violet (Sigma) for 3 min and then solubilized with 5% (v/v) acetic acid. Plates were analyzed using a Synergy HT plate reader (BioTek) at 595 nm wavelength.
Neutrophil phagocytosis by flow cytometry

50 ml of venous blood was drawn from healthy volunteers and mixed with 6 ml of acid-citrate-dextrose (Sigma). Human neutrophils were isolated as outlined previously [67] and suspended to a final concentration of 2.5 x 10^6 cells ml-1 in RPMI-1640 (Gibco) containing 0.05% human serum albumin (Sigma). 2.5 x 10^6 CFU of bacteria, previously labelled with FITC using a method previously described [68], were opsonized with 50 nM extracellular matrix protein at 37˚C for 15 min and diluted to 1 ml in RPMI-1640 containing 0.05% human serum albumin. 2.5 x 10^5 CFU were then opsonized with 10% human serum in 2 ml 96-well v-bottomed plates (Corning) at 37˚C for 15 min. 2.5 x 10^5 neutrophils were added to the opsonized bacteria (MOI of 1) and incubated at 37˚C for 15 min with shaking at 750 rpm. The samples were fixed with 1% (v/v) paraformaldehyde (Fisher Scientific) and incubated at 4˚C for at least 30 min. Phagocytosis was measured in comparison to serum-only controls using a BD LSRFortessa X20 cell analyzer.

Statistical analysis

Data are presented using Prism 6 (GraphPad), with statistical analysis performed using Minitab 16. All data were analyzed for normality, using the Anderson-Darling test, and for equal variance before choosing the method of statistical analysis. Multiple comparisons were performed where appropriate. ELISA-type binding assays and bacterial adherence assays were analyzed at one protein concentration. For data displaying statistical significance, the following symbols are used: * p ≤ 0.05, ** p ≤ 0.01, and *** p ≤ 0.001.
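The workflow described above (a normality test, an equal-variance check, and then the parametric comparison) can be sketched with scipy as follows; the readings are simulated placeholders, and the 5% Anderson-Darling critical value is taken from scipy's tabulated levels:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
wt = rng.normal(1.0, 0.1, 9)        # hypothetical adherence readings (A590)
mutant = rng.normal(0.4, 0.1, 9)

# Normality (Anderson-Darling): compare the statistic to the 5% critical value.
for name, grp in (("WT", wt), ("mutant", mutant)):
    res = stats.anderson(grp, dist="norm")
    ok = res.statistic < res.critical_values[2]   # index 2 -> 5% level
    print(f"{name}: normal at 5% level? {ok}")

# Equal variance, then a two-sample comparison as used in the paper's figures.
print("equal variance p =", stats.levene(wt, mutant).pvalue)
print("t-test p =", stats.ttest_ind(wt, mutant).pvalue)
```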
9,783.2
2019-06-01T00:00:00.000
[ "Biology" ]
A Comprehensive Assessment of Indoor Air Quality and Thermal Comfort in Educational Buildings in the Mediterranean Climate

Maintaining good indoor air quality and thermal comfort is a challenge for naturally ventilated educational buildings, as it can be difficult to achieve both aspects simultaneously. Nonetheless, most of the existing studies only focus on one aspect. To explore the potential of balancing indoor air quality and thermal comfort, both topics must be investigated concurrently. This study assessed indoor air quality and thermal comfort in 32 naturally ventilated classrooms of 16 primary and secondary schools in the Mediterranean climate, based on a large on-site measurement campaign lasting one year that gathered over 460 hours of data. The research investigated occupants' adaptive behaviors, analyzed the actual thermal comfort of around 600 students, and characterized the representative scenarios leading to good and poor indoor air quality and thermal comfort by clustering analysis. The results showed that poor indoor air quality was mainly due to closing windows and doors in winter, while thermal discomfort mainly occurred in summer because of high indoor temperatures. The findings suggest that a proper ventilation protocol is the key to balancing indoor air quality and thermal comfort.

Introduction

Students spend around 70% of their time in the classroom on school days [1]. The environmental quality of the classroom is influenced by many factors, but indoor air quality (IAQ) and thermal comfort (TC) are the main factors affecting students' health, well-being, and productivity [2]. The negative impacts of poor IAQ and thermal comfort have been widely reported, such as loss of concentration, decline in cognitive ability, headache, fatigue, allergy, and, in particular, a high infection risk of airborne diseases [3][4][5].

Long-term occupancy and high occupant density often lead to great challenges in maintaining a safe, comfortable environment in classrooms. More importantly, children are more vulnerable than adults, and their adaptation in the classroom is passive and limited. They usually do not complain even when they are not really satisfied with the indoor environment [6]. For these reasons, the IAQ and thermal comfort of educational buildings have been a concern for relevant public authorities and researchers. Ventilation is the most common way of maintaining good IAQ in schools, and most schools rely only on natural ventilation, which changes from time to time [7]. A minimum air change rate per hour is required by relevant standards such as ASHRAE 62.1 [8] and EN 16798-1 [9]. The estimation of the air change rate of a classroom is predominantly achieved by measuring occupant-released CO2 as a tracer gas. Thus, the indoor CO2 concentration is a commonly adopted surrogate indicator for the assessment of IAQ in educational buildings [3,10,11].

Maintaining good IAQ in schools is challenging. Díaz et al. [12] conducted a study in 8 primary schools in Chile. They found that the indoor CO2 concentration exceeded the maximum threshold for around 70% of school hours in winter. In a large-scale survey of 100 primary and secondary school classrooms in Switzerland, Vassella et al. [13] demonstrated that approximately two-thirds of the classrooms failed to meet the limit set by the national standard.
Cai et al. [14] carried out a study in 21 public schools in China and found that mechanically ventilated classrooms exceeded the CO2 limit during 40% of the measurement time, compared to 61% in naturally ventilated classrooms. Monge-Barrio et al. [15] performed a measurement campaign in 9 secondary schools in Spain. They discovered that CO2 concentration values did not meet the national regulation, with exceedances up to 2 times the limit due to the lack of a proper ventilation protocol. From these studies, it can be extrapolated that the variability of classroom IAQ can be attributed to many factors, including season [12], occupancy [15], ventilation system [14], and ventilation strategy [13].

Unlike IAQ, students' thermal comfort is influenced by both objective and subjective factors. Objective factors involve a range of thermal parameters such as temperature, relative humidity, and air velocity. In contrast, subjective parameters derive from the occupants' physical and psychological adaptation [2].

Achieving students' thermal comfort is also a challenge for schools. Firstly, the thermal sensation of children and teenagers is quite different from that of adults [16]. Notably, the models established by ASHRAE 55 [17] and ISO 7730 [18] were developed for adults in offices. This means that students may not necessarily be comfortable even if the temperature in educational centers is set following the thermal requirements specified by the regulations. Korsavi et al. [6] evaluated 8 primary schools in the UK, where 15% and 14% of the children were overheated during non-heating seasons and heating seasons, respectively. Aparicio-Ruiz et al. [19] investigated 3 classrooms in a primary school in southern Spain during the summer. They found that only half of the students felt comfortable, even though the mean indoor air temperature of the classrooms was within the operating range of the national regulation. Secondly, students' thermal comfort varies due to many factors. Zomorodian et al. [5] indicated that students in various climates had different comfort temperatures. Yang et al. [20] assessed a primary school in Sweden and reported that students' thermal neutrality varied from season to season. Al-Khatri et al. [21] investigated 5 girls' secondary schools and 3 boys' secondary schools in Saudi Arabia. The results indicated that the comfort temperature difference between females and males was nearly 2 °C. Jiang et al. [22] analyzed 4 schools in northwest China during winter. In non-heated classrooms, students were more accepting of lower indoor temperatures. Shrestha et al. [23] carried out a survey of 8 schools in Nepal. In this case, the heavier clothing of students also led to a lower comfort temperature. Considering the aforementioned aspects, students' thermal comfort can be affected by a wide range of factors such as climate [5], season [20], heating systems [22], gender [21], and level of clothing insulation [23].

IAQ and thermal comfort are associated because the outdoor air introduced into the classroom can lead to significant changes in indoor thermal conditions [2]. Heracleous and Michael [24] evaluated a secondary school in Cyprus and found that both indoor air and outdoor temperatures can affect occupants' window-opening behavior to ventilate the space. Ma et al. [25] demonstrated that maintaining a comfortable thermal environment could reduce the ventilation rate and, consequently, a low level of IAQ could be detected in classrooms.
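Since the air change rates discussed in these studies are typically inferred from occupant-released CO2, a minimal sketch of the standard tracer-gas decay calculation is given below; all concentrations and the unoccupied decay period are assumed example values:

```python
import math

def air_change_rate(c0: float, ct: float, c_out: float, hours: float) -> float:
    """Air changes per hour from the tracer-gas (CO2) decay
    C(t) - C_out = (C0 - C_out) * exp(-ACH * t), over an unoccupied period."""
    return math.log((c0 - c_out) / (ct - c_out)) / hours

# e.g., 1,500 ppm falling to 700 ppm over 1 h with 420 ppm outdoors:
print(f"{air_change_rate(1500, 700, 420, 1.0):.2f} h^-1")  # ~1.35
```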
Mohamed et al. [1] found that most of the classrooms experienced overheating for more than 40% of the day. At the same time, the classrooms failed to meet the IAQ requirement of the UK national standard for more than 60% of school hours.

Concerning the Mediterranean area, only a few studies address both topics (IAQ and thermal comfort of schools), as listed in the following: one elementary school study in Greece during spring [26], one secondary school study in Portugal during spring [27], one secondary school study in Cyprus during winter [24], and one preschool study in Spain during winter [28]. The above studies limited their scope to a single climate zone, season, and education level, which may be a shortcoming. In addition, none of the existing studies investigated the representative scenarios that often lead to good and poor IAQ and thermal comfort in classrooms. Hence, a comprehensive investigation of the IAQ and thermal comfort of primary and secondary schools in the Mediterranean climate is needed.

For this reason, this paper is aimed at conducting a comprehensive characterization of both IAQ and thermal comfort in educational buildings, based on a large on-site measurement campaign in primary and secondary schools in several regions with specific climate conditions in the Mediterranean climate. Following this introduction, Section 2 defines the methodology of this study, Section 3 describes the implementation of the methodology and measurement campaigns, and Section 4 discusses the analyzed results. The conclusions and recommendations are summarized in Section 5.

Methodology

The research methodology of this study consists of four steps (Figure 1).

Identification and Description of Educational Buildings. Educational buildings must be selected considering representativeness and avoiding potential biases caused by the building and occupants. In this context, a range of factors that may affect IAQ and thermal comfort should be taken into account, such as climate zone, geographic location, construction year, ventilation type, and cooling and heating modes. Educational centers are mainly used by children and teenagers. Their participation in the research must be based on the consent of all involved parties, such as government authorities, school management boards, teachers, and parents (who may ultimately restrict the availability of expected samples).

Characterization of Indoor Air Quality and Thermal Comfort. For IAQ, EN 16798-1 [9] specifies 4 categories with corresponding CO2 concentration limits. The IAQ requirement for the classrooms corresponds to category I, which requires the indoor CO2 concentration to be within 550 ppm above the outdoor concentration.

For thermal comfort, ISO 7730 [18] specifies the range of operative temperature and relative humidity (RH) for classrooms with sedentary activity, given a typical clothing insulation value (Iclo) of 0.5 for summer and 1.0 for winter. In summer, the recommended operative temperature is between 23 and 26 °C, and the relative humidity is 60%. In winter, the operative temperature is 20 to 24 °C, while the relative humidity is 40%. According to Kumar et al. [29], the operative temperature T_op (°C) can be calculated as

T_op = A T_a + (1 - A) T_r,

where T_a denotes the air temperature, T_r is the mean radiant temperature given by the measurement instrument, and A is a weighting factor that depends on the air velocity V_a (A = 0.5 for V_a < 0.2 m/s, 0.6 for 0.2-0.6 m/s, and 0.7 for 0.6-1.0 m/s). Table 1 summarizes the typical clothing insulation values indicated by ISO 7730 [18] and ASHRAE 55 [17].
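As a worked example of the operative-temperature formula and the seasonal comfort bands quoted above, the following sketch computes T_op and checks it against the winter range; the sensor readings are assumed values:

```python
def operative_temperature(t_air: float, t_radiant: float, v_air: float) -> float:
    """Operative temperature Top = A*Ta + (1-A)*Tr, with the weighting factor A
    chosen from the air velocity as tabulated above."""
    a = 0.5 if v_air < 0.2 else 0.6 if v_air < 0.6 else 0.7
    return a * t_air + (1 - a) * t_radiant

# Winter comfort check against the 20-24 C band cited above (readings assumed):
top = operative_temperature(t_air=21.5, t_radiant=19.0, v_air=0.1)
print(f"Top = {top:.2f} C, within winter band: {20 <= top <= 24}")
```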
Moreover, ISO 7730 [18] stipulates that the actual thermal comfort of occupants needs to be assessed using a thermal sensation vote (TSV) on a 7-point scale, which should be gathered once the occupants have remained in a stable thermal condition for 30 minutes. It should be noted that, apart from these international standards, relevant national standards and guides should also be considered. The one with stricter criteria should be followed to meet the requirements at both national and international levels.

Development of the Protocol for the On-Site Measurement Campaign. To conduct the measurement campaign for data collection, a protocol needs to be developed and confirmed with the schools, which describes the measurement process, sensor deployment, and data collection methods. The measurement should follow the premise of avoiding interference with teaching activities in any case. Hence, background information about classrooms, students, and class schedules should be obtained in advance.

The number of sensors depends on the size of the classroom. Mahyuddin and Awbi [31] concluded that one sensor is needed for a space with a floor area below 100 m² and three or more sensors for rooms of over 200 m² in area.

For minimum accuracy of sensors, ASTM D6245-18 [32] and ASHRAE 55 [17] require ±5% of the measurement range for CO2 concentration, ±0.2 °C for air temperature, ±1 °C for mean radiant temperature, and ±5% for relative humidity. Calibration and pretesting are recommended to prevent malfunction and reading drift.

The deployment of the sensors should follow the criteria established by ASTM D6245-18 [32], ASHRAE 55 [17], and ISO 7726 [33]. ASHRAE 55 [17] specifies that the sensor should be located at least 1 m inward of the center of each room's walls, while ASTM D6245-18 [32] recommends locating sensors preferably 2 m away from the following: (i) CO2 sources (e.g., people in the space), (ii) ventilated air with low CO2 concentration (e.g., windows and doors), and (iii) heat sources (e.g., radiators and heaters).

No recommendation was made by ASTM D6245-18 [32] regarding the height at which the sensor should be placed. However, the experimental study by Mahyuddin et al. [34] indicated that the CO2 sensor should be placed within the occupant's breathing zone, in a range of 0.75-1.80 m above the ground, while 1.00-1.20 m is preferred. For the measurement of thermal parameters, ISO 7726 [33] specifies heights of 0.60 or 1.10 m, which correspond to the occupant's abdominal level when sitting and standing, respectively. To clarify open issues such as sensor location, height, and recording interval, a specific review of sensor deployment based on relevant case studies was conducted and is summarized in Table 2.

For the collection of TSV, relevant research pointed out that children may have difficulties understanding the concept of thermal comfort and expressing their thermal sensations; thus, the TSV graph should be designed in the most understandable way possible for them [42,43].
2.4. Analysis of the Measurement Results. Firstly, IAQ and thermal comfort should be characterized separately, referring to the requirements of the relevant standards. Statistical analysis needs to be performed to examine the correlation between relevant influential factors and IAQ/thermal comfort, such as season, climate, education level, geographic location of the building, year of construction, occupancy, ventilation strategy, and heating/cooling mode of the classroom. The measurement data usually have a hierarchical structure, as several classrooms or schools are measured in the same educational level, climate zone, and season. Hence, a hierarchical linear model should be applied for the statistical analysis. This model classifies measurements from the same schools, educational level, climate zone, and/or season into identical groups and analyzes the statistical differences within and between groups. Relevant influential factors should be defined as independent variables, while indoor CO2 concentration, operative temperature, and relative humidity are the dependent variables.

Then, a simultaneous analysis of IAQ and thermal comfort must be conducted. Both aspects should be analyzed concurrently following the specified requirements. In addition, the representative scenarios that often lead to good/poor IAQ and thermal comfort need to be characterized based on the identified influential factors. Clustering analysis, which extracts key information from massive data by assigning samples that share similarities to the same clusters and highlighting their main features [44], was applied to identify representative scenarios. Notably, to improve the readability and interpretability of the clustering results, numerical variables should be converted to categorical variables. K-mode clustering was applied in this study, since it is a widely used technique for clustering categorical data. It can identify K representative clusters whose main features are represented by the centroids [45], while the number of clusters k can be identified by the Elbow method [46].

Implementation

This section elaborates on the implementation of this study in detail, following the proposed methodology (Section 2). The research characterized and assessed IAQ and thermal comfort in primary and secondary schools in Catalonia, Spain.

Identification and Description of Educational Buildings. The sample schools were identified and contacted with the help of the Catalan government (Generalitat de Catalunya), but the participation of schools and students in this research depended entirely on their willingness. Catalonia is primarily in a Mediterranean climate but has 3 specific climatic zones: Coastal Mediterranean, Continental Mediterranean, and Mountain. The coastal area has the typical characteristics of a Mediterranean climate, with warm summers, moderately cold winters, and little rain. The continental region has cold winters and hot weather in summer. In mountainous areas, summers have mild temperatures, but there is high rainfall and snow in winter [47]. In the Coastal Mediterranean climate, the Barcelona Metropolitan Area has a temperate climate (Csa in the Köppen climate classification), while Tarragona has a humid subtropical climate with hot summers (Cfa) [48].
In this study, a total of 9 primary and 7 secondary schools were selected, located in the aforementioned 4 climate zones and 3 geographic locations (city center, suburb, and rural area). These schools were built between 1953 and 2016; 5 of them were built before 1979, when the first national standard NBE-CT-79 [49] regulating building thermal conditions was developed. Another 5 schools were constructed between 1979 and 2006, complying with the NBE-CT-79 standard but relying completely on natural ventilation. The remaining 6 schools were built after the establishment of the Spanish Technical Building Code in 2006. Table 3 summarizes the U-values of the construction elements of the sample schools. These schools are designed with mechanical ventilation systems, but it was found that these systems were not operating during the measurement campaign. To distinguish them from the naturally ventilated schools, their ventilation type is labeled as "free-running." In addition, all schools are equipped with radiators but without any cooling system.

To avoid bias in sample selection, 2 classrooms were selected in each school, corresponding to different age groups. In primary schools, classrooms with 5- and 9-year-old students were selected, while in secondary schools, classrooms with 12- and 16-year-old students were selected. One primary school only agreed to the measurement of two classes that both have 9-year-old students. The volume of these classrooms ranges from 114.3 to 249.3 m³, with an average of 157.7 m³. The total areas of windows and doors vary greatly, from 0.3 to 9.4 m² and 1.4 to 3.9 m², with averages of 4.5 and 2.1 m², respectively. Table 4 summarizes the characteristics of the selected schools and classrooms.

Characterization of Indoor Air Quality and Thermal Comfort. Following the defined methodology (Section 2.2), the Spanish standards and guides were reviewed and considered. For IAQ, compared with the international standard EN 16798-1 [9], the RITE standard [52] specifies a lower CO2 concentration threshold for classrooms. The Ventilation Guide for Indoor Spaces recently proposed by the Spanish Institute of Environmental Assessment and Water Research [53] indicates an even stricter limit to prevent massive exposure to the SARS-CoV-2 virus in schools. Table 5 summarizes the IAQ levels with the corresponding CO2 concentrations applied in this study, assuming an outdoor CO2 concentration of 420 ppm as recommended by IDAEA [53]. The IDA2 level is the minimum IAQ requirement for classrooms stipulated by the RITE standard [52], and the safe level represents the optimum requirement by IDAEA [53].

For thermal comfort, Royal Decree 486/2004 [54] established the minimum acceptable requirements for typical sedentary workplaces, where the operative temperature must be between 17 and 27 °C and the relative humidity must be within 30 to 70%. The RITE standard [52] proposed the optimum thermal requirement with stricter comfort zones, given the same assumptions made by ISO 7730 [18]. As both standards do not specify requirements for the mild season (i.e., spring), it is assumed that the lower and upper limits of operative temperature and relative humidity for summer and winter establish the comfort zone for spring.

The minimum and optimum IAQ and thermal requirements applied in this study are summarized in Table 6.

Implementation of the Measurement Protocol in the On-Site Measurement Campaign. The measurement campaign was conducted from April 2022 to January 2023 (Figure 2), following the protocol defined in Section 2.3.
The technical specifications of the measurement instrument are summarized in Figure 3. The sensor was calibrated by the manufacturer and pretested by the researchers in advance. All readings were recorded at a 1-minute interval.

The measurement lasted all day during school hours, generally from 9:00 to 15:00 in spring and winter, while school usually began and ended one hour earlier in summer. The measurement instrument was deployed in the classroom 10 minutes before the beginning of the first class and was always preferentially placed in the center of the classroom at a height of 1.1 m (whenever feasible). In classrooms with high occupancy where the desks and seats could not be moved, the sensor was located at the closest point to the center. A distance of 2 m was ensured from any disturbance (students, windows, doors, walls, and radiators). The location of the equipment was confirmed with the teachers before the class to avoid affecting teaching activities and the movement of students.

To protect the privacy of students during the measurement campaign, the Catalan government prohibited the researchers from conducting written surveys and from taking photos or video recordings. In this context, the researchers collected information about students' gender and clothing and recorded changes in occupancy (students and teachers) and the behavior of opening windows and doors in the classroom through observation and notes during the entire survey.

On each measurement day, the TSV was collected by the teachers once in each classroom, usually at the end of the class to ensure that the students had been in a sedentary state for 30 minutes. Teachers explained the concept of thermal sensation and showed the TSV graphs (Figure 4) in advance, to ensure that all students understood correctly. The TSV graphs were specifically designed for this study based on the opinions of native Spanish speakers and teachers.

Analysis of the Measurement Results. Following the methodology defined in Section 2.4, IAQ and thermal comfort were assessed following the thresholds of CO2 concentration, operative temperature, and relative humidity indicated in Tables 5 and 6. Relevant influential factors of IAQ and thermal comfort were analyzed statistically. The collected measurement data have a 4-level hierarchical structure (season, climate, educational level, and school), which was defined in the model.

Then, the simultaneous analysis of IAQ and thermal comfort was performed. Depending on the satisfaction of the minimum and optimum requirements (Table 6), IAQ and thermal comfort were classified into 3 categories: (1) good (the optimum requirement is achieved), (2) acceptable (the minimum requirement is accomplished), and (3) bad (neither requirement is satisfied). IAQ and thermal comfort of the classrooms were characterized concurrently according to these 3 categories over the measured time.

The representative scenarios within each category were identified with K-mode clustering analysis. The occupancy ratio of the classroom and the opening areas of windows and doors were categorized to improve the readability and interpretability of the clustering results, as shown in Table 7. The categorization was based on the characteristics of the measured data (i.e., the range of occupancy ratios and opening areas), due to the lack of reference values.
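As an illustration of the K-mode clustering and Elbow-method selection of k described in Section 2.4, the sketch below uses the kmodes and kneed Python packages (named in the software environment described in the next paragraph). The file name and column names are illustrative assumptions, not the exact data layout of this study:

import pandas as pd
from kmodes.kmodes import KModes
from kneed import KneeLocator

# Hypothetical categorical dataset: one row per observation, with columns such
# as season, educational level, occupancy-ratio category and opening-area category.
df = pd.read_csv("categorized_observations.csv")
X = df[["season", "edu_level", "occupancy_cat", "opening_cat"]]

# Run K-modes for a range of k and record the clustering cost for each k
ks = list(range(1, 11))
costs = []
for k in ks:
    km = KModes(n_clusters=k, init="Huang", n_init=5, random_state=0)
    km.fit(X)
    costs.append(km.cost_)

# Elbow method: the knee of the cost curve suggests the number of clusters
best_k = KneeLocator(ks, costs, curve="convex", direction="decreasing").elbow

km = KModes(n_clusters=best_k, init="Huang", n_init=5, random_state=0)
labels = km.fit_predict(X)
print(best_k)
print(km.cluster_centroids_)  # main features of each representative scenario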
The analysis was performed on the Google Colab platform using Python 3.7.3 [55]. The Python packages NumPy [56], Pandas [57], and Statsmodels [58] were adopted for data processing and statistical analysis. The Kmodes [59] and Kneed [60] packages were used for clustering analysis, and Matplotlib [61] and Seaborn [62] were used for data visualization.

Results

This section presents the assessment results regarding IAQ, thermal comfort, and the simultaneous analysis of both aspects.

Statistical Summary of Measured Indoor Environmental Parameters. Table 8 summarizes the statistical details of the measured indoor environmental parameters in the investigated classrooms by season. The mean indoor CO2 concentration in summer was 593 ppm, which met the safe level requirement (700 ppm) by IDAEA [53]. In spring, the value was 774 ppm and achieved the minimum acceptable IAQ requirement, the IDA2 level (920 ppm) specified by the national regulation, the RITE standard [52]. In contrast, the mean indoor CO2 concentration in winter reached 1194 ppm, suggesting potentially poor IAQ in classrooms. The mean air velocity in summer (0.064 m/s) was much higher than in spring (0.025 m/s) and winter (0.021 m/s). Due to the use of heating systems, the classrooms had similar mean operative temperatures in winter (21.24 °C) and spring (22.53 °C); both were within the comfort range specified by the RITE standard. However, the average operative temperature in summer reached 28.18 °C, which was even higher than the maximum acceptable temperature limit (27 °C) specified by Royal Decree 486 [54], indicating a high risk of thermal discomfort. The average indoor relative humidity ranged from 44.9% to 50.2%, values all within the comfort range specified by the RITE standard.

Regarding the achievement of the optimum IAQ requirement (safe level), overall the classrooms ensured a safe IAQ level 53% of the time, while 14 classrooms had a level above average. Over half of the classrooms met the optimum requirement for over 50% of the measured time. It is noteworthy that the initial CO2 concentration of 81% of the measurements was below the threshold of the safe level (700 ppm), but 8% exceeded the IDA2 level (920 ppm), which depended on whether the classroom had been adequately ventilated at the end of the previous day's classes.

Influential Factor Analysis of Indoor Air Quality. Table 10 summarizes the statistical analysis results. For IAQ, the correlated factors were found to be educational level, occupancy ratio, and opening area of windows and doors (ventilation strategy). The most relevant factors are occupancy and ventilation, which determine the generation and removal of CO2 in the space. The results of the statistical analysis indicated a positive correlation between the indoor CO2 concentration and the occupancy ratio (person/m³) and a negative correlation with the opening of windows and doors in the classroom. Both correlations are statistically significant, with p values of less than 0.001. During the measurement campaign, classrooms were occupied by students for around 70% of the time. Figure 6 shows the IAQ level during unoccupied and occupied periods. As expected, the proportion of the safe level significantly decreased during the occupied period, which implies an increased infection risk due to the presence of the students.
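The correlations above come from the hierarchical linear model described in Section 2.4. A minimal sketch of how such a model could be specified with the Statsmodels package is shown below; the file name, column names, and the use of a single grouping level (school) are simplifying assumptions for illustration, not the exact model of this study:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical table: one row per averaged observation interval per classroom
df = pd.read_csv("classroom_measurements.csv")

# CO2 concentration modelled from influential factors, with school as a
# random-intercept grouping level; season and educational level enter as
# categorical fixed effects.
model = smf.mixedlm(
    "co2_ppm ~ occupancy_ratio + opening_area + C(season) + C(edu_level)",
    data=df,
    groups=df["school"],
)
result = model.fit()
print(result.summary())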
Natural ventilation enables the renewal of indoor air but is manually controlled by opening windows and doors. During the measurement campaign, the researchers did not intervene in the opening of windows and doors in the classrooms. Hence, the occupants' ventilation behavior in schools was observed. The outcomes showed that classrooms had cross ventilation up to 54% of the time. Ventilation was carried out only by opening doors 19% of the time, which is slightly higher than only by windows (15%), while for the rest of the time the windows and doors were completely closed (no ventilation). Cross ventilation is the most effective strategy for improving indoor air quality in the classroom. As seen in Figure 7, cross ventilation maintained the IAQ above the IDA2 level in 90% of the observations and at the safe level in 70% of the observations. In comparison, ventilation through windows was more effective than through doors, which is consistent with the findings of other studies [6,63].

The statistical analysis results show that the CO2 concentration in the classroom is statistically different in winter than in spring and summer. As seen in Figure 8, classrooms had better IAQ in spring and summer than in winter. In summer, the IAQ was above the IDA2 level more than 90% of the time, compared with less than 50% in winter. The average indoor CO2 concentration in winter was 1194 ppm, which is significantly higher than that of spring (744 ppm) and summer (593 ppm). There is no significant difference in terms of occupancy for each season. Therefore, such a discrepancy was mainly due to different ventilation practices in the schools. In summer and spring, the classrooms had cross ventilation for nearly 78% and 69% of the time, respectively, compared to less than 29% in winter. In winter, the windows and doors were completely closed 23% of the time, and ventilation was carried out mainly by opening doors, which is consistent with the fact that the classroom occupants declined to open the windows due to the low outdoor temperatures.

Moreover, there is a statistically significant difference in CO2 concentration between educational levels (with p values < 0.001), while the rest of the factors are not correlated. In general, primary schools had better IAQ than secondary schools (Figure 9). The average CO2 concentration of primary schools was 744 ppm, while that of secondary schools was 1083 ppm. Such a discrepancy is believed to be caused by occupancy, generation rate, and ventilation. The average occupancy ratio of primary classrooms was 20% lower than that of secondary classrooms, while primary students generate around 28% less CO2 than secondary students [64]. Besides, primary classrooms had more cross ventilation than secondary classrooms by 10% on average.

In general, the investigated classrooms met the minimum thermal requirement (Table 6) 74% of the time, while 14 classrooms were above average. More than 90% of the classrooms achieved the minimum requirement at least 50% of the measured time. In contrast, the optimum thermal requirement was met only 19% of the measured time, and 8 classrooms never met the optimum requirement during the measurements.
Table 11 summarizes the accomplishment of the minimum and optimum thermal requirements in terms of operative temperature and relative humidity. Regarding the satisfaction of the minimum thermal requirement, the relative humidity was within the required range for 97% of the measured time, but the operative temperature exceeded the upper limit for nearly 23% of the time. For the optimum thermal requirement, the relative humidity was within the required range 53% of the time, but the optimum temperature was achieved only 36% of the time.

Concerning the initial thermal conditions of the classrooms during the measurement campaign, only 17% of the measurements achieved the optimum requirement, while 62% met the minimum requirement. Notably, 21% of the measurements initially failed to meet the minimum thermal requirements due to a high indoor temperature of above 27 °C in summer.

Influential Factor Analysis of Thermal Comfort. The statistical analysis results (Table 10) indicated that the season, occupancy ratio, ventilation strategy, and heating mode of the classroom are influential factors of thermal comfort. The results demonstrate that the operative temperature is statistically correlated with the season, with a p value < 0.001, while the relative humidity is independent of the season. In addition, both the indoor operative temperature and relative humidity are not correlated with the climate and geographic location of the building, which is mainly attributed to the fact that the indoor thermal conditions of the classrooms were regulated by the adaptive behavior of the occupants and the heating systems. The average operative temperature in spring was 22.53 °C, slightly higher than that in winter (21.24 °C); both were within the required range of the optimum temperature. In spring and winter, the minimum temperature requirement was achieved more than 95% of the measured time, whereas the satisfaction of the optimum temperature requirement was 14% higher in spring than in winter (Figure 10). In comparison, the average operative temperature in summer was 28.18 °C. The indoor operative temperature exceeded the upper limit of the minimum acceptable value (27 °C) during 67% of the measured time and exceeded the optimum temperature limit (25 °C) nearly 93% of the time. During the summer measurement campaign, teachers and students frequently complained to the researchers that it was too hot to bear, particularly in the afternoon.

Both the operative temperature and relative humidity are statistically correlated with the occupancy, ventilation state, and heating mode of the classroom (with p values < 0.001). Temperature and relative humidity are positively correlated with the occupancy ratio, indicating that an increase in occupancy may lead to higher indoor operative temperature and relative humidity. Natural ventilation had a negative impact on the indoor thermal conditions in general. As seen in Figure 11, ventilation brought in cool, dry air from outside in spring. On the contrary, it introduced a lot of heat from the outdoor air into the classrooms in summer, leading to a significant reduction in the satisfaction of the optimum temperature requirement. Since the heating systems were turned on in winter, ventilation had almost no impact on the indoor temperature, but it positively affected the indoor humidity, as it removed moisture from the indoor air, which reduces the condensation risk that may lead to the growth of mold. During the winter measurement, radiators in 3 classrooms were completely turned off during the
measurement day. The analysis found that the overall satisfaction of the optimum temperature increased owing to the heating, but an overheating problem was detected (i.e., the temperature was above the optimum limit 13.3% of the time). This can be attributed to the lack of a thermostat controlling the heating system in almost all the classrooms. In addition, although the heating systems evaporated the moisture in the air, the satisfaction of the optimum humidity requirement slightly dropped in general.

Although the indoor thermal conditions are not statistically correlated with the building construction year, the analysis of the winter data found that the schools built after 2006 had a higher proportion above the optimum temperature limit of 23 °C, by nearly 10% of the time on average (Figure 12), which suggests an overheating problem and a potential waste of energy for heating.

Thermal Comfort of Students. During the measurement campaign, students' activity state, clothing, and actual thermal sensation were investigated. Measurement data revealed that students remained in a sedentary state for over 80% of the time in the classroom and performed light activities (such as having breakfast and doing craft projects) and medium activities (walking) for around 10% of the time each.

The clothing insulation of students in each season is summarized in Table 12. It was observed that students wore fewer clothes than adults in general. The total clothing insulation value of students was lower than the value recommended by ISO 7730 [18] in all seasons. In addition, gender was a relevant factor of divergence, since the average clothing insulation value was greater for female students than for male students.

Furthermore, students' actual thermal sensation votes were collected and analyzed. In total, 596 TSV were collected in spring, 599 in summer, and 592 in winter. In terms of educational centers, 55% of the TSV corresponded to primary schools and 45% to secondary schools. Regarding gender, 49% of the votes collected were from female students and 51% from male students. Figure 13 shows the distribution of students' TSV in each season. As shown, thermal neutrality reached the highest level in winter, while most of the students felt hot (from +1 to +3) in summer and felt between neutral (0) and slightly hot (+1) in spring. The average values of students' TSV in spring, summer, and winter were 0.76, 1.26, and -0.04, respectively. Male students felt hotter than female students: in spring, summer, and winter, the average TSV of female students were 0.61, 1.08, and -0.26, while those of male students were 0.91, 1.41, and 0.16, respectively.

Linear regressions were established between the mean thermal sensation vote (MTSV) of students and the operative temperature at the time of the TSV (Figure 14). The neutral temperature of primary schools was found to be lower than that of secondary schools in spring and winter. These neutral temperatures are very close to the upper and lower limits of the optimum temperature range specified by the RITE standard [52]. The regressions had R² values of over 0.8 in summer, which indicates that over 80% of the variance in MTSV is attributed to changes in operative temperature. Both educational levels had lower R² values in spring and winter, which implies a greater influence of occupants' adaptive behaviors such as opening windows and changing clothes [65].

As seen in Figure 15, the optimum requirements of both IAQ and thermal comfort were achieved in 7.5% of the measured time. In contrast, only 0.3% of observations were labeled as completely bad, which means a failure to meet the minimum requirements of both aspects. For nearly 30% of the observations, one aspect reached a good level while the other achieved an acceptable level. Then, for 9% of the observations, both aspects only reached the acceptable level. Overall, the investigated classrooms achieved acceptable or good levels in both the IAQ and thermal comfort aspects for over 46% of the measured time.
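As a brief aside on the MTSV regressions just described, the neutral temperature is simply the operative temperature at which the fitted MTSV equals zero. A minimal sketch with made-up values (not the measured data of this study) is shown below:

import numpy as np

def neutral_temperature(t_op, mtsv):
    # Fit MTSV = a * T_op + b and return the temperature where MTSV = 0.
    a, b = np.polyfit(np.asarray(t_op, dtype=float), np.asarray(mtsv, dtype=float), 1)
    return -b / a

# Illustrative summer-like values: warmer classrooms receive hotter votes
t_op = [24.8, 26.1, 27.3, 28.2, 29.0]
mtsv = [-0.1, 0.4, 0.9, 1.3, 1.6]
print(round(neutral_temperature(t_op, mtsv), 1))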
Figures 16(a) and 16(b) summarize the identification results of representative scenarios under each IAQ and thermal comfort category obtained with the clustering analysis. The results showed that good IAQ and thermal comfort could hardly be achieved simultaneously in summer. According to the measurement results, only 7% of the observations in the good IAQ and good TC category were in summer, while the figures were 48% and 45% for spring and winter, respectively. For the good IAQ and acceptable TC category, summer observations also accounted for less than 29%. Besides, it was found that almost all observations in the good IAQ and bad TC category were from summer. In comparison, spring and winter create favorable conditions for ensuring good and acceptable IAQ and thermal comfort in the classrooms.

For the categories involving bad IAQ, secondary schools accounted for a high proportion. The main reason is that, as previously mentioned, the occupancy ratio of secondary schools is usually higher than that of primary schools, while these students also generate more CO2 than children. Therefore, it is necessary to limit the number of students in secondary school classrooms to guarantee a satisfactory IAQ level. Furthermore, the ventilation strategy is critical to maintaining good IAQ. Most of the observations in categories with bad IAQ are related to a small opening area of windows and doors. Cross ventilation with a sufficient total opening area (>3 m²) can guarantee good or acceptable IAQ in most cases, and should be adopted by schools, as strongly recommended by IDAEA [53].

Temperature was the main factor leading to poor thermal comfort in the schools. For the categories involving bad TC, 87% of the observations exceeded the upper limit of the minimum acceptable temperature. Only in a few cases in winter was the temperature below the acceptable limit. Relative humidity, in turn, usually caused a decline in the thermal comfort level, particularly in winter. When ventilation and heating occurred at the same time, the relative humidity fell below the lower acceptable limit, leading to a bad TC (cluster 5 in the good IAQ and bad TC category). When there was a lack of sufficient ventilation, the relative humidity was often higher than the optimum limit, which reduced the possibility of achieving good thermal comfort in the classrooms (clusters 2 and 5 in the bad IAQ and acceptable TC category).

The results of the representative scenarios suggest that clustering analysis is an effective and efficient way to analyze large measurement databases.

4.5. Discussion of Results. Maintaining good IAQ in the classroom is not a simple and easy task. As observed in many studies, classrooms did not meet the IAQ requirement of the relevant standards over 50% of the time [12,14,25]. This is often caused by a lack of adequate ventilation, particularly in winter, because occupants are usually less willing to open windows and doors in cold weather in order to preserve thermal comfort. In this study, classrooms failed to achieve the acceptable IAQ level nearly 49% of the time in winter, mainly due to a substantial reduction in cross ventilation. These findings are consistent with previous studies, which suggest the need for a proper ventilation protocol in schools. Monge-Barrio et al. [15] found that after adopting a clear ventilation protocol, IAQ in classrooms was significantly improved and the average CO2 concentration dropped by 1400 ppm. Miranda et al.
[66] discovered that when a ventilation protocol was enforced, IAQ in classrooms fully met the requirement 100% of the time, with a CO2 concentration maintained below 800 ppm. These studies all pointed out that students' thermal comfort was inevitably compromised due to the enforced ventilation protocol in winter. Therefore, more attention should be given to the balance between IAQ and thermal comfort when developing ventilation protocols [67]. However, up to now there has been a lack of reference in the relevant standards combining both aspects [68]. Accordingly, the representative scenarios of good and poor IAQ and thermal comfort identified in this study lay the foundation for the development of such a proper ventilation protocol.

Ensuring the thermal comfort of students is also a challenging issue. Due to the difference in thermal sensation, children usually have a lower comfort temperature than adults. In this study, students' neutral temperatures were found to range from 21.0 to 25.3 °C, which is close to the values observed in relevant studies under similar climatic conditions [5,19,27]. These values are generally lower than the comfort temperatures of university students summarized in existing research [16,69]. In this context, the indoor temperature values specified by existing building codes and standards may not properly fit the needs of primary and secondary students. Moreover, children's adaptation in classrooms is often passive and limited, while teachers have the initiative to control indoor thermal conditions. Kumar et al. [70] investigated the adaptive behaviors of university students and identified diverse adaptive opportunities such as turning on fans/air conditioners, operating windows and doors, changing clothing, changing postures, and walking indoors or outdoors. These options are usually not applicable to children because they have to ask for the teacher's permission [5,20]. Accordingly, these factors may ultimately lead to lower satisfaction with the indoor thermal environment in classrooms, as observed in this and other relevant studies [6,19]. These issues deserve more in-depth exploration in further research to guarantee a comfortable indoor environment for students.

Conclusions and Recommendations

This research conducted a comprehensive assessment of the IAQ and thermal comfort of educational buildings, based on an on-site measurement campaign involving around 600 students in 32 classrooms of primary and secondary schools in the Mediterranean climate.

For IAQ, the investigated classrooms met the minimum IAQ requirement 71% of the time and maintained the safe level, intended to prevent massive exposure to SARS-CoV-2, 53% of the time. Occupancy and ventilation were found to be the most significant influential factors causing the discrepancy in indoor CO2 concentration across the seasons. The classrooms had cross ventilation for more than half of the measured time in general, and occupants preferred ventilating the space by opening doors, especially in winter.
Concerning thermal comfort, the measured classrooms satisfied the minimum thermal requirement 74% of the time, but the optimum requirement was achieved less than 19% of the time. Poor thermal comfort was mainly caused by high indoor air temperatures in summer. The analysis found that indoor thermal conditions can be affected by factors such as season, occupancy, ventilation, and the heating mode of the classroom. The average clothing insulation value of students was lower than that specified by ISO 7730 [18]. The TSV analysis confirmed that female students are more sensitive to colder temperatures. In addition, students' neutral temperatures were found to be very close to the upper and lower limits defined by the RITE standard [52].

When the IAQ and thermal comfort aspects were assessed simultaneously, the minimum requirements of both aspects were achieved 46% of the time, but the optimum requirements were satisfied only 7.5% of the time. It was found that good IAQ and thermal comfort can hardly be achieved simultaneously in summer, while spring and winter render favorable conditions. Inadequate ventilation in winter not only results in bad IAQ in the classrooms but also leads to relatively high humidity, which reduces the potential for achieving good thermal comfort. Besides, secondary schools should limit the number of students in the classrooms, and cross ventilation should be performed with a sufficient total opening area (>3 m²).

Based on the findings of this research, it was concluded that good IAQ can be maintained by developing a proper ventilation protocol for schools, but the impact of ventilation on indoor thermal conditions must be taken into account. Future research could investigate the adaptive thermal comfort of students.

Figure 1: Research methodology of this study, consisting of four steps: (1) identification and description of educational buildings; (2) characterization of indoor air quality and thermal comfort; (3) development of the protocol for the on-site measurement campaign; and (4) analysis of the measurement results.
Table footnote: Ta: air temperature; Tg: globe temperature; Tr: mean radiant temperature; RH: relative humidity; Va: air velocity; Wsolar: solar radiation intensity. Interior perimeter*: along the wall.
4.2. Indoor Air Quality Analysis. Section 4.2.1 discusses the assessment results of the measured classrooms, and Section 4.2.2 analyzes relevant influential factors.
Figure 5: Indoor air quality assessment results of the investigated classrooms.
4.3. Thermal Comfort Analysis. Section 4.3.1 discusses the assessment results of thermal comfort, Section 4.3.2 analyzes relevant influential factors, and Section 4.3.3 assesses the actual thermal comfort of students.
4.3.1. Thermal Comfort Assessment. Table 9 summarizes the statistical details of operative temperature and relative humidity in classrooms, and Figure 5(b) shows the assessment results of thermal comfort in the measured classrooms.
Figure 6: IAQ levels according to the occupancy state.
Figure 7: IAQ levels according to the ventilation strategy.
Figure 8: IAQ levels and ventilation by season.
Figure 9: IAQ levels and ventilation by educational level.
Figure 10: Thermal comfort assessment results of the investigated classrooms.
Figure 11: Satisfaction of optimum temperature (a) and humidity (b) requirements according to the ventilation state and heating mode.
Figure 12: Satisfaction of optimum (a) and minimum (b) temperature requirements by building construction year.
Figure 15: Indoor air quality and thermal comfort of the investigated classrooms.
The number of representative clusters k identified by the Elbow method.
Table 2: Case study review on measurement protocol.
Table 3: U-values of construction elements of the investigated schools.
Table 4: Characteristics of selected schools and classrooms.
Table 6: IAQ and thermal comfort requirements.
Table 5: IAQ levels with corresponding indoor CO2 concentration limits.
Table 7: Categorization of occupancy ratio and total opening area.
Table 8: Summary of measured indoor thermal parameters in each season.
Table 9: Descriptive statistics of the measurement results. The classroom code corresponds to the combination of school and room codes in Table 4.
Table 11: Satisfaction of minimum and optimum thermal requirements in terms of operative temperature and relative humidity.
Table 12: Average clothing insulation value of students in each season.
Non-Intrusive Electric Load identification using Wavelet Transform

This paper shows the development of a decision tree for the classification of loads in a non-intrusive load monitoring (NILM) system implemented on a single board computer (Raspberry Pi 3). The decision tree uses the total energy value of the power signal of an equipment, which is generated using a discrete wavelet transform and Parseval's theorem. The power consumption data of different types of equipment were obtained from a public access database for NILM applications. The best split point for the design of the decision tree was determined using the weighted average Gini index. The tree was validated using loads available in the same public access database.

Introduction

Nowadays, the world is facing several challenges regarding energy usage, such as the availability of energy sources, carbon emissions, and sustainability, among others (Aiad & Lee, 2016b). Building energy management is becoming a major issue worldwide; it is estimated that nearly 40% of all electric power is consumed in buildings (Ma et al., 2016). Several countries, like China (Zhou et al., 2015), the European Union (Tsai & Lin, 2012), and Mexico (Honorable Congreso de la Unión, 2012), have developed public policies to mitigate these challenges.

The implementation of energy saving and/or efficiency actions, in particular regarding domestic installations, requires information on how the energy is used. Currently, energy meters provide information about total energy consumption through monthly bills, and they do not allow individual equipment consumption to be determined (Aiad & Lee, 2016b).

There are studies that show a relationship between knowledge of the amount of energy consumed by the equipment and the implementation of changes in the operating habits of that equipment by users, changes that promote energy savings which may vary between 9% and 20% (Aiad & Lee, 2016a). A suitable monitoring system is required to know the operating conditions of electrical appliances. The monitoring of these operations would facilitate the implementation of energy efficiency measures. In general terms, load monitoring is a process that seeks to identify and measure the energy consumption of a particular load (I. Abubakar, Khalid, Mustafa, Shareef, & Mustapha, 2017).

Traditionally, the monitoring of the operation of connected equipment in a facility is based on the installation and operation of a large number of sensors. Each power outlet or load has a sensor, and the system is called intrusive monitoring (He, Stankovic, Liao, & Stankovic, 2016); its disadvantages are high cost, complex installation, and difficult maintenance (I. Abubakar et al., 2017).

As an alternative to the inconveniences of intrusive monitoring, approaches with a reduced number of sensors have been developed, forming a Non-Intrusive Load Monitoring (NILM) system. In this type of system, the goal is to disaggregate individual load consumption from the total through the analysis of voltage and current waveforms at a single point located at the service's point of entry (I. Abubakar et al., 2017). Figure 1 shows the general structure of a NILM system (H. H.
Chang, Chen, Tsai, & Lee, 2012). This paper presents an implementation of a NILM system for power consumption signature detection based on the discrete wavelet transform, Parseval's theorem, and decision trees, suitable for execution on a Single Board Computer (SBC) such as the Raspberry Pi 3. The system developed is part of a smart power meter; its hardware is based on a Raspberry Pi 3 and an acquisition stage, as shown in Figures 2 and 3.

In the next section, a brief review of several approaches developed for load identification in NILM systems is presented. Wavelet transform structures suitable for NILM applications are presented afterward, followed by the design process of the decision tree using data available in a public database for NILM applications. Finally, the results obtained from the validation of the design of the decision tree for the classification of loads, as well as the conclusions drawn, are presented.

Non-Intrusive Load Identification

A NILM system analyzes voltage and current waveforms trying to identify a power consumption signature that can be associated with the nature and operating state of individual devices. These power consumption signatures can be classified as steady-state, transient, and non-traditional signatures (I. Abubakar et al., 2017).

The steady-state signature is obtained when the device has completed its starting stage and has reached steady operation; this identification uses parameters such as active power, reactive power, RMS voltage and current, power factor, and harmonic components (I. Abubakar et al., 2017).

The transient signature is drawn from the analysis of the period between the turn-on and steady states, or between the steady and turn-off states of a device, because during these periods some characteristic power consumption behaviors can be associated with specific loads (I. Abubakar et al., 2017).

Non-traditional signatures, on the other hand, can be obtained using the values of non-electric variables in the load identification process. Values of temperature, lighting, time of day, start-up time, among others, are used to give context to the device usage. Information from these variables can be combined with the previous signatures to improve identification (I. Abubakar et al., 2017).

To help in the identification process of the devices operating in an installation using NILM systems, a classification has been proposed (Bernard, Wohland, Klaassen, & Vom Bogel, 2016; Hart, 1992; Zoha, Gluhak, Imran, & Rajasegarar, 2012):

1. Type I. Turn-on/Turn-off. There are only two possible operating states (turned on and turned off); a typical example is a lamp.
2. Type II. Finite State Machines. They present several defined consumption levels and a cyclic operation. An example of these devices is the washing machine.
3. Type III. Continuous Variable Consumption. They have an infinite number of operating points when turned on; examples of these devices are light dimmers and power tools. They are a great challenge for identification due to the nature of their power consumption.
4. Type IV. Continuous Consumption. They operate during long periods of time, days or weeks; wireless phones and any remote-controlled appliance are perfect examples of this type of device.

Each one of these categories presents its own complexity for the identification of individual devices.
Total aggregate power consumption of devices operating inside an electric installation can be described as (Hart, 1992):

P(t) = Σ_{i=1}^{n} a_i(t) · P_i + e(t)    (1)

where P(t) is the power consumption over time, a_i(t) is the activation vector of device i, taking the values 0 and 1 when the device is off or on at time t, P_i is the power vector of device i, and e(t) is the error or noise term. Figure 4 shows an example of aggregated power consumption from different devices.

Identification of the power consumption signature has been implemented in different schemes in the literature. In general, the NILM identification process requires implementing six stages of analysis (Basu, Debusschere, Douzal-Chouakria, & Bacha, 2015; Liang, Ng, Kendal, & Cheng, 2010):

Data acquisition. It is required to gather information on the steady and transient states from the power waveforms of the device. A high-frequency sampling stage captures information regarding transient events; meanwhile, a low-frequency sampling stage gathers steady-state information of the device.

Data processing. Data must be conditioned and processed in order to give meaningful information. This stage includes noise filtering, harmonic component separation, signal synchronicity, etc.

Event detection. Processing and storing all the information is an inefficient and impractical process, so it is important to detect the activation and deactivation of the device. It is necessary to establish a threshold-crossing detection mechanism for the detection of transients.

Characteristic extraction. Electric parameters, such as active power, reactive power, harmonic components, and transient waveforms, can be extracted from the event detection and data processing stages. The identified characteristics depend on the disaggregation method used for load identification.

Load classification or disaggregation. Using the characteristic information gathered from the processed data, along with a known pattern, the device disaggregation can be performed from the total energy consumption; that is, the device can be identified.

Energy calculation. By identifying an individual device, its operation pattern and energy consumption can be estimated.

Using active power (P), reactive power (Q), RMS current, and harmonic components has given good results in identifying type I and II devices; however, it performs poorly with low-power devices. High-frequency alternatives have been developed to improve steady-state analysis by including harmonic content. Because the transient behavior of a device turns out to be distinctive in many cases, implementing this type of analysis can facilitate the identification process, and it requires the implementation of a high-frequency sampling scheme (Isiyaku Abubakar, Khalid, Mustafa, Shareef, & Mustapha, 2015; Zoha et al., 2012).

There are some reports of performance improvements of steady-state analysis using voltage and current waveforms as a way to identify unique features of loads, such as peak and RMS values, phase difference, and power factor (Zoha et al., 2012). These identification methods can be complemented using harmonic analysis along with real and reactive power features to improve the device detection of the algorithms, but their use requires high sampling rates of the waveforms (Abubakar et al., 2015; Zoha et al., 2012).

Most devices have a distinctive transient behavior that can be suitable for device identification; using a high sampling rate, it is possible to capture the transient behavior (Abubakar et al., 2015; Zoha et al., 2012). Features such as transient shape and turn-on energy calculation have been used to identify individual devices (Zoha et al., 2012).
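A toy numerical illustration of this additive model is given below; the device powers, the number of time steps, and the noise level are arbitrary values chosen purely for illustration:

import numpy as np

rng = np.random.default_rng(0)

# Additive model of Hart (1992): P(t) = sum_i a_i(t) * P_i + e(t)
P_i = np.array([1500.0, 120.0, 60.0])   # rated power of three devices (W)
T = 8                                   # number of time steps
a = rng.integers(0, 2, size=(3, T))     # on/off activation vectors a_i(t)
e = rng.normal(0.0, 5.0, size=T)        # measurement noise e(t)

P_total = a.T @ P_i + e                 # aggregate consumption over time
print(np.round(P_total, 1))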
Fourier Transform based spectral analysis of power consumption has proven useful to detect variable loads. To detect the operation of a device and estimate its energy consumption, the Short-Time Fourier Transform has been combined with active and reactive power calculations (Zoha et al., 2012). This mathematical tool performs the transformation of a time-domain function into a frequency-domain function (Marcu & Cernazanu, 2012). Figure 5 shows a spectral analysis reported by Liang, Ng, Kendall, et al. (2010) for different devices; it can be noticed that both TVs and air conditioners have a strong presence of low-order harmonics, while devices such as induction pots present a high content of high-order harmonics.

Markov models have become an interesting alternative for implementing NILM systems due to their simplicity in modeling basic functions. The general scheme to implement Markov models, in particular Hidden Markov Models (HMM), is based on the fact that device behavior can be represented as a latent state and an observable output, usually active power. A system with trained Markov models can perform inferences regarding the most probable state sequence of a device, based on the set of processed measurements. HMM have proven useful in precisely predicting the behavior of devices using measurements gathered with low-frequency sampling (< 1 Hz). The goal of an HMM-based NILM system is to generate energy consumption profiles and to determine the time of use of each device operating in an installation. Usually this information is considered non-critical, and its processing is performed off-line (Mueller & Kimball, 2016).

Markov modeling requires the periodic acquisition of T measurements, where each measurement is assumed to be associated with a state Q of the process, and each state of the process can assume one of N possible values. An HMM has a three-component structure: a transition matrix A containing the state-transition probability values; an observation probability φ for each state; and a vector π with the values of the initial state occupation probabilities. Figure 6 shows the structure of an HMM for a device with only three states (Mueller & Kimball, 2016). Identification of each device in an aggregate total power consumption first requires the determination of the sequence of states that can be used to compose the observation sequence. Using matrix A and φ, the inferred state sequence is calculated; it represents the most probable behavior of all the devices represented as a unity. The state sequence Q can then be used to determine which state sequence has the highest probability of occurrence for each individual device (Mueller & Kimball, 2016).

The Wavelet Transform is another tool that has been used to perform transient analysis of a device (Zoha et al., 2012). Analysis based on the Wavelet Transform extracts the desired waveform by applying a function translation and dilation process (Chen, Chang, & Chen, 2013). A more detailed discussion of the Wavelet Transform is presented next.

Wavelet Transform and Parseval's Theorem

The Wavelet Transform can be implemented in two ways: the Continuous Wavelet Transform (CWT) and the Discrete Wavelet Transform (DWT). The DWT has a structure more suitable for digital signal analysis. The DWT is derived from the CWT definition and can be expressed as (Chen et al., 2013):

DWT_ψ x(m, n) = (1 / √(a₀^m)) Σ_k x[k] · ψ((k − n·b₀·a₀^m) / a₀^m)    (2)

where x is the signal analyzed, ψ is the mother wavelet applied, a₀ is the scaling factor, and b₀ is the shift factor.
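To make the HMM inference step concrete, a self-contained Viterbi sketch for a toy three-state device is shown below. The transition matrix A, the per-state observation probabilities (matrix B, playing the role of φ) and the initial probabilities π are made-up values for illustration, not trained parameters:

import numpy as np

def viterbi(obs, pi, A, B):
    # Most probable hidden-state sequence for a discrete-output HMM.
    n_states, T = A.shape[0], len(obs)
    delta = np.zeros((T, n_states))
    psi = np.zeros((T, n_states), dtype=int)
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A)   # rows index the previous state
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    states = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        states.append(int(psi[t, states[-1]]))
    return states[::-1]

# Toy three-state device (off / low / high); observations are quantized power levels.
pi = np.array([0.80, 0.10, 0.10])
A = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
B = np.array([[0.90, 0.09, 0.01],
              [0.10, 0.80, 0.10],
              [0.01, 0.09, 0.90]])
print(viterbi([0, 0, 1, 2, 2, 1, 0], pi, A, B))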
Equation (2) can be transformed into (3) by substituting a = a₀^m and b = n·b₀·a₀^m. Setting a₀ = 2 and b₀ = 1, (3) becomes the dyadic form (4):

DWT_ψ x(m, n) = 2^(−m/2) Σ_k x[k] · ψ(2^(−m)·k − n)    (4)

The DWT performs two operations: dilation (applying scaling factors) and translation (applying shifting factors). These operations are performed to decompose a signal into a series of short-duration waveforms called mother wavelets. The mother wavelet has characteristics suitable for transient event analysis (H. H. Chang et al., 2012). Multi-Resolution Analysis (MRA) is based on the application of the DWT. MRA decomposes a complex waveform or signal into several sets of simpler waveforms; this is performed by a set of low-pass g[n] and high-pass h[n] filters (Chen et al., 2013). Figure 7 shows a three-level DWT filter structure. This type of structure provides a multilayer decomposition scheme, where the last low-pass filter g[n] gives an approximation value (level 3 in Figure 7), while the high-pass filters h[n] provide detail values (Figure 7 shows three such values). An increase in the number of levels will increase the number of detail values, but only one approximation value will be obtained (Chen et al., 2013).

Parseval's theorem is used to calculate the energy dissipated by a 1 Ω resistor when a discrete current f[n] flows through it. The theorem uses the Fourier Transform coefficients (Kocaman & Özdemir, 2009):

(1/N) Σ_{t=0}^{N−1} |f(t)|² = Σ_k |a_k|²    (5)

where N is the sampling period and a_k are the Fourier Transform coefficients. In order to apply (5) to the DWT, it is transformed into:

(1/N) Σ_{t=0}^{N−1} |f(t)|² = (1/N_J) Σ_n |a_J(n)|² + Σ_{j=1}^{J} (1/N_j) Σ_n |d_j(n)|²    (6)

The first term on the right-hand side of (6) represents the energy levels of the approximation component of the DWT, and the second term represents the energy levels of the detail components. The total energy of the DWT is then expressed as:

E_total = ||a_J||² / N_J + Σ_{j=1}^{J} ||d_j||² / N_j    (7)

where ||d_j|| is the norm of the expansion coefficients and N_J is the number of samples used at level J.

Load identification

Load identification from the total energy of the DWT decomposition was performed using a Decision Tree (DT). In the NILM system implemented, six types of loads were defined for identification: Air Conditioning (class 0), Compact Fluorescent Lamp (class 1), Fan (class 2), Refrigerator (class 3), Vacuum cleaner (class 4), and Washing Machine (class 5).

In a NILM system, each type of load carries hidden information. When developing a DT, the main goal is to develop a classification tree that contains an optimal entry node, which requires measuring the impurity of the tree nodes; this can be performed using the Gini Index (Alshareef & Morsi, 2015; J. M. Gillis et al., 2016; J. Gillis & Morsi, 2016):

Gini(σ) = 1 − Σ_{c=1}^{C} [f(c|σ)]²

where C is the number of classes and f(c|σ) is the probability that σ belongs to class c. The design procedure for a DT suitable for classification can be seen in Figure 8.
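A compact way to obtain such a total-energy decomposition in Python is sketched below using the PyWavelets package. This is an independent illustration rather than the exact function shown in Figure 9 of the original work; the per-band normalization by the number of coefficients (the division by N_j in Eq. (7)) is omitted for brevity, and the test signal is synthetic:

import numpy as np
import pywt

def total_energy_vector(signal, wavelet="db3", level=8):
    # Energy of the approximation and detail bands of a DWT decomposition,
    # returned as [E_A, E_D1, ..., E_Dlevel] (Parseval-style sums of squares).
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]          # details ordered D_level .. D1
    energies = [float(np.sum(approx ** 2))]
    energies += [float(np.sum(d ** 2)) for d in reversed(details)]
    return energies

# Synthetic example: a 60 Hz current-like waveform with a short switching
# transient, sampled at 30 kHz (the PLAID sampling rate mentioned below).
fs, f0 = 30_000, 60
t = np.arange(0, 0.2, 1 / fs)
x = np.sin(2 * np.pi * f0 * t)
x[3000:3050] += 0.5 * np.sin(2 * np.pi * 2_000 * t[3000:3050])  # transient burst
print([round(e, 2) for e in total_energy_vector(x)])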
Considering that there are six types or classes of devices and that an eight-level DWT analysis was applied using a Daubechies 3 mother wavelet, the total energy values of the DWT for one approximation level (A8) and eight detail levels (D1 to D8) were obtained. The number of levels is directly related to the harmonic spectrum covered within the sampling frequency; hence, the eight-level analysis was chosen to cover a frequency analysis from 30 kHz (the sampling frequency) down to 117,18 Hz. The sampling frequency for the analysis is set in the PLAID public database (Gao, Giri, Kara, & Bergés, 2014). To calculate the Total Energy Vector (TEV) from the DWT, a Python function was developed and its results were compared with those of a MatLab Wavelet Toolbox function. The Python function has a quadratic average error of 0,22%; this accuracy in the calculation of the TEV helps to differentiate appliances with close energy signatures. The code of the Python function is shown in Figure 9. Table 1 shows the DWT total energy values of 48 devices taken from PLAID.

Below is the procedure to find the best split point of the classification DT.

Total Energy list sorting. The first step required to find the best split point is to sort the values of the DWT Total Energy list in ascending order before the calculation of the Gini Index. The sorted list is partially shown in column 1 of Table 2. Column 2 shows the mid-point between two adjacent values of the DWT Total Energy.

Split point calculation based on mid-points. Columns 3 and 4 of Table 2 show the number of devices of each class with values lower than (column 3) or greater than/equal to (column 4) the mid-point value of column 2. It can be seen in Table 2, for instance, that the total number of devices with Total Energy values greater than or equal to 1 465,72 is 34, and they are distributed as: six class 5, eight class 4, eight class 3, one class 2, none class 1, and eleven class 0.

Gini Index and its weighted average. The Gini Index provides a measurement of the impurity of a node; when its value reaches a minimum, the best split point is found. Since there are two columns of membership (columns 3 and 4), a value that includes both must be obtained; this is done by calculating the weighted average of the Gini Index using:

Gini_WA(σ) = (S1/S)·Gini(σ)a + (S2/S)·Gini(σ)b

where S1 and S2 are the numbers of devices with Total Energy values lower than, and greater than or equal to, the mid-point value; S = S1 + S2 is the total number of devices; and Gini(σ)a and Gini(σ)b are the Gini indices of the devices with Total Energy values lower than, and greater than or equal to, the mid-point value, respectively. The results of this calculation are shown in column 6 of Table 2.

Best split point identification. Because the Gini Index measures the impurity of a node, the minimum value of this index corresponds to the best split point, thus identifying the entry node of the DT. By inspection of Table 2, it is found that the minimum value of Gini_WA(σ) is 0,7242, which corresponds to a mid-point total energy value of 1 565,42. The tree developed from this entry point is shown in Figure 10.

Simulation Results

The classification DT was tested using 30 devices from the PLAID database (Gao et al., 2014); these devices were not included in the design process. Table 3 shows the total energy values used for testing, the identification result from the DT, the device class, and whether there was an error in the identification process.
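A minimal sketch of this split-point search is given below. The energy values and class labels are illustrative only (not the Table 2 data), and the function simply returns the mid-point with the lowest weighted-average Gini index:

import numpy as np

def gini(labels):
    # Gini impurity: 1 - sum_c f(c|sigma)^2
    _, counts = np.unique(labels, return_counts=True)
    f = counts / counts.sum()
    return 1.0 - float(np.sum(f ** 2))

def best_split(energy, labels):
    # Return (mid-point, weighted-average Gini) with the lowest impurity.
    order = np.argsort(energy)
    e = np.asarray(energy, dtype=float)[order]
    y = np.asarray(labels)[order]
    best_mid, best_gini = None, np.inf
    for i in range(1, len(e)):
        mid = (e[i - 1] + e[i]) / 2.0
        left, right = y[e < mid], y[e >= mid]
        w = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if w < best_gini:
            best_mid, best_gini = mid, w
    return best_mid, best_gini

# Illustrative total-energy values and device classes
energy = [480.2, 512.9, 1490.1, 1565.4, 2210.7, 3020.5]
labels = [1, 1, 2, 0, 0, 3]
print(best_split(energy, labels))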
As can be seen in Table 3, only three devices were wrongly identified, meaning that the proposed classification DT has a 90% success rate in identifying the load class. Some HMM solutions have reported success rates between 51,66% and 87% using the REDD database (Aiad & Lee, 2016a, 2016b). Among approaches using DWT as part of the analysis process, Alshareef (2015), using a Daubechies 3 mother wavelet, reached 95,83% with 1 000 decision trees; Chang (2014) reported a DWT combined with an Artificial Neural Network reaching identification levels between 86,16% and 96,82%; and Gillis (2016) reported a DWT plus Decision Tree, using a Daubechies 3 mother wavelet and six levels of decomposition, achieving 96,18% success in load identification.

Figure 4. Aggregated power consumption from different devices. Source: Hart (1992). Figure 4 shows an example of aggregated power consumption from different devices. Identification of the power consumption signature has been implemented in different schemes in the literature. In general, the NILM identification process requires implementing six stages of analysis (Basu, Debusschere, Douzal-Chouakria, & Bacha, 2015; Liang, Ng, Kendal, & Cheng, 2010).

Figure 6. Structure of a HMM for a three-state device. Source: Mueller & Kimball (2016).

Figure 8. Procedure for best split point identification for the classification DT. Source: Authors.

Figure 9. Python code of the DWT and Total Energy Vector calculations. Source: Authors.

Figure 10. Classification DT based on the values of Table 2. Source: Authors. The classification DT developed presents an unbalanced structure; this is not rare when the Gini Index is used. There are eight nodes to the left (Energy lesser than), and seventeen to the right (Energy greater than/equal to), of the entry node. The implementation of the classification DT as a Python function is shown in Figure 11.

Figure 11. Python function code for the classification DT. Source: Authors.

Table 1. Total Energy of DWT analysis.

Table 2. Sorted list of Total Energy of DWT.

Table 3. Simulation results from the classification DT for load identification.
4,713.8
2018-05-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Expanding the repertoire of Antibody Drug Conjugate (ADC) targets with improved tumor selectivity and range of potent payloads through in-silico analysis

Antibody-Drug Conjugates (ADCs) have emerged as a promising class of targeted cancer therapeutics. Further refinements are essential to unlock their full potential, which is currently limited by a lack of validated targets and payloads. Essential aspects of developing effective ADCs involve the identification of surface antigens, ideally distinguishing target tumor cells from healthy types and uniformly expressed, accompanied by a high-potency payload capable of selective targeting. In this study, we integrated transcriptomics, proteomics, immunohistochemistry and cell surface membrane datasets from the Human Protein Atlas, Xenabrowser and Gene Expression Omnibus utilizing Lantern Pharma's proprietary AI platform Response Algorithm for Drug positioning and Rescue (RADR®). We used this in combination with evidence based filtering to identify ADC targets with improved tumor selectivity. Our analysis identified a set of 82 targets and a total of 290 target-indication combinations for effective tumor targeting. We evaluated the impact of tumor mutations on target expression levels by querying 416 genes in the TCGA mutation database against 22 tumor subtypes. Additionally, we assembled a catalog of compounds to identify potential payloads using the NCI Developmental Therapeutics Program. Our payload mining strategy classified 729 compounds into three subclasses based on GI50 values spanning the pM to 10 nM range, in combination with sensitivity patterns across 9 different cancer indications. Our results identified a diverse range of both targets and payloads that can serve to facilitate multiple choices for precise ADC targeting. We propose an initial approach to identify suitable target-indication-payload combinations, serving as a valuable starting point for the development of future ADC candidates.

Introduction

Antibody-drug conjugates (ADCs) offer a promising approach towards targeted cancer treatments. The approval of 12 ADCs for the treatment of hematological and solid tumors, along with more than 170 novel ADCs in clinical development, serves as compelling evidence of the growing acceptance of this therapeutic approach in treating cancers [1]. ADCs leverage the specificity of antibodies and increasingly innovative linker-payload technologies to deliver potent cytotoxic agents selectively to tumor cells, while minimizing adverse effects on healthy cells. The efficacy and safety of an ADC is determined by the interplay of each of its three essential components: an antibody, a cytotoxic payload, and a chemical linker [2]. While ADCs have demonstrated remarkable success as targeted therapeutics, there are still challenges to be addressed. For selective targeting and improved efficacy of ADCs, it is highly desirable to: 1) optimize target selection, which plays a pivotal role in the establishment of a therapeutic window; 2) identify highly potent payloads with diverse mechanisms of action capable of selective targeting [3]; and 3) design a linker that effectively transports the payload, either by releasing or retaining it [4].
The range of targets currently undergoing clinical investigation is narrow, with a notable focus on a few antigens such as HER2, Trop-2, CLDN18.2 and EGFR [5], frequently leading to clinical benefit for a limited set of cancers. The optimal target for ADC development should exhibit both high and uniform expression in tumor cells, while excluding expression in normal cells [6]. ADC targets currently under development represent a wide-ranging expression profile in both tumor and normal cells. In addition, expression of the target antigens is often modulated in accordance with the mutation profile of tumor cells [7,8]. Therefore, in the pursuit of next-generation ADCs, it is crucial to take into account the uniformity of target expression among patients who are positive for the target, along with the exploration of novel targets [9].

The payload is another key component of an ADC, frequently composed of highly potent cytotoxic agents with IC50 values ranging from picomolar to low nanomolar [10]. Microtubule-targeting agents and DNA-damaging agents are among the most commonly used payloads, representing 57% and 17% of clinically tested ADCs, emphasizing the scarcity of diversity in terms of mechanism of action [11]. Furthermore, these payloads frequently encounter issues related to toxic side effects, emergence of drug resistance, and efficacy against a limited range of tumor targets [10,12]. There is a need for proficient alignment of the payload's mechanism of action with the biological characteristics of the target tumor [13]. Identification of payloads with high potency, selective targeting, and diverse mechanisms of action capable of evading drug resistance is highly desired for enhancing ADC effectiveness [10].

In our present study, we aimed to uncover ADC targets and payloads with improved tumor selectivity. To select target candidates for ADCs, we implemented the initial steps outlined in the approach presented by Razzaghdoust et al. [14] for ADC target identification. Our present work is distinguished by the subsequent research methodology and steps, including a comparative analysis of expression levels using datasets from IHC staining and RNAseq, followed by the GEO study (GSE42519) [15] and mutational profiles. We utilized Lantern Pharma's proprietary AI platform RADR® (Response Algorithm for Drug positioning and Rescue) and the Human Protein Atlas (HPA) database version 22.0 (https://v22.proteinatlas.org/) to integrate transcriptomics, proteomics and immunohistochemistry (IHC) data from 20 tumor types and 44 normal tissues, as well as cell surface membrane based datasets [16]. Elevated levels of the target antigen on blood cell types can impede the accumulation of ADCs at the tumor site [8]. Therefore, in the subsequent stage, we utilized the data from the GEO study (GSE42519) [15] to eliminate the targets that display high expression across various blood cell types, such as hematopoietic stem cells (HSCs) and multipotent progenitor cells (MPPs). Furthermore, we employed the TCGA mutation database to explore the impact of altered genes in several tumor types on the expression levels of targets, aiming to improve precision targeting of ADCs for specific patient populations.
To identify potential payloads with selective tumor targeting, we employed the NCI-DTP data, which has screened over 50,000 molecules utilizing a 60 tumor cell line screening platform over the span of 20 years [17]. We primarily focused on the compounds exhibiting activity in the picomolar (≤1 nM) and low nanomolar (>1 nM-10 nM) range in the 9 cancer indications covered by the NCI60 cell lines. In the current study, we report a strategy to compile a list of compounds that demonstrate specific or heightened sensitivity towards a desired cancer type. This approach can potentially aid in the identification of novel payloads, as well as the possibility of repurposing existing cytotoxic agents in a tumor selective manner.

Notably, a recent article published by Bosi et al., 2023 [6] made valuable contributions by investigating ADC targets and potential predictors of treatment response across multiple cancer types. In comparison, it becomes evident that while their work focused on clinically developed targets and payloads, our research contributes towards the identification of novel targets and unexplored potential payloads as well.

We examined an initial approach to explore target, indication and payload combinations. This may serve as a good starting point for further investigations and refinements in the complex process of ADC design.

Identification of potential ADC target candidates

Derived from methods used by Razzaghdoust et al. [14] and delineated in the methods section and in Fig 1, we initially identified 5543 membrane protein coding genes out of a total of 20,090 genes using the HPA database version 22.0. For further analysis, 4875 genes with evidence at the protein level were retained. It is worth mentioning that the same gene, which has a membrane protein annotation, may also have intracellular localization for its isoforms. This is seen for many clinically validated target antigens, such as CD276 and ERBB2, which carry two annotations (membrane protein and intracellular) in the Protein Atlas database. Such antigens are retained in our approach. By relying on the annotation used in the Protein Atlas database, we exclusively filtered out proteins which lacked any membrane annotation from further evaluation.

In order to minimize possible side effects of ADC targeting on healthy cells, we considered the removal of genes with high expression levels in 13 critical normal tissues as used in [14]: lung, oral mucosa, esophagus, stomach, duodenum, small intestine, colon, rectum, liver, kidney, heart muscle, skin, bone marrow. This step resulted in 1731 genes for subsequent investigation. We prioritized potential targets exhibiting high expression levels on tumor cells; hence we excluded any genes with a low quasi H-score (<150) in all of the cancer types. Using this criterion, we retained 763 genes with a >150 quasi H-score in at least one out of 20 tumor types. As a subsequent step, we filtered out genes which did not show cell surface localization, using the annotation provided by the in silico human surfaceome [16] publicly available database (http://wlab.ethz.ch/surfaceome).
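To illustrate how the sequential target filters described above might be chained together, here is a hedged Python sketch. The DataFrame layout and column names (gene, protein_class, evidence, tissue, level) are illustrative assumptions, not the authors' actual pipeline or the HPA export schema.

```python
# Hypothetical sketch of the sequential ADC-target filters (membrane
# annotation -> protein-level evidence -> critical-tissue exclusion ->
# quasi H-score -> surfaceome); column names are illustrative.
import pandas as pd

CRITICAL_TISSUES = {
    "lung", "oral mucosa", "esophagus", "stomach", "duodenum",
    "small intestine", "colon", "rectum", "liver", "kidney",
    "heart muscle", "skin", "bone marrow",
}

def filter_targets(genes, normal_tissue, quasi_h, surfaceome):
    # 1) keep predicted membrane proteins with protein-level evidence
    g = genes[genes["protein_class"].str.contains("membrane", case=False)]
    g = g[g["evidence"] == "Evidence at protein level"]
    # 2) drop genes highly expressed in any of the 13 critical tissues
    high = normal_tissue[
        normal_tissue["tissue"].isin(CRITICAL_TISSUES)
        & (normal_tissue["level"] == "High")
    ]["gene"].unique()
    g = g[~g["gene"].isin(high)]
    # 3) keep genes scoring >150 in at least one of the 20 tumor types
    keep = quasi_h[quasi_h.max(axis=1) > 150].index
    g = g[g["gene"].isin(keep)]
    # 4) keep only annotated cell-surface proteins
    return g[g["gene"].isin(set(surfaceome))]
```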
Considering the diversity of data types, which included RNAseq, immunohistochemistry, HPA web portal data, and the calculated quasi H-score, we implemented two stringent filtering steps to identify potential ADC target candidates and excluded: 1) any gene that did not exhibit consistency between the mRNA and IHC data for normal tissue, and 2) any gene which did not show consistency between the mRNA and calculated quasi H-score data for tumor types in TCGA. Following this methodological filtration process, we derived a list of 123 genes, of which we considered 122 genes for further analysis, excluding one gene due to its absence in the GEO study (GSE42519) data [15].

Increased levels of target expression on various blood cell types can limit the accumulation of ADCs at target tumor sites [8]. The lack of targeted antigens on hematopoietic stem cells (HSCs) provides an advantage, allowing normal blood cells to recover from HSCs following temporary depletion caused by ADCs [3]. Consequently, we eliminated the 28 targets that display high expression on blood cell types, such as HSCs and multipotent progenitor cells (MPPs), by using the data from the GEO study GSE42519 [15]. This led to the retention of 94 genes, which included 67 genes with medium and 27 with low expression levels on HSCs and MPPs, as given in the S1 File.

In the final step, we applied five criteria to prioritize the targets and kept the ones which met at least one of these criteria: 1) Literature: targets for which there is existing literature evidence elucidating their potential role in tumor biology; 2) Antibody: targets against which antibodies have been generated; 3) Protein family: targets belonging to a protein family in which other protein isoforms have been employed for the advancement of an ADC in either a clinical or preclinical setting; 4) Preclinical: targets tested in a preclinical setting; and 5) Clinical: targets tested in a clinical setting. In total, 82 prioritized targets navigated the entire validation process and are listed in Fig 2A and 2B. The data for both figures are given in the S2 and S3 Files. 40 of these 82 prioritized targets show either no detectable, low, or medium expression across all 44 normal tissues. 15 targets (AQP5, ATP2B2, CLCNKB, CSPG5, EDNRB, ENPP5, FLT1, GPBAR1, GRIN1, HEPACAM, MSLN, MUC16, PODXL, PTPRZ1 and SLC2A9) exhibited low or not detected expression levels across all 13 critical normal tissues. From our list of 82 prioritized targets, 22 have already been tested as ADCs in preclinical or clinical settings, including HER2, NECTIN4 and EGFR, demonstrating the validity and potential of our approach.

We identified 60 additional targets which, to our knowledge, have not been used for ADC development. Our list included the insulin-like growth factor-2 receptor (IGF2R) and SORT1, which have been explored for radioimmunoconjugate [18] and peptide-drug conjugate targeting [19], respectively. The list included 19 targets against which antibodies have been generated in either the oncology or non-oncology space, e.g., colony-stimulating factor 1 receptor (CSF1R/CD115), against which the monoclonal antibody emactuzumab is under clinical investigation [20]. The colony-stimulating factor 1 receptor (CSF-1R) functions as a transmembrane receptor tyrosine kinase and is the receptor for colony-stimulating factor 1 (CSF-1) [21]. Intratumoral CSF-1/CSF-1R signaling has been reported to play a key role in triggering the recruitment of tumor-associated macrophages, leading to tumor growth and facilitating metastasis [22][23][24].
Among the 60 remaining targets mentioned above, 22 belong to a protein family which has been employed for ADC development. For example, one member of the ectonucleotide pyrophosphatase/phosphodiesterase protein family, ENPP5, has been identified as a potential ADC target; another protein from this family, ENPP3, underwent clinical trials for ADC development targeting renal cell carcinoma (RCC) [25]. Our analysis suggests that such targets may hold potential to be explored as ADC targets. An additional 28 of these 60 targets, or their protein families, have not been explored for the generation of ADCs or antibodies. However, there is existing literature evidence elucidating their potential role in tumor biology. For example, UGT8 is one such target, encoding a protein belonging to the UDP-galactose:ceramide galactosyltransferase family. UGT8 is an enzyme responsible for catalyzing the transfer of galactose molecules from UDP-galactose to ceramide, leading to the formation of galactosylceramide [26]. Elevated expression of UGT8 is reported in multiple malignancies, such as breast, lung and prostate cancers [26][27][28].

Among our list of new potential ADC targets, there are a few intriguing candidates pertaining to a protein family that is being utilized as targets for ADC development, against which antibodies have been generated, and which have a well understood role in tumor biology. Examples include NOTCH2, against which the monoclonal antibody tarextumab was generated [29] and tested in phase II clinical trials [30]. While an ADC against the protein family member NOTCH3 was subjected to clinical investigation [31], NOTCH2 has not been investigated as an ADC target. The biological significance of an ADC target is underscored by its overexpression in cancer cells, its key role in disease development, its ability to facilitate ADC internalization, support from both preclinical and clinical research, and its restricted expression in normal tissues [32]. Further investigation is necessary to evaluate the internalization potential of these additional targets.

We found that 16 targets from our list were able to target more than 7 indications with a >150 quasi H-score (Fig 3), possessing substantial literature evidence indicating their potential role in tumor biology. This list includes CD276 (B7-H3), which is already under clinical investigation for ADC development. Another intriguing potential target candidate in this list comes from the non-oncology space: OSMR, the receptor for Oncostatin M (OSM), which exhibited overexpression across 10 cancer indications in our analysis. A fully human monoclonal antibody against OSMR has been generated and is in clinical trials for pruritus in prurigo nodularis [33]. Adequate preclinical data are present, substantiating that overexpression of OSMR results in unfavorable outcomes across a broad spectrum of tumor types [34][35][36][37][38][39][40][41][42][43]. It would be of interest to further evaluate the role of these targets in additional tumor indications, as well as their potential to serve as ADC targets.
Exploring the impact of mutated genes on the expression levels of prioritized ADC targets

The process of payload internalization and retention, and ADC efficacy, is significantly influenced by target expression on the tumor tissue [44]. ADC targets under development often show heterogeneous expression profiles on tumor tissues [6]. A key aspect of tumor heterogeneity comes from genomic instability and the mutational landscape. Therefore, we employed the TCGA mutation database to determine the correlation between the expression levels of the 82 prioritized targets and 416 mutated genes across 22 tumor types. We found that 336 of the 416 mutated query genes significantly altered the expression of 46 of the 82 targets. To identify strong correlations, we exclusively considered targets showing a log2 fold change greater than or equal to 1, in conjunction with the cancer subtype exhibiting a population change of 5% or more due to the specified mutation. Our analysis showed that the KRAS mutation altered the expression of 23 targets across 4 tumor subtypes, while the p53 mutation affected the expression of 16 targets across 10 tumor subtypes. TCGA tumor type abbreviations are given in the S4 File.

RAS comprises 3 genes (H-RAS, K-RAS and N-RAS) that encode proteins playing critical roles in key cell signaling pathways, and is the second most prevalent gene driver mutation across diverse human cancers, manifesting in 20% to 30% of all human malignancies [45]. Notably, K-RAS is the most frequently mutated of the three RAS genes, with the oncogenic variant being detected in approximately 88% of pancreatic cancer cases [46]. The results of our mutation analysis revealed upregulation of 10 targets (AQP5, CDCP1, CLDN1, ERBB2, MSLN, MUC16, NECTIN4, SCNN1A, SLC44A4, and TSPAN15) in KRAS-mutated pancreatic adenocarcinoma (PAAD), unlocking their potential to provide clinical benefit in this subset of the patient population (Fig 5A).

Recent investigations report an elevated occurrence of EGFR mutations in low-grade gliomas (LGGs), reaching up to 23% [47]. EGFR-mutated LGGs display a poorer overall survival outcome [48]. Our analysis revealed that alteration in the EGFR gene can lead to upregulation of 2 clinically tested ADC targets, FGFR3 and MMP-14, and one new potential target, OSMR, in LGGs (Fig 5B). Developing ADCs using targets overexpressed in EGFR-mutated LGGs holds potential clinical advantages.
MSLN showed a 4.37 and 2.61 absolute fold upregulation in the STK11- and KEAP1-mutated lung adenocarcinoma (LUAD) patient populations, respectively (Fig 5C). Our analysis suggests that ADCs targeting MSLN may be particularly beneficial in lung cancer patients harboring dual mutations in the STK11 and KEAP1 genes. We observed that BRAF mutations led to changes in the expression level of 7 targets, most prominently in thyroid carcinoma (THCA) (Fig 5D). This included upregulation of NECTIN4, the target of the approved ADC Enfortumab Vedotin. Another study group reported that more than 50% of patients with THCA had BRAF-mutant samples [49], which might provide a possible explanation for our observations. It is important to highlight that mutations in a gene can exert varied impacts on the target expression level, depending on the tumor type. For example, our analysis highlighted that mutation in TP53, the gene coding for the tumor suppressor protein p53, correlates with the upregulation of MSLN in breast invasive carcinoma (BRCA) and PAAD. Conversely, it correlates with the downregulation of MSLN in cervical squamous cell carcinoma and endocervical adenocarcinoma (CESC). Additionally, we observed that TP53 mutation in BRCA correlates with a 4.50-fold downregulation in SLC39A6/LIV-1 expression, corroborating the results published by Fang et al. [8].

Similarly, the expression of a single target can vary significantly depending on the combination of tumor type and gene mutation. For example, expression of MSLN was upregulated in 35 tumor type-gene mutation combinations, while it was downregulated in another 33 combinations, as shown in Fig 6A. Another insight which can be extracted from our analysis relates to the FDA-approved ADC target NECTIN4, which was upregulated in 4 tumor type-gene mutation combinations, including OV/TP53, THCA/BRAF, PAAD/KRAS and PAAD/SMAD4, and was downregulated in 25 combinations. 21 of these mutations resulted in the downregulation of NECTIN4 expression specifically in Uterine Corpus Endometrial Carcinoma (UCEC), as illustrated in Fig 6B. Tumor heterogeneity can impact ADC target expression, leading to uneven binding and reduced efficacy. This may result in resistant tumor subpopulations, limiting the overall therapeutic response [50]. Understanding the impact of mutations on heterogeneous target expression patterns in cancers can help improve treatment response and provide an approach for further personalized oncology using ADCs.

Identification of potent tumor selective payload candidates

We analyzed data on more than 50,000 compounds from the NCI-DTP portal. Following the procedures outlined in the methodology section and shown in Fig 7, we categorized 47,310 unique compounds based on their sensitivity level into two groups: a) compounds exhibiting sensitivity in the picomolar (<1 nM) range and b) compounds exhibiting sensitivity in the low nanomolar (1 nM-10 nM) range. Subsequently, compounds that had ≥50% response in at least 1 of the 9 indications in NCI60 were retained, leading to a total of 209 compounds in the picomolar group and 2413 compounds in the low nanomolar group. In the next step, compounds which failed NCI60 screening were eliminated, which led to the removal of 93 and 1616 compounds from the picomolar and low nanomolar groups, respectively. This resulted in a total of 729 compounds: 33 compounds in the picomolar group, 631 compounds in the low nanomolar group, and 65 compounds exhibiting activity in both the picomolar and low nanomolar ranges across the 9 NCI60 cancer indications.
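For illustration, the GI50-based grouping just described could be expressed as follows in Python. This is a hedged sketch, not the authors' pipeline: the DataFrame layout (one row per compound/cell line pair) and the column names GI50_M, panel, compound, and responds are assumptions.

```python
# Hypothetical sketch of the GI50-based payload grouping and the
# ">=50% response in at least 1 of 9 indications" retention filter.
import pandas as pd

def group_payload_candidates(df, gi50_col="GI50_M", indication_col="panel"):
    picomolar = df[df[gi50_col] < 1e-9]                             # <1 nM
    low_nano = df[(df[gi50_col] >= 1e-9) & (df[gi50_col] <= 1e-8)]  # 1-10 nM

    def retain(sub):
        # Fraction of cell lines responding, per (compound, NCI60 panel);
        # keep compounds with >=50% response in at least one panel.
        resp = sub.groupby(["compound", indication_col])["responds"].mean()
        ok = resp[resp >= 0.5].reset_index()["compound"].unique()
        return sub[sub["compound"].isin(ok)]

    return retain(picomolar), retain(low_nano)
```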
In the resulting picomolar group, 1 compound is FDA approved and 1 reached the clinical trial stage, while in the low nanomolar group, 27 compounds are FDA approved and 41 reached clinical trials, based on NCI60 annotations. Among the compounds common to both subgroups, there are 4 FDA-approved compounds and 1 compound which reached clinical trials. Using a hierarchical clustering method, in order to identify similar or contrasting sensitivity patterns, we subdivided the 33 compounds from the picomolar group into 5 clusters; the 631 compounds from the low nanomolar range and the 65 compounds overlapping both sensitivity groups (picomolar and low nanomolar) were subdivided into 10 clusters each.

Many potent cytotoxic agents are limited by drug resistance [51]. This drug resistance is predominantly caused by increased expression of multidrug transporters, like P-glycoprotein (MDR1/ABCB1) [52]. Therefore, it becomes imperative to identify potential payloads which can elude multidrug resistance (MDR) mechanisms. Cryptophycins are one such class of potential payloads, being active against MDR cancer cell lines [53]. Cryptophycin failed to show single-agent efficacy in clinical trials but has re-attracted interest as a promising ADC payload [51]. Another compound identified from our compilation is a colchicine analog, mivobulin isethionate, which is also a microtubule-targeting agent that demonstrated broad-range antitumor activity in cell lines exhibiting MDR in preclinical evaluation [54]. It failed to show efficacy as a single agent in earlier clinical trials [54][55][56]. However, it may be of interest to explore the possibility of repurposing such compounds or their analogs as ADC payloads in a tumor selective manner.

By employing our strategy, it becomes feasible to identify compounds that exhibit distinct activity in either solid tumors or hematological malignancies. For example, our clustering results found nogamycin, an anthracycline, to show limited activity in hematological cancer cell lines at the picomolar level, while it showed 100% activity in prostate cancer, followed by 60% activity in breast cancer cell lines. Similarly, our compilation indicated that vedelianin exhibits differential activity, with blood cancers showing greater sensitivity. A recent review emphasized the potential of exploring Golgi apparatus-targeting compounds to create innovative therapeutic agents against cancer cells [57]. Vedelianin could potentially hold intriguing characteristics due to its disruptive effects on the Golgi apparatus [58]. It has been reported to show antiproliferative activity at low nanomolar concentrations in tumors. Notably, a fully synthetic route to this molecule has been published [59].

Our analysis may help identify novel potential payloads with diverse mechanisms of action and selective tumor targeting. Among the compounds identified in our screening are illudins, a class of natural compounds derived from Jack-o'-Lantern mushrooms [60]. Illudins have demonstrated antitumor efficacy at nanomolar levels and have already been explored as potential payloads using docking simulations in other reports [61]. Illudin derivatives may offer selective targeting of desired tumor types due to their reliance on the enzyme Prostaglandin Reductase 1 (PTGR1), which can lead to optimal results by controlling off-target toxicity in ADCs [62].
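The hierarchical clustering step mentioned above could be sketched as follows; this is a minimal illustration with synthetic placeholder data, assuming a per-compound matrix of sensitivity scores over the 9 NCI60 indications and SciPy's clustering routines (the Ward linkage choice is our assumption, not stated in the paper).

```python
# Minimal sketch: cluster compounds by their per-indication sensitivity
# patterns (rows = compounds, columns = the 9 NCI60 indications).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
sens = rng.random((33, 9))            # placeholder for real sensitivity data

Z = linkage(sens, method="ward")      # agglomerative hierarchical clustering
clusters = fcluster(Z, t=5, criterion="maxclust")  # 5 clusters, as for pM group
print(dict(zip(*np.unique(clusters, return_counts=True))))
```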
It may be possible to select suitable payloads to pair with tumor types, such as kidney and ovarian cancers, which showed maximum variability in their sensitivity patterns towards the compounds listed in this subclass. We found mTOR and dual PI3K/mTOR inhibitors, such as sapanisertib, everolimus and omipalisib, to show significantly increased specific activity against kidney cancer cell lines, which is consistent with other reports [63]. This emphasizes the significance of employing payloads which can effectively target mTOR signaling pathways in designing ADC targeting strategies against kidney cancer. Similarly, another compound identified in our screening is BRD-K58304294-001-01-5, a piperidine derivative which exhibited specific activity against ovarian cancer cell lines in the 1 nM to ≤10 nM sensitivity range.

It is worth noting that the effectiveness of ADCs may be influenced by the physicochemical characteristics of payloads. For example, MMAF, with limited cell permeability, relies on high tumor antigen expression for efficacy but lacks bystander killing [64]. On the other hand, as a free drug, MMAE is more potent than MMAF due to its increased cell permeability, allowing it to diffuse out of the target cell and cause bystander killing in surrounding cells [65,66]. This distinction emphasizes the trade-offs between cell-specific targeting and broader cytotoxicity in the design and effectiveness of ADCs. The novel payloads identified in our screening method deserve additional evaluation to determine their chemical characteristics and suitability for conjugation.

Screening of potential payloads with overlapping sensitivity from picomolar to low nanomolar (using ≤10 nM as cutoff)

Fig 9 is a representative heatmap of the 65 compounds overlapping both subclasses, with broad sensitivity ranging between picomolar and ≤10 nM. The complete table of these 65 compounds is provided in the S6 File. This group included eribulin mesylate, which is under active clinical investigation as an ADC payload [67].

Fig 10 shows a heatmap of ADC payloads which are under clinical development and exhibit sensitivity in the picomolar to ≤10 nM range. As an illustration, our clustering analysis revealed that MMAE exhibited only moderate activity against renal cancer cell lines, which aligns with another study highlighting that the intratumoral disposition of MMAE can potentially contribute to its moderate activity in RCC [68]. It is worth noting that many ADCs inactivated or discontinued for RCC were using maytansinoid/MMAE derivatives. Although an interplay of all three key components of ADCs and tumor-specific characteristics might have contributed to the deactivation of these assets for RCC positioning, a discernible pattern aligns with our analysis.

Identification of target-indication-payload combinations

The effective design of an ADC requires that the target antigen and corresponding payload work synergistically. We aligned the 9 tumor indications from NCI60 with the target antigens and selected clinically tested payloads (Dxd, exatecan mesylate, maytansine, monomethyl auristatin E, a maytansine derivative, and eribulin mesylate) as described in the methods section. Subsequently, we incorporated mutation data with the target-indication-payload combinations. The corresponding data are provided in the S7 File.
The potential of ADCs for glioma patients lacks clarity. Previous attempts using an auristatin-based payload did not enhance overall survival in newly diagnosed glioblastoma as a monotherapy [69], while AMG-595, employing the maytansine-based payload DM1, showed promise in glioma [70]. Our approach identified eribulin mesylate, Dxd and maytansine as suitable payloads for pairing with the target antigen EGFR, while excluding auristatin-based payloads. Preclinical investigations validate the capability of eribulin to penetrate brain tumor tissue [71], and it is reported to demonstrate efficacy in controlling brain metastasis in breast cancer [72]. Similarly, results from the phase II trial HERTHENA-Lung01 demonstrated a 33.3% central nervous system (CNS) response rate in patients with brain metastases treated with the ADC patritumab deruxtecan, which utilizes Dxd as a payload [73]. These validations strengthen our study methodologies, providing valuable insights for future research in the field.

Patients with STK11/KEAP1-mutant lung adenocarcinoma may experience limited benefit from checkpoint blockade therapies, highlighting an unmet need for improved treatment strategies [74,75]. Our analysis suggests that designing a MSLN-directed ADC carrying eribulin mesylate as a payload may be beneficial for STK11/KEAP1-mutant lung adenocarcinoma patients. It is worth noting that these insights need further investigation of target, linker and payload combination selection, along with considerations of the stage and characteristics of tumor-specific biology.

Discussion

Through our thorough analysis, we pinpointed a set of 82 prioritized ADC targets and 290 target-indication combinations for precise targeting of tumors. Among these, 22 ADC targets have already undergone evaluation in clinical trials or preclinical contexts, including ERBB2 and NECTIN4, demonstrating the validity of our approach. We have identified 60 additional novel targets that meet our filtering criteria and have not yet been investigated for ADC development. One of the novel targets identified by our approach is OSMR, the receptor for Oncostatin M (OSM), which exhibited overexpression across 10 cancer indications. OSMR is a member of the GP130 cytokine receptor family, which upon OSM ligand binding can lead to activation of signaling pathways such as JAK/STAT, MAPK, and PI3K/AKT [34]. A fully human mAb that blocks OSMR beta is in clinical trials for pruritus in prurigo nodularis [33]. Despite ample preclinical data on OSMR's association with poor outcomes in cancers including ovarian, synovial sarcoma, pancreatic, gastric, glioblastoma, breast, cervical and bladder cancer [34][35][36][37][38][39][40][41][42][43], its clinical exploration within the field of oncology has not yet taken place. These targets could hold potential for application in the development of ADCs targeting cancers [35]. Our results, suggesting modulation in target expression based on the mutational profile of tumors, emphasize that the selection of ADCs should not solely be determined by the tumor type, but should also consider the specific genomic profile of these tumors. Knowing that specific tumor mutations can impact target expression can be valuable in early clinical trials, correlating with response depth. As additional ADC treatment options emerge, such data may eventually aid in selecting the most effective ADC based on the genomic mutational context of the tumor.
We acknowledge that the disposition of ADCs can be influenced by a multitude of factors beyond the scope covered in our work. The optimization of ADC design includes ensuring efficient internalization rates and gaining an understanding of the mechanisms of elimination [4,76,77]. Future ADC design may incorporate strategies to further enhance therapeutic efficacy and minimize off-target effects. The present study has certain limitations. We opted for HPA datasets because they offer data from IHC, which can be more accurate than mRNA expression data. However, there are limitations due to the low sample sizes of IHC data for each cancer type. To ensure robust results, we focused solely on common target antigens with high expression in both the TCGA mRNA data and the IHC datasets. Our selection was guided by the Surfaceome list provided in the literature [16]. Moving forward, we intend to investigate additional databases to further validate our findings. Some targets were omitted during our screening process; examples include TROP2, HER3 and CLDN18.2. Potential reasons for this could involve: (1) utilization of a high quasi H-score cutoff of 150 (on a scale of 0-300), which eliminates several targets; (2) our selection ensures that none of the resulting targets are highly expressed in the 13 critical normal tissues, to minimize toxicity; and (3) in certain cases IHC data were missing from the HPA dataset and we computed target levels using the corresponding mRNA expression levels. While our analysis does not cover gene fusions and additional omics data, such as copy number variation, it is comprehensive and covers a range of gene alterations, including point mutations, frameshift mutations, deletions and splice site mutations. One of the targets identified is a type I transmembrane protein, PODXL, which is reported to be expressed by kidney, hematopoietic and vascular cells [78]. However, our database did not mark it as one with high expression in any of the critical tissues. It is worth mentioning that the expression data are relative, and an expression level marked as not detected represents the lowest relative expression. PODXL showed upregulation in endometrial cancer in our analysis. Another study reported the generation of a monoclonal antibody, PODO447, predominantly binding to a glycoepitope on PODXL. PODO447 not only exhibited specificity against PODXL-expressing tumor cell lines, but also demonstrated no reactivity against normal primary human tissues, including PODXL-expressing kidney podocytes. Notably, an ADC based on PODO447 demonstrated specific efficacy in killing tumor cells in vitro [78], indicating its potential for ADC target development.

Our payload mining approach serves as a valuable starting point, presenting a compilation of compounds exhibiting tumor-selective responsiveness for use as potential ADC payloads in a precision medicine approach. It is worth highlighting that many highly potent cytotoxic agents or their analogs were previously set aside, primarily those obstructed by toxicity constraints as sole therapeutic agents. The avenue of ADCs holds promise as a means to salvage these agents as valuable payloads, due to their intrinsic attributes such as elevated cytotoxicity and mode of action [51]. Our compilation could contribute to the repurposing of existing cytotoxic agents, such as cryptophycins and illudins, to expand the arsenal of ADC payloads in a tumor selective manner.
Some of the limitations associated with our payload mining strategy are as follows: (1) the potency of a free cytotoxic agent is not the sole determinant of its suitability as an ADC payload, and our current work does not consider the physicochemical characteristics of payloads [4]; (2) we exclusively focused on compounds displaying GI50 up to 10 nM; however, there is a possibility that certain compounds slightly surpassed the 10 nM cutoff and were excluded from our analysis; (3) we employed a 50% cutoff to retain compounds demonstrating a minimum of 50% activity in 1 of 9 cancer indications, and the outcomes could differ with variation of this cutoff; (4) while examining a specific compound, there could be instances where it exhibits considerable sensitivity in certain indications, yet our analysis reveals a comparatively lower sensitivity. For instance, our analysis indicated a diminished level of sensitivity of Dxd against breast cancer, whereas Dxd is a clinically approved ADC payload against breast cancer. NCI-DTP covers data for 5 breast cancer cell lines, and in our selected range Dxd shows sensitivity in 2 out of 5 cell lines, while the other 3 fall outside our cutoff. Understanding the genetic and mutational profile can help uncover further specificity of these payloads. (5) Another limitation is posed by the availability of fewer cell lines; for example, in the case of prostate cancer, data are available for only two cell lines in the NCI-DTP data, making it difficult to draw definitive conclusions.

Constraints such as a small sample size and limited indications can be addressed by using large datasets like CCLE and GDSC, covering ~300 drugs, >1,000 cell lines and >20 indications. By employing a strategy using additional datasets, it will be possible to generate more information regarding the genomic context of payload response, which will further refine selective payload targeting. Furthermore, any novel payloads identified using our strategy will need to be evaluated for additional chemical features to ascertain their amenability to conjugation in an ADC format.

It is crucial to note that in silico models may not encompass all biological intricacies. Thus, integrating these predictions with experimental validation is paramount. Validation of novel ADC targets and payloads typically includes cytotoxicity studies, binding affinity assays, and internalization assays, followed by animal models to assess tumor inhibition and safety profiles.

Our approach to identifying the optimal target-indication-payload combination serves as a promising foundation for developing future insights, albeit requiring additional considerations related to the tumor microenvironment, tumor biology, and linker and payload characteristics [79]. Building upon these insights and leveraging additional data, our future work will focus on identifying the most effective combinations of target, linker and payload against a specific cancer type.
Conclusions

We presented a list of clinically validated, as well as novel, targets for ADC development against a wide array of cancer indications. The findings underscore the significance of taking the mutational and genomic profile of the target tumor type into consideration in order to provide precise and clinically effective targeting of ADCs. We extended our analysis to compile a list of potential payloads and an initial exploration of target-indication-payload combinations, which can provide guidance towards the development of ADCs in a tumor targeted manner. The insights provided in our study can potentially improve the targeting of ADCs for specific patient populations and aid in guiding more effective clinical treatment responses.

Materials and methods

In this section, the data acquisition and processing steps are described in detail.

Identification of potential ADC target candidates

All protein coding genes (n = 20,090) were queried using the Human Protein Atlas (HPA) database version 22.0 with the goal of identifying the membrane protein coding genes (n = 5543) as an initial filter (https://v22.proteinatlas.org/search/protein_class:Predicted+membrane+proteins). Subsequently, we utilized the HPA annotation to further narrow down the gene list. This led to the exclusion of 668 genes with no evidence at the protein level, retaining 4875 genes exhibiting evidence at the protein level. In the 3rd filter we retained genes (n = 1731) that did not show high expression in critical normal tissues (we considered a total of 13 tissues as critical normal tissue, as shown in Fig 2A), using the normal tissue data downloaded from the HPA download page (https://v22.proteinatlas.org/about/download). We calculated the percentage of samples with low, medium and high protein expression using the HPA IHC pathology dataset. Then, as a proxy of protein expression levels, a quasi H-score (ranging between 0 and 300) was calculated for the remaining genes across 20 TCGA tumor types using the following formula: Quasi H-score = (percentage of patients with low protein expression × 1) + (percentage of patients with medium protein expression × 2) + (percentage of patients with high protein expression × 3). In order to keep the genes that show high expression in at least 1 indication, we used 150 as a quasi H-score cutoff, which resulted in 763 genes. In the subsequent filtration stage, using the annotation provided by the in silico human surfaceome [16] publicly available database (http://wlab.ethz.ch/surfaceome), only the 348 genes encoding surface proteins were considered for further analysis. We derived these initial steps as described by Razzaghdoust et al. [14].
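The quasi H-score formula above is simple enough to state directly in code; the sketch below is an illustrative helper (the function name and example percentages are ours, not from the paper).

```python
# Minimal sketch of the quasi H-score formula (0-300 scale).
def quasi_h_score(pct_low, pct_medium, pct_high):
    """Quasi H-score from IHC staining-category percentages."""
    return pct_low * 1 + pct_medium * 2 + pct_high * 3

# Example: 10% low, 30% medium, 50% high stained patients -> 220
print(quasi_h_score(10, 30, 50))
```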
These 348 genes were further checked for consistency with other data types in the 6th filtering step, which involves two sub-level filtering processes. 6a) Consistency between mRNA levels and IHC (immunohistochemistry) data. TCGA Pan-Cancer (PANCAN) data from Xenabrowser [80], containing FPKM mRNA expression levels across different TCGA cohorts, and RNA HPA as well as RNA GTEx tissue gene data from HPA (https://v22.proteinatlas.org/about/download) were used for this step. We verified consistency through two methods: one using direct mRNA expression levels, and another using the descriptions given in the HPA database. In order to check consistency using mRNA expression levels, we used quartiles to classify expression levels into four categories (not detected, low, medium and high) to match the IHC annotation. Expression levels of zero are categorized as not detected; expression levels between zero and the first quartile are categorized as low; expression levels higher than the first quartile but lower than the third quartile as medium; and expression higher than the third quartile as high. Targets for which the expression levels aligned in both datasets (mRNA expression based calculated categories and IHC based expression levels from the HPA database) were considered consistent. 6b) Correlation of the protein expression derived quasi H-score and the TCGA mRNA expression derived quasi H-score. For this step, similarly to the quasi H-score calculation using protein expression data, we calculated a quasi H-score using mRNA expression FPKM values. Samples with expression levels lower than the first quartile were considered low expression, samples with expression levels higher than the third quartile were considered high, and samples with expression levels between the first and third quartiles were considered medium expression. Based on this, the quasi H-score was calculated using mRNA FPKM values. Genes scoring higher than 150 quasi H-score in both datasets (protein expression derived and mRNA expression derived) were chosen for further analysis.

Only 123 genes passed through this filtering process. In the subsequent step we used data from the GSE42519 study [15] in order to identify and remove the genes that are highly expressed in HSCs and MPPs. The GSE42519 study covers microarray expression profiling data on the normal cell landscape for the myeloid arm of the hematopoietic system. We used the entire gene expression data to identify the first and third quartiles in order to classify the samples expressing high, medium and low levels. In the last step, we annotated the genes using five criteria for evidence based filtering: 1) Literature: targets for which there is existing literature evidence elucidating their potential role in tumor biology; 2) Antibody: targets against which antibodies have been generated; 3) Protein family: targets belonging to a protein family in which other protein isoforms have been employed for the advancement of an ADC in either a clinical or preclinical setting; 4) Preclinical: targets tested in a preclinical setting; 5) Clinical: targets tested in a clinical setting. We filtered out genes without any annotation/evidence for any of the five criteria, resulting in 82 prioritized ADC targets. An overview of the entire approach is shown in Fig 1.
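The quartile rule used in step 6a can be written compactly; the sketch below is an illustrative implementation under the stated assumptions (quartiles computed over the supplied FPKM values; category labels chosen to match the IHC annotation).

```python
# Sketch of the quartile-based binning of FPKM values into the four
# IHC-style categories used for the mRNA/IHC consistency check.
import numpy as np

def categorise_fpkm(values):
    v = np.asarray(values, dtype=float)
    q1, q3 = np.quantile(v, [0.25, 0.75])   # first and third quartiles
    cats = np.full(v.shape, "medium", dtype=object)
    cats[v == 0] = "not detected"
    cats[(v > 0) & (v <= q1)] = "low"
    cats[v > q3] = "high"
    return cats
```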
Exploring the impact of mutated genes on the expression levels of prioritized ADC targets

We used the TCGA pan-cancer mutation data downloaded from the Xenabrowser hub cohort named "TCGA Pan-Cancer (PANCAN)" [80]. The mutation data was generated under the MC3 project [81]. For the expression, TPM values were downloaded from the same Pan-Cancer cohort. We annotated the data using the annotation files given in the above-mentioned cohort from Xenabrowser. The names of the cancer types from the HPA data analysis were matched with the PANCAN mutation data, considering 22 TCGA tumor subtypes. We used 416 mutated genes [8,49] to query the expression levels of the 82 prioritized ADC targets identified using our screening method across the 22 tumor subtypes. For the comparison of the mutant vs wild-type groups, we used the Wilcoxon test (non-parametric) and considered a p-value of 0.05 to find significant differences. In order to identify strong associations, we considered only targets with ≥1 log2 fold change and cancer subtypes having a ≥5% population change for a given mutation.

Identification of potent tumor selective payload candidates

The Developmental Therapeutics Program (DTP) of NCI60 has sensitivity data on more than 50,000 compounds. We downloaded the data from the NCI-DTP portal [82] covering 56,920 compounds with unique NSC ID numbers. There were many compounds having unique NSC IDs but mapping to the same compound name; we therefore removed the duplicate names and ended up with 47,310 total compounds. First, we grouped these compounds into 2 categories: a) compounds having sensitivity in the picomolar (<1 nM) range and b) compounds having sensitivity in the low nanomolar range (1 nM-10 nM). Each category was passed through further filtering, where we only retained compounds having >50% response in at least 1 of the 9 indications in the NCI60 dataset. Subsequently, the compounds tagged as having failed NCI60 screening were eliminated, resulting in 116 compounds in the picomolar range and 797 compounds in the low nanomolar range category. At this point, we established three distinct groups: a) compounds (n = 33) exhibiting sensitivity only in the picomolar range, b) compounds (n = 631) exhibiting sensitivity only in the low nanomolar range, and c) compounds (n = 65) exhibiting overlapping sensitivity with both the picomolar and low nanomolar ranges across the 9 cancer indications covered by NCI60. Our analysis led to a total of 729 unique compounds. Additional annotation of these compounds was done for mechanism of action (MoA) and their clinical utilization as ADC payloads. We further applied hierarchical clustering to identify similar or contrasting sensitivity patterns within these groups of compounds.

Identification of target-indication-payload combinations

In order to identify prioritized suitable target-indication-payload combinations, we first aligned the prioritized target-indication data with the payload-indication data derived from NCI60. In the next step, we mapped indications which exhibited 100% sensitivity against selected clinically tested ADC payloads (Dxd, exatecan mesylate, maytansine, monomethyl auristatin E, a maytansine derivative, and eribulin mesylate). We expanded the analysis by incorporating the impact of mutations on any of the resultant target antigens. The method outlined in the preceding section was used to find any significant (p-value cutoff 0.05) association between target antigen expression levels and gene mutations.
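For one target gene, one query mutation, and one tumor subtype, the mutation-association test described in the methods above could be sketched as follows. This is a hedged illustration: the paper's analysis was done in R, the pseudocount, the use of the rank-sum form of the Wilcoxon test, and the interpretation of "population change" as the mutated fraction of samples are all our assumptions.

```python
# Hypothetical sketch of the mutant-vs-wild-type association test.
import numpy as np
from scipy.stats import ranksums  # Wilcoxon rank-sum for independent groups

def mutation_association(expr_tpm, is_mutant, min_pop_change=0.05):
    """Return (log2FC, p-value, passes) under the paper's stated cutoffs."""
    expr = np.asarray(expr_tpm, dtype=float)
    mask = np.asarray(is_mutant, dtype=bool)
    mut, wt = expr[mask], expr[~mask]
    # log2 fold change of mean expression; +1 pseudocount is an assumption
    log2fc = np.log2((mut.mean() + 1) / (wt.mean() + 1))
    p = ranksums(mut, wt).pvalue
    frac_mut = len(mut) / len(expr)
    passes = (p < 0.05) and (abs(log2fc) >= 1) and (frac_mut >= min_pop_change)
    return log2fc, p, passes
```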
Fig 2. Expression of 82 prioritized ADC targets across normal and tumor tissues, along with evidence based filtering annotations using five criteria*. A) A heatmap depicting expression levels of potential ADC targets across 44 normal tissue types. B) A heatmap depicting expression levels of potential ADC targets across 20 tumor types based on their quasi H-score. *1) Literature: targets for which there is existing literature evidence elucidating their potential role in tumor biology. 2) Antibody: targets against which antibodies have been generated. 3) Protein family: targets belonging to a protein family in which other protein isoforms have been employed for the advancement of an ADC in either a clinical or preclinical setting. 4) Preclinical: targets tested in a preclinical setting. 5) Clinical: targets tested in a clinical setting. https://doi.org/10.1371/journal.pone.0308604.g002

Fig 4. Scoring of 82 prioritized ADC targets based on the five evidence based filtering criteria. A) Radar plot generated using the five criteria mentioned in the methods section to give scores between 1 and 5 in order to rank potential ADC targets. It shows the 82 prioritized targets in a circular fashion, and each point on the plot represents the corresponding score for the aligned target. B) A wordcloud representing potential ADC targets based on the five criteria annotations. The wordcloud represents the score of each of the 82 prioritized targets by the color and size of the word. Targets with the same score are represented by the same color and font size, with 5 being the highest score and 1 the lowest. https://doi.org/10.1371/journal.pone.0308604.g004

Fig 5. Impact of mutations on the expression levels of ADC targets identified in our analysis across tumor subtypes. A) Impact of KRAS mutation on the expression levels of multiple targets in Pancreatic Adenocarcinoma (PAAD). B) Impact of EGFR mutation on the expression levels of multiple targets in Low Grade Glioma (LGG). C) Impact of STK11 and KEAP1 mutations on the expression level of MSLN in Lung Adenocarcinoma (LUAD). D) Impact of BRAF mutation on multiple targets in Thyroid Carcinoma (THCA). The annotations are given as "Mut" for the mutated gene and "Wild" for the wild-type gene. https://doi.org/10.1371/journal.pone.0308604.g005

Fig 6. Mutations impacting MSLN and NECTIN4 target expression levels across tumor subtypes. A) Radar plot showing the log2 fold change of MSLN target expression across multiple tumor subtypes and mutations. B) Radar plot showing the log2 fold change of NECTIN4 target expression across multiple tumor subtypes and mutations. https://doi.org/10.1371/journal.pone.0308604.g006

Fig 8.
Heatmap depicting compounds with sensitivity ranging between pM and 1 nM. The heatmap depicts the clustering of 33 compounds based on sensitivity patterns across the 9 NCI60 cancer indications. This figure represents a narrowed-down list of compounds that demonstrate specific or heightened sensitivity towards a desired cancer type. The sensitivity of cancer indications towards the compounds ascends in the direction of the arrowhead. It is feasible to identify compounds that exhibit distinct activity in either solid tumors or hematological malignancies; as shown by the green box, compounds such as Nogamycin and Vengicide exhibit activity against prostate and breast cancer, while no activity was seen against heme malignancies at this sensitivity range. The red box highlights compounds, such as Vedelianin and Trichloroplatinum, which exhibit differential activity, with blood cancers showing greater sensitivity. https://doi.org/10.1371/journal.pone.0308604.g008

Fig 10. Heatmap representing clinically tested ADC payloads. The heatmap depicts the clustering of 6 compounds identified in our screening based on sensitivity patterns across the 9 NCI60 cancer indications. As highlighted by the green boxes, MMAE expressed moderate activity against renal cell carcinoma, a discernible pattern reported by other studies, which aligns with our analysis [68]. https://doi.org/10.1371/journal.pone.0308604.g010

The entire analysis was done in the RStudio IDE (version 1.4.1106) using R version 4.1.0.

Compounds from NCI60 with their annotations. This data is used to generate the figure provided in S1 Fig. (XLS)

S6 File. Compounds with sensitivity ranging between picomolar and ≤10 nM (compounds exhibiting overlapping sensitivity with both the picomolar and low nanomolar ranges across the 9 NCI60 cancer indications). This data is used to generate Fig 9. (XLS)

S7 File. Target-indication-payload combinations coupled with mutation associations. Details regarding the combinations of potential target antigens, indications and clinically tested ADC payloads, along with the impact of gene mutations on the expression levels of the target antigens. (XLSX)
10,659.6
2024-08-26T00:00:00.000
[ "Medicine", "Chemistry", "Biology" ]
A Near-Infrared Luminescent Cr(III) N-Heterocyclic Carbene Complex

Photoluminescent coordination complexes of Cr(III) are of interest as near-infrared spin-flip emitters. Here, we explore the preparation, electrochemistry, and photophysical properties of the first two examples of homoleptic N-heterocyclic carbene complexes of Cr(III), featuring 2,6-bis(imidazolyl)pyridine (ImPyIm) and 2-imidazolylpyridine (ImPy) ligands. The complex [Cr(ImPy)3]3+ displays luminescence at 803 nm on the microsecond time scale (13.7 μs) from a spin-flip doublet excited state, which transient absorption spectroscopy reveals to be populated within several picoseconds following photoexcitation. Conversely, [Cr(ImPyIm)2]3+ is nonemissive and has a ca. 500 ps excited-state lifetime.

Contents
General methods and instrumentation
Synthetic procedures and characterisation
X-Ray crystallography instrumentation and methods
Symmetry-related positional disorder in single crystals of complex 2
Table S1. Summary of X-Ray crystallographic data for complexes 1 and 2
Table S2. Selected bond lengths and angles for the crystal structures of 1 and 2
Figure S6. Photoluminescence spectrum for complex 2 recorded at 77 K
Figure S7. Excitation spectrum recorded for an aerated MeCN solution of 2
Figure S8. Flash photolysis data collected for an aerated MeCN solution of 2
Figure S9. Transient absorption spectra recorded for an aerated MeCN solution of 1
Figure S10. Transient absorption spectra recorded for an aerated MeCN solution of 2
Computational methods and details
Figure S11. TDDFT-calculated electronic absorption spectrum for 1
Figure S12. TDDFT-calculated electronic absorption spectrum for 2
Table S3. Natural Transition Orbitals for selected electronic transitions within 1
Table S4. Natural Transition Orbitals for selected electronic transitions within 2
Coordinates for the optimised ground state geometry of 1
Coordinates for the optimised ground state geometry of 2
References

General Methods and Instrumentation

Reagents and Synthesis: All reagents were purchased from Sigma-Aldrich, Acros Organics and Fluorochem and used as received. Anhydrous THF and MeCN were obtained by distillation from CaH2, purged with dry N2 for a period of at least 15 minutes, and stored over 4 Å molecular sieves under an atmosphere of dry N2. All synthetic manipulations involving Cr(II) salts were carried out under an atmosphere of dry N2 using standard Schlenk line techniques. The reagent CrCl2 was handled and stored within an argon-filled glovebox. Size-exclusion chromatography was performed under gravity using a fritted column of 35 mm diameter and 1000 mm length filled with Sephadex LH-20 resin, which had previously been left to swell in 3:2 (v/v) MeOH/MeCN solution overnight before use.

Structural and Magnetic Characterisation: NMR spectra were acquired on a Bruker Ascend 400 MHz spectrometer, with chemical shifts reported relative to the residual solvent signal (CD3OD: 1H δ 3.31, 13C δ 49.00; CD3CN: 1H δ 1.94, 13C δ 1.32, 118.26) [1].
High-resolution mass spectrometry data were collected on an Agilent 6210 TOF instrument with a dual electrospray ionisation source. Infrared spectra were recorded on a Shimadzu IRSpirit FTIR spectrometer equipped with a QATR-S ATR accessory. Elemental microanalysis was performed at London Metropolitan University. Magnetic susceptibility measurements were performed by Evans' method [2], using a co-axial NMR tube containing the paramagnetic analyte in a solution of d3-MeCN (580 µL) and tBuOH (20 µL).

Photophysical and Electrochemical Analysis: UV-visible electronic absorption spectra were recorded on an Agilent Cary-60 spectrometer, with luminescence spectra recorded on Horiba Fluoromax-4 or Agilent Eclipse spectrometers. For data acquired on the Agilent Eclipse instrument, spectra were collected over 15 accumulations, applying 10-point adjacent averaging to reduce signal noise.

Luminescence quantum yields are reported relative to [Ru(bpy)3]2+ in aerated MeCN solution (Φ = 1.8%), with all complexes being excited at a single wavelength of common optical density. Quantum yields are thus determined from the ratio of integrated peak areas, with an assumed experimental uncertainty of ±10%. Luminescence lifetimes were determined by time-correlated single photon counting (TCSPC) on an Edinburgh Instruments mini-τ, equipped with a ps diode laser (404 nm, 56 ps). Cyclic voltammetry measurements were conducted on 1.5 mmol dm-3 solutions in dry MeCN under an atmosphere of N2, using a glassy carbon working electrode, a Pt wire counter and an Ag/AgCl reference. Solutions contained 0.2 mol dm-3 nBu4NPF6 as a supporting electrolyte, with all potentials referenced against the Fc+/Fc couple. Spectroelectrochemistry measurements were recorded on an Agilent Cary-60 spectrometer using a quartz cuvette with a path length of 0.5 mm (BASi). Inserted into the cuvette were a platinum gauze working electrode (0.5 mm thickness), a platinum counter and an Ag/AgCl reference electrode. Solutions, of typical concentration 60-70 µM, were prepared using dry MeCN and contained 0.2 mol dm-3 nBu4NPF6. Solutions were sparged with dry N2 via a plastic microcapillary, and measurements were performed under an atmosphere of dry N2. All potentials are referenced against the Fc+/Fc couple.

During measurements, the applied potential was incrementally increased only when no further spectral changes were apparent. For reversible couples, the applied potential was incrementally reversed to ensure the complete recovery of spectra.

Transient Absorption Spectroscopy: Transient absorption experiments were performed at the Lord Porter Laser Laboratory at the University of Sheffield using a Helios system (HE-VIS-NIR-3200, Ultrafast Systems). A Ti:Sapphire regenerative amplifier (Spitfire ACE PA-40, Spectra-Physics) provides 800 nm pulses (40 fs FWHM, 10 kHz, 1.2 mJ). 400 nm pump pulses (2.5 kHz, 0.2 µJ) were generated through frequency doubling of the amplifier fundamental. The pump was focused onto the sample to a beam diameter of approximately 190 µm.
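For clarity, the relative quantum-yield determination described above corresponds to the standard comparative expression (a textbook relation quoted here as a sketch, not taken verbatim from this document):

\[
\Phi_x \;=\; \Phi_{\mathrm{ref}} \times \frac{I_x}{I_{\mathrm{ref}}} \times \frac{A_{\mathrm{ref}}}{A_x} \times \frac{n_x^{2}}{n_{\mathrm{ref}}^{2}}
\]

where I is the integrated emission intensity, A the absorbance at the excitation wavelength and n the solvent refractive index. With all samples excited at a common optical density (A_x = A_ref) in the same solvent (n_x = n_ref), this collapses to Φ_x = Φ_ref (I_x / I_ref), i.e. the ratio of integrated peak areas used here.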
The white light probe continuum was generated using a sapphire crystal and a portion of the amplifier fundamental. The intensity of the probe light transmitted through the sample was measured using a CMOS camera with a resolution of 1.5 nm. Prior to generation of the white light, the 800 nm pulses were passed through a computer-controlled optical delay line (DDS300, Thorlabs), which provides up to 7 ns of pump-probe delay. The instrument response function was approximated to be 100 fs (FWHM), based on the temporal duration of the coherent artifact signal from neat acetonitrile.

Flash Photolysis: Samples in solution were excited at 355 nm using a nanosecond pulsed LOTIS TII laser. A Xe lamp was used to continuously probe the absorption of the sample before and after excitation. The light passing through the sample was focused through a monochromator and then a photomultiplier and detector to compare the relative absorption before and after excitation at each wavelength. The initial voltage on the detector was normalised at each wavelength to account for the emission spectrum of the lamp and the absorption spectra of the sample.

The suspension was filtered, and the solids washed twice with tetrahydrofuran (10 mL) before drying in vacuo to afford a brown solid. The crude solids were then suspended in warm ethanol (15 mL) and stirred vigorously for 5 minutes. The suspension was filtered and the collected solids washed twice with ethanol (5 mL) before being dried in vacuo to yield the title compound as a white powder (2.20 g, 65%).

Synthesis of 3-Methyl-1-(2-pyridyl)imidazolium hexafluorophosphate (PyIm-H): Following a procedure adapted from the literature [4]: A mixture of 2-bromopyridine (4.00 g, 25.32 mmol) and 1-methylimidazole (2.29 g, 27.85 mmol) was heated to 160 °C under an inert atmosphere for 40 h in a screw-capped, thick-walled pressure tube. After cooling, dichloromethane (10 mL) was added to the residue. Addition of excess diethyl ether afforded a precipitate, which was collected by filtration and washed twice with tetrahydrofuran (10 mL). The resulting brown solid was dissolved in water and precipitated as the hexafluorophosphate salt through addition of solid ammonium hexafluorophosphate (4.54 g, 27.85 mmol), being collected by filtration and washed twice with water (5 mL). The solids were then dissolved in 9:1 (v/v) dichloromethane:acetonitrile (5 mL) and re-precipitated through addition of diethyl ether to afford the title compound as a white solid (2.08 g, 27%).

Single Crystal X-Ray Diffraction: Single crystals of 1 were obtained from the slow vapour diffusion of diisopropyl ether into a concentrated MeCN solution containing a small quantity of NH4BF4. Diffraction data were collected under a stream of cold N2 at 150 K on a Bruker D8 Venture diffractometer equipped with a graphite-monochromated Mo Kα radiation source. Solutions were generated using Patterson heavy atom or direct methods and fully refined by full-matrix least-squares on F2 data using SHELXS-97 and SHELXL software, respectively [5]. Absorption corrections were applied based upon multiple and symmetry-equivalent measurements using SADABS [6]. Structure solution was achieved by direct methods and the crystal structure was refined using full-matrix least-squares on F2 data using SHELXL [10] within Olex2 [11].
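Extracting a lifetime from the flash photolysis traces described above amounts to fitting an exponential decay at each probe wavelength. A minimal sketch of a mono-exponential fit follows; the data file name and starting guesses are hypothetical placeholders.

```python
# Minimal sketch: fit a single-exponential decay plus baseline to a flash
# photolysis trace with scipy. "flash_550nm.txt" is a placeholder file of
# two columns: delay time (us) and transient absorbance change (dA).
import numpy as np
from scipy.optimize import curve_fit

t, dA = np.loadtxt("flash_550nm.txt", unpack=True)

def mono_exp(t, a, tau, c):
    """Single-exponential decay: amplitude a, lifetime tau, offset c."""
    return a * np.exp(-t / tau) + c

popt, pcov = curve_fit(mono_exp, t, dA, p0=(dA.max(), 10.0, 0.0))
tau, tau_err = popt[1], np.sqrt(np.diag(pcov))[1]
print(f"lifetime = {tau:.2f} +/- {tau_err:.2f} us")
```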
Non-hydrogen atoms were refined anisotropically. Hydrogen atoms were placed in calculated positions, refined to idealized geometries (riding model) and assigned a fixed isotropic displacement parameter. (CCDC 2296861)

The asymmetric unit of the structure solution consists of half of complex 1, resulting in a 2-fold rotation axis intersecting the central Cr atom and the centre of the chemical bond between the pyridyl and NHC moieties within a ligand (the C14-N5 bond). This results in one of the three ligands being disordered due to the 2-fold rotation, with the other two ligands being symmetry related (Figure S1). The symmetry-related positional disorder of one of the ligands was resolved by fixing the occupancy of the two positions to 50%, applying a PART -1 function, constraining the rings to idealised geometries and applying restraints to normalise the thermal displacement of the atoms. Disorder of one of the PF6 counter-ions was also observed. This was modelled with conventional two-part disorder with occupancies of 51.5(18)% and 48.5(18)% for part 1 and part 2, respectively. Summary details of the solution are outlined in Table S1.

It is noted that excitation of 1 at either 400 nm or 375 nm produced the same results, with only those resulting from 400 nm excitation being shown here. It is noted that excitation of 2 at either 400 nm or 375 nm produced the same results, with only those resulting from 400 nm excitation being shown here.

Natural Transition Orbitals (NTOs)
Table S3: Natural transition donor (left) and acceptor (right) orbitals for selected optical transitions for complex 1.
Table S4: Natural transition donor (left) and acceptor (right) orbitals for selected optical transitions for complex 2.

Photon 100 CMOS (complementary metal-oxide-semiconductor) detector with shutterless capability. Data were corrected for absorption using empirical methods (SADABS) based on symmetry-equivalent measurements.

Figure S1: Image of the asymmetric unit (a) and the grown structure (b), demonstrating the symmetry-related positional disorder of one of the three ligands. Images were created in Olex2.

Figure S2: Changes in UV-visible electronic absorption spectra accompanying the first (a), second (b) and third (c) electrochemical reduction processes of 1 in deaerated MeCN solution containing 0.2 mol dm-3 nBu4NPF6 at r.t. All potentials are quoted relative to the Fc+/Fc couple.

Figure S3: Changes in UV-visible electronic absorption spectra accompanying the first (a) and second (b) electrochemical reduction processes of 2 in deaerated MeCN solution containing 0.2 mol dm-3 nBu4NPF6 at r.t. All potentials are quoted relative to the Fc+/Fc couple. (Due to the cathodic nature of the third electrochemical couple, we were unable to record satisfactory spectra associated with this process.)

Figure S7: UV-visible electronic absorption spectrum (black) and excitation spectrum (red) for luminescence at λem = 803 nm, recorded for a solution of 2 in aerated MeCN at r.t.
Figure S8: Representative flash photolysis data collected at 460 nm (a), 550 nm (b), 600 nm (c) and 615 nm (d) for an aerated MeCN solution of 2 following excitation at 355 nm. Decay traces are fitted with a geometric mean average lifetime, which was found to be 13.40 ± 0.45 µs across the entire spectral range of 380-695 nm. This lifetime is in excellent agreement with the photoluminescence lifetime of 13.7 µs determined by time-correlated single photon counting, confirming that the long-lived species captured by both transient absorption spectroscopy and flash photolysis corresponds to the emissive 2T1/2E metal-centred states. Fitting of the flash photolysis decay traces required a second, very short component, with a mean average lifetime across the entire spectral range within the instrumental response function (20 ns), which could therefore not be satisfactorily resolved.

Figure S9: (a) Transient absorption spectra recorded for 1 in aerated acetonitrile solution (λex = 400 nm), showing detail of transients recorded from 0.2 ps to 1 ns after excitation; (b) decay-associated spectra (DAS) extracted from global analysis with time constants of 3.99 and 476 ps; (c) selected single-point kinetic traces obtained from global analysis; (d) schematic of the branched kinetic model employed in the analysis of transient data and associated time constants.

Figure S10: (a) Transient absorption spectra recorded for 2 in aerated acetonitrile solution (λex = 400 nm), showing detail of transients recorded from 0.2 ps to 5 ns after excitation; (b) detail of transient absorption spectra recorded over early times, from 80 fs to 800 fs; (c) selected single-point kinetic traces obtained from global analysis; (d) decay-associated spectra (DAS) extracted from global analysis. A sequential model of kinetic analysis yields four time constants: <100 fs (unresolved), 0.376 ± 0.004 ps (τ1), 2.32 ± 0.03 ps (τ2) and >7 ns (τ3, modelled as constant). The latter component was independently determined to have a lifetime of 13.40 ± 0.45 µs by laser flash photolysis (see Figure S8).

Figure S11: Calculated optical absorption spectrum for complex 1, showing positions of transitions and their oscillator strengths (green lines) and the normalised convolution with 0.2 eV FWHM line broadening (blue trace).

Figure S12: Calculated optical absorption spectrum for complex 2, showing positions of transitions and their oscillator strengths (green lines) and the normalised convolution with 0.2 eV FWHM line broadening (blue trace).

Table S1: Summary of crystallographic data for 1 and 2.
3,120.6
2024-05-02T00:00:00.000
[ "Chemistry" ]
Tryptophan Oxidation in the UQCRC1 Subunit of Mitochondrial Complex III (Ubiquinol-Cytochrome C Reductase) in a Mouse Model of Myodegeneration Causes Large Structural Changes in the Complex: A Molecular Dynamics Simulation Study

Muscle diseases display mitochondrial dysfunction and oxidative damage. Our previous study in a cardiotoxin model of myodegeneration correlated muscle damage with mitochondrial dysfunction, which in turn entailed an altered mitochondrial proteome and oxidative damage of mitochondrial proteins. Proteomic identification of oxidized proteins in muscle biopsies from muscular dystrophy patients and the cardiotoxin model revealed specific mitochondrial proteins to be targeted for oxidation. These included respiratory complexes, which displayed oxidative modification of Trp residues in different subunits. Among these, Ubiquinol-Cytochrome C Reductase Core protein 1 (UQCRC1), a subunit of the Ubiquinol-Cytochrome C Reductase Complex or Cytochrome b-c1 Complex or Respiratory Complex III, displayed oxidation of Trp395, which could be correlated with the lowered activity of Complex III. We hypothesized that Trp395 oxidation might contribute to altered local conformation and overall structure of Complex III, thereby potentially leading to altered protein activity. To address this, we performed molecular dynamics simulation of Complex III (oxidized at Trp395 of UQCRC1 vs. non-oxidized control). Molecular dynamics simulation analyses revealed local structural changes at the Trp395 site. Intriguingly, oxidized Trp395 contributed to decreased plasticity of Complex III due to significant cross-talk among the subunits in the matrix-facing region and subunits in the intermembrane space, thereby leading to impaired electron flow from cytochrome C.

Our previous studies in the cardiotoxin (CTX) model revealed dysfunction of muscle mitochondria along with oxidative damage. Mitochondrial proteomics in the CTX model demonstrated down-regulation of critical proteins contributing to energy metabolism, including respiratory complexes and the Krebs cycle, among others. Muscle biopsies from human muscle pathologies, namely Dysferlinopathy (Dysfy) [representing Muscular Dystrophy (MD)], Distal Myopathy with Rimmed Vacuoles (DMRV) and Polymyositis (PM) (representing inflammatory myopathies), and Lipid Storage Diseases (LSDs) (representing metabolic disorders), with varied pathology, disease severity and clinical outcome, revealed morphological and biochemical changes in the mitochondria and differential expression of mitochondrial proteins, as revealed by proteomics 3,4.

Mitochondrial dysfunction in muscle pathologies is associated with oxidative stress and oxidative post-translational modification of proteins. Our previous study 5 demonstrated that protein oxidation directly correlated with the severity of muscle pathology, with Duchenne muscular dystrophy (DMD) displaying the highest carbonylation of cellular proteins. Protein oxidation was also observed in muscle biopsies from DMRV, PM, and Dysfy patients 3. Proteomic identification of oxidized proteins in DMD human muscle revealed specific mitochondrial proteins targeted for protein oxidation 2, confirming that mitochondrial dysfunction and chronic oxidative damage could contribute to muscle diseases. Post-translational oxidative modification of cellular proteins is linked with aging and disease 6. Several amino acids are vulnerable to oxidation, with Cys and Trp being the most frequently oxidized amino acids. Trp oxidation among mitochondrial proteins has been documented based on the mining of mass spectrometry (MS) data in cardiac tissue 7.
Oxidative modification of Trp leads to three kinds of oxidized residues: oxindolylalanine or 2-oxy-Trp (with an increased mass of +16 Da over Trp), N-formylkynurenine (+32 Da) and kynurenine (+4 Da). Screening for Trp oxidation events in the MS data from the CTX model and muscle disease studies 2,3 revealed several mitochondrial proteins with oxidized Trp, including Aconitase, Voltage-Dependent Anion Channels (VDAC), and subunits of mitochondrial complexes I, III and V, among others, indicating that Trp oxidation might contribute to altered mitochondrial dynamics. Proteomics data in the CTX model 2 revealed that Ubiquinol-Cytochrome C Reductase Core protein 1 (UQCRC1; PDB Id: 1SQB), a subunit of the Ubiquinol-Cytochrome C Reductase Complex or Cytochrome b-c1 Complex or mitochondrial Respiratory Complex III (CIII) 8, contained one oxidized Trp395 (W395; +16 Da). Molecular modelling of UQCRC1 revealed that oxidized W395 could potentially cause a steric clash with the nearby L392. Since we observed lowered enzyme activity of CIII both in the CTX model and in human MDs 2,3, we hypothesized that oxidation of W395 could potentially contribute to an altered local conformation, which may subsequently impinge on the structure of the complex and its enzyme activity. To address this, we performed molecular dynamics simulation studies on the entire CIII complex and revealed that a single modification at W395 in the core protein of CIII causes significant structural changes, which may be responsible for hampering CIII function. In order to account for all the structural scenarios of CIII organization, we investigated oxidation-dependent structural changes in the inhibitor-bound, substrate-bound, and unbound (apo-form) states of CIII.

Results and Discussion

CIII is composed of 11 subunits 8-10 that are arranged as a dimer embedded in the inner mitochondrial membrane (Fig. 1A). Among these, ten are nuclear encoded, while one is mitochondrially encoded (Table 1). Apart from the core proteins UQCRC1 and UQCRC2 and the core-embedded subunit UQCRFS1, all the other subunits have transmembrane domains. Each subunit of CIII has a specific function that contributes to the electron transfer process of the complex. There exists crosstalk among the CIII subunits, which integrates the transfer of electrons from ubiquinone to cytochrome C and proton pumping from the mitochondrial matrix into the inter-membrane space. CIII is also probably involved in mitochondrial precursor peptidase (MPP) activity and superoxide generation 11, although these require experimental validation in vivo. Our recent studies in the mouse model of myodegeneration revealed lowered CIII activity and oxidation of Trp395 (to oxindolylalanine) in UQCRC1 2,3. UQCRC1 and UQCRC2 have a bilobed structure with a hollow inner core exposed to the matrix. The membrane-spanning region of CIII consists of the cytochrome b (MT-CYB), ubiquinone-binding protein (UQCRQ), 7.2 kDa transmembrane protein (UQCR10) and 6.4 kDa transmembrane protein (UQCR11) subunits, in addition to the tail regions of the iron-sulfur cluster-containing protein (UQCRFS1) and cytochrome c1 (CYC1). Through computational studies, we investigated the structural effects of Trp395 oxidation on the entire complex (Fig. 1B) in the inhibitor-bound, substrate-bound, and unbound/apo-form states. The structures selected for the analysis of the three states did not differ significantly from each other. The molecular dynamics simulation (MDS) was carried out for 50 ns using Desmond 12.
The CIII structures of all three states [inhibitor-bound form (PDB ID: 1SQB 13), substrate-bound form (PDB ID: 1NTZ 14) and apo-form (PDB ID: 1NTM 14)], in which Trp395 was modified to oxindolylalanine, were subjected to MDS and compared with the MDS of the corresponding unmodified CIII (control). Detailed structural calculations were performed to analyze the inter-subunit contacts and changes in the secondary structures. A description of the methods and data is provided for 1SQB and 1NTM, while the data analysis of 1NTZ is not shown.

Interpretation of backbone-related parameters. The structural parameters investigated in this study are the root-mean-square deviation (RMSD), root-mean-square fluctuation (RMSF) and radius of gyration (Rg), in addition to visual inspection of the trajectory data. The RMSD of the oxidized CIII and its constituent structures exhibited significant differences through the simulation time of 50 ns compared with the control. The backbone of the oxidized CIII exhibited higher RMSD from 25 ns, whereas the control backbone of unmodified CIII retained its stability throughout the simulation (Fig. S1A). The subunit-wise assessment revealed significant RMSF in MT-CYB, CYC1, and UQCRFS1 in the oxidized form compared to control. On the other hand, slight variations in RMSF were observed in UQCRQ, UQCR10 and UQCR11 (described below) (Fig. S1B). The overall backbone of CIII showed substantial fluctuation of Rg throughout the simulation (Fig. S1C). The Rg data were also calculated for the control and oxidized forms based on the secondary structure. Most of the α-helices are present in the transmembrane region of CIII, while fewer are in the core subunits UQCRC1 and UQCRC2. The major part of the β-sheets is positioned in the matrix and intermembrane regions of CIII. The Rg values of the α-helices were relatively higher in the oxidized form compared to the control (Fig. S1D); however, the relaxation of the α-helices followed the same pattern in both. The Rg values for the β-strands were also relatively higher in the oxidized form compared to control (Fig. S1E), suggesting a decrease in compactness in oxidized CIII. Consequently, the structures of the CYC1 and UQCRFS1 subunits could be affected, since β-sheets form a significant part of these subunits. The Rg data for the loops showed high fluctuation in both control and oxidized CIII, which may predominantly contribute to the high fluctuation of the Rg data of the entire protein backbone of CIII in both forms (Fig. S1C). (Figure legend: white sticks represent the carbon atoms of the unmodified residue, blue sticks represent the carbon atoms of the oxidized residue, and dark blue and red atoms indicate nitrogen and oxygen atoms, respectively.)
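The backbone descriptors used above (RMSD, RMSF and Rg) can be reproduced with an open-source stack; the study itself used Desmond's built-in tools. A minimal sketch with MDAnalysis follows, in which the file names and the UQCRC1 segment id are hypothetical placeholders.

```python
# Minimal sketch of the backbone analyses discussed above, using
# MDAnalysis rather than Desmond. File names and segment ids are
# hypothetical; the trajectory is assumed to be pre-aligned for RMSF.
import MDAnalysis as mda
from MDAnalysis.analysis import rms

u = mda.Universe("ciii.psf", "ciii_50ns.dcd")   # topology + trajectory

# Backbone RMSD of the whole complex against the reference frame.
rmsd = rms.RMSD(u, u, select="backbone").run()
time_ps = rmsd.results.rmsd[:, 1]               # simulation time (ps)
rmsd_A = rmsd.results.rmsd[:, 2]                # RMSD (Angstrom)

# Per-residue RMSF for the C-alpha atoms of one subunit.
calphas = u.select_atoms("segid UQCRC1 and name CA")
rmsf = rms.RMSF(calphas).run()
per_residue = dict(zip(calphas.resids, rmsf.results.rmsf))

# Radius of gyration of the protein backbone, frame by frame.
backbone = u.select_atoms("backbone")
rg = [backbone.radius_of_gyration() for ts in u.trajectory]
```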
Structural changes in CIII subunits

UQCRC1 and UQCRC2. UQCRC1 resides on the matrix face of CIII (Fig. 2A), with the oxidized W395 reported in our previous studies 3,5 lying on a short helix perpendicular to the membrane. This residue lies on a helix that is part of a continuous helix-loop-helix structure made up of five helices constituted in-between the β-sheet fold. The sequence alignment of UQCRC1 from three mammalian species (bovine, human, and mouse) revealed that Trp395 and the surrounding residues are conserved across species (Fig. S2). MDS data revealed altered conformations of the side chains of the oxidized W395 and the neighboring residue W262 in the oxidized form (Fig. 2B). These conformational changes may significantly alter the neighboring loop structure, which is exposed to the solvent region (Fig. 2C). The oxidation of W395 alters its side-chain conformation, thereby preventing hydrophilic interactions with W262. The RMSD values of the backbone atoms of the oxidized form of UQCRC1 were elevated after the initial 5 ns of simulation (Fig. S3A). These values remained unchanged at ~2 Å for most of the simulation time (until ~45 ns). Although there was a minor dip, the higher RMSD value was maintained until the end of the simulation. UQCRC2 complexes with UQCRC1 and participates in MPP activity (although this requires experimental validation in vivo) during the processing and assembly of CIII 15. RMSD values of UQCRC2 did not reveal any significant differences between the oxidized and control CIII.

MT-CYB. This subunit forms the major bulk of the transmembrane region of CIII and houses two potential electron-transferring heme groups (marked bH and bL), positioned on the matrix side and inter-membrane side, respectively. MT-CYB possesses a "docking crater" (Fig. 2D) on its intermembrane surface that enables accommodation of the head domain of UQCRFS1 during electron transfer 15. The MT-CYB subunit is also critical for the proton-pumping function 16. The Rg data indicate an increase in the girth of MT-CYB to the maxima of the collected dataset at ~10 ns of the simulation (Fig. 2E). Although the Rg values drop after 10 ns, they remain relatively higher compared to the control. The Rg values of MT-CYB are elevated to accommodate the head domain of UQCRFS1, which pushes inward into the "docking crater" on MT-CYB. The RMSF data show an increase in the per-residue fluctuation in and around the cd helix and ef loop of MT-CYB in oxidized CIII (Fig. 2F), thereby indicating their involvement in MT-CYB structural changes under the oxidized conditions. The RMSD of the MT-CYB backbone exhibits an increase in the initial 10 ns (vs. control), after which it is stable (Fig. S3C). The Rg analysis correlates well with the RMSF data, which show an increase in the fluctuation near the cd helix of MT-CYB (Figs 2E and 2F). A similar analysis was performed on the apo-form (PDB ID: 1NTM 14) and the substrate-bound state of CIII (PDB ID: 1NTZ 14; data not shown). For the apo-form, the Rg data exhibit a reduced radius of gyration in both the control and the oxidized state, which may be due to the absence of substrate (Fig. S6A). Although the overall increase in RMSF supports the binding-site variation, the specific pattern of differences in per-residue fluctuation between the control and oxidized states, near the cd helix and ef loops in 1NTM, showed consistency with the inhibitor-bound structure (1SQB) analysis (Fig. S6B). A similar consistency in the structural differences between the oxidized and unmodified forms was also observed in the substrate-bound form (1NTZ; data not shown).

UQCRFS1. This subunit has highly flexible unstructured loops and a few β-strands, which arrange themselves as β-meanders or β-helix-β super-secondary structures. The oxidized CIII form revealed significant changes in UQCRFS1 structure (Fig. 3A,B), with lowered Rg values for the oxidized structure compared to control (Fig. 3C). RMSF analysis suggested that the fluctuations in the oxidized form are significantly lowered (Fig. 3D).
Intriguingly, the loop connecting S107-E131, which is positioned on the left lateral region of the head domain of UQCRFS1, moves inwards post-oxidation (not shown). This occurs due to increased compactness of the head domain accompanied by elevated rigidity of the subunit. Moreover, the neck region, connecting the head domain and the transmembrane tail region, is relatively widened compared to control. RMSD analysis shows relatively high stability from the initial stage extending throughout the simulation (Fig. S3D). These data substantiate the fixed nature of UQCRFS1 post oxidation. Angle calculation at the neck region provided evidence for increased stiffness of the mobile neck region of UQCRFS1 (Fig. S5C). Rg analysis of the apo-structure (1NTM) revealed a similar pattern of variation between the control and oxidized forms (Fig. S7A). The RMSF analysis of the apo-structure also exhibited a similar restriction pattern of the loop region, as seen in 1SQB (Fig. S7B). A similar trend was also noted in the substrate-bound form (1NTZ; not shown).

CYC1. CYC1 is the destination subunit that receives electrons via the bifurcated mechanism and passes them to the extrinsically bound cytochrome C. Cytochrome C, in turn, detaches and carries the electrons to the next complex of the respiratory chain. CYC1 forms an integral part of CIII because of its dominant presence in the inter-membrane space along with UQCRFS1 (Fig. 3E). CYC1 possesses a globular head domain that interacts with the UQCRFS1 head region on the intermembrane-exposed face and extends its tail domain, which spans the membrane once before it overlays on UQCRC1 through charged interactions, as discussed below. Rg analysis showed relatively high values throughout the simulation in the oxidized form, suggesting an increased girth of CYC1 (Fig. 3F). The RMSD data indicate increased instability of the oxidized form after 35 ns compared to control (Fig. S4A). The RMSF data revealed that the loops extending from T57 to K86 and from P92 to L109 had increased fluctuation in the post-oxidation state (Fig. 3G). These alterations correlated well with the Rg and RMSD data. Further, this has direct implications for electron transfer, since region 63-81 contributes to the binding with cytochrome C 17, which may be affected in the oxidized form (Fig. S5D). Rg analysis showed higher values for the oxidized state than the control (Fig. S7C). The apo-structure (1NTM) revealed a similar pattern of variation as in 1SQB. RMSF analysis of the CYC1 domain in the apo-form also revealed the distortion at the cytochrome C binding site (Fig. S7D). A similar trend was also noted in the substrate-bound form (1NTZ; not shown).

The UQCRC1 Interactors

Interactions between UQCRC1 and UQCRC2/CYC1. UQCRC1 interacts directly with UQCRC2, UQCRFS1, CYC1, UQCRQ, UQCR10 and UQCR11 (Fig. 4A). The N-terminals of all these subunits except CYC1 interact with the solvent-accessible surface of UQCRC1. The C-terminal of CYC1 interacts with the UQCRC1 domain in the matrix region. Hydrogen bond analysis revealed that the individual contacts established by UQCRC1 with UQCRFS1, UQCRQ and UQCR11 were relatively increased in the oxidized state compared to the control (described below). The hydrogen bond contacts of UQCRC1 with UQCRC2, CYC1, and UQCR10, respectively, decreased drastically in the oxidized CIII form (described below).
UQCRC1 makes a stable complex with UQCRC2 through extensive intermolecular hydrophilic interactions. A closer look into the hydrogen bond interactions at the UQCRC1-UQCRC2 interface reveals a hemispherical sealing effect (not shown). Interestingly, the number of hydrogen bonds was significantly reduced in the oxidized form compared to control (Fig. 4B). This was associated with loss of interactions between the N-terminal flexible region of UQCRC1 and UQCRC2, which in turn directs the movement of the helix preceding the N-terminal tail in UQCRC1 to push away from UQCRC2 (not shown). The C-terminal region of CYC1 interacts with the matrix-facing globular domain of UQCRC1 (Fig. 4A). The residues Asn235, His243 and Glu139 in UQCRC1 were found to interact with Lys226, Arg238 and Lys241 of CYC1, respectively, in unmodified CIII (Fig. 4C). Hydrogen-bond analysis between UQCRC1 and CYC1 showed a decrease in hydrogen bonds, from three to one, in the oxidized form (Fig. 4D). The interaction between the side chain of Glu139 (UQCRC1) and Lys241 (CYC1) was sustained in both the oxidized form and the control. However, the loss of contacts is at the C-terminal unstructured region of CYC1, where Arg238 and Lys226 interact with His243 and Asn235 of UQCRC1, respectively. This occurs in coordination with the repulsion of the C-terminal of CYC1 away from UQCRC1 (Fig. 4C).

(Figure 4 legend, panels B-D: (B) H-bond analysis between UQCRC1 and UQCRC2 shows a ~10-bond decrease in the oxidized form (red) compared to the control (green) state. (C) The interaction between the C-terminal region of CYC1 and UQCRC1 is mediated by Glu139, His243 and Asn235 (yellow sticks) with the residues Lys241, Arg238 and Lys226 (pink sticks); post-oxidation, the residues Arg238 and Lys226 of CYC1 were found to drift away from UQCRC1, breaking the bonds. (D) H-bond analysis between UQCRC1 and CYC1 shows a two-thirds decrease in the hydrogen bond contacts between the subunits, explained by the loss of interactions as in C.)

Interactions between UQCRC1 and UQCRFS1/UQCRQ. The transmembrane tail domain of UQCRFS1 extends to the matrix-facing region and interacts closely with UQCRC1, whereas the head domain lies in the inter-membrane space, away from UQCRC1 (Fig. 4A). The N-terminal of UQCRFS1 interacts with one of the two L-shaped helix-turn-helix motifs of UQCRC1 on the exterior surface of the latter and lies parallel to the N-terminal of CYC1. The hydrogen bond contacts between UQCRC1 and UQCRFS1 were increased in the oxidized form compared to the control (Fig. 5A), indicating increased stability of the complex formed between the two subunits. UQCRQ interacts with UQCRC1, MT-CYB, UQCRFS1, and CYC1 (Fig. 4A). UQCRQ is sandwiched between UQCRC1, CYC1, and UQCRFS1 on the matrix side. The region H12-S17 of UQCRQ forms alternate backbone hydrogen bonds with UQCRC1 and UQCRFS1. A part of UQCRQ interacts with MT-CYB on the intermembrane side as well as throughout the membrane-spanning region. However, there are no interactions between UQCRQ and CYC1 or UQCRFS1 on the intermembrane side in the control. Hydrogen bond analysis between UQCRC1 and UQCRQ revealed a higher number of hydrogen bonds in the oxidized form compared to the control (Fig. 5B), indicating increased stability between the interacting subunits UQCRC1 and UQCRQ.
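The hydrogen-bond counting used throughout this section can be sketched with MDAnalysis (the study used Desmond's own analysis tools). In the sketch below, the file names, segment ids and geometric cutoffs are assumptions; the topology must contain explicit hydrogens.

```python
# Sketch: count hydrogen bonds across the UQCRC1-CYC1 interface in each
# trajectory frame. Selections are hypothetical placeholders for the
# actual chain naming in the prepared system.
import MDAnalysis as mda
from MDAnalysis.analysis.hydrogenbonds import HydrogenBondAnalysis

u = mda.Universe("ciii.psf", "ciii_50ns.dcd")

hb = HydrogenBondAnalysis(
    u,
    between=["segid UQCRC1", "segid CYC1"],  # restrict to the interface
    d_a_cutoff=3.5,                          # donor-acceptor distance (Angstrom)
    d_h_a_angle_cutoff=150.0,                # donor-H-acceptor angle (degrees)
)
hb.run()

# One hydrogen-bond count per frame, ready to plot against time.
counts_per_frame = hb.count_by_time()
```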
Interactions between UQCRC1 and UQCR10/UQCR11. UQCR10 is a single-pass transmembrane helix with a loop-short helix in the non-membrane-spanning region. The short loop of the helix-loop-helix structure at the N-terminal of UQCR10 interacts with UQCRC1. UQCR10 lies in close proximity to the transmembrane helices of UQCR11 and UQCRFS1 on the matrix-facing side. On the other hand, UQCR10 interacts closely with CYC1 on the region facing the intermembrane space (Fig. 4A). This arrangement of UQCR10 may be responsible for transmitting signals from UQCRC1 to other subunits. Hydrogen bond analysis between UQCRC1 and UQCR10 in the oxidized form exhibited a substantial decrease in the number of contacts (Fig. 5C), suggesting weakened stability of the interaction between UQCRC1 and UQCR10. UQCR11 associates with the neighboring subunits UQCRC1 and UQCR10 (Fig. 4A). The N-terminal region of the subunit interacts with UQCRC1 at the interior bottom surface. Hydrogen bond analysis suggested that, although the control possesses ~3-4 hydrogen bonds initially, after 5 ns the average number of hydrogen bonds secured between the two subunits falls to one (Fig. 5D). In the oxidized state, the trend is reversed: there are ~3 hydrogen bonds between the two subunits until 45 ns, which then increase to ~5. This may probably cause a stiffening effect in UQCR11. On validation, the consistent hydrogen bond was established between Gln12 of UQCR11 and Glu351 of UQCRC1. The post-oxidation trajectory revealed additional hydrogen bonds between Trp24 and Asn16 of UQCR11 with Arg445 and Thr347 of UQCRC1, respectively (not shown).

Other Interactions. Interaction between UQCRFS1 and CYC1. UQCRFS1 and CYC1 are embedded in the membrane, with their globular heads facing the intermembrane space and their tail regions exposed to the matrix (Fig. 4A). Their respective tails interact with UQCRC1 at the matrix-facing region. The head domain of CYC1 of one monomer interacts with the head domain of UQCRFS1 of the other monomer of the dimer (Fig. 6A). This criss-cross link establishes CIII as a functional monomer, although it exists as a structural dimer 11. An intermolecular hydrogen bond interaction between Lys90 of UQCRFS1 and Glu99 of CYC1 was present in the control throughout the simulation, whereas it was completely lost in the oxidized form (Fig. 6B). We speculate from this analysis that the loss of interaction at the head region of UQCRFS1 may lead to a significant shift of UQCRFS1 with respect to CYC1, probably due to a hinge movement at the neck region (Fig. 6C).

Distance between the electron-transferring heme groups and iron-sulfur clusters. MT-CYB contains two heme groups that play a critical role in electron transfer (Fig. 7A). Distance analysis between these heme groups indicated a slight increase in the oxidized form (Fig. 7B). However, the average distance between these two hemes in the oxidized form is relatively stable compared to the control. The increase in the distance between the iron atoms of the hemes correlates well with the higher Rg in the oxidized form, as shown previously (Fig. 2E). The subunits UQCRFS1 and CYC1 contain one iron-sulfur cluster and one heme group, respectively. They complete the line of the electron transfer process to the transporter protein, cytochrome C. Interestingly, the distance analysis between these two cofactors indicated a fluctuation on the higher side in the oxidized form compared to the control (Fig. 7C). The distance between the iron-sulfur cluster in UQCRFS1 and heme bH in MT-CYB reaches its minima around 20 ns of simulation time and retains that position for a while, although the distance was not comparable to the range observed in the control (Fig. 7D). This suggests that the head domain of UQCRFS1 probably moves further towards MT-CYB in the oxidized form. Distance analysis performed between the electron transfer groups of the apo-structure (1NTM) showed a similar range of intermolecular distances (Figs S8A and S8C), except for the distance between the 2[Fe-S] cluster of UQCRFS1 and the heme group of CYC1, which showed a significant decrease in the oxidized form compared to the control state (Fig. S8B). This may be due to the increased flexibility of the head domain of UQCRFS1, suggesting the possibility of it occupying the docking crater of MT-CYB in the control state of the apo-structure. For the substrate-bound complex, the distance analyses were consistent with the inhibitor-bound structure (not shown). Distance analyses revealed that the electron transfer function may be lost due to significant changes in the distances between the clusters in the oxidized form. This could, in turn, lower the enzyme activity of the complex.
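The cofactor-cofactor distance tracking just described can be sketched as follows; the atom and residue selections for the 2[Fe-S] cluster and the heme iron are hypothetical placeholders that depend on the force-field naming of the prepared system.

```python
# Sketch: Fe-Fe separation between the UQCRFS1 iron-sulfur cluster and
# the CYC1 heme across the trajectory. File and residue/atom names are
# placeholders, not taken from the study's input files.
import numpy as np
import MDAnalysis as mda

u = mda.Universe("ciii.psf", "ciii_50ns.dcd")
fes_fe = u.select_atoms("resname FES and name FE1")[0]
heme_fe = u.select_atoms("segid CYC1 and resname HEM and name FE")[0]

# Atom positions update as the trajectory is iterated frame by frame.
dist = np.array([
    np.linalg.norm(fes_fe.position - heme_fe.position)
    for ts in u.trajectory
])
print(f"mean {dist.mean():.2f} A, sd {dist.std():.2f} A over {len(dist)} frames")
```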
Conclusion

Mitochondrial respiratory chain complexes have multi-subunit structures that offer a distinct advantage during electron transfer and proton pumping activities. Such intricate structural organization is required for optimal orientation of the Fe-S clusters and other components to improve the overall metabolic efficiency. However, the multi-subunit structure has certain limitations, including vulnerability to PTMs, as exemplified in the current study. During pathophysiological conditions such as myodegeneration, mitochondrial dysfunction and oxidative damage are evident. Degeneration-dependent protein oxidation could potentially regulate large protein complexes such as CIII. Based on molecular dynamics simulations, the current study demonstrated that a single PTM (i.e., oxidation of W395) in a subunit (UQCRC1) farther from the active site could trigger profound structural changes across the complex, thereby disrupting the electron flow and lowering the enzyme activity of CIII (Fig. 8). The reduction in the maintenance of stable hydrogen bonds between UQCRC1 and UQCRC2 dictates the instability of the core domains, thus affecting CIII maturation. Other interactors of UQCRC1, like CYC1 and UQCR10, also showed decreased stability of their interactions with UQCRC1. The interactors UQCRFS1, UQCRQ, and UQCR11 exhibit increased stability of complex formation with UQCRC1. The functionality of UQCRFS1 depends on the flexibility of the neck region to mobilize its head domain, which faces the intermembrane space, and carry out the electron transfer. Hindrance of this functionality occurs owing to the stiffening of the neck region, causing fixation of the head domain on the MT-CYB surface. This fixation reduces the distance between the electron transfer groups of UQCRFS1 and CYC1. Additionally, the deformation at the cytochrome C binding site hinders the prospects of the transfer of electrons, if at all, from CIII to Complex IV.
By considering these analyses, we propose that the structural effects of this oxidation directly impinge on electron transfer through the pivotal subunits taking part in the electron transfer process (CYC1 and UQCRFS1). Further in vitro studies may reveal the effects of such pathological post-translational modifications on the proton-pumping efficiency.

Methods

Generation of oxidized CIII. The CIII dimer complex containing 22 subunits (PDB Id: 1SQB 13), embedded with the coordinates of the membrane position, was downloaded from the OPM (Orientations of Proteins in Membranes) database 18 and implanted into a POPC (phosphatidylcholine) membrane. The structure also contains the ligand azoxystrobin, which binds in the Qo site. The oxidized CIII was generated by manipulating the chemical structure of Trp at position 395 in the subunit UQCRC1 to the oxidized state, i.e., oxindolylalanine (with a mass increase of +16 Da), based on the mass spectrometry studies carried out previously 2,3. The proteins were processed using the Protein Preparation Wizard module of the Schrodinger Drug Discovery Suite. The protein structures were reviewed for the presence of any water molecules important for the simulation. All the water molecules were deleted before the simulation, considering that none of them established any important bonds with the protein in its vicinity. The ligand azoxystrobin was retained in the protein structure. The ligand state was generated for pH 7.0. Further, the H-bonds were optimized at neutral pH, followed by restrained minimization converging the heavy atoms to within 0.30 Angstrom (Å). The same procedure of preparation of the oxidized complex and the subsequent methods, including MDS and data analysis, was followed for the unbound/apo-form (PDB: 1NTM 14, containing all the 22 subunits) and the substrate-bound CIII (PDB: 1NTZ 14) without any changes.

Membrane set-up and relaxation. The POPC membrane was set up on the retrieved pre-aligned membrane coordinates. The lipid-protein equilibration/relaxation protocol was utilized as prescribed by the Desmond membrane relaxation protocol, developed by Dmitry Lupyan in collaboration with Schrodinger Inc. (New York, NY, USA) 12. The relaxation was carried out at a temperature of 300 K. The steps in this relaxation process involved (i) minimization with restraints on solute (protein) atoms, (ii) minimization without any restraints, (iii) heating from 0.0 K to 300.0 K, (iv) an H2O barrier and gradual restraining, followed by (v) NPT (isothermal-isobaric) equilibration with and without the barrier. The NPT ensemble itself consisted of 5 steps in sequential order, starting from (i) NPT ensemble with barrier for 200 ps, (ii) NPT ensemble equilibration of solvent and lipids for 100 ps, (iii) NPT ensemble with protein heavy-atom annealing from 10.0 kcal/mol to 2.0 kcal/mol for 600 ps, (iv) NPT ensemble with restraints on C-alpha atoms at 2.0 kcal/mol for ps, and finally (v) NPT ensemble with no restraints for 100 ps.

Molecular dynamics production. The final molecular dynamics production was carried out for a simulation time of 50 ns, allowing the default relaxation of the system at a temperature of 300 K and a pressure of 1.01 bar. The trajectory frames were recorded every 4.8 ps.
The final simulation trajectories were analyzed using other Desmond operations, including the generation of protein-protein interaction data and the root-mean-square deviation (RMSD), root-mean-square fluctuation (RMSF) and radius of gyration (Rg) of the proteins. Following close visual inspection, calculation of the qualitative and quantitative data, comprising the number of hydrogen bonds, distances, and angle measurements, was carried out. PyMOL 19 was used for viewing trajectories and rendering the essential aspects as illustrations.

Analysis of trajectories. The 50 ns trajectories of the control (unmodified) CIII were analyzed for protein backbone parameters such as RMSD, RMSF, secondary structure variations and Rg throughout the duration of the simulation. The RMSD and Rg values were calculated against the simulation time and expressed as the deviation or radius of the selected group of atoms, respectively, in Å. The RMSF values of the protein backbone were calculated over the range of residues, expressed as a summation throughout the simulation for each residue, in Å. Although the RMSD and Rg values were calculated for the protein backbone, the same parameters were also calculated for the individual subunits of CIII to describe the detailed effects translated over the timeline. The trajectories were stripped of the POPC membrane and water (beyond 5.0 Å from the protein surface). The number of hydrogen bond contacts was analyzed between pivotal pairs of subunits of CIII involved in electron transport. These contacts were visualized in the trajectory to derive the location and time point of the differences observed between the control and oxidized forms of CIII. The interactions were also supported with evidence of gain or loss of contacts and changes in structural conformation through the calculation of angles and distances wherever required. The distances between electron transfer groups involving the heme and iron-sulfur (2[Fe-S]) clusters were also calculated for the control and oxidized forms. The Desmond module was used for the calculation of parameters. Maestro and PyMOL were used for the generation of high-resolution illustrations.

Multiple sequence alignment. The protein sequences of the UQCRC1 subunit of CIII for three eukaryotic species (bovine, murine and human) were retrieved from the UniProt database 20. The multiple sequence alignment was performed using Clustal Omega 21. The alignment was plotted using ESPript 22.
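A minimal sketch of the alignment step follows, assuming a local clustalo binary and a FASTA file of the three UniProt sequences; the file names are hypothetical placeholders, and the inspected slice is only approximate since alignment columns shift with gaps.

```python
# Sketch: align UQCRC1 orthologues with Clustal Omega, then inspect the
# region around Trp395 with Biopython. Input/output names are placeholders.
import subprocess
from Bio import AlignIO

subprocess.run(
    ["clustalo", "-i", "uqcrc1_orthologues.fasta",
     "-o", "uqcrc1_aligned.fasta", "--force"],
    check=True,
)

alignment = AlignIO.read("uqcrc1_aligned.fasta", "fasta")
for record in alignment:
    # Columns are alignment positions, so residue 395 may be offset by gaps.
    print(record.id, record.seq[385:405])
```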
7,068.6
2019-07-23T00:00:00.000
[ "Biology", "Chemistry" ]
Analysis of the Relationship between Tourist Demand and Sustainable Development Indicators in the Context of the Danube River in the Romanian Trajectory

Practice has shown that tourism is an activity with a global spread, and, sustainable development being a concept with global applicability, the intersection of the two elements is considered inevitable. Both elements are commensurable, which makes it possible to study them and to analyze the relationships that arise from their cohabitation in the economic and social environment. The purpose of this study is to find out to what extent the variation of tourism demand is influenced by the variation of some indicators of sustainable development. A multifactorial regression model was used, in which the number of tourists represents the dependent variable, and the number of unemployed, the natural increase of the population and the existing accommodation capacity are the independent variables. For data processing, the EViews statistical software was used. The greatest impact on the number of tourists is exerted by the existing accommodation capacity, and, overall, the variation of the dependent variable is explained in a proportion of 83% by the variation of the independent variables.

Introduction

In an age characterized by the technological evolution of economic activities, accelerated industrialization and the effects they have on the environment, the only solution for tourism to maintain its continuity over time is to become sustainable. It is obvious that the presence of tourists in a particular destination can bring both advantages and disadvantages, following the principle that each right corresponds to an obligation. Also, the presence of tourists in a particular destination is generated by a multitude of reasons or, better said, by a multitude of influencing factors. These influencing factors can be classified into several categories, as presented by Minciu (2004, 40-41): factors of influence by their nature, by the duration of their action, by their role, by their direction of action, and determining factors in accordance with the orientation of their influence.

In terms of sustainable development, its influencing factors have economic, environmental or social valences, and it is measured by different indicator systems, as presented by the European Commission (2016), the United Nations (2007), the National Institute of Statistics (2018) and other such institutions. Thus, the purpose of the present research is to find out to what extent the number of tourists is influenced by certain indicators of sustainable development, given the county of Calarasi in Romania.

According to Choi and Turk (2011: 124-129), sustainable indicators that can influence tourism can belong to six dimensions, namely the economic, social, cultural, environmental, political and technological. In other words, sustainability indicators have a very broad spectrum of action. Starting from the premise that the number of tourists is an indicator that influences different indicators of sustainable development, another research premise can be developed, namely whether sustainable development indicators can influence the number of tourists. The second premise is the one from which this research started. The content of the research consists in presenting the analyzed area, the purpose and objectives, the methodology used, the literature review, results, conclusions and the bibliography.
Geographical and economic coordinates of the analyzed area

The Danube River crosses ten countries, namely: Germany, Austria, Slovakia, Hungary, Croatia, Serbia, Romania, Bulgaria, the Republic of Moldova and Ukraine (Olson and Krug, 2020, p. 885). The largest share of the river (30%) is found in Romania (Mazilu, 2011, p. 45). Romania is located in the lower basin of the Danube (Țigu, 2012, p. 170) and is bordered by the river in the south. The importance of the Danube River for Romania is boundless, as the Danube and its tributaries represent 97.8% of the waters that cross the territory of the country (Țigu, 2012, p. 170). The Danube trajectory on the territory of Romania measures 1,075 kilometers, crossing the following counties: Caraş-Severin, Mehedinţi, Dolj, Olt, Teleorman, Giurgiu, Calarasi, Ialomița, Constanța, Brăila, Galați and Tulcea (Mitrică et al., 2016, p. 244). In other words, the Romanian Danube area is composed of the 12 counties that the Danube River crosses. A statistic of two tourist indicators and two very important economic indicators for the 12 Romanian counties bordering the Danube is presented in table 1.

With regard to table 1, it should not be taken out of context that the main tourist resorts on the Romanian Black Sea coast lie within Constanta County, and that the Danube Delta lies within Tulcea County. These aspects may influence the statistics shown in table 1. Also, although all these counties have a valuable common point, namely the Danube River, it is noted that the statistics presented in table 1 are heterogeneous. From an economic point of view, Calarasi County is characterized by the following elements: the employed population totals 88,100 people; the main activities of the national economy carried out in Calarasi County, in descending order, are agriculture, industry and construction, and services; the unemployment rate is 3.6%; the county's contribution to the gross domestic product is 0.8%; the main industrial branch is the food and beverage industry (Regional Directorate of Statistics Calarasi, 2021, pp. 8-15).

Among the natural tourist resources are the Danube, with a length of 152 kilometers on the territory of the county, the natural reservations (Șoimul Island, Ciocănești Island, Haralambie Island), the lakes, and the deciduous forests. Among the anthropogenic tourist resources are the monastic ensembles (the Church of Plătărești, the Church of Negoiești), the museums, and the former Byzantine fortress "Păcuiul lui Soare" (Calarasi County Council, 2015). In conclusion, it can be admitted that Calarasi is an agrarian county, whose strengths are agricultural production and the food industry, and whose main touristic resource is the hydrographic network.

Aim and objectives

The purpose of this study is to find out to what extent the variation of tourist demand, expressed by the number of tourists, is influenced by the variation of some indicators of sustainable development, considering a Danube county, more precisely Calarasi County. The objectives of the study include identifying statistically significant indicators of sustainable development, finding out by how much tourist demand changes when the value of each sustainable development indicator increases by one unit, and identifying the sustainable development indicator that has the greatest impact on tourist demand.
Methodology

The data used in the research was taken from the website of the National Institute of Statistics.

Review of the Literature

The literature review is a particularly important part, as, based on it, conceptual clarifications are presented on the analyzed subjects, in this case the tourist demand and the indicators of sustainable development. Also, the literature review includes results of the approaches of other authors regarding the analyzed subjects.

Tourist demand

Like other activities, tourism activity can be measured by specific indicators. One of these indicators is the number of tourists arriving at a particular destination. In a narrow sense, one can equate tourist demand with the number of tourists arriving at a particular destination. The tourist demand is a representative indicator of the touristic circulation, representing the number of tourists at a destination, calculated annually or over shorter time intervals (Turcu and Weisz, 2008, p. 10). Globally, the continent of Europe has the highest number of tourists (744 million), followed by Asia and the Pacific (362 million), the Americas (219 million), Africa (70 million) and the Middle East (65 million) (World Tourism Organization, 2020). The distinction between tourist demand and tourist consumption is very important.

According to Turcu and Weisz (2008, p. 23), the tourist demand takes shape within the residence of the tourism consumer, i.e., the tourist, while touristic consumption materializes in the destination where the touristic offer manifests itself. In a general sense and from an economic point of view, the tourist demand, or the number of tourists arriving at a certain destination, represents, along with the touristic offer, a component of the tourist market.

A more detailed definition of the tourist demand is given by Bălăcescu and Zaharia (2011, p. 11), who argue that "the tourist demand represents the number of people who materialize their desire to travel outside their own residence, temporarily and periodically, the reasons being other than the carrying out of paid activities". It is noted that this definition is built on the model of the classical definition of tourism, with the mention that there is a visible emphasis on the number of people who carry out the movement.
As mentioned above, the tourist demand, more precisely the number of tourists, is an indicator that can be calculated annually, but also over shorter time intervals, which means that statistical analyses of this indicator can likewise be made annually or over shorter time intervals, such as months (Dincu et al., 2016, p. 40). Many statistical analyses that take this indicator into account refer to its dynamics and the trends it may follow in the future. The importance of analyzing this indicator is also given by its economic impact, or better said by the relationship between it and various macroeconomic indicators, such as the Gross Domestic Product. Lazar and Pop (2012, p. 11) concluded that the elasticity of Gross Domestic Product with respect to the number of tourists is significant and positive. It is worth noting that not only can the number of tourists influence certain economic indicators, but the number of tourists can also be influenced by certain economic indicators. Gabroveanu, Stan and Radneantu (2009, p. 68) showed that, in the first decade of the 21st century, the combined influence of the consumer price index and total household incomes on the number of tourists in Romania was approximately 95%. In other words, between the number of tourists and different economic indicators there is a relationship of interdependence.

Also, tourism is not only closely linked to some economic indicators but is also linked to some indicators of sustainable development. If we treat the number of unemployed as an indicator of sustainable development (National Institute of Statistics, 2018), a sustainable link between the number of tourists arriving at a destination and the number of unemployed people at that destination would mean that an increase in the number of tourists leads to a decrease in the number of unemployed. This is very likely, given that, in general, tourism has the capacity to attract the surplus on the labor market and, implicitly, to reduce unemployment (Minciu, 2004, p. 28). The previous statements evoke the prospect of a dependence between tourism and unemployment, or more precisely between the number of tourists and the number of unemployed, in which the number of unemployed depends on the number of tourists. In other words, there is also the prospect of the inverse relationship, in which the number of tourists depends on the number of unemployed.

Another element that can be treated as an indicator of sustainable development is the natural increase of the population (National Institute of Statistics, 2018). From a mathematical point of view, a numerical increase in the population could generate an increase in the number of tourists. More specifically, in the case of economically developed countries, a numerical increase in population could generate an average annual increase in the number of tourists of between 0.5% and 1% (Minciu, 2004, p. 45). The previous statements present the perspective of the dependence between the number of tourists and the natural increase of the population, in which the number of tourists represents the dependent variable.
If the two examples of sustainable development indicators mentioned above are of a more general nature, the existing accommodation capacity (National Institute of Statistics, 2018) is an indicator of sustainable development specific to tourism, as it is also an indicator that quantifies tourism activity. It is clear that the accommodation service, expressed by the accommodation capacity, that is to say, by the number of accommodation places, is indispensable for any stay. The correlation between the number of tourists and the accommodation capacity is a subject treated by many authors, whether at county level, as in the case of Suceava County (Zaharia, Hapenciuc and Gogonea, 2008), or at the level of the development regions of Romania (Popescu, 2016). In the first case, an increase in the accommodation capacity generates an increase in the number of tourists; in the second case, an increase in the number of tourists generates an increase in the accommodation capacity. In other words, there is a relationship of interdependence between the number of tourists and the accommodation capacity.

Indicators such as the number of unemployed, the natural increase of the population and the tourist accommodation capacity can also be grouped in terms of resources, in the sense that the number of unemployed affects the statistics of human resources in tourism, the natural increase of the population supports the possible increase of human resources in tourism, and the existing accommodation capacity is an anthropogenic resource indispensable for carrying out tourism activities. Human resources are considered to be the resources "that ensure the functioning of the elements of the tourist offer" (Dedu, 2012, p. 122). The series of sustainable development indicators which could influence local tourism and, implicitly, the number of tourists can continue, for example: the population connected to wastewater depletion stations, the number of passengers travelling by public transport, the average gross monthly salary, or climate change through indicators like rainfall (Liu, 2016). It is worth noting that the number of tourists can take the form of a variable dependent on certain economic indicators and indicators of sustainable development, but it can also take the form of an independent variable.

Klarin (2018, p. 76) found that there are three key elements when discussing sustainable development, namely: development, needs and future generations. These elements are also contained in the United Nations definition of sustainable development, "the meeting of one's own needs by all people and the fulfilment of all the aspirations for a better life" (United Nations, 1987, p. 24). In other words, sustainable development aims at a balanced improvement of the quality of living. In this context, the consumption-result ratio takes on a particular importance, in the sense that it is desirable that the results of human activities be achieved following a rational and responsible consumption of resources.
Over time, there have not been many studies that have dealt with the relationship between tourism and sustainable development, although it has been found that the principles of sustainable development cannot be implemented as such in tourism-specific economic and social activities (Sharpley, 2000, p. 14). However, there are also studies that show the capacity of tourism to contribute to the achievement of the objectives of sustainable development, because the tourism industry has one of the strongest impacts worldwide (Robu, David Sobolevschi and Petcu, 2019). Moreover, studies have shown that tourism indicators, such as the occupancy coefficient of accommodation capacity, can be influenced by various indicators of sustainable development such as the Gross Domestic Product per capita, the schooling rate, life expectancy at birth or gas emissions (Popescu et al., 2014), and also that between tourism indicators and sustainable development indicators there can be strong positive or negative relationships (Popescu et al., 2014).

According to the World Tourism Organization (2015), tourism can contribute to supporting all sustainable development goals, but it chiefly supports the following goals: sustainable and inclusive economic growth, sustainable production and consumption, and sustainable use of ocean and marine resources. Tourism is considered to contribute mainly to achieving these goals because it is a great generator of jobs, it can promote local traditions and products, and it is a viable economic solution for the vast majority of coastal areas.

Regarding the sustainable development indicators, it can be admitted that they can be specific, more precisely created based on the characteristics of small areas such as cities or villages (Khalifa and Connelly, 2009, pp. 1184-1185). For sustainable development indicators with territorial coverage, Kirilchuk, Rykunova and Panskov (2018, pp. 293-294) propose: emissions of greenhouse gases, resources used for energy production, the amount of pollutant emissions, and the amount of polluting emissions from large enterprises.

The analysis of tourism sustainability can also be done by calculating the Sustainability Tourism Index (Mitrică et al., 2021). To calculate this index, indicators of tourism sustainability can be used, such as: anthropic tourist resources, tourist intensity, the population employed in tourism, the occupancy rate in accommodation units, the length of the drinking water supply network, the length of the gas supply network, the population density, the number of elderly people compared to the number of young people, road accessibility and the percentage of protected areas (Mitrică et al., 2021, p. 6). Based on the previous list, it is observed that the indicators of tourism sustainability are very diverse, having both a touristic and a social character.

Tourists and sustainable development
Tourists visiting Romania consider it necessary to implement the principles of sustainable development in society and in the field of tourism, for the following reasons: conservation of resources, reducing the effects of pollution, finding solutions for activities that are not friendly to the environment, and ensuring prosperity for future generations (Madar and Neacșu, 2020). In other words, tourists are aware that tourism also has disadvantages, and that sustainable development is the solution to eradicate these disadvantages.
In relation to sustainable development, tourists can be influenced by the extent to which tourism service providers direct their efforts towards sustainable development. Dabija and Băbuț (2013) showed that the economic dimension of sustainable development, as put into practice by hotels, has the biggest influence on tourist satisfaction. More precisely, the tourist's satisfaction is influenced by the financial situation of the accommodation unit, its investments and its financial stability. Not only the economic dimension, but also the social dimension of sustainable development influences the satisfaction of tourists depending on their type, for example the satisfaction of cultural tourists, as shown by Asmelash and Kumar (2018). Tourists can relate to some elements of sustainable development depending on the values they believe in. Adongo, Taale and Adam (2018) showed that tourists who believe that man is the center of the universe show an empathic attitude towards nature conservation, and tourists who support economic growth through tourism activities show an empathic attitude towards other tourists and towards the development of the local community. These findings reinforce the fact that tourists are aware of the impact that their actions can have on nature, on other tourists and on the local community. Thus, it is necessary for tourists to be followers of sustainable practices.

The importance of tourists is revealed not only by their number but also by the fact that, through their behavior, they can positively or negatively affect the destination they visit. Tourists can acquire environmentally friendly behavior, the environment being the "key element of the concept of sustainable development" (Dabija and Băbuț, 2013, p. 627), insofar as they have knowledge of the environment, show an empathic attitude and are aware of their impact on the environment. This goal can be achieved through informal education (Meschini et al., 2021). At the same time, personal norms are very important in determining the behavior of tourists, for example regarding the reduction of the amount of waste at the destinations they visit (Wang et al., 2021). Personal norms also positively affect the behavioral intention of tourists to practice a civilized tourism (Liu, An and Jang (Shwan), 2020). Usually, the foundations of these personal norms are laid in the family environment. Further, Szromek, Hysa and Karasek (2019, p. 11) showed that tourists of all generations (Baby Boom, X, Y and Z) agree that the adoption of a civilized behavior does not depend on the destination and must be the same both at home and at the destination visited. In some regions of the globe, such as the Arctic, "tourists have the most positive attitude towards sustainable development practices, compared to residents or companies operating in the field of tourism" (Chen, 2015, p. 229). In conclusion, tourists, through their behavior, are a key factor in implementing sustainable development at the destinations they visit.

Based on the above considerations, it is justified to study the relationship between tourism demand and sustainable development indicators.
Results and Discussions
Because the data series used are time series, it is imperative that non-stationary data series be made stationary. For this, the ADF (Augmented Dickey-Fuller) test was applied (Codirlașu, Moinescu and Chidesciuc, 2010, p. 24). Thus, the series of the average gross monthly salary and the harvested wood are stationary as such, while the series of local public passenger transport, the natural increase of the population, the number of unemployed, the population connected to the wastewater depletion stations, the drinking water production capacity, the tourist accommodation capacity and the number of tourists arriving in Calarasi County are stationary at the first difference. By estimating the regression model between the stationary data series, using the ls (least squares) function, an invalid regression model was obtained, since the probability of the F-test (0.12) is more than 5%. Moreover, because only the probability of the coefficient related to the number of unemployed series (0.04) is less than 5%, all the variables whose coefficients have a probability of more than 15% were eliminated. Thus, the analysis retained the number of tourists arriving in Calarasi County, the tourist accommodation capacity, the number of unemployed and the natural increase of the population. Estimating the regression model again, using the ls function, a valid multifactorial regression model was obtained, since the probability of the F-test (0.02) is less than 5%. Thus, the model takes the following form:

Y = b0 + b1 x X2 + b2 x X3 + b3 x X6 + ei (1)

where: Y - the number of tourists arriving in Calarasi County (dependent variable); b0 - the free term (intercept) of the multifactorial regression model; b1 - coefficient for the X2 data series (natural increase of population); b2 - coefficient for the X3 data series (number of unemployed persons); b3 - coefficient for the X6 data series (tourist accommodation capacity); ei - model errors.

Additionally, the regression between the number of tourists arriving in Calarasi County, as the dependent variable, and the natural increase of the population, the number of unemployed and the tourist accommodation capacity, as independent variables, is not spurious, because the Durbin-Watson statistic (2.55) is higher than R-squared (0.61). Given that the regression is not spurious and the estimated model is valid, it was possible to test the hypotheses of the estimated model.

The VIF (Variance Inflation Factors) method was used to test the multicollinearity hypothesis (Anghelache et al., 2012, p. 228). For the tourist accommodation capacity and the natural increase of population data series, the VIF was equal to 1.09, and for the number of unemployed data series, the VIF was equal to 1.07. Since the VIF took values lower than 6, it is admitted that there is no multicollinearity. Therefore, it is not necessary to correct the estimated regression model.
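For illustration only, the stationarity checks, the least-squares estimation and the VIF computation described above can be sketched in Python roughly as follows; the pandas/statsmodels libraries, the file name calarasi.csv and the column names are assumptions made for the example, not elements of the study:

import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson

# Hypothetical annual data for 2006-2019; the file and column names are illustrative.
df = pd.read_csv("calarasi.csv")  # columns: tourists, x2_natural_increase, x3_unemployed, x6_accommodation, ...

def make_stationary(s):
    # Difference a series once if the ADF test cannot reject a unit root at the 5% level.
    return s if adfuller(s.dropna())[1] < 0.05 else s.diff()

data = df.apply(make_stationary).dropna()

y = data["tourists"]
X = sm.add_constant(data[["x2_natural_increase", "x3_unemployed", "x6_accommodation"]])
ols = sm.OLS(y, X).fit()

print(ols.f_pvalue, ols.pvalues, ols.rsquared)      # model validity and coefficient significance
print(durbin_watson(ols.resid))                     # compared with R-squared to screen for spurious regression
for i, name in enumerate(X.columns[1:], start=1):   # VIF for each regressor (constant excluded from the loop)
    print(name, variance_inflation_factor(X.values, i))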
The Breusch-Godfrey test was used to test the hypothesis of no autocorrelation of the errors (Codirlașu, Moinescu and Chidesciuc, 2010, p. 51). Since the probability of the Chi-square statistic (0.02) is less than 5%, even though the probability of the F-test (0.055) is slightly above 5%, it is accepted that the errors in the estimated model are autocorrelated. Therefore, it is necessary to correct the estimated regression model. The White test was used to test the homoskedasticity hypothesis (Anghelache et al., 2012, p. 228). Since the Chi-square statistic and the F-test have probabilities higher than 5%, respectively 0.66 and 0.74, it is admitted that the errors are homoskedastic. Therefore, no correction is needed on this account. The Jarque-Bera test was used to test the hypothesis of normality of the errors (Codirlașu, Moinescu and Chidesciuc, 2010, p. 30). Since the probability of the Jarque-Bera test (0.65) is higher than 5%, it is accepted that the errors are normally distributed. Therefore, no correction is needed on this account either (these checks and the correction are sketched in code after the interpretation list below).

Since the errors are autocorrelated, it is necessary to correct the estimated regression model. The Cochrane-Orcutt procedure was used to correct the autocorrelation of the errors and, implicitly, the estimated regression model (Pagliacci et al., 2015, p. 76). Following the Cochrane-Orcutt procedure, the output shown in Table 4 was obtained. Because the number of values in the data series is relatively small, the model cannot be used to make forecasts, but it summarizes, as a whole, the link between tourism demand in Calarasi County and indicators of sustainable development at the territorial level. These results can be interpreted as follows:

▪ If the existing accommodation capacity increases by 1 accommodation place, the number of tourists arriving in Calarasi County will increase by 16.60 tourists;
▪ If the number of unemployed increases by 1 person, the number of tourists arriving in Calarasi County will decrease by 0.85 tourists; this can also be explained by the fact that, with a growing number of unemployed, there is a possibility that the providers of basic tourist services will be unable to provide their services;
▪ If the natural increase of the population rises by 1 person, the number of tourists arriving in Calarasi County will increase by 9.13 tourists; this prediction can be explained by the fact that, with the increase in the number of family members, household expenses increase and the family is forced to consolidate its income, and one solution would be to open a business in tourism, which for Calarasi County would more specifically be agrotourism;
▪ Because R-squared (the coefficient of determination) has the value 0.83, it can be admitted that the variation of the number of tourists arriving in Calarasi County is explained in a proportion of 83% by the variation of the accommodation capacity, the number of unemployed and the natural increase of the population.
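Continuing the illustrative sketch above, the error diagnostics and the autocorrelation correction might look roughly as follows; statsmodels' GLSAR iterative fit is used here as a stand-in for the Cochrane-Orcutt procedure, which it closely resembles, and this substitution is an assumption, not the tool used in the study:

from statsmodels.stats.diagnostic import acorr_breusch_godfrey, het_white
from statsmodels.stats.stattools import jarque_bera

lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(ols, nlags=1)  # autocorrelation of errors
w_lm, w_lm_pval, w_f, w_f_pval = het_white(ols.resid, X)                # homoskedasticity
jb_stat, jb_pval, skew, kurt = jarque_bera(ols.resid)                   # normality of errors
print(lm_pval, w_lm_pval, jb_pval)

# If the errors are autocorrelated, re-estimate with an AR(1) error structure
# (an iterative feasible-GLS fit in the spirit of Cochrane-Orcutt).
glsar = sm.GLSAR(y, X, rho=1).iterative_fit(maxiter=10)
print(glsar.params, glsar.rsquared)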
The proposed prediction model could take the following form: number of tourists = intercept + (natural increase of the population x coefficient of natural increase of the population) + (number of unemployed x coefficient of the number of unemployed) + (tourist accommodation capacity x coefficient of tourist accommodation capacity) + errors. (3)

The values of the coefficients of the data series show that the greatest impact on the number of tourists arriving in Calarasi County is held by the variable entitled existing accommodation capacity, that is, by an anthropogenic resource. This is followed by the variable entitled natural increase of the population and, finally, by the variable entitled number of unemployed.

Conclusion and Recommendations
Given that the statistics of the number of tourists and of other economic indicators differ among the 12 counties that make up the Romanian trajectory of the Danube River, it cannot be excluded that the relationship between the number of tourists and the indicators of sustainable development will differ from one county to another. The statement found in the literature, according to which tourism has the ability to reduce unemployment, is also valid for Calarasi County: when the number of unemployed increases by 1 person, the number of tourists decreases by 0.85 tourists, which means that when the number of unemployed decreases, the number of tourists increases. In other words, if the number of tourists visiting the county increases, the number of unemployed in the county will decrease. So, a small number of unemployed people stimulates the increase of the number of tourists arriving in the county. At the same time, this interpretation may or may not be valid for the other counties that make up the Romanian trajectory of the Danube River. Moreover, Romania is not a developed country but a developing one. Even so, the statement that an increase in population generates an increase in the number of tourists is also valid for Calarasi County, because when the natural increase of the population rises by 1 person, the number of tourists visiting Calarasi County increases by 9.13 tourists. This interpretation may also be valid or not for the other Danube counties. If the number of unemployed and the natural increase of the population are two indicators with social values, the existing accommodation capacity is an indicator specific to tourism, and it turned out that, for Calarasi County, this indicator influences the number of tourists more than the first two indicators mentioned above. An increase of the existing accommodation capacity by 1 accommodation place generates an increase of 16.60 in the number of tourists visiting Calarasi County. This interpretation may or may not be suitable for the other Danubian counties.
With the exception of unemployment, it turned out that the natural increase of the population and the existing accommodation capacity have a positive influence on the number of tourists arriving in Calarasi County, and overall the three indicators of sustainable development explain 83% of the variation in the number of tourists. Thus, in order to increase the number of tourists visiting Calarasi County, solutions must be found and stimulated to increase the population and the existing accommodation capacity, but also solutions to reduce unemployment, one of them being precisely to increase the number of tourists, within the sustainability limits of the region concerned. Referring to the themes from which the indicators come, solutions must be found to support and strengthen social cohesion (Kamble and Bouchon, 2016) and public health at local level (Spiegel et al., 2007).

Variables used: number of tourists arriving in Calarasi County (dependent variable); x1 - local public passenger transport (thousands of people); x2 - the natural increase of the population; x3 - number of unemployed persons; x4 - population connected to wastewater depletion stations; x5 - drinking water production capacity (m3/day); x6 - tourist accommodation capacity (number of seats); x7 - average gross monthly salary; x8 - harvested wood (m3).

Sustainable development is a concept that has gained momentum with the awareness and intensification of the negative effects produced by human activities on society and the environment. Like any other concept with global applicability, sustainable development has a number of principles and indicators of measurement. Also, on the topic of sustainable development, countless conferences and meetings were held aiming at strengthening the implementation of the principles of sustainable development, especially in areas affected by the negative effects of human activities. Among these conferences and meetings can be mentioned the United Nations Conference on the Human Environment held in Stockholm in 1972, the Brundtland Report of 1987, the United Nations Conference on Environment and Development held in Rio de Janeiro in 1992, the Johannesburg Summit of 2002 and many other conferences and meetings on sustainable development. Based on these meetings and conferences, and based on the documents issued,

Table 1: Economic and touristic characteristics of the Danube counties. Source: Website of the National Institute of Statistics (Tempo Online time series, http://statistici.insse.ro:8077/tempo-online/, accessed on 18 September 2021).
Due to the fact that Calarasi County represents a part of the total area of Romania and, respectively, a part of the Romanian Danube trajectory, the sustainable development indicators at territorial level provided by the National Institute of Statistics were used. The Romanian Danube area consists of several counties, including Calarasi County. Although it benefits from the same special natural resource, namely the Danube River, Calarasi County registers, compared to the other Danube counties, the lowest number of tourists in most of the years of the analyzed period (National Institute of Statistics, n.d.); for this reason, Calarasi County was chosen for this research. Due to the lack of data and the different date ranges, the 2006-2019 timeframe was chosen, as data for most indicators were available for this range. For the same reason, the indicators of sustainable development chosen for this research were local public passenger transport, the number of unemployed, the natural increase of the population, the average gross monthly salary, the tourist accommodation capacity, the population connected to the wastewater depletion stations, the harvested wood mass and the drinking water production capacity. These indicators represent the independent variables, while the number of tourists arriving in Calarasi County represents the dependent variable; the data were taken from the Tempo Online database of the National Institute of Statistics (National Institute of Statistics, n.d.).

Table 3: Sustainable development indicators according to the National Institute of Statistics. Source: Website of the National Institute of Statistics (https://insse.ro/cms/files/IDDT2012/index_IDDT.htm, 2018).

As shown in Table 3, most sustainable development indicators at territorial level are grouped in theme 1. It is also noted that only one indicator measuring sustainability has been considered for tourism. In general, the themes and, implicitly, the indicators presented in the table fall within the economic dimension, the social dimension or the environmental dimension. The indicators of sustainable development represent, in fact, the figures and percentages behind some economic and social realities in a given area, having the ability to influence other elements of economic and social reality, including tourism. A comprehensive classification of sustainable development indicators is given in Table 3 (for example, theme 10, public utility of local interest, is measured by the length of the streets in the cities).

Table 4: Estimated model after application of the Cochrane-Orcutt procedure. Based on the data in Table 4, it is observed that the probability of the F-test (0.006) is less than 5%. Furthermore, the probabilities of the coefficients of the tourist accommodation capacity (0.02), the number of unemployed (0.002) and the natural increase of the population (0.008) are less than 5%, which means that the coefficients are statistically significant and that the estimated regression model is valid and can be written as:

Y = 974.35 + (16.60 x X6) + (-0.85 x X3) + (9.13 x X2) + ei (2)
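As a purely numerical illustration of equation (2), the reported coefficients can be wrapped in a small Python function; the input values below are invented for demonstration and are not taken from the study's data:

def predicted_tourists(x2_natural_increase, x3_unemployed, x6_accommodation):
    # Equation (2): Y = 974.35 + 16.60*X6 - 0.85*X3 + 9.13*X2 (error term omitted)
    return 974.35 + 16.60 * x6_accommodation - 0.85 * x3_unemployed + 9.13 * x2_natural_increase

# Illustrative inputs only: 1,000 accommodation places, 2,000 unemployed, natural increase of -300.
print(predicted_tourists(x2_natural_increase=-300, x3_unemployed=2000, x6_accommodation=1000))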
7,548.8
2021-12-30T00:00:00.000
[ "Environmental Science", "Geography", "Economics" ]
Study of Eclipsing Binaries: Light Curves & O-C Diagrams Interpretation

The continuous improvement in the observational methods for eclipsing binaries, EBs, yields more accurate data, while the development of the analysis of their light curves, that is magnitude versus time, yields more precise results. Even so, and in spite of the large number of EBs and the huge amount of observational data obtained mainly by space missions, the ways of getting the appropriate information on their physical parameters etc. are either from their light curves and/or from their period variations via the study of their (O-C) diagrams. The latter express the differences between the observed, O, and the calculated, C, times of minimum light. Thus, old and new light-curve analysis methods of EBs to obtain their principal parameters will be considered, with examples mainly from our own observational material and its subsequent light-curve analysis using either old or new methods. Similarly, the orbital period changes of EBs via their (O-C) diagrams are referred to, with emphasis on the use of continuous methods for their treatment in the absence of sudden or abrupt events. Finally, a general discussion is given concerning these two topics as well as a few related subjects.

Introduction
A lot of time has passed from the primitive observations of EBs made with the naked eye to today's space surveys. In the meantime, small or large telescopes equipped, or not, with photometers or CCD cameras were used. Later, due to the great technological progress, earth-based observations were carried out with robotic or automatic photometric telescopes, while today observations of EBs are received from the various space missions. The latter provided us with a huge amount of data, and it was then realised that EBs are not rare. See, for example, some photometric survey results of OGLE, MACHO, EROS-2, etc., not to mention the number of eclipsing binaries from Kepler. To be more specific: a catalogue of 1575 contact EBs fainter than I = 18 mag was identified in the OGLE-I database in the selected directions toward the Galactic bulge and the Galactic bar [1]. Meanwhile, 11,589 EBs have been identified in the Galactic disk fields from the OGLE-III survey [2], while over 450,000 EBs and ellipsoidal binary systems have been detected towards the Galactic bulge by the OGLE project [3]. Moreover, a search for EBs in the central regions of the SMC and LMC collected from OGLE-II showed 1500 and 3000 eclipsing stars in these two galaxies, respectively [4]. The foregoing numbers became 8401 and 40,204 from the OGLE-IV survey [5], while 493 were new discoveries in a catalogue of 1768 EBs detected in the outer region of the LMC by EROS-2 [6]. Similarly, the CoRoT space mission, equipped with four CCDs, collected 177,454 light curves, from which 2269 EBs were detected [7].

The benefits from the study of EBs were well known long ago, and this is the reason for their continuous observation. Much information on stellar structure and evolution can come out of them; especially important is their role in providing fundamental data for stars, such as masses, radii, etc. The latter are obtained via the study of their light curves, while the former comes from the study of their orbital period variations. Furthermore, EBs have been used to trace anomalies or differences in the brightness over a stellar surface, while it is possible to determine empirical limb-darkening coefficients or gravity-darkening exponents.
Finally, as an eclipsing pair may consist of similar or absolutely different stars, it is a challenge for the stars' evolution. To the foregoing mentioned benefits, two more and very important were added: (a) EBs can be used as distance indicators if they are found in clusters or other galaxies except our own. Thus, the distances of SMC [8,9] and LMC [10] have been measured, as well as this of the Andromeda galaxy [11,12] and that of the Triangulum galaxy, M33 [13,14]; (b) EBs can provide useful information for other planetary systems, since the recent results from the Kepler mission yield the discovery of planetary companions, transiting exoplanets [15,16], which can provide useful information for other planetary systems. On the other hand, in spite the large number of eclipsing binaries and the huge observational data obtained mainly by space missions, the ways of getting the appropriate information for their main physical parameters etc. is either from their light curves and/or from their period variations via the study of their (O-C) diagrams. For this reason, the main scope of the present is to pass briefly through what has been done up to now concerning the light curves treatment of EBs, while the ways of facing their orbital period changes is examined, too. Light Curves of Various Ebs and Models for Their Analysis Different models have been proposed so far to explain the variations observed in light curves of Ebs, which consist of a gravitationally connected pair of stars, and in which the inclination of their orbital plane to the line of sight is such that permits mutual eclipses to be detected. Classical Ebs are usually divided into three main categories based on the Roche model, which is used in all codes or programs used today: detached, D, semi-detached, S-D, and contact systems, C. Moreover, a historical division, which is also very often used, is that of: Algols for D type, β Lyrae for S-D type and W UMa for contacts, from the prototype of each category observed. Moreover, contact binaries were divided to two sub-groups, A-type and W-type, according to which star is eclipsed during the primary minimum. Moreover, other systems are characterized as near contact, marginally contact, and over-contact ones. To the foregoing basic division, some other classes and/or sub-classes have been reported during the last 15 years, based either on earth data as are the oEA systems, i.e., EBs of Algol-type where one of the components is an oscillating star, (e.g., as referred to in [17][18][19][20][21][22]), or on data from space missions, as is for example the totally eclipsing Algol-type system whose primary component shows over 50 pulsation frequencies [23]. To them, EBs with very low mass ratios [24] could be added, as well as hot Algols in our Galaxy and in SMC and LMC. As concerns the light curves analysis of EBs to get the fundamental elements of the two component stars, various programs have been proposed so far. Starting from the simplest spherical model, where the two members of a binary were considered spheres being well inside their corresponding Roche lobes, soon the two stars' shape changed to ellipsoid, which is more realistic since the two components should be considered deformed because of their axial rotation, their mutual tides etc. Thus, various programmes have been developed for getting the two stars' fundamental parameters. 
Since it is not possible to mention all of them, we restrict ourselves to the best known and most widely used, which are the Russell-Merrill model [25], WINK [26], and the frequency domain method, FDT. The latter was described in a series of papers and then in a book [27], and these three methods can be characterised as the old ones. Of the old methods, FDT is the most used, since in the last decades of the 20th century it was new and quite easy to use, yielding the computation of the most significant elements of EBs. Besides, it could be applied to any kind of system (D, S-D, or C), as well as to any kind of eclipse (total or partial). The simplest to be analysed were detached systems, but S-D and contact systems were also analysed after taking into account the photometric perturbations, as given in [28,29], and the tidal and rotational distortion inside and outside eclipses for any kind of them, as described in [30,31]. A large number of classical EBs of every type were analysed with this method, a sample of which can be found in the works [32][33][34][35][36][37][38]. At the same time, the development and new capabilities of computers permitted many researchers to write their own programs or codes, like LIGHT [39], LIGHT2 [40], the BINARY MAKER [41], and many others such as EBOP [42], and this is presented in [43]. Even so, the best known and most used of such codes is the W-D code [44], which is still in use after passing through some modifications and improvements [45], while the latest and much more powerful program is PHOEBE [46], which is continuously improved, too. The foregoing codes are characterised as new methods of light-curve treatment of EBs. Of them, the W-D code and PHOEBE are mostly used due to their continuous improvement. Besides, it should be mentioned that sometimes other programs are first used to get a set of preliminary elements for a particular eclipsing binary before applying the W-D code or PHOEBE. For example, the BINARY MAKER is very often used, e.g., in the cases of AK Her [47] and II UMa [48]. Similarly, a simple spherical model and the EBOP code were used for WR 20a [49]. Further, the W-D code was used to get solutions for some of the newly discovered EBs outside our own Galaxy from the various space missions.

The big difference between the old and the new methods is that in the old ones the programs used really analysed an observed light curve to get the two stars' basic elements, while what characterizes the new methods is synthesis, described in detail by many authors in [50,51]. Furthermore, for eclipsing binaries in cataclysmic variables, CVs, in low-mass X-ray binaries, LMXBs, or high-mass X-ray binaries, HMXBs, in Wolf-Rayet stars, WR stars, in symbiotic systems etc., similar programs and codes have been developed, too. The difference from those already mentioned is that they have been constructed in such a way as to be suitable for facing the peculiarities of these kinds of close binaries, as are, for example, atmospheric eclipses for WR-O stars, or an accretion disc around the gainer component. Thus, these codes were prepared to be in agreement with the proposed theoretical models, as are, for example, those described in [52,53], suitable for spots and for CVs, respectively.

Albedo, Limb and Gravity Darkening
It is worthwhile to mention that in all of the foregoing programs and codes a number of separate functions, or routines and sub-routines, are used to compute various needed quantities, while standard values are given to some elements.
Besides, as regards the model atmospheres used, except the classical black body model others are also used as the Kurucz, or the BaSel ones, while albedo, limb-darkening coefficients, and gravity-darkening exponents are kept constant giving them their theoretically values. So, albedo is taken equal either to 1.0 for hot or 0.5 for cool stars, respectively. As concerns limb-darkening coefficients, various tables exist in the literature [54][55][56][57]. Regarding the gravity-darkening exponent, β, its standard values are: β = 0.25 for purely radiative transfer and β = 0.08 for stars with convective envelopes. Moreover, the phenomenon of gravity darkening attracted the interest of many investigators like [58,59] etc., while it was examined for some specific stars like Algol [60]. In real stars, a smooth transition is achieved between the two energy transport mechanisms and, thus, β can take all intermediate values, as referred in [61]. The latter was confirmed from the analysis of the observational data of a number of EBs of S-D type, as shown in [62,63]. In the meantime, and since, of all classical EBs, those of the most interest are contact systems, because of the mutual interaction between the two components, many different models have been proposed especially for them, among which the following are mentioned: the contact discontinuity model, DSC, the thermal relaxation oscillation model, TRO, and the angular momentum loss model, AM. Dark Spots and Corresponding Dark-Spots Models On the other hand, some anomalies yielding to asymmetric light curves were faced as due to spots, dark spots, i.e., cooler than the surrounding photosphere. The first complete description of the effects of circular or elliptical spots at any longitude and latitude from rotating spherical stars was given by [64], followed by the development of many codes especially for EBs of RS CVns-type. From the big number of such programs developed and used by individuals or groups of researchers we are limited to [65,66] only, since in the meantime all of the so-called new analysis methods were taken into account circular dark spots, to be used for stars with convective envelopes, i.e., capable to develop magnetic activity like our Sun. The today used spotted models usually considered one or two circular spots, while the computed parameters of the spots are their radius, location, (longitude & latitude on the star's surface), and temperature difference from the surrounded area [67]. A lot of such studies exist concerning the light curves analysis of many individual stars, while general information for spots on the surfaces of single and binary stars can be found, too, as in [68]. Even so, there were many problems using these so-called spotted models with the biggest being that of the uniqueness of solution, for which much discussion has been had, because of the pretty good light curves fitting received from various spot(s) sizes and/or spot(s) positions. On the other hand, and with the aid of spectroscopy it was possible to follow the line profiles due to spot(s) activity over the course of the stars period of axial rotation. Thus, it was found that, in classical EBs, dark spots were detected on the surfaces of the late type component of Algols, and on one or both members of W UMa-type stars. Similarly, they were detected on CVs, or on one or both members of RS CVns-type binaries. However, not all of this class of variables belong to EBs. 
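As a very rough illustration of how a cool circular spot is parameterized in the models just described (radius, position on the surface, temperature contrast), the maximum flux deficit of a single spot on a slowly rotating star can be approximated as below; this toy estimate ignores limb darkening, foreshortening away from disc centre and eclipse geometry, so it is only a hedged sketch of what the full spotted codes compute, and the numerical values are invented:

import numpy as np

def max_spot_dimming(gamma_deg, t_spot, t_phot):
    # Fractional flux drop when a circular spot of angular radius gamma faces the observer:
    # projected covered fraction sin^2(gamma) times the bolometric contrast 1 - (Tspot/Tphot)^4.
    gamma = np.radians(gamma_deg)
    return np.sin(gamma) ** 2 * (1.0 - (t_spot / t_phot) ** 4)

# Illustrative values only: a 15-degree spot, 500 K cooler than a 5500 K photosphere (~2% dip).
print(max_spot_dimming(15.0, t_spot=5000.0, t_phot=5500.0))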
The existence of dark spots on the surface of the foregoing mentioned type of binaries was confirmed from simple and/or continuous photometric observations, as well as with high-resolution spectroscopic data. So, many stars were monitored, and their data were subsequently analyzed with simple spotted models. Great has been the aid of APT or RT towards this direction, as various groups of scientists all over the world made continuous photometric observations to specific group of stars and especially to magnetic active ones. Spots are usually connected to flare activity that has been detected in many single or binary variables and many reports there exist for individual stars, while a homogeneous sample of all flares occurred in active stars from the EUVE data has been presented, too. Similarly, flares have been detected in some EBs either from ground-based observations, or from space. Regarding the first, from the big number of individual EBs referred, here, only the four [64][65][66][67][68][69][70][71][72] are mentioned. They concern the stars X Tri, GSC 2314-0530, and GJ 3236 from a campaign between 2014 and 2016, and BX Tri from a campaign between 2014 and 2017, respectively. As regards space observations, only two are mentioned concerning close binaries observed by Kepler [73,74]. Period Variations Our knowledge of period changes in close binaries is mainly and almost exclusively based on EBs and especially on the analysis of their (O-C) diagrams, which carry much and valuable information. Thus, it is very important to distinguish the apparent from the real orbital period variations [75], while it is also very important the way with which such a diagram is constructed and analysed. Observations have shown that in general there are small but definite orbital period variations in a close binary due to various reasons. Strictly periodic and of alternating sign period changes for example can be caused by apsidal motion or by the presence of a third body in a close pair, (apparent changes). They are easily recognizable one from the other, since the primary and secondary minima behave differently in each one of these two cases. Both apsidal motion and light time effect have been investigated for many years and remain of interest among various researchers generally and from the theoretical point of view, as in [76,77]. As regards the apsidal motion hypothesis in an (O-C) diagram, it could provide remarkable results, as in the case of DI Her [78]. For this star various hypotheses were made to explain the disagreement between theory and its very slow rate of apsidal motion, which had been interpreted even as a possible failure of the theory of general relativity, GR. In Ref. [78], the problem was solved via a more detailed technique with the final result to be in good agreement with GR theory. Concerning the light-time effect, it is produced by the presence of a third companion in the eclipsing pair. From the big number of studies on individual EBs there is evidence that many close binaries have distant tertiary companions [79][80][81][82]. Current observational estimates suggest that about more than 30% of all binary stars are in triple systems, while according to [83], the abundance of a third body in W UMa-type was found to be much greater than the estimated values concluding that most contact binaries exist in multiple star systems. General information for triple and multiple systems can be found in the updated Multiple Star Catalogue, MSC [84]. 
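To make the light-time effect discussed above concrete, a minimal sketch for the simplest case of a circular third-body orbit is given below; the eccentric-orbit terms used in real (O-C) studies are omitted, and the parameter values are purely illustrative:

import numpy as np

AU_LIGHT_SECONDS = 499.005  # light-travel time across 1 au, in seconds

def lite_delay_days(t, a12_sini_au, p3_days, t_conj):
    # Light-time effect of a circular outer orbit:
    # delay(t) = (a12 * sin(i) / c) * sin(2*pi*(t - t_conj) / P3)
    amp_days = a12_sini_au * AU_LIGHT_SECONDS / 86400.0
    return amp_days * np.sin(2.0 * np.pi * (t - t_conj) / p3_days)

t = np.linspace(0.0, 20000.0, 5)  # times of minima in days (illustrative)
print(lite_delay_days(t, a12_sini_au=2.0, p3_days=3650.0, t_conj=0.0))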
Moreover, investigators who combined old earth based results with the latest space missions data were able to study these two phenomena in some particular EBs in both Magellanic clouds, e.g., the apsidal motion of three eccentric EBs in the LMC [85], the apsidal motion for 13 eccentric EBs in the LMC [86], and the light-time effect in some bright and some massive EBs in the SMC [87,88]. On the other hand, it is noteworthy to mention the perturbing effects of a third companion forming a hierarchical triple system with a close eclipsing binary discussed in [89,90]. In these cases, it seems very possible that the dynamical interaction of the third body could cause real and not apparent orbital period variations, as a simple light-time effect. Moreover, from the continuous and long-term observation of various types of binary systems it was found that a large amount of EBs exhibits real orbital period variations, which in some cases are of quasi-periodic nature [91]. This latter kind of orbital period behaviour has been detected in Algols and W UMas from the classical EBs, as well as in eclipsing RS CVns and CVs. Various theories have been proposed to explain the origin of the real orbital period variations with first the mass and angular momentum transfer and/or loss, e.g., [92,93] etc. This can be achieved through stellar wind, via the second Lagrangian point L2, or through a process in which when one of the two components fills its Roche lobe matter is transferred through the inner Lagrangian point L1 to its mate, (Roche lobe over flow, RLOF), if restricted to usual cases and not to sudden catastrophic events like novae or supernovae. These mechanisms act on different time scales produce short or long-term orbital period changes related to their evolution. So, theoretic al estimates of AML based on different assumptions for magnetic braking law for binaries in general and with solar-type components have been carried out [94][95][96][97][98]. Similarly, average mass transfer rates have been derived for Algols [99,100], for CVs, and other binaries [101,102], while a method to control the existing numerical instabilities during mass loss overflow in contact binaries has been proposed [103]. To the foregoing mentioned mechanisms to explain the real orbital period changes of EBs, and of close binaries in general, the development of magnetic activity cycles was added [104,105]. Because a theory to explain period changes as a consequence of magnetic cycles that may be periodic or quasi-periodic had been suggested [106]. Great has been the aid of APT or RT towards this direction, as scientists all over the world were able to observe continuously specific group of stars and especially the magnetic active ones. Thus, clear evidence of long-term activity circles was detected, as well as evidence of spot migrations. From the big number of such studies only two are referred [107,108]. The Construction of an (O-C) Diagram The basic problems in the construction of an (O-C) diagram are the quality of the observational material and the time interval they cover. A general discussion is given in [109], while many studies concerning the (O-C) diagrams of various stars can be found in [110]. Moreover, the efficiency of (O-C) diagrams as diagnostic tools for long-period variations have been examined [111,112], too. 
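As a back-of-the-envelope illustration of the conservative mass-transfer mechanism referred to earlier in this section, the standard relation Pdot/P = -3*Mdot_donor*(1/M_donor - 1/M_accretor) for fully conservative transfer can be coded as follows; the input values are invented for the example and do not refer to any particular system:

def conservative_period_change(p_days, m_donor, m_accretor, mdot_donor):
    # dP/dt (days per year) for fully conservative mass transfer.
    # m_donor, m_accretor in solar masses; mdot_donor in solar masses per year (negative while losing mass).
    # The period grows when matter flows from the less massive to the more massive star.
    return -3.0 * p_days * mdot_donor * (1.0 / m_donor - 1.0 / m_accretor)

# Illustrative Algol-like case: 1.2 + 3.0 Msun, P = 2.5 d, donor losing 1e-7 Msun/yr.
print(conservative_period_change(2.5, m_donor=1.2, m_accretor=3.0, mdot_donor=-1e-7))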
Moreover, it is clear that long time intervals together with very small orbital periods mainly for contact binaries might yield to wrong results and conclusions, as for instance in the case of AK Her, whose light curves analysis and period study are described in [47]. Moreover, since the values of the C s are calculated according to an ephemeris formula, the shape of an (O-C) diagram is strongly depended on it, as is clearly demonstrated in the (O-C) diagram of AB And [113]. For this reason, some investigators use different ephemeris formulae to detect the existence of a possible hidden periodic term. Furthermore, two distinct small problems have to be also considered: (a) to test the significance level of possible orbital period changes and (b) to extract the precise form of them, since the presence of small random variations might be intrinsic to the star. This must be considered very seriously especially after the realization that some primary components of Algol type EBs are pulsating variables, i.e., they belong to the oEA sub class of Algols as already referred [17][18][19][20][21][22]. Ways of an (O-C) Diagram Analysis As regard the (O-C) diagrams analysis, the well-known linear and piecewise approximation, or step variation, as well as the quadratic one, i.e., the parabola fitting, were the first used to analyse the (O-C) diagrams of EBs. They might be good enough for some EBs, but not for all. So, later in contrast to these old methods, some new, continuous methods, have been proposed, e.g., [114][115][116]. It is mentioned in [114] that spline interpolation is used to join the various sub regions of the (O-C) diagram, while [116] is the last from a series of papers, and thus the former can be easily found. As regards the work of the first investigators, a first application was made to AB And [113], while many others followed by various researches among whom some are mentioned [117][118][119][120][121][122][123][124][125]. In the application to AB And [113], it is clearly shown that from two different (O-C) diagrams -constructed using two different ephemeris formulae-the same results come out for the orbital period changes of this system analysed with the continuous method proposed in [114]. On the other hand, assuming that the orbital period variations are due to magnetic activity only, there is possibility to find the variation of the magnetic field of the active component through a new method, namely variable sine algorithmic analysis (VSAA) [126]. It analyses the orbital period in the joined time-frequency domain, and thus provides an accurate description of the time variation. This makes the method suitable for tracing variable periodicities and applicable not only to EBs, but to other stars and phenomena like the Blazhko effect, the solar spot cycles etc. as is demonstrated in [127,128], while similar properties are referred to have the method described in [129]. Discussion After John's Goodricke idea in 1783 that eclipses can be a possible explanation for β Persei light variations, which was later spectroscopically confirmed, it was realized the importance of EBs. This early recognition of EBs significance yield to their continuous observations, since the studies of EBs is made either through their light curves or via the times of their minimum light. Binary stars and especially EBs have offered a lot in our understanding the structure and evolution of stars in general. They have, thus, been the subject of many theoretical as well as observational studies. 
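As a concrete counterpart to the (O-C) treatments discussed above, a minimal sketch of the classical approach is given below: the (O-C) values are built from a linear ephemeris C = T0 + P*E, and a parabola is fitted, its quadratic coefficient giving the secular period-change rate (dP/dt = 2*a2/P). The times of minima used here are synthetic, generated only so the example runs:

import numpy as np

def o_minus_c(t_min, t0, period):
    # Cycle numbers E and O-C residuals for observed times of minima, from a linear ephemeris.
    epoch = np.rint((t_min - t0) / period)
    return epoch, t_min - (t0 + period * epoch)

def secular_period_change(epoch, oc, period):
    # O-C = a2*E^2 + a1*E + a0 for a uniformly changing period; dP/dE = 2*a2, so dP/dt = 2*a2/P.
    a2, a1, a0 = np.polyfit(epoch, oc, 2)
    return 2.0 * a2 / period

# Synthetic example: a P = 0.35 d contact binary with a slowly lengthening period.
t0, p = 2445000.0, 0.35
e_true = np.arange(0, 30000, 500)
t_obs = t0 + p * e_true + 0.5 * 3e-10 * e_true ** 2   # quadratic ephemeris (noise omitted)
epoch, oc = o_minus_c(t_obs, t0, p)
print(secular_period_change(epoch, oc, p))            # recovers ~8.6e-10 days per day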
Besides, many Books have been written about from which the following for binary systems in general are mentioned [130][131][132]. Similarly, EBs were the main subject of many conferences, at some of which the developed programs for their light curves analysis were firstly presented, as is the LIGHT-2 and the BINARY MAKER in [40] and [41], respectively. On the other hand, the great space missions, with various subjects as main goals, provided us with a huge amount of data, and methods had to be developed to identify EBs out of them, as proposed in [133]. As a result, hundreds of new light curves of EBs not only in our own galaxy, but in the Magellanic clouds as well as in the Andromeda galaxy have been discovered. Similarly, a huge amount of minima times was provided by the last years' space surveys. In spite of the huge amount of data, both the light curves and the (O-C) diagrams of EBs are treated using old well-known methods, which although improved or modified have the same basis. This was the main reason for which these methods were briefly discussed here as an honour to the pioneers of this subject and as recognition of their contribution. It is amazing to compare the past with the present in photometry of EBs. In the past early observers, using their simple photometers, could observe only one eclipsing binary each time, while today thousands of photometric data and light curves of EBs are available in various databases. Some of these data have been already analysed, yielding to some interesting results, as is the observed Doppler boosting in some Kepler light curves [134]. Similarly, it is astonishing to think of the near future when huge amounts of data of the order of terabytes, or more, will need to be analysed, even if the first clearing will be made automatically, as proposed [135]. However, their storage, security, and cost have to be very seriously considered. Moreover, our main concern has to be what new information will be added to our knowledge of EBs from the analysis of such huge amounts of data except statistics. Regarding the methods used to get the principal parameters of the two components of an EB via their light curves, we characterized them as old and new ones. The big difference between the old and the new methods is that in the old ones the various used programs were really analysed an observed light curve to get the two stars' basic elements, while the basic characteristic of the new methods is synthesis, as already referred. The word synthesis used in this case is rather a good choice. It comes from the Greek word σύνθεσις, originating from the verb συνθέτω = συν-θέτω, meaning to put things together. Indeed, in this approach, the investigator choosing the values of some particular parameters and leaving the rest free, tries via the code to get the best fitting for the light curves of a particular EB, achieved when all elements are put together. It is similar to the work of a music composer-συνθέτης in Greek, i.e., a word of the same origin-who tries to achieve a nice music result by putting many different organs to play together. Moreover, and although the FDT method was the most used from the old ones, all others were also used. For example, the WINK was used for the light curve analysis of GO Cyg [136] and WZ Cyg [137], while the light curves of WZ Cyg were also analysed with W-D code. Similarly, the light curves analysis of AG Per referred in [33] was carried out with three different of the characterized as old methods. 
On the other hand, some irregularities like the O'Connell effect detected in the light curves of EBs were confronted using various dark spots models, as for instance these presented and described in [66,67]. Similar phenomena, i.e., dark spots, were detected in EBs from the various space missions, as these are referred to in [138,139]. Moreover, and for the completeness of our task, is mentioned that except dark spots, bright and/or hot spots are also used to explain some of the observed light curves anomalies, in agreement with the theoretical models, since the physics of the two kind of spots (dark or hot) is absolutely different. Moreover, other irregularities are faced using a disc model. Among the many existing examples, two are mentioned here, namely the case of DL Cygni [140], and that of the eclipsing symbiotic AR Pavonis [141]. Today, both the characterised as new codes, that is, W-D and PHOBE are widely spread and almost exclusively used. For example, PHOBE was used for the light curves analysis of HIP 12039 [142]. Further, it is worthwhile to mention that the special code of PHOBE prepared for the determination of the principal parameters of detached EBs from OGLE project [143], had to be modified to be used for EBs from Kepler mission [144], because of the superb quality of the observational material of the latter. Concerning the W-D code, it was used to get solutions for some of the new discovered EBs in LMC and SMC as well as for the light curves analysis of some EBs from OGLE. Thus, although the light curves of the new discovered EBs from the various space missions are analysed using mainly W-D code or PHOEBE, the use of other known programs and codes to which we referred here cannot be excluded, as the case of the eclipsing binary WRa, already mentioned [49]. On the other hand, the treatment of (O-C) curves of EBs was discussed, since it is connected with orbital period changes and though it with their evolution. For this reason, some theoretical works connected with the evolution of various kinds of EBs, or for close binaries in general, were mentioned, too because systematic mass loss, mass exchange, possible existence of magnetic cycles etc. are associated to the long-term secular period variations. Moreover, relations connecting the rate of orbital period variation with the significant parameters of their evolution have been developed under non conservative conditions in a fundamental level of description [145]. Moreover, the scenario according to which contact EBs will merge in one single star was confirmed from direct observations [146], while the very low mass ratios of some of these systems had made investigators to suppose and expect such a result long ago. Concerning the influence of possible existed spots on the surface one of the components in the (O-C) diagrams of close binaries it has been examined some years ago [147], while recently this subject was also discussed together with the possible existence of third components in 41 EBs from Kepler survey [148]. Moreover, according to [149] these first findings of Kepler mission support the idea that the formation of close binaries involves the deposition of angular momentum into the orbital motion of a third component. As regards the detection of a third companion in an EB from its (O-C) diagram, it is made through the light time effect, as referred, and the findings has shown that tertiary is a quite common phenomenon. 
From the many such studies, the spectroscopic search for faint tertiaries in contact EBs is mentioned [148], as well as the newly proposed method to determine compact triples [150]. On the other hand, it is interesting to find out the nature of the third body. In most cases it was found to be a star, but there are cases where the third body has sub-stellar mass, as reported in [151,152]. This is in agreement with the early results from the Kepler mission, which yielded the discovery of stellar and planetary companions in binaries, as already referred to in [15,16]. Moreover, the case in which one of the two components of an EB is a pulsating star was also mentioned, since this phenomenon was detected in some Algol-type systems: the oEA class, based on earth observations, as already referred to in [17][18][19][20][21][22] and in [153]. Similarly, EBs with pulsating components have been reported from space mission results, e.g., with a δ Scuti type or a hybrid δ Scuti component [154], as already mentioned in [23], or a Cepheid (the eclipsing Cepheid OGLE-LMC-CEP-0227 in the LMC) [155], while other cases have also been reported [156]. Furthermore, some irregularities detected in the light curves of some EBs were treated by supposing the existence of an accretion disc around the gainer component. A search for eclipsing binaries that host discs has been made [157], and only one of the many existing examples is given here [158], related to the new class of DPVs. This does not concern a new category of EBs discovered by space missions, but hot Algols. From this, as well as from the new class of nascent EBs with extreme mass ratios [24], we expect to learn much about their structure and evolution.

As a conclusion, it can be said that the existing codes for light-curve analysis/synthesis are good enough for getting at least a first set of the fundamental parameters of EBs, while some modifications may be necessary. Such modifications will help to confront not only some observational irregularities or anomalies, but also other problems, e.g., the superb quality of the light curves from the Kepler mission. As concerns the (O-C) diagrams, although sudden or abrupt changes cannot be excluded, they are worth treating with great care, especially when one of the components is a pulsating star. Moreover, the longer the time interval covered, the more reliable the results that will be obtained.

Funding: This research received no external funding. Conflicts of Interest: The author declares no conflict of interest.
THE ROLE OF THE PERFORMANCE DASHBOARD IN THE MANAGEMENT OF MODERN ENTERPRISES Nowadays, the management of any modern enterprise requires a real-time information system which allows the continuous and quick display of the data that is critical for steering the company in the current economic context. The performance dashboard is such an information system. It is made up of quantitative, qualitative or financial management indicators or, in other words, of highly significant pieces of information put together, which have immediate meaning for the person reading them. We must take into consideration the fact that the data is extremely dense and in continuous movement, being used in forming plans, in supporting decisions and in exercising control. The quality of decisions and the achievement of performance depend on the quality of the information supplied. That is the reason why, in order to be useful for the decision-making process, the information must be reliable, up to date, complete, pertinent and accessible for the decision makers. Thus, an efficient performance dashboard is one that allows the assessment and management of performance along the avenues of progress set out in the strategy, and that helps the management face the changes and challenges of the current economic climate.

General information concerning performance dashboards. Use, functions, principles The performance dashboard developed at the level of the enterprise encloses global, aggregate information which describes the evolution of the enterprise's strategic orientation. The evolution of activities specific to different fields of responsibility is described in the inventory dashboard. The performance dashboards drawn up for the lower levels of the hierarchy are consolidated and condensed into the dashboards for the upper levels. For the highest level of the hierarchy, the dashboard includes a general view of the enterprise's management: a presentation of the achievements reported against the action plan, with the control of management and the analysis of the economic and social indicators as its main objective (Tabără et al, 2009). The unity of the dashboards, considering the characteristics of modern management, can be suggestively presented as in Figure 1.

Figure 1: Dashboard "offer" considering the characteristics of modern management. Source: Albu, N., Albu, C. (2003), Instrumente de management ale performanţei, Volume II Control de gestiune, Economic Publishing House, Bucharest, p. 126.
The performance dashboard is, above all, a tool for controlling action and responsibilities. From this point of view, its main virtue is that it produces information almost instantaneously and makes it possible for the main people in charge to act in due time. Thus, we can point out the main functions of the dashboard:
- informing the manager on the state of the department he/she is running;
- warning of any unfavourable situation or deviation from normality;
- assessing the results achieved in the endeavour to reach the objectives and, implicitly, assessing the quality of the decisions made and of the actions taken to make these decisions operational;
- the decision-making function, meaning that pertinent information sent in due time to the managers placed on different levels in the company chain of command allows them to substantiate and make proper decisions. (Caraiani & Dumitrana (coord.) et al, 2005)

In order to fulfil these functions, the management dashboard must observe the following principles:
- its architecture must coincide with the structure of the enterprise; if action is organised according to the chain of command, the architecture of the information system will have to follow the management structure;
- in order to achieve the desired dashboard, threshold values will have to be set for each indicator, values which have to be watched by those in charge; the information to be provided will have to be defined, and the rules for using those indicators will have to be determined (a small illustrative sketch follows the construction stages below);
- for each level, the dashboard must also include some collateral information for a better fulfilment of the tasks assigned to the responsibility centres;
- the dashboard must keep an open perspective on the competition; it must take the performance of the best competitor as a reference in guiding the company's actions;
- as it is a decision-making tool, the design and the content of the dashboard must be adapted to the personality of its user;
- a high-performance dashboard supplies indicators in real time, as well as their history, which should allow the anticipation of events and the timely activation of those in charge;
- the periodicity of the indicators must be adapted to the frequency of analysis and to the action capability of those in charge; this periodicity must allow for a timely reaction;
- the indicators listed in the dashboard for the supervisor must coincide with the ones for the subordinates.

The dashboard building process The whole process of drawing up the dashboard of a company involves the stages summarised in Table 1.

Table 1: The construction stages of the dashboard
1. Drawing up the management flow chart and naming those in charge of devising and ensuring the logistics necessary for the dashboard to be functional:
- the dashboard must follow the existing organizational structure, and not the other way round;
- the organizational structure must be clear and coherent;
- in order to draw up the management flow chart, one must set the responsibilities and the functional and informal hierarchical chains, set the appropriate methods, and assign the responsibilities according to the objectives set.
2. Setting the objectives (the key points in the decision-making process):
- the objectives pertain to the enterprise itself as well as to the tasks of drawing up, filling out, sending and using the dashboard;
- they are expressed using quantitative and/or qualitative indicators for the enterprise as well as indicators specific to the dashboard;
- the objectives underline the purpose for which they were drafted, and the enterprise must work towards them as a whole as well as at the level of each of its procedural and structural units;
- setting key points in the decision-making process aims at selecting the main tasks and objectives; starting from the strategic objective of the enterprise, the sub-objectives are set according to the type of centre (cost centre, profit centre, etc.).

3. Drafting a list of tasks, competences and responsibilities:
- specific to each functional and operational department;
- the purpose is to allow the personnel to receive the information necessary for achieving their own objectives as well as the objectives of the other departments;
- for some of the indicators there is information readily available, and consequently they can be determined; a problem occurs when updated information is lacking or there is no immediate source that can be used; in this case, based on studying the existing data, we resort to estimations and extrapolations.

6. Typing the dashboard and using the information:
- in order to become an efficient tool for management control, the dashboard has to list the information in the way and at the time set in advance, and it must be adapted to the characteristics and the information needs of the enterprise;
- in drawing up and implementing the dashboard, questionnaires and interviews are usually used as the working methodology, and the existing management tools are analysed;
- drawing up the dashboard involves establishing beforehand the way in which the information should be presented, the periodicity of drafting the dashboard and of updating the information, and the person in charge of drafting it;
- it is recommended to avoid abbreviations and to mention the units of measure for the indicators;
- the information supplied must be clear and easy to grasp for the reader; to this purpose, it is recommended to use tables and charts and to present them in "appealing" colours;
- the dashboard, through the information it includes, must allow the users to take immediate and efficient action.

Source: processed according to Caraiani, C., Dumitrana, M. (coord.) et al (2005), Contabilitate de gestiune şi control de gestiune, InfoMega Publishing House, Bucharest, pp. 433-437.

Three further requirements apply to the dashboard as a whole:
- frequency: it refers to the deadline for drafting the dashboard and to the speed of disseminating it, and it depends on the duration of the life cycle of the decisions and actions taken in that centre;
- efficiency: it indicates the dashboard's capability of leading to action, of analysing, correcting and interpreting the deviations, and of taking corrective measures if necessary;
- standards: they are the objectives, the results of the previous actions and the hypotheses based on which the deviations are set and analysed.
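To make the threshold principle mentioned above concrete, the sketch below shows one possible way of representing an indicator with a target value and an alert threshold. The indicator names and all numbers are hypothetical, and the "warning"/"balance" states simply mirror the warning and balance roles of indicators discussed later in this paper; Python is used purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """A dashboard indicator with a target and an alert threshold (hypothetical values)."""
    name: str
    value: float
    target: float
    alert_threshold: float  # relative deviation from target that triggers a warning

    def status(self) -> str:
        deviation = (self.value - self.target) / self.target
        if abs(deviation) <= self.alert_threshold:
            return "balance"   # normal state
        return "warning"       # abnormal deviation; those in charge must react

indicators = [
    Indicator("monthly sales (kEUR)", value=950, target=1000, alert_threshold=0.10),
    Indicator("production cost (kEUR)", value=620, target=500, alert_threshold=0.10),
]

for ind in indicators:
    print(f"{ind.name}: {ind.value} vs target {ind.target} -> {ind.status()}")
```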
Types of performance dashboards used in enterprises In the specialised literature and in economic practice there are many types of dashboards, based on different classification criteria and on the desired information: limited, referring to a certain procedural or structural part, or broad, referring to all the aspects investigated. We can narrow the range of dashboards down to three common types: strategic, tactical and operational.

Strategic dashboards list information with a high degree of processing, ensure a quick overview of the way the organization works and make it easy to draw conclusions, following all the functions of the economic entity in a balanced way (Popa & Ionescu, 2004). An example of a strategic dashboard is presented in Figure 3.

Figure 3: An example of a business intelligence strategic dashboard containing actual vs. target analysis and recommended action. Source: BI Dashboards, http://www.bidashboard.org/types/strategic.html

The information supplied is used to monitor the progress of the company in achieving the preset objectives. The main performance objective of any enterprise is to fulfil its economic, social and environmental tasks. In order to achieve this objective, certain other derived objectives must be achieved first: ensuring the sustainable economic performance of the company by simultaneously managing the levels of uncertainty about the future; meeting the needs and expectations of those interested by creating value for the parties concerned; and ensuring sustainable development. The parties concerned with the achievement of these objectives are the shareholders, the clients, the users, the enterprise itself, the partners, the employees and the collectivity, the aim being to create value.

Tactical dashboards give more details on the information listed in the strategic dashboards, in order to identify trends in relation to the objectives and initiatives of the company. There are many factors and values which could be measured by these dashboards, but all of them are related to the preset objectives. An example of a tactical dashboard is presented in Figure 4.

Figure 4: An example of a business intelligence tactical executive dashboard in a hotel network. Source: BI Dashboards, http://www.bidashboard.org/types/tactical.html

Tactical dashboards are drafted for tactical purposes, for monitoring the actions taken within departments, projects, geographical areas, etc. Situated at an intermediate level, tactical dashboards have the role of facilitating the link between the strategic and the operational levels.
Operational dashboards. Unlike the strategic and tactical dashboards, which are conceived for and addressed exclusively to the managers, operational dashboards are used at the operational level (departments) and are addressed to the employees within these departments, and only seldom to the managers. These dashboards must allow data analysis, following the history of the data; based on the information gathered, those concerned can make decisions which should lead to the improvement of the present situation. An example of an operational dashboard is presented in Figure 5.

No matter the level at which it is drafted, the information included in the dashboard should be:
- consistent, which means relevant, synthetic and accurate concerning the field in question;
- accurate, meaning that it should highlight the economic phenomena and convey the information in real time;
- synthetic, with different degrees of aggregation according to the hierarchic level of the beneficiary;
- accessible, easy to grasp, clear, explicit;
- balanced, highlighting the economic phenomena according to their weight in that particular field;
- expressive, presented as suggestively as possible by means of tables and charts;
- adaptable, meaning that it should be easily adjusted according to the modifications occurring in the company's activity;
- economical, meaning that it should reflect the effects achieved compared to the efforts made.

The tool most frequently used in drafting dashboards is Microsoft Excel, although there are over 50 software products available. Excel has the advantage of being a familiar tool, easy to use for drafting, with quick results in a short period of time.

Performance control by means of the dashboard Control by means of the performance dashboard implies the definition of performance and pilotage indicators. By means of the performance indicators we can measure the level of performance achieved, while the pilotage indicators show how far the action plan has advanced. The place of the dashboard in the control process is presented in Figure 6.
There is a close connection between the two categories of indicators. From this point of view, the dashboard becomes a working tool. Four situations may occur in connection with these indicators (see Figure 7):
- the action plan was achieved and the performance was reached;
- the action plan was achieved, but the performance was not reached (possible cause: underestimating the effort necessary for reaching the objective);
- the action plan was not achieved, but the performance was reached (possible cause: the objective set was not ambitious enough);
- the action plan was not achieved, and the performance was not reached (this situation shows that the objectives were not reached because not enough effort was made).

The indicators chosen must be accurate and objective, must vary just as the phenomenon subjected to measurement does, must have identical significance in time and space, and have to be quickly calculated and able to be synthesised when passed to a higher level in the chain of command (Albu & Albu, 2003). The indicators must show the results of the actions:
- formal results: they imply comparing the achievements with the predicted data;
- derived results: they are the unpredicted consequences of the actions involved;
- implicit results: they lead to the modification of the competition game, referring to the strategic information.

Dashboards can be drafted and developed in order to meet a large range of requirements, starting from monitoring the strategic level of the enterprise and extending to monitoring and controlling the achievement of the operational objectives at the level of each department. In other words, in conceiving the dashboard we must take into consideration the stages of the management process: formulating the strategy (strategic objectives), planning the means for applying the suggested strategy, assigning the resources, drafting the budgets, the action, the reports, the monitoring, the control and the result analysis. Most types of dashboards existing on the market take into consideration the stages of the management process up to and including the monitoring stage. However, the importance of the analysis stage has come to be recognised, and consequently it has started to be included in the enterprise dashboard (see Figure 8).
Figure 8: Dashboards and Performance Management. Source: http://www.dashboardinsight.com/articles/digital-dashboards/fundamentals/dashboards-role-in-a-business-intelligence-solution.aspx

The studies performed (Kawamoto & Mathers, 2007), which aimed at highlighting the real requirements of users in the economic field concerning the building, design and offer of information included in the dashboard, identified a series of key success factors:
- a dashboard must be easy to draft with minimum effort, must have a logical structure and must offer quick results;
- certain measures must be defined using business terms relevant for the parties concerned, supplying a mixture of operational, financial and company-specific data;
- the dashboard should be a tool meant to facilitate the management process;
- the dashboard should allow revising and changing the data as often as necessary, according to the economic changes and to the new business conditions (the requirement which proved to be the most difficult to meet).

From the data presented above, it follows that the performance dashboard is a set of informational elements which should lay the foundation for the decisions made by the company managers, its role being to measure the distance covered, as well as to put the action programmes into practice while taking corrective measures in order to achieve the desired objectives (Muntean, 2006). Thus, the purpose of the dashboard as a management tool lies in reaching the preset objectives. The dashboard, through the information it supplies, the way it is drafted and built, and its design, must allow the assessment and management of performance along the avenues of progress set out in the strategy; it must be a tool for performance management that keeps up with the changes and challenges of the current economic context.

The purpose of the performance dashboard is reached by obeying certain principles during the construction stage:
- coherence: the information offered by the performance dashboard should follow the company's chain of command, and departments with identical functions placed on the same level in this chain of command should have the same performance indicators, the same definition of the indicators and the same source of information;
- pertinence: the indicators chosen (the critical ones) should reflect the key points in the performance of the centres, and the data on which these indicators are based should correctly emphasise the achievement of the command centre objectives.

Figure 5: An example of an operational dashboard for a car manufacturer. Source: BI Dashboards, http://www.bidashboard.org/types/operational.html

Figure 6: The place of the dashboard in the control process. Source: Caraiani, C., Dumitrana, M. (coord.) et al (2005), Contabilitate de gestiune şi control de gestiune, InfoMega Publishing House, Bucharest, p. 438.
According to their role, the indicators can be:
- warning indicators: they signal an abnormal state;
- balance indicators: they highlight a normal state;
- anticipation indicators: they predict and anticipate possible operational trends, changes and decisions.

In order to ensure coherence and visibility in the system of indicators in the dashboard, they can be: financial indicators; activity indicators; cost-measuring indicators; profitability indicators; productivity indicators; specific indicators.

Figure 2: The functions of the dashboard (internal and external diagnosis tool; global tool, considering partial tools such as costs and budgets as its database; coordination of the actors and exploitation of creativity; a tool that provides dialogue and raises questions in order to find solutions; the need for "contingent", adapted, supple, quality tools; a flexible, adjustable, evolutionary tool). Source: Fernandez, A. (2008), Les nouveaux tableaux de bord des managers. Le projet décisionnel en totalité, Éditions d'Organisation, 4th edition, http://www.nodesway.com/methode/methode_GIMSI_concevoir_le_tableau_de_bord.htm
Search for dark matter produced in association with a single top quark in $\sqrt{s}=13$ TeV $pp$ collisions with the ATLAS detector This paper presents a search for dark matter in the context of a two-Higgs-doublet model together with an additional pseudoscalar mediator, $a$, which decays into the dark-matter particles. Processes where the pseudoscalar mediator is produced in association with a single top quark in the 2HDM+$a$ model are explored for the first time at the LHC. Several final states which include either one or two charged leptons (electrons or muons) and a significant amount of missing transverse momentum are considered. The analysis is based on proton-proton collision data collected with the ATLAS experiment at $\sqrt{s} = 13$ TeV during LHC Run 2 (2015-2018), corresponding to an integrated luminosity of 139 fb$^{-1}$. No significant excess above the Standard Model predictions is found. The results are expressed as 95% confidence-level limits on the parameters of the signal models considered. Introduction Strong evidence for the existence of a new, non-luminous matter component of the universe, dark matter (DM), arises from astrophysical observations such as precise measurements of the cosmic microwave background and from gravitational lensing measurements. Its gravitational interactions suggest that DM constitutes up to 26% of the matter-energy content of the universe [1,2]. The nature and properties of DM remain largely unknown in the context of the Standard Model (SM) of particle physics. Under the hypothesis that DM behaves like a weakly interacting massive particle (WIMP) [3], searches are performed using multiple, complementary approaches. At hadron colliders, searches for WIMP-like DM production crucially rely on one or more visible particles being produced in association with the sought-after invisible DM candidate. The experimental signature for DM candidates is missing transverse momentum ($\vec{p}_{\mathrm{T}}^{\,\mathrm{miss}}$, its modulus denoted by $E_{\mathrm{T}}^{\mathrm{miss}}$) in collision events. Several models have been proposed in the past decades, with the details of the DM-SM production process depending on the model assumptions. A class of simplified models for DM searches at the LHC is considered in this paper. It involves a two-Higgs-doublet extended sector together with an additional pseudoscalar mediator to DM, the 2HDM+$a$ model [4,5]. This class of models represents one of the simplest ultraviolet-complete and renormalisable frameworks for investigating the broad phenomenology predicted by spin-0 mediator-based DM models [5-19]. For the present study, a type-II [20,21] coupling structure of the Higgs sector to third-generation fermions is considered. The CP eigenstates are identified with the mass eigenstates, i.e. two scalars $h$ and $H$, two pseudoscalars $A$ and $a$, and charged scalars $H^{\pm}$. Three mixing angles are defined in the model: $\alpha$ denotes the mixing angle between the two CP-even weak spin-0 eigenstates, $\tan\beta$ is the ratio of the vacuum expectation values (VEVs) of the two Higgs doublets, and $\theta$ represents the mixing angle of the two CP-odd weak spin-0 eigenstates. The alignment ($\cos(\beta-\alpha) = 0$) and decoupling limit is assumed, such that the lightest CP-even state of the Higgs sector, $h$, can be identified with the SM Higgs boson, and the electroweak VEV is set to 246 GeV. The pseudoscalar mediator $a$ couples the DM particles, $\chi$, to the SM and mixes with the pseudoscalar partner of the SM Higgs boson, $A$. Following the prescriptions in Ref.
[5], the masses of the heavy CP-even Higgs boson $H$ and of the charged bosons $H^{\pm}$ are set equal to the mass of the heavy CP-odd partner $A$. This set of models offers a rich phenomenology, with a variety of final states that might arise depending on the production and decay modes of the various bosons composing the Higgs sector, as investigated in Ref. [22]. A recent study [23] has shown that final-state events characterised by the presence of $E_{\mathrm{T}}^{\mathrm{miss}}$ and a single top quark provide promising sensitivity to 2HDM+$a$ models. As in SM single-top production, three different types of processes contribute at leading order (LO) in QCD: $t$-channel production, $s$-channel production and associated production with a $W$ boson ($tW$). In the following, these are collectively referred to as DM$t$ processes. The $t$-channel process receives its dominant contributions from the two diagrams shown in Figures 1(a) and 1(b). These two diagrams interfere destructively, ensuring the perturbative unitarity of the process. The magnitude of the interference decreases with increasing $H^{\pm}$ mass. In the case of the $tW$ production channel, the two diagrams shown in Figures 1(c) and 1(d) provide the dominant contributions to the DM cross section. As in $t$-channel production, these two diagrams interfere destructively. When the decays $H^{\pm} \to W^{\pm}a$ are kinematically possible, the charged Higgs bosons are produced on-shell and the cross section of $pp \to tW + \chi\bar{\chi}$, assuming $H^{\pm}$ masses of a few hundred GeV, increases to produce a sizeable event rate. Finally, $s$-channel production is relevant in regions of the parameter space characterised by low $H^{\pm}$ masses (< 300 GeV); it is not directly targeted by the analysis, but its contribution to the signal is taken into account. This paper presents a dedicated search for single top quarks produced in association with DM candidates, exploiting final-state signatures characterised by the presence of: large $E_{\mathrm{T}}^{\mathrm{miss}}$; jets, possibly arising from the fragmentation of $b$-hadrons ($b$-jets); and one or two charged leptons, either electrons or muons ($\ell = e, \mu$). The analysis is conducted using proton-proton ($pp$) collisions at a centre-of-mass energy $\sqrt{s} = 13$ TeV produced at the LHC and collected by ATLAS between 2015 and 2018, for a dataset corresponding to 139 fb$^{-1}$. Three analysis channels, characterised by different lepton or jet multiplicities, are optimised to target different processes: tW 1L and tW 2L (single-lepton and dilepton final states, respectively) for the $tW$+DM events, and tj 1L for $t$-channel DM production. The results are interpreted in the context of 2HDM+$a$ models, considering various assumptions about the most relevant parameters, $m_a$, $m_{H^{\pm}}$ and $\tan\beta$. Furthermore, the mutually exclusive tW 1L and tW 2L analysis channels are statistically combined to maximise the sensitivity to $tW$+DM processes. Previous searches for 2HDM+$a$ models targeted associated production of DM candidates with Higgs or $Z$ bosons [24], as well as DM and a $t\bar{t}$ pair (referred to as DM$t\bar{t}$) (see Ref. [25] for CMS and Ref. [22] and references therein for ATLAS). This search targets models so far unexplored within ATLAS, in which DM is produced in association with single top quarks (for CMS results, see Ref. [26]). The analysis is also sensitive to DM$t\bar{t}$ processes in regions of the parameter space where the DM$t$ and DM$t\bar{t}$ production rates are similar.
ATLAS detector The ATLAS detector [27] is a multipurpose particle detector with a forward-backward symmetric cylindrical geometry and nearly $4\pi$ coverage in solid angle. The inner tracking detector consists of pixel and microstrip silicon detectors covering the pseudorapidity region $|\eta| < 2.5$, surrounded by a transition radiation tracker which enhances electron identification in the region $|\eta| < 2.0$. A new inner pixel layer, the insertable B-layer [28,29], was added at a mean radius of 3.3 cm during the period between Run 1 and Run 2 of the LHC. The inner detector is surrounded by a thin superconducting solenoid providing an axial 2 T magnetic field and by a fine-granularity lead/liquid-argon (LAr) electromagnetic calorimeter covering $|\eta| < 3.2$. A steel/scintillator-tile calorimeter provides hadronic coverage in the central pseudorapidity range ($|\eta| < 1.7$). The endcap and forward regions ($1.5 < |\eta| < 4.9$) of the hadron calorimeter are made of LAr active layers with either copper or tungsten as the absorber material. A muon spectrometer with an air-core toroid magnet system surrounds the calorimeters. Three layers of high-precision tracking chambers provide coverage in the range $|\eta| < 2.7$, while dedicated fast chambers allow triggering in the region $|\eta| < 2.4$. The ATLAS trigger system consists of a hardware-based level-1 trigger followed by a software-based high-level trigger [30]. (ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point in the centre of the detector. The positive $x$-axis is defined by the direction from the interaction point to the centre of the LHC ring, with the positive $y$-axis pointing upwards, while the beam direction defines the $z$-axis. Cylindrical coordinates ($r$, $\phi$) are used in the transverse plane, $\phi$ being the azimuthal angle around the $z$-axis. The pseudorapidity is defined in terms of the polar angle $\theta$ by $\eta = -\ln \tan(\theta/2)$. Rapidity is defined as $y = 0.5 \ln[(E + p_z)/(E - p_z)]$, where $E$ denotes the energy and $p_z$ is the component of the momentum along the beam direction. The angular distance is defined as $\Delta R = \sqrt{(\Delta\eta)^2 + (\Delta\phi)^2}$.) Data and Monte Carlo simulation The data analysed in this paper correspond to an integrated luminosity of 139 fb$^{-1}$ of $pp$ collision data collected between 2015 and 2018 by the ATLAS detector with a centre-of-mass energy of 13 TeV and a 25 ns proton bunch crossing interval. The uncertainty in the combined 2015-2018 integrated luminosity is 1.7% [31], obtained using the LUCID-2 detector [32] for the primary luminosity measurements. All detector subsystems were required to be operational during data taking. The average number of interactions in the same and nearby bunch crossings (pile-up) increased from $\langle\mu\rangle = 13.4$ (2015 dataset) to $\langle\mu\rangle = 36.1$ (2018 dataset), with a highest $\langle\mu\rangle = 37.8$ (2017 dataset) and an overall average of $\langle\mu\rangle = 33.7$.
Candidate events were recorded using a combined set of triggers [30] based on the presence of missing transverse momentum or charged leptons ($\ell = e, \mu$). The $E_{\mathrm{T}}^{\mathrm{miss}}$ trigger [33] is fully efficient for events with reconstructed $E_{\mathrm{T}}^{\mathrm{miss}} > 250$ GeV and was used for the single-lepton analysis channels. Furthermore, an OR between the $E_{\mathrm{T}}^{\mathrm{miss}}$ and single-lepton triggers was used in the tj 1L channel for events with reconstructed $E_{\mathrm{T}}^{\mathrm{miss}} < 250$ GeV. Triggers based on a single muon (electron) require the presence of a muon (electron) with transverse momentum $p_{\mathrm{T}}$ (transverse energy $E_{\mathrm{T}}$) above certain thresholds, and impose data-quality and lepton-isolation requirements. The lowest $p_{\mathrm{T}}$ ($E_{\mathrm{T}}$) threshold without trigger prescaling is 24 (26) GeV for muons (electrons) and includes a lepton isolation requirement that is not applied for triggers with higher thresholds. In the two-lepton channel, lower thresholds for electrons and muons must be applied to retain sensitivity to the signal. A combined set of two-lepton triggers was used, with the muon (electron) $p_{\mathrm{T}}$ ($E_{\mathrm{T}}$) trigger threshold depending on the data-taking period. The lepton trigger threshold ranged between 8 and 22 GeV for muons, and between 12 and 24 GeV for electrons. The analysis selections are chosen to guarantee maximum trigger efficiency, generally above 95%. Trigger-matching requirements [30] are applied, whereby the lepton(s) must lie in the vicinity of the corresponding trigger-level object. Dedicated Monte Carlo (MC) simulated samples are used to model SM processes and estimate the expected signal yields. All samples were produced using the ATLAS simulation infrastructure [34] and Geant4 [35], or a faster simulation based on a parameterisation of the calorimeter response and Geant4 for the other detector systems [34]. The simulated events are reconstructed with the same algorithms as used for data. They contain a realistic modelling of pile-up interactions, with pile-up profiles matching those of each dataset between 2015 and 2018, obtained by overlaying minimum-bias events simulated using the soft QCD processes of Pythia 8.186 [36] with the NNPDF2.3 LO set of parton distribution functions (PDFs) [37] and the A3 [38] set of tuned parameters (tune). Signal MC samples for single top quark production in association with DM include $tW$, $t$-channel and $s$-channel processes. Samples were produced either varying the ($m_a$, $m_{H^{\pm}}$) parameters and assuming $\tan\beta$ equal to unity, or varying the ($\tan\beta$, $m_{H^{\pm}}$) parameters and setting $m_a = 250$ GeV. Details of other parameter value assumptions are provided in Section 7. The samples were generated from leading-order (LO) matrix elements using the MadGraph5_aMC@NLO [39] v2.6.2 generator interfaced to Pythia 8.212 [40] with the A14 tune [41] for the modelling of parton showering (PS), hadronisation and the description of the underlying event. Parton luminosities are provided by the five-flavour-scheme NNPDF3.0 NLO [42] PDF set. Signal cross sections are calculated to LO accuracy in QCD.

Table 1: List of generators used for the different SM background processes. Diboson includes $WW$, $WZ$ and $ZZ$ production. Information is given about the underlying-event tunes, the PDF sets and the perturbative QCD highest-order accuracy (LO, NLO, next-to-next-to-leading order (NNLO), and next-to-next-to-leading-log (NNLL)) used for the normalisation of the different samples. Diboson cross sections are taken directly from Sherpa.

Additional simulated samples are used for DM$t\bar{t}$ processes.
They were generated using LO matrix elements, with up to one extra parton, using the MadGraph5_aMC@NLO v2.6.7 generator interfaced to Pythia 8.244 with the same PDF set and tune as used for the $tW$, $t$- and $s$-channel processes. The top quark decay was simulated using MadSpin [43]. In this case, signal cross sections are calculated to next-to-leading-order (NLO) accuracy using the same version of MadGraph5_aMC@NLO, as suggested in Ref. [16]. Background samples were simulated using different MC event generators, accurate at NLO or higher order, depending on the process. All background processes are normalised to the best available theoretical calculation of their respective cross sections. The event generators, the accuracy of theoretical cross sections, the underlying-event parameter tunes, and the PDF sets used in simulating the SM background processes most relevant for this analysis are summarised in Table 1. For all samples, except those generated using Sherpa [44-48], the EvtGen v1.2.0 [49] program was used to simulate the properties of the $b$- and $c$-hadron decays. Event reconstruction and object definitions Common event-quality criteria and object reconstruction definitions are applied for all analysis channels, including standard data-quality requirements to select events taken during optimal detector operation. In addition, in each analysis channel, dedicated selection criteria, which are specific to the objects and kinematics of interest in those final states, are applied as described in Section 5. Events are required to have at least one reconstructed interaction vertex with a minimum of two associated tracks, each having $p_{\mathrm{T}} > 500$ MeV. In events with multiple vertices, the one with the highest sum of squared transverse momenta of associated tracks is chosen as the primary vertex [62]. A set of baseline quality criteria are applied to reject events with non-collision backgrounds or detector noise [63]. Two levels of object identification requirements are defined for leptons and jets: baseline and signal. Baseline leptons and jets are selected with looser identification criteria, and are used in computing the missing transverse momentum as well as in resolving possible reconstruction ambiguities. Signal leptons and jets are a subset of the baseline objects, with tighter quality requirements, and are used to define the search regions. Isolation criteria, defined with a list of tracking-based and calorimeter-based variables, are used to select signal leptons by discriminating between semileptonic heavy-flavour decays and jets misidentified as leptons. Electron candidates are reconstructed from energy deposits in the electromagnetic calorimeter that are matched to charged-particle tracks in the inner detector (ID) [64]. Baseline electrons are required to satisfy $p_{\mathrm{T}} > 10$ GeV and $|\eta| < 2.47$, excluding the transition region between the barrel and endcap calorimeters ($1.37 < |\eta| < 1.52$). They are identified using the 'loose' likelihood identification operating point as described in Ref. [64]. The number of hits in the innermost pixel layer is used to discriminate between electrons and converted photons. The longitudinal impact parameter $z_0$ relative to the primary vertex is required to satisfy $|z_0 \sin\theta| < 0.5$ mm. Signal electrons are required to also satisfy $p_{\mathrm{T}} > 20$ GeV and the 'tight' likelihood identification criteria as defined in Ref. [64]. The significance of the transverse impact parameter $d_0$ must satisfy $|d_0/\sigma(d_0)| < 5$ for signal electrons.
Signal electrons with $p_{\mathrm{T}} < 200$ GeV are further refined using the 'FCLoose' isolation working point, while those with larger $p_{\mathrm{T}}$ are required to pass the 'FCHighPtCaloOnly' isolation working point, as described in Ref. [64]. Corrections for energy contributions due to pile-up are applied. Muon candidates are reconstructed from matching tracks in the ID and muon spectrometer, refined through a global fit which uses the hits from both subdetectors [65]. Baseline muons must have $p_{\mathrm{T}} > 10$ GeV and $|\eta| < 2.5$, and satisfy the 'medium' identification criteria. Like the electrons, their longitudinal impact parameter $z_0$ relative to the primary vertex is required to satisfy $|z_0 \sin\theta| < 0.5$ mm. Signal muons are defined with tighter requirements on their transverse momentum and transverse impact parameter significance: $p_{\mathrm{T}} > 20$ GeV and $|d_0/\sigma(d_0)| < 3$. The 'FCLoose' isolation working point is also required for signal muons [65]. Jets are reconstructed from topological clusters of energy depositions in the calorimeters using the anti-$k_t$ algorithm [66] with a radius parameter $R = 0.4$ [67]. The average energy contribution from pile-up is subtracted according to the jet area, and the jets are calibrated as described in Ref. [68]. To further reduce the effect of pile-up interactions, jets with $|\eta| < 2.4$ and $p_{\mathrm{T}} < 120$ GeV are required to satisfy the 'medium' working point of the jet vertex tagger (JVT), a tagging algorithm that identifies jets originating from the primary vertex using track information [69,70]. Baseline jets are selected in the region $|\eta| < 4.5$ and have $p_{\mathrm{T}} > 20$ GeV. Signal jets are required to be in the region $|\eta| < 2.5$ and to have $p_{\mathrm{T}} > 30$ GeV. Jets containing $b$-hadrons are identified as arising from $b$-quarks ('$b$-tagged') using a multivariate algorithm (MV2c10) based on the track impact parameters, the presence of displaced secondary vertices and the reconstructed flight paths of $b$- and $c$-hadrons inside the jet [71]. These $b$-tagged jets are reconstructed in the region $|\eta| < 2.5$ and have $p_{\mathrm{T}} > 20$ GeV. The $b$-tagging working point provides an efficiency of 77% for jets containing $b$-hadrons in simulated $t\bar{t}$ events, with average rejections of 110 and 4.9 for light-flavour jets and jets containing $c$-hadrons, respectively [72]. To resolve the reconstruction ambiguities between electrons, muons and jets, an overlap-removal procedure is applied to baseline objects in a prioritised sequence as follows. First, if an electron shares the same ID track with another electron, the one with lower $p_{\mathrm{T}}$ is discarded. Any electron sharing the same ID track with a muon is rejected. Next, jets that are not $b$-tagged are rejected if they lie within $\Delta R = 0.2$ of an electron. Similarly, jets that are not $b$-tagged are rejected if they lie within $\Delta R = 0.2$ of a muon, if the jet has fewer than three associated tracks or the muon is matched to the jet through ghost association [73]. Finally, electrons and muons that are close to a remaining jet are discarded if their distance from the jet is $\Delta R < \min(0.4,\ 0.04 + 10\ \mathrm{GeV}/p_{\mathrm{T}})$, as a function of the lepton $p_{\mathrm{T}}$.
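As an illustration of that final overlap-removal step, the sketch below applies the $p_{\mathrm{T}}$-dependent cone $\Delta R < \min(0.4,\ 0.04 + 10\ \mathrm{GeV}/p_{\mathrm{T}})$ to toy lepton and jet kinematics. It is a simplification of the procedure described above (it ignores the $b$-tagging exceptions, track counting and ghost association), and the event content is invented for the example.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance Delta R = sqrt((Delta eta)^2 + (Delta phi)^2)."""
    dphi = math.remainder(phi1 - phi2, 2 * math.pi)  # wrap into [-pi, pi]
    return math.hypot(eta1 - eta2, dphi)

def lepton_survives(lep, jets):
    """Keep a lepton only if no remaining jet lies within the pT-dependent cone
    Delta R < min(0.4, 0.04 + 10 GeV / pT(lepton))."""
    cone = min(0.4, 0.04 + 10.0 / lep["pt"])  # pt in GeV
    return all(delta_r(lep["eta"], lep["phi"], j["eta"], j["phi"]) >= cone
               for j in jets)

# toy inputs (hypothetical kinematics)
muon = {"pt": 25.0, "eta": 0.5, "phi": 1.2}
jets = [{"eta": 0.6, "phi": 1.3}, {"eta": -1.0, "phi": -2.0}]
print(lepton_survives(muon, jets))  # False: the first jet is inside the cone
```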
The missing transverse momentum $\vec{p}_{\mathrm{T}}^{\,\mathrm{miss}}$, with magnitude $E_{\mathrm{T}}^{\mathrm{miss}}$, is calculated as the negative vectorial sum of the transverse momenta of all baseline reconstructed objects (electrons, muons, jets and photons [74]) and the soft term. The soft term includes all tracks associated with the primary vertex but not matched to any reconstructed physics object. Tracks not associated with the primary vertex are not considered in the $E_{\mathrm{T}}^{\mathrm{miss}}$ calculation, improving the $E_{\mathrm{T}}^{\mathrm{miss}}$ resolution by suppressing the effect of pile-up [75,76]. To compensate for differences between data and simulation in trigger, particle identification and reconstruction efficiency, correction factors, usually functions of the relevant kinematic variables, are derived from data and applied to the samples of simulated events. Analysis strategy The search is conducted in three independent analysis channels differing in lepton and jet multiplicities to maximise the sensitivity to distinct signal processes. The tW 1L analysis channel targets $tW$+DM events where one of the $W$ bosons (directly produced or arising from the top quark decay) decays leptonically (Section 5.2). The tW 2L analysis channel targets the same signal processes, but considers events where both $W$ bosons decay leptonically (Section 5.3). The two selections are designed to be mutually exclusive. The results of these two analysis channels are statistically combined to maximise the sensitivity to the $tW$+DM processes. Finally, the tj 1L analysis targets $t$-channel production of DM candidates and requires a single lepton in each event (Section 5.4). In all analysis channels, large missing transverse momentum and jets are required. Event selections and background estimation methods specific to each analysis channel are described in this section, as are the definitions of the signal, control and validation regions (SR, CR and VR, respectively). Dedicated CRs are designed in each analysis channel for the major SM background processes in order to predict their expected contributions in the SRs. The CRs and SRs are mutually exclusive, with the CRs enriched in the major background processes relevant to each analysis channel while minimising the contamination from signal. The potential signal contamination in the CRs is found to be negligible, at the level of < 3% of the total SM expectation for all analysis channels. The expected SM backgrounds are first determined independently for each channel, with a profile-likelihood fit [77] in a background-only configuration. In this fit, normalisation factors of the backgrounds for which dedicated CRs are defined are adjusted simultaneously to match the data in the associated CRs. The input to the background-only fit includes the number of events observed in the associated CRs and the number of events predicted by simulation in each CR for all background processes; both are described by Poisson statistics. The systematic uncertainties, described in Section 6, are included in the fit as nuisance parameters. They are constrained by Gaussian distributions with widths corresponding to the sizes of the uncertainties and are treated as correlated, when appropriate, between the various regions. The product of the various probability density functions forms the likelihood, which the fit maximises by adjusting the background normalisations and the nuisance parameters. The normalisation and nuisance parameters obtained from the background-only fit to the control regions are then extrapolated [77] to the SRs to quantify any potential excess in data. The reliability of the MC extrapolation of the SM background estimates outside of the control regions is verified in dedicated validation regions. Statistically independent of the corresponding CRs and SRs, these VRs are designed to probe a kinematic region closer to that of the SRs.
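The logic of the background-only fit can be illustrated with a deliberately minimal toy: Poisson-distributed counts in two control regions, two floated normalisation factors and one fixed minor component, with no nuisance parameters. All yields below are invented for illustration; the real fit additionally includes systematic uncertainties as constrained nuisance parameters.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

# Hypothetical control-region yields: observed data and MC predictions for
# two floated backgrounds (ttbar, W+jets) plus a fixed minor component.
observed = np.array([1050, 430])        # events in CR(tt), CR(W)
mc_tt    = np.array([900.0, 60.0])
mc_wjets = np.array([80.0, 340.0])
mc_other = np.array([50.0, 20.0])

def nll(mu):
    """Negative log-likelihood of the Poisson CR counts for normalisation factors mu."""
    expected = mu[0] * mc_tt + mu[1] * mc_wjets + mc_other
    return -poisson.logpmf(observed, expected).sum()

result = minimize(nll, x0=[1.0, 1.0], bounds=[(0.1, 5.0)] * 2)
mu_tt, mu_wjets = result.x
print(f"mu_ttbar = {mu_tt:.2f}, mu_Wjets = {mu_wjets:.2f}")
# The fitted factors would then scale the same MC components when
# extrapolating to the signal region.
```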
The potential signal contamination in the VRs is at the level of < 1% of the total SM expectation for most validation regions, and between 8% and 15% in a few validation regions in the tW 1L analysis channel. In the absence of a significant event excess in the SRs, as determined after the background-only fit, simultaneous fits of the CRs and SRs are performed to constrain the parameters of the targeted signal models as well as of a generic beyond-the-Standard-Model (BSM) signal; these are referred to as the model-dependent and model-independent signal fits, as detailed in Section 7. Kinematic requirements and event variables The event selection criteria in each analysis channel are defined using the physics objects described in Section 4 and the event variables defined in this section. The following variables are defined using simple combinations of the physics objects in the events.
• $n_{\mathrm{jet}}$ is the number of jets with $|\eta| < 2.5$ and $p_{\mathrm{T}} > 30$ GeV.
• $n_{\mathrm{forward\ jet}}$ is the number of jets in the forward region, $2.5 < |\eta| < 4.5$, with $p_{\mathrm{T}} > 30$ GeV.
• $n_{b\text{-jet}}$ is the number of $b$-jets with $|\eta| < 2.5$ and $p_{\mathrm{T}}$ above a given threshold defined in each analysis channel.
• The minimum azimuthal distance between $\vec{p}_{\mathrm{T}}^{\,\mathrm{miss}}$ and the $\vec{p}_{\mathrm{T}}$ of each of the four leading jets in the event is useful for rejecting events with mismeasured jet energies leading to $E_{\mathrm{T}}^{\mathrm{miss}}$ in the event, and is defined as $\Delta\phi_{\min} = \min_{i \leq 4} |\Delta\phi(\vec{p}_{\mathrm{T},i},\ \vec{p}_{\mathrm{T}}^{\,\mathrm{miss}})|$, where $\min_{i \leq 4}$ selects the jet that minimises $\Delta\phi$.
• $m_{\ell\ell}$ is the invariant mass of the dilepton system in the event.
• An iterative reclustering approach as defined in Ref. [78] is used to reconstruct the hadronically decaying $W$ bosons. All the signal jets in the event are first reclustered using the anti-$k_t$ algorithm with a large radius parameter of $R = 3.0$. The radius of each large-radius jet is then iteratively reduced to an optimal radius, $R(p_{\mathrm{T}}) = 2 \times m/p_{\mathrm{T}}$. The mass of the reclustered jet, $m_{\mathrm{reclustered}}$, is used in the tW 1L channel.
• $m_{\ell_1 b_1}$ is the invariant mass of the leading lepton and leading $b$-jet in the event.
A set of variables based on the transverse mass is defined in order to distinguish between the signal and SM background processes, as follows.
• The transverse mass formed by $\vec{p}_{\mathrm{T}}^{\,\mathrm{miss}}$ and the leading lepton in the event, $m_{\mathrm{T}}^{\mathrm{lep}}$, is used to reduce the $W$+jets and semileptonic $t\bar{t}$ backgrounds. It is defined as $m_{\mathrm{T}}^{\mathrm{lep}} = \sqrt{2\, p_{\mathrm{T}}^{\ell}\, E_{\mathrm{T}}^{\mathrm{miss}}\, (1 - \cos\Delta\phi(\vec{p}_{\mathrm{T}}^{\,\ell},\ \vec{p}_{\mathrm{T}}^{\,\mathrm{miss}}))}$.
• Similarly, the transverse mass $m_{\mathrm{T}}^{b+\ell}$ is formed, with an analogous definition, from $\vec{p}_{\mathrm{T}}^{\,\mathrm{miss}}$ and the system of the leading lepton and $b$-jet in the event, and is used to suppress the $W$+jets background.
• Closely related to $m_{\mathrm{T}}^{\mathrm{lep}}$, the stransverse mass $m_{\mathrm{T2}}$ [79,80] is used to bound the masses of pair-produced particles, such as in $t\bar{t}$ production, each of which decays so as to produce a visible particle that can be detected and an invisible particle that contributes to the missing transverse momentum. In the case of a dilepton final state, it is defined by $m_{\mathrm{T2}} = \min_{\vec{q}_{\mathrm{T}}} \left[ \max\left( m_{\mathrm{T}}(\vec{p}_{\mathrm{T}}^{\,\ell_1}, \vec{q}_{\mathrm{T}}),\ m_{\mathrm{T}}(\vec{p}_{\mathrm{T}}^{\,\ell_2}, \vec{p}_{\mathrm{T}}^{\,\mathrm{miss}} - \vec{q}_{\mathrm{T}}) \right) \right]$, where $\vec{q}_{\mathrm{T}}$ is the transverse momentum vector that minimises the larger of the two transverse masses, and $\vec{p}_{\mathrm{T}}^{\,\ell_1}$ and $\vec{p}_{\mathrm{T}}^{\,\ell_2}$ are the leading and subleading transverse momenta of the two leptons in the pair. For dileptonic $t\bar{t}$ background events, $m_{\mathrm{T2}}$ has a kinematic endpoint at the $W$ boson mass (a short numerical sketch follows this list).
• The asymmetric stransverse mass $am_{\mathrm{T2}}$ [81,82], a variation of $m_{\mathrm{T2}}$, is used in the tW 1L final state to reduce the number of dileptonic $t\bar{t}$ background events in which one of the leptons is undetected. For these events, $am_{\mathrm{T2}}$ has a kinematic endpoint at the top quark mass.
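Since $m_{\mathrm{T2}}$ is defined through a minimisation, a brute-force numerical sketch may help: the code below scans trial splittings $\vec{q}_{\mathrm{T}} + (\vec{p}_{\mathrm{T}}^{\,\mathrm{miss}} - \vec{q}_{\mathrm{T}})$ of the missing transverse momentum on a coarse grid and minimises the larger of the two transverse masses, in the massless approximation. The scan window and granularity are chosen only for illustration (dedicated bisection algorithms are used in practice), and the event kinematics are invented.

```python
import numpy as np

def mt(pt_vis, phi_vis, pt_inv, phi_inv):
    """Massless transverse mass: mT^2 = 2 pT^vis pT^inv (1 - cos dphi)."""
    return np.sqrt(2.0 * pt_vis * pt_inv * (1.0 - np.cos(phi_vis - phi_inv)))

def mt2(l1, l2, met, met_phi, n_grid=400):
    """Brute-force stransverse mass for a dilepton event.
    l1, l2 = (pT, phi) of the two leptons; met, met_phi = missing pT vector."""
    met_x, met_y = met * np.cos(met_phi), met * np.sin(met_phi)
    # scan the trial invisible momentum q_T on a Cartesian grid around MET
    qx = np.linspace(met_x - met, met_x + met, n_grid)
    qy = np.linspace(met_y - met, met_y + met, n_grid)
    QX, QY = np.meshgrid(qx, qy)
    q_pt, q_phi = np.hypot(QX, QY), np.arctan2(QY, QX)
    r_pt = np.hypot(met_x - QX, met_y - QY)          # the complementary split
    r_phi = np.arctan2(met_y - QY, met_x - QX)
    m1 = mt(l1[0], l1[1], q_pt, q_phi)
    m2 = mt(l2[0], l2[1], r_pt, r_phi)
    # minimise, over the grid, the larger of the two transverse masses
    return np.max(np.stack([m1, m2]), axis=0).min()

# toy dileptonic event (hypothetical kinematics, GeV)
print(mt2((60.0, 0.3), (45.0, 2.8), met=90.0, met_phi=-1.5))
```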
To improve the selection of single-top events in the tW 2L channel, the following quantities based on invariant mass are defined.
• $m_{b\ell}^{\min}$ is the minimum invariant mass found by combining the leading $b$-jet with each of the leptons, $m_{b\ell}^{\min} = \min(m_{b_1 \ell_1}, m_{b_1 \ell_2})$. An upper endpoint at approximately 153 GeV or 160-170 GeV is expected for events with one or two leptonic top quark decays, respectively.
• To further reduce the background with two leptonic top quark decays, an extended variation of $m_{b\ell}^{\min}$, denoted $m_{b\ell}^{t}$, is used in the tW 2L final state. It is defined in terms of the invariant masses $m_{\ell_i j_k}$ of lepton $\ell_i$ and jet $j_k$, where $j_1$ and $j_2$ are the two jets with the highest $b$-tag discriminator value. For the backgrounds where both top quarks decay leptonically, $m_{b\ell}^{t}$ has a kinematic endpoint at approximately 160-170 GeV.
Additional variables based on angular separations of the objects are used in the tj 1L analysis to suppress SM background contributions, as defined below.
• $\Delta\phi(\ell_1, \vec{p}_{\mathrm{T}}^{\,\mathrm{miss}})$: the azimuthal angle difference between $\vec{p}_{\mathrm{T}}^{\,\mathrm{miss}}$ and the leading lepton in the event.
Table 2 summarises the trigger and preselection requirements for all analysis channels, in terms of lepton, jet and $b$-jet multiplicities, as well as transverse momenta and global kinematic variables. In addition, events with extra baseline leptons are vetoed. Single-lepton tW 1L analysis channel Events with exactly one electron or muon are first selected for the SR if they also contain at least three jets, exactly one of which must be $b$-tagged, and satisfy the preselection requirements described in Table 2. The dominant SM background contributions in this channel are $t\bar{t}$, $W$+jets and single-top ($tW$-channel) production. The discriminating variables $E_{\mathrm{T}}^{\mathrm{miss}}$, $m_{\mathrm{T}}^{\mathrm{lep}}$, $m_{\mathrm{reclustered}}$ and the asymmetric stransverse mass $am_{\mathrm{T2}}$, as described in Section 5.1, are used to further separate the signal from the backgrounds. A 'genetic algorithm' [83] is used to optimise a baseline signal region, defined as in Table 3. To increase the sensitivity to different signal model parameters, a binned $E_{\mathrm{T}}^{\mathrm{miss}}$ distribution is used in the signal region. The acceptance times detector efficiency for the $tW$+DM signal processes after applying all selection criteria is between 0.3% and 5.1% in the parameter space of $\tan\beta = 1$, $m_a \in [100, 450]$ GeV and $m_{H^{\pm}} \in [400, 1500]$ GeV, and between 0.2% and 4.8% in the parameter space of $m_a = 250$ GeV, $\tan\beta \in [0.5, 30]$ and $m_{H^{\pm}} \in [400, 1500]$ GeV. The dominant background contributions from the $t\bar{t}$ and $W$+jets processes are estimated using MC simulation and the dedicated CRs. The contribution from multijet production, where the lepton is a misidentified jet or originates from a heavy-flavour hadron decay or photon conversion, is found to be negligible. The remaining sources of background (single-top, $Z$+jets, diboson, $t\bar{t}V$ and $tZ$ production, as well as rarer processes such as triboson, $t\bar{t}t\bar{t}$ and $t\bar{t}WW$) are estimated from simulation. Dedicated control regions, CR tW 1L (tt) and CR tW 1L (W), defined in Table 3, are designed for the $t\bar{t}$ and $W$+jets background estimations. Compared to the SR, the acceptance for $t\bar{t}$ events is increased in CR tW 1L (tt) by requiring at least two $b$-jets, inverting the selection on $am_{\mathrm{T2}}$ and removing the requirement on $m_{\mathrm{reclustered}}$. To increase the acceptance of $W$+jets events, and hence the sample size, CR tW 1L (W) is first selected by requiring $40 < m_{\mathrm{T}}^{\mathrm{lep}} < 100$ GeV and $m_{\mathrm{reclustered}} < 60$ GeV.

Table 3: Summary of signal, control and validation region definitions used in the tW 1L analysis channel. The '-' entries represent an inclusive selection with no requirements. The $W$+jets control and validation regions are each split into two regions with opposite lepton charges, as described in the text.
To exploit the lepton charge asymmetry of the $W$+jets events relative to the remaining backgrounds, this control region is subsequently split into two regions with opposite lepton charges, CR tW 1L (W+) and CR tW 1L (W-). Normalisation factors, $\mu_{t\bar{t}}$ and $\mu_{W+\mathrm{jets}}$, defined as the ratio of the number of observed events to the SM prediction, are found to be $0.96 \pm 0.08$ and $1.01 \pm 0.05$ after the background-only fit for the $t\bar{t}$ and $W$+jets processes, respectively. To validate the $t\bar{t}$ background predictions and the reliability of the MC extrapolation in $m_{\mathrm{reclustered}}$ and $am_{\mathrm{T2}}$, two validation regions, VR1 tW 1L (tt) and VR2 tW 1L (tt), are defined by reversing the SR selection requirements on $am_{\mathrm{T2}}$ and $m_{\mathrm{reclustered}}$, respectively, as shown in Table 3. To increase the sample size, the SR selection requirement on $m_{\mathrm{reclustered}}$ is removed in the VR1 tW 1L (tt) region. Similarly, for the $W$+jets background processes, two validation regions, VR1 tW 1L (W) and VR2 tW 1L (W), are defined by varying the SR selection requirements on $m_{\mathrm{T}}^{\mathrm{lep}}$ and $m_{\mathrm{reclustered}}$ shown in Table 3, respectively. Each of the $W$+jets validation regions is split into two regions with opposite lepton charge. Figure 2 shows the post-fit $E_{\mathrm{T}}^{\mathrm{miss}}$ distributions in representative validation regions. Good agreement is observed between data and the SM expectation in all validation regions. The observed yields, post-fit background estimates and significances [84] in each CR and VR are shown in Figure 3 after the background-only fit. Since the $W$+jets CR is split into two regions with opposite lepton charges sharing the same normalisation factor, the significances in the CRs are shown explicitly. The data event yields are found to be consistent with the background expectations. Dilepton tW 2L analysis channel Events with exactly two oppositely charged leptons (electrons or muons) are first selected for the SR if they also contain at least one signal jet, at least one of which must be $b$-tagged with $p_{\mathrm{T}} > 50$ GeV, and satisfy the preselection requirements described in Table 2. The dominant SM background contributions in this channel after these selections are from the $t\bar{t}$ and $t\bar{t}V$ processes, followed by that of diboson events. The contribution from misidentified or non-prompt lepton backgrounds (referred to as 'Fakes/non-prompt' in Figures 4 and 5) is found to be negligible in the signal region. The discriminating variables $m_{b\ell}^{\min}$, $m_{b\ell}^{t}$, $m_{\mathrm{T2}}$ and $\Delta\phi_{\min}$, as defined in Section 5.1, are used to define the final signal region, as shown in Table 4. The acceptance times detector efficiency after applying all selection criteria for the $tW$+DM signal processes is between 0.07% and 0.7% in the parameter space of $\tan\beta = 1$, $m_a \in [100, 450]$ GeV and $m_{H^{\pm}} \in [400, 1500]$ GeV, and between 0.05% and 0.6% in the parameter space of $m_a = 250$ GeV, $\tan\beta \in [0.5, 30]$ and $m_{H^{\pm}} \in [400, 1500]$ GeV.

Figure 3: Comparison of the predicted backgrounds with the observed numbers of events in the CRs and VRs associated with the tW 1L channel. The normalisation of the backgrounds is obtained from the background-only fit to the CRs. The 'Others' category includes contributions from $Z$+jets and $tZ$ production, and rare processes such as triboson, $t\bar{t}t\bar{t}$, $t\bar{t}WW$ and Higgs boson production processes. The upper panel shows the observed number of events and the predicted background yield. All uncertainties are included in the uncertainty band. The lower panel shows the significances in each region.

The contributions from the $t\bar{t}$, $t\bar{t}V$ (with $V = W$ or $Z$ boson) and diboson background processes are estimated from MC simulation and dedicated CRs.
The remaining sources of background, including the $Z$+jets process, which is dominated by the $Z \to \tau\tau$ component, single top quark production, $t\bar{t}h$ production and other rarer processes such as $t\bar{t}t\bar{t}$ and $t\bar{t}WW$, are estimated from simulation.

Table 4: Summary of signal, control and validation region definitions used in the tW 2L analysis channel. The '-' entries represent an inclusive selection with no requirements. In the final states with three leptons, the corrected $E_{\mathrm{T}}^{\mathrm{miss}}$, $m_{b\ell}^{\min}$ and $m_{\mathrm{T2}}$ variables are used instead, as described in the main text. The selection requirement on the corrected $m_{b\ell}^{\min}$ in the VR(3ℓ) region varies according to the jet and $b$-jet multiplicity, as described in the main text. Events with additional baseline leptons are vetoed.

The acceptance for $t\bar{t}$ events is increased in CR tW 2L (tt) by requiring a low value of $m_{\mathrm{T2}}$ and inverting the SR selection criteria on $m_{b\ell}^{t}$. The $t\bar{t}V$ contribution is dominated by the $t\bar{t}Z$ component (about 80% of $t\bar{t}V$ in the SR), especially where $Z \to \nu\bar{\nu}$. A dedicated control region, CR tW 2L (ttZ), is defined by first selecting three leptons, where at least one same-flavour opposite-charge (SFOS) pair is required to be consistent with coming from a $Z$ boson decay, with an invariant mass within a window of [71, 111] GeV. If more than one such pair is present in the event, the pair with invariant mass closest to the $Z$ boson mass is chosen. The purity of $t\bar{t}Z$ events is further increased by requiring at least three jets. To reduce the diboson background in this region, events with exactly one $b$-jet and three jets are rejected. Owing to the presence of three leptons in this region, the background contribution from misidentified or non-prompt leptons becomes non-negligible and is estimated using a data-driven matrix method (MM), as described in Refs. [85,86]. Two types of lepton identification criteria, 'tight' and 'loose', are defined in the evaluation, corresponding to the signal and baseline lepton selections described in Section 4. The number of events containing misidentified or non-prompt leptons in the $t\bar{t}Z$ CR is estimated from the number of observed events with tight or loose leptons, using as input the probabilities for loose prompt, misidentified or non-prompt leptons to satisfy the tight criteria. The probability for prompt loose leptons to pass the tight selection is determined from $t\bar{t}Z$ MC simulation. The equivalent probability for loose misidentified or non-prompt leptons to pass the tight selection is measured in a $t\bar{t}$-enriched region with two same-sign leptons (electrons or muons) and at least one $b$-tagged jet, which is dominated by events with at least one misidentified or non-prompt lepton. In the CR tW 2L (ttZ) region, to mimic the event topology of the $t\bar{t}Z$ background in the signal region, a corrected $\vec{p}_{\mathrm{T}}^{\,\mathrm{miss}}$ is obtained by vectorially adding the transverse momenta of the SFOS pair to $\vec{p}_{\mathrm{T}}^{\,\mathrm{miss}}$; it is subsequently used to calculate a transverse mass ($m_{\mathrm{T}}^{\mathrm{lep}}$) with the third lepton, referred to as the corrected $m_{\mathrm{T2}}$. The two leptons from the SFOS pair are excluded from the calculation of $m_{b\ell}^{\min}$, which effectively becomes the invariant mass of the third lepton and the leading $b$-jet. To improve the estimation of the dominant background from the $WZ$ process in CR tW 2L (ttZ), a dedicated CR, CR tW 2L (WZ), is defined by inverting the CR tW 2L (ttZ) selection requirements on the jet multiplicity and the corrected $m_{b\ell}^{\min}$. This CR is also used to aid in the estimation of all diboson processes in the SR.
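The matrix method described above reduces, in its simplest single-lepton form, to inverting a 2x2 linear system relating the observed tight and loose-but-not-tight counts to the unknown numbers of events with a real or a misidentified/non-prompt ('fake') loose lepton. The sketch below shows this inversion; the counts and efficiencies are hypothetical, and the three-lepton version used in CR tW 2L (ttZ) generalises the same idea to a larger matrix.

```python
import numpy as np

def fake_yield(n_tight, n_loose_not_tight, eff_real, eff_fake):
    """Single-lepton matrix method: unfold the numbers of events with a real
    or a fake loose lepton from the tight (T) and loose-but-not-tight (L)
    counts, then return the fake contribution inside the tight selection.
    eff_real (eff_fake) = probability for a loose real (fake) lepton
    to also pass the tight criteria."""
    A = np.array([[eff_real,       eff_fake],
                  [1.0 - eff_real, 1.0 - eff_fake]])
    n_real, n_fake = np.linalg.solve(A, [n_tight, n_loose_not_tight])
    return eff_fake * n_fake  # fake-lepton events inside the tight sample

# hypothetical inputs: efficiencies measured in MC / a same-sign region
print(fake_yield(n_tight=120, n_loose_not_tight=80,
                 eff_real=0.90, eff_fake=0.25))  # ~23.1 events
```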
Normalisation factors¯,¯and Diboson are found to be 1.00 ± 0.03, 0.76 ± 0.26 and 0.80 ± 0.16 after the background-only fit for the¯, and diboson processes, respectively. A validation region, VR tW 2L (tt), is defined in order to validate the¯background predictions by applying all the signal selection criteria, apart from requiring lower values of T2 , as shown in Table 4. For the background predictions of the¯and diboson processes, a 3ℓ validation region, VR tW 2L (3L), is defined with selection requirements similar to those of the CR tW 2L (ttZ) and CR tW 2L (WZ). To ensure that the VR tW 2L (3L) is orthogonal to those two CRs, the selection on the corrected min ℓ variable is varied according to the jet and -jet multiplicities. For the events with exactly one -jet, the corrected min ℓ is required to be larger than 170 GeV if jet > 3, or smaller than 170 GeV if jet ≤ 3. For the events with more than one -jet and jet > 2, the corrected min ℓ is required to be larger than 170 GeV. To increase the sample size in this region, the T threshold for the -tagged jets is reduced to 40 GeV. Figure 4 shows the post-fit kinematic distributions in the validation regions. Good agreement is observed between data and the SM expectation in all validation regions. The observed yield, post-fit background estimates and significance [84] in each CR and VR are shown in Figure 5 after the background-only fit. The data event yields are found to be consistent with background expectations. Single-lepton tj 1L analysis channel Events with exactly one electron or muon are first selected for the SR if they also contain 1-4 jets with T > 30 GeV, one or two of which must be -tagged, and satisfy the preselection requirements described in Table 2. The fourth jet in the event, if present, is required to have T < 50 GeV. The second -tagged jet is required to have T > 30 GeV. The dominant SM background contributions in this channel are from , +jets, and single top ( channel) production. Discriminating variables, miss T , lep T , forward jet and Δ (ℓ 1 , 1 ) as described in Section 5.1, are used to define the signal region as shown in Table 5. To further improve the sensitivity, a boosted decision tree (BDT), provided by the Toolkit for Multivariate Analysis (TMVA) [87], is trained to distinguish between signal and background processes, using events passing the preselection defined in Table 2. BDT training settings found to be optimal for this analysis include number of trees set to 1500 with a maximum depth of 5 and gradient boosting. Cross-validation is performed to ensure there is no over-training. The following nine kinematic variables defined in Section 5.1 are used as input: • T and of the highest-T jet: T ( 1 ) and ( 1 ). • The transverse masses: lep T and ℓ T . • ℓ of the leading lepton and -jet system. Similarly to the tW 1L analysis channel, dominant backgrounds from the¯and +jets processes are estimated using MC simulation and dedicated CRs. The contribution from multijet production is found to be negligible. The remaining sources of background (single-top, +jets, diboson,¯,¯ℎ, production and rarer processes such as triboson,¯¯, and¯) are estimated from simulation. Dedicated control regions CR tj 1L (tt) and CR tj 1L (W) are designed to estimate the¯and +jets background processes, respectively, as shown in Table 5. Compared to the SR, the acceptance for¯events is increased in CR tj 1L (tt) by requiring exactly two -jets and large Δ (ℓ 1 , 1 ) values. 
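As a hedged illustration of the BDT configuration quoted above (1500 trees, maximum depth 5, gradient boosting, cross-validation against over-training), an analogous setup can be sketched with scikit-learn rather than TMVA; the input arrays merely stand in for the nine kinematic variables and carry no physics content.

```python
# Analogous gradient-boosted BDT sketched with scikit-learn; placeholders only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 9))        # placeholder for the nine kinematic inputs
y = rng.integers(0, 2, size=5000)     # placeholder signal/background labels

bdt = GradientBoostingClassifier(n_estimators=1500, max_depth=5)

# Cross-validation as a guard against over-training, analogous to the
# train/test comparison performed in the analysis.
scores = cross_val_score(bdt, X, y, cv=5, scoring="roc_auc")
print(f"AUC = {scores.mean():.3f} +/- {scores.std():.3f}")

bdt.fit(X, y)
bdt_score = bdt.decision_function(X)  # per-event discriminant that the SR cuts on
```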
The contribution from +jets events in the CR tj 1L (W) is enhanced by selecting events with one or two jets, exactly one -jet, and low lep T and large Δ (ℓ 1 , 1 ) values. No splitting based on the boson charge is applied. The normalisation factors and +jets are found to be 1.00 ± 0.27 and 1.10 ± 0.13 for the¯and +jets processes, respectively. To validate the¯background predictions, a validation region VR tj 1L (tt) is defined by requiring a BDT score that is lower than in the SR definition, as shown in Table 5. For the +jets background, a validation region VR tj 1L (W) is defined by requiring a lower lep T value than in the SR definition, as shown in Table 5. To ensure orthogonality to the corresponding CRs, events in these two VRs are required to have low Δ (ℓ 1 , 1 ). Figure 6 shows the post-fit distribution of representative kinematic variables and the BDT score for these two validation regions. Good agreement is observed between data and expectation in all validation regions. The observed yield, post-fit background estimates and significance [84] in each CR and VR are shown in Figure 7 after the background-only fit. The data event yields are found to be consistent with background expectations. Figure 7: Comparison of the predicted backgrounds with the observed numbers of events in the CRs and VRs associated with the tj 1L channel. The normalisation of the backgrounds is obtained from the background-only fit to the CRs. The upper panel shows the observed number of events and the predicted background yield. The 'Others' category includes contributions from +jets and production, and rare processes such as triboson,¯¯,¯, and Higgs boson production processes. All uncertainties are included in the uncertainty band. The lower panel shows the significance for each region. Systematic uncertainties Several sources of experimental and theoretical systematic uncertainty in the signal and background estimates are considered. Their impact is reduced through the normalisation of the dominant backgrounds in the control regions defined with kinematic selections resembling those of the corresponding signal region. Uncertainties are included as nuisance parameters, common across all regions, with Gaussian constraints in the likelihood fits, taking into account correlations between different regions. Uncertainties due to the numbers of events in the CRs are also included in the fit for each region. The magnitude of the contributions arising from uncertainties on the background normalisation factors and on the detector, theoretical modelling and statistics of the MC samples are summarised in Figure 8 as a relative uncertainty in the total background yield for each SR in the three analysis channels. Dominant detector-related systematic uncertainties arise from the jet energy scale and resolution, and from the -tagging efficiency and mis-tagging rates. The uncertainties in the jet energy scale and resolution are based on their respective measurements in data [68] and are derived as a function of the T and of the jet, as well as of the pile-up conditions and the jet flavour composition (light-quark, -quark, or gluon-initiated jets) of the selected jet sample. Their contributions to the SRs are the dominant experimental uncertainty components and are almost equivalent in all analysis channels. The systematic uncertainty in the -tagging efficiency is the second largest experimental uncertainty. It ranges from 4.5% for -jets with T ∈ [35,40] GeV up to 7.5% for -jets with high T (> 100 GeV). 
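The treatment of systematic uncertainties as Gaussian-constrained nuisance parameters mentioned above can be sketched with a single-bin counting experiment; the yields and the single 10% background uncertainty are invented for illustration.

```python
# Minimal single-bin profile-likelihood fit with one Gaussian-constrained
# nuisance parameter scaling the background prediction.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson, norm

n_obs, s_nom, b_nom, sigma_b = 25, 6.0, 20.0, 0.10   # 10% background systematic

def nll(params):
    mu, theta = params
    b = b_nom * (1.0 + sigma_b * theta)           # background scaled by the nuisance
    expected = max(mu * s_nom + b, 1e-9)
    # Poisson term for the bin times the Gaussian constraint on theta
    return -(poisson.logpmf(n_obs, expected) + norm.logpdf(theta))

fit = minimize(nll, x0=[1.0, 0.0], method="Nelder-Mead")
mu_hat, theta_hat = fit.x
print(f"mu_hat = {mu_hat:.2f}, theta_hat = {theta_hat:.2f}")
```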
The -tagging uncertainty is estimated by varying the -, T -and flavour-dependent scale factors applied to each jet in the simulation within a range that reflects the systematic uncertainty in the measured tagging efficiency and mis-tag rates in data [88]. The uncertainties associated with trigger requirements, pile-up modelling, and lepton reconstruction and energy measurements have a small or negligible impact on the final results; however, the lepton, photon and jet-related uncertainties are propagated to the calculation of the miss T , and additional uncertainties due to the energy scale and resolution of the soft term are included in the miss T . Finally, uncertainties in estimates of the non-prompt or misidentified leptons background are found to be below 1% in the tW 2L analysis channel and negligible for single-lepton selections. Uncertainties in the modelling of the SM background processes in MC simulation and their theoretical crosssection uncertainties are also taken into account. Furthermore, for these processes the 1.7% uncertainty in the combined 2015-2018 integrated luminosity is included. Modelling uncertainties in the¯and single-top backgrounds are dominant in all SRs for the tW 1L and tj 1L analysis channels, and the second leading source of uncertainty for the tW 2L SR. They are computed as the difference between the predictions from nominal samples and those from additional samples differing in hard-scattering generator and parameter settings, or by using internal weights assigned to the events depending on the choice of renormalisation and factorisation scales ( R and F , respectively, varied independently by factors of 2 and 0.5), initial-and final-state radiation parameters, and PDF sets. The impact of the PS and hadronisation model is evaluated by comparing the nominal generator with a P -B sample interfaced to H 7 [89,90], using the H7UE set of tuned parameters [90]. To assess the uncertainty due to the choice of hard-scattering generator and matching scheme, an alternative generator set-up using M G 5_aMC@NLO interfaced to P 8 is employed. For single-top production, the impact of interference between single-resonant and double-resonant top quark production and on the implementation of the lineshape in the generator is estimated in all analysis channels by comparing the nominal sample generated using the diagram removal method with alternative samples, including those generated using the diagram subtraction method [91]. For the tW 2L selection, this results in a 100% uncertainty in the subdominant contribution. For the¯+ / background, uncertainties due to parton shower and hadronisation modelling are evaluated by comparing the predictions from M G 5_aMC@NLO interfaced to P 8 and H 7, while the uncertainties related to the choice of renormalisation and factorisation scales are assessed by varying the corresponding event generator parameters up and down by a factor of two around their nominal values. Their contribution is dominant in the tW 2L analysis channel and subdominant or small in all other SRs. A similar approach is used to assess the uncertainties in the process, with an additional 20% uncertainty assigned to account for uncertainties in the effects of interference between the¯+ / and processes. The 20% is assigned on the basis of preliminary comparisons of alternative approaches developed to evaluate interference effects in the¯- [92] and¯-processes [93]. Finally, modelling and normalisation uncertainties in minor backgrounds are also considered. 
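A minimal sketch of the renormalisation/factorisation scale variation described above, forming an envelope from the per-variation yields; all yields are placeholders.

```python
# Seven-point scale envelope: vary the two scales by factors of 2 and 0.5,
# dropping the two opposite-direction combinations, and take the extrema.
nominal = 1000.0
variations = {(2, 1): 1060., (0.5, 1): 945., (1, 2): 1030.,
              (1, 0.5): 970., (2, 2): 1085., (0.5, 0.5): 920.}

up   = max(variations.values()) - nominal
down = nominal - min(variations.values())
print(f"scale uncertainty: +{up/nominal:.1%} / -{down/nominal:.1%}")
```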
For diboson and / +jets events, they are estimated by varying the renormalisation, factorisation and resummation scales up and down by a factor of two around the values used to generate the nominal samples. For¯, ,¯, ℎ, ℎ,¯¯, and triboson production processes, experimental and theoretical uncertainties in the event yields are also evaluated and found to be negligible. For the DM signal processes, both the experimental and theoretical uncertainties in the expected signal yields are considered, including the aforementioned luminosity uncertainty. Experimental uncertainties are found to be 3-35% (2.5-11%) across the -± and -tan planes for the tW 1L (tW 2L ) analysis channel, and in the range 3-14% as a function of ± for the tj 1L selection, independently of tan . In all SRs, the dominant uncertainty in the signal yields is found to be from the jet energy scale and resolution, followed by uncertainties in -tagging rates. Larger uncertainties for the tW 1L selections are found for the highest miss T -binned region, where MC statistical fluctuations are also relevant. In the modelling of the signal samples, uncertainties due to the variations of the renormalisation and factorisation scales are dominant. They are evaluated using a variation scheme wherein R and F are scaled simultaneously by either a factor of 2 or 0.5. For the PS and hadronisation uncertainties, alternative samples with varied A14 tune parameter values are used. The effect of each systematic variation on the acceptance and efficiency is evaluated for each analysis channel SR by comparing the variation samples with the corresponding nominal sample. The impact on the total yields for +DM,¯+DM and -channel production processes is also evaluated for each signal scenario and found to be between 5% and 15%. For the tW 1L and tW 2L analysis channels, the uncertainties vary between 5% and 30% across the -± and -tan planes, with the largest values obtained for samples characterised by low values of the ± mass and independently of tan . For the tj 1L analysis channel, uncertainties are found to be between 15% and 5% as a function of increasing ± for all tan values considered. Results The event yields for all SRs in the three analysis channels are reported in Tables 6 and 7 and are summarised in Figure 9, where the significance for each of the SRs is also presented. The SM background expectations resulting from background-only fits are shown along with their statistical plus systematic uncertainties. No significant deviations from the expected yields are observed in any of the signal regions considered. The largest background contribution in the tW 1L and tj 1L analysis channel SRs arises from¯production, whilst the contribution from¯is largest in the tW 2L SRs, with subdominant contributions from the¯, single-top (including ) and diboson processes. Other non-negligible background sources are +jets and +jets production. Table 6: Background-only fit results for the tW 1L and tW 2L signal regions. The backgrounds which contribute only a small amount (rare processes such as triboson,¯¯,¯and Higgs boson production processes, and non-prompt or misidentified leptons background) are grouped and labelled as 'Others'. The quoted uncertainties of the fitted SM background include both the statistical and systematic uncertainties. 
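The per-region significances shown in the figures are computed following Ref. [84]; one commonly used asymptotic approximation for a Poisson count with a Gaussian-constrained background, shown here purely for illustration, is sketched below.

```python
# Profile-likelihood-style significance for an observed count n given a
# background prediction b with Gaussian uncertainty sigma_b (illustrative only).
import numpy as np

def significance(n, b, sigma_b):
    s2 = sigma_b**2
    term1 = n * np.log(n * (b + s2) / (b**2 + n * s2))
    term2 = (b**2 / s2) * np.log(1.0 + s2 * (n - b) / (b * (b + s2)))
    z = np.sqrt(2.0 * (term1 - term2))
    return np.sign(n - b) * z          # signed by the direction of the deviation

print(f"Z = {significance(n=21, b=15.0, sigma_b=3.0):.2f}")
```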
Reasonable agreement is found between data and SM predictions in all distributions, although a mild excess of data events is found in the tW 2L distributions, accounting for a discrepancy lower than 2 considering statistical and systematic uncertainties. Figure 9: Results of the background-only fit extrapolated to all SRs. The normalisation of the backgrounds is obtained from the fit to the CRs. The upper panel shows the observed number of events and the predicted background yields. The 'Others' category includes contributions from rare processes such as triboson,¯¯,¯, and Higgs boson production processes. All uncertainties defined in Section 6 are included in the uncertainty band. The lower panel shows the significance in each SR. The significance calculation is performed as described in Ref. [84]. and (c) T2 in the tW 2L channel. Observed data are compared with the SM background predictions extrapolated from the background-only fit. All SR selections except the one on the quantity shown are applied. The SR requirement is indicated by the arrow. As the t ℓ is defined for events with at least two jets, the events with exactly one jet are included in the overflow bin. The 'Others' category includes contributions from rare processes such as triboson,¯¯,¯, and Higgs boson production processes. The expected distributions for representative scenarios with different , ± , and tan are shown for illustrative purposes. The overflow events, where present, are included in the last bin. The lower panels show the ratio of data to the background prediction. The hatched error bands indicate the combined experimental and MC statistical uncertainties on these background predictions. Statistical combination of the tW 1L and tW 2L analysis channels A statistical combination of results from the tW 1L and tW 2L channels is performed to maximise the sensitivity to +DM models. The simultaneous fit is performed such that the individual background normalisation factors, 1¯, 2¯, +jets and¯, are constrained in the same regions as the respective, individual analyses to avoid extrapolations into a different phase space. Experimental uncertainties in the background and signal are evaluated using the same methods as described in Section 5 and correlated across channels. Modelling uncertainties from the same source for a given process are correlated, e.g. all modelling uncertainties for¯are correlated across the regions. Signal systematic uncertainties are also correlated for the exclusion fits described in the next section. The predictions for SM backgrounds are, as expected, equivalent to those of the individual channels. In particular, the values for the¯background normalisation factor are found to be consistent for tW 1L and tW 2L estimates, 1¯= 0.97 ± 0.08 and 2¯= 1.00 ± 0.03, respectively. Model-independent limits The CL s technique [94] is used to place 95% confidence level (CL) upper limits on event yields from physics BSM for each signal region (model-independent limits). The profile-likelihood-ratio test statistic is used to exclude the signal-plus-background hypotheses for specific signal models. When normalised to the integrated luminosity of the data sample, results can be interpreted as corresponding upper limits on the visible cross section, vis , defined as the product of the BSM production cross section, the acceptance and the selection efficiency of a BSM signal. In the case of the tW 1L analysis channel, the miss T bins are defined inclusively, i.e. 
all events above the lowest bin-threshold in miss T are taken, to retain discovery potential. The SM predictions and their corresponding uncertainties are reported in Table 8. In the case of the tj 1L analysis channel, the last bin of the BDT score distribution, 0.9-1.0, is considered. Table 9 summarises the observed ( 95 obs ) and expected ( 95 exp ) 95% CL upper limits on the number of BSM events and on vis for all SRs. The 0 -values, which represent the probability of the SM background to fluctuate to the observed number of events or higher, are also provided and are capped at 0 = 0.5; the associated significance is provided in parentheses. Model-dependent limits Model-dependent exclusion limits are placed on the common signal parameters , ± , and tan in the 2HDM+ models considered in the analysis. Following the prescriptions in Ref. [5], the masses of the bosons , ± and are set to be equal. The three quartic couplings between the scalar doublets and the boson ( 1 , 2 and 3 ) are all set equal to 3, in order to reduce the number of parameters and evade the constraints from electroweak precision measurements [18]. To further reduce the parameter space, unitary couplings between the boson mediator and the DM particle ( = 1) are considered, with the DM particle mass set to = 10 GeV. The mixing angle is fixed at sin = 1/ √ 2, yielding full mixing between the and bosons and the largest cross sections for the processes of interest. Two sets of samples are considered, 2 varying either the ( , ± ) parameters and setting tan to unity, or varying the ( ± , tan ) parameters and setting = 250 GeV. The fit procedure takes into account correlations in the yield predictions between control and signal regions due to common background normalisation parameters and systematic uncertainties. The experimental systematic uncertainties in the signal are taken into account for the calculation and are assumed to be fully correlated with those in the SM background. The results of the combined fit for the tW 1L and tW 2L channels are interpreted using the sum of the respective signal yield estimates for each generated sample, with overlap between the samples removed according to the procedure illustrated in Ref. [22]. Figures 13(a) and 13(b) show the observed and expected exclusion contours as functions of ( , ± ) and ( ± , tan ), respectively, for the tW 1L and tW 2L channels, presented both individually and statistically combined. In this case, only the DM contribution of the signal is taken into account to better illustrate the sensitivity to single-top signatures. Figures 14(a) and 14(b) show the observed and expected exclusion contours for the same models, but also include the expected contributions from the DM¯process. Figures 13 and 14 also report the 1 and 2 uncertainty bands around the observed limit contour, as well as the variations obtained by changing the theoretical cross-section predictions for signal to be 15% above or below the nominal value (as this is expected to be largest uncertainty in the signal yields across the plane). For low ± masses, DM production generally dominates DM¯production, due to the contribution from the resonant ± diagrams, except when the mass difference ± − is small enough to suppress the branching fraction of ± → decay relative to ± →¯. On the other hand, DM¯contributions are dominant at high ± . The width of ± also increases at high ± , and it is about 20% of its mass for ± = 1 TeV. Moreover, as studied in Ref. 
[22], the DM¯cross section is proportional to 1/tan 2 , whereas the ± production cross section has a more complex dependence, with a minimum for tan ∼ 10 and an enhancement for high values of tan . For tan = 1 and ± ∼ + , the DM¯cross section also dominates the DM cross section. Assuming = 10 GeV and = 1, masses of below 190 GeV are excluded at 95% CL for all values of ± in the range 400-1400 GeV, and up to 330 GeV for ± around 800 GeV. When only DM contributions are taken into account, the constraints on decrease by 20-50 GeV. In the case where = 250 GeV, all values of ± between 450 GeV and 1.5 TeV are excluded for tan around and below unity, and scenarios with tan below 1.5 are excluded for masses of ± around 800 GeV. The sensitivity of the tj 1L channel is small compared to the other analysis channels. It targets the -channel production component of the DM signal, which has a smaller cross section with respect to the +DM process. The observed and expected cross-section limits at 95% CL as a function of ± for two representative values of tan are shown in Figure 15 assuming a fixed value of = 250 GeV. The limits are shown as a multiple of BSM , the theoretical cross section of the -channel DM production process. For tan = 0.3, ± masses above 900 GeV are excluded under these hypotheses, whilst no exclusion is obtained for tan = 0.5. in the tj 1L channel. Observed data are compared with the SM background predictions extrapolated from the background-only fit. All SR selections except the one on the quantity shown are applied. The SR requirement is indicated by the arrow. The 'Others' category includes contributions from +jets and production, and rare processes such as triboson,¯¯,¯, and Higgs boson production processes. The expected distributions for representative scenarios with different , ± , and tan are shown for illustrative purposes. The overflow events, where present, are included in the last bin. The lower panels show the ratio of data to the background prediction. The hatched error bands indicate the combined experimental and MC statistical uncertainties on these background predictions. Figure 13: The expected and observed exclusion contours as a function of ( , ± ) (top) and ( ± , tan ) (bottom), assuming only +DM contributions, for the individual tW 1L (purple line) and tW 2L (pink line) analysis channels, and for their statistical combination (green line). Experimental and theoretical systematic uncertainties, as described in Section 6, are applied to background and signal samples and illustrated by the ±1 standard-deviation and ±2 standard-deviation yellow bands and the green dotted contour lines, respectively, for the statistical combination. Figure 14: The expected and observed exclusion contours as a function of ( , ± ) (top) and ( ± , tan ) (bottom), assuming DM¯and DM contributions, for the individual tW 1L (purple line) and tW 2L (pink line) analysis channels, and for their statistical combination (green line). Experimental and theoretical systematic uncertainties, as described in Section 6, are applied to background and signal samples and illustrated by the ±1 standard-deviation and ±2 standard-deviation yellow bands and the green dotted contour lines, respectively, for the statistical combination. Conclusion A search for dark matter has been performed in the context of a two-Higgs-doublet model together with an additional pseudoscalar mediator, , which decays into the dark-matter particles. 
Processes where the pseudoscalar mediator is produced in association with a single top quark in the 2HDM+a model are explored for the first time at the LHC. Several final states which include either one or two leptons (electrons or muons) and a significant amount of missing transverse momentum are considered. The analysis makes use of proton-proton collision data at √s = 13 TeV collected by the ATLAS experiment during LHC Run 2 (2015-2018), corresponding to an integrated luminosity of 139 fb−1. No significant excess above the Standard Model predictions is found. The results are expressed as 95% confidence-level limits on the 2HDM+a signal models considered. Assuming dark-matter particles with mass m_χ = 10 GeV and coupling g_χ = 1 to the mediator, and full mixing between the a and A bosons, masses of a below 200 GeV are excluded at 95% CL for all values of m_H± in the range 400-1400 GeV, and up to 330 GeV for m_H± around 900 GeV. For m_a = 250 GeV, all values of m_H± below 1.5 TeV are excluded for tan β below unity, and scenarios with tan β below 1.5 are excluded for masses of H± around 800 GeV. The ATLAS Collaboration
14,660.8
2020-11-18T00:00:00.000
[ "Physics" ]
Bipolar Effects in Photovoltage of Metamorphic InAs/InGaAs/GaAs Quantum Dot Heterostructures: Characterization and Design Solutions for Light-Sensitive Devices The bipolar effect of GaAs substrate and nearby layers on photovoltage of vertical metamorphic InAs/InGaAs in comparison with pseudomorphic (conventional) InAs/GaAs quantum dot (QD) structures were studied. Both metamorphic and pseudomorphic structures were grown by molecular beam epitaxy, using bottom contacts at either the grown n +-buffers or the GaAs substrate. The features related to QDs, wetting layers, and buffers have been identified in the photoelectric spectra of both the buffer-contacted structures, whereas the spectra of substrate-contacted samples showed the additional onset attributed to EL2 defect centers. The substrate-contacted samples demonstrated bipolar photovoltage; this was suggested to take place as a result of the competition between components related to QDs and their cladding layers with the substrate-related defects and deepest grown layer. No direct substrate effects were found in the spectra of the buffer-contacted structures. However, a notable negative influence of the n +-GaAs buffer layer on the photovoltage and photoconductivity signal was observed in the InAs/InGaAs structure. Analyzing the obtained results and the performed calculations, we have been able to provide insights on the design of metamorphic QD structures, which can be useful for the development of novel efficient photonic devices. Background In the last two decades, composite materials containing semiconductor nanostructures have found great use in photonic applications due to light sensitivity, ease and low cost of fabrication, spectral tunability, and highly efficient emission with short lifetime [1][2][3][4][5]. In(Ga)As quantum dot (QD) heterostructures is an important class of infrared-sensitive nanostructures, which has been widely employed in various photonic devices, such as lasers [2,6], single-photon sources [7,8], photodetectors [9][10][11][12][13], and solar cells [14][15][16]. Numerous investigations have been devoted to improve the photoelectric properties of such light-sensitive devices. For example, the photosensitivity range can be extended via the excitation through intermediate bandgap [17,18] or multiple exciton generation [19,20], so that the power conversion efficiencies of QD-based solar cells can exceed in theory the limits of single-bandgap solar cells [21]. The methods like strainbalancing [22] and misfit management technique [23] as well as the thermal annealing [24] are used to reduce strains in these structures, operating the working range [25] as well as increasing the photoresponse due to the suppression of strain-related defects [26] that can act as recombination centers. An efficient method for the strain reduction is based on the growth of an InGaAs metamorphic buffer (MB) instead of the conventional GaAs one. As a result, InAs/ InGaAs QD structures have attracted much interest in last decade [27][28][29]. By growing the QDs on the InGaAs MB, one can observe essential differences in the formation process and QD optical properties compared with conventional ones in GaAs matrix [25,[30][31][32][33]. For example, the InGaAs confining layer reduces the lattice mismatch between QDs and buffer and, hence, strains in QDs. As a result, the bandgap of InAs is reduced and a significant increase in the emission wavelength is observed [34]. 
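To put the strain-reduction argument above in numbers, a back-of-the-envelope Vegard's-law estimate of the lattice mismatch between InAs and the two buffer materials can be made; the room-temperature lattice constants used below are standard literature values.

```python
# Lattice mismatch of InAs QDs on a GaAs buffer versus an In0.15Ga0.85As
# metamorphic buffer, using Vegard's law for the ternary alloy.
A_GAAS, A_INAS = 5.6533, 6.0583   # angstrom

def a_ingaas(x_in):
    """Vegard's-law lattice constant of In(x)Ga(1-x)As."""
    return x_in * A_INAS + (1.0 - x_in) * A_GAAS

def mismatch(a_layer, a_substrate):
    return (a_layer - a_substrate) / a_substrate

print(f"InAs on GaAs:           {mismatch(A_INAS, A_GAAS):.1%}")
print(f"InAs on In0.15Ga0.85As: {mismatch(A_INAS, a_ingaas(0.15)):.1%}")
```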
Application of the MB as a confining material allows to shift its value into the telecommunication window at 1.3 and 1.55 μm [28,29,35,36]. As well, there have been hopeful attempts to apply the photoelectric properties of the metamorphic InAs QD structures on the design of such light-sensitive devices as metamorphic infrared photodetectors [11][12][13] and solar cells [37][38][39]. Some studies were carried out to develop structure design [25,[31][32][33] and other ones to improve photoelectric properties [39,40]. Investigations are going on to reduce the strains in the heterostructures [34,41], as this leads to a substantial improvement in the photocurrent density and spectral response of solar cells [39,40] as well as in the photoemission efficiency of such structures [29,32,42]. Development of the light-sensitive devices requires indepth study of the photoelectric properties. Photovoltage (PV) or photoconductivity (PC) studies is an ideal tool for the determination of the photoresponse as function of light energy, transitions between levels, carrier transport, and operating range of the device [10,43,44]. However, despite that some studies of the photoelectric properties of structures with metamorphic InAs QDs have been performed in last years [37][38][39]43], full aspects of the photoresponse mechanism still remain unclear, as along with the influence of the MB on the properties of the nanostructures. In particular, effects of the GaAs substrate and related interfaces on the photoelectric spectra of InAs/InGaAs/GaAs QD structures have not been explored in details. Although significant efforts are devoted to avoid the substrate influence, the photoresponse is affected by both the substrate and nearby layers grown by molecular beam epitaxy (MBE). Thus, while the applied contact geometry is to retain the bottom layers and substrate electrically inactive, a notable negative effect of them on PV and photocurrent has been detected by us in a previous investigation [43]. Very recently, we compared the photoelectric properties of the metamorphic InAs/In 0.15 Ga 0.85 As QD structure with those of a standard InAs/GaAs QD one and found that the photocurrent of metamorphic InAs/In 0.15 Ga 0.85 As heterostructures was not affected by levels related to defects in the vicinity of QD [45]. Furthermore, it has been concluded that efficient photonic devices at 1.3 μm can be developed with similar nanostructures as an active material. In this work, we continue the study of photoelectric properties of the heterostructures with InAs QDs embedded in either the metamorphic In 0.15 Ga 0.85 As or conventional GaAs buffers, focusing on the effect of GaAs substrate and nearby MBE layers. In order to reach a clear understanding of the role of substrate and buffer layers, we considered the structures with bottom contacts on (i) the In 0.15 Ga 0.85 As buffer layer or (ii) the bottom GaAs substrate (see Fig. 1). Thus, depending on the bottom contact selection, the current flowed through (i) only the QDs and buffer layers and (ii) the complete structure including the substrates and their interfaces with MBE layers. The analysis of the results and calculations allowed us to provide an insight into the best design for light sensors on metamorphic QD structures. Methods The structures were prepared by MBE on (001) semiinsulating (si) GaAs substrates. Substrates were n-type, with values of 3 × 10 7 cm −3 residual carrier concentration, thickness of 500 μm, and a resistivity of 2 × 10 7 Ω × cm. 
The metamorphic InAs/InGaAs QD structures consist of (i) 100-nm n + -GaAs buffer layer grown at 600°C, (ii) 300nm thick n + -In 0.15 Ga 0.85 As MB with n = 5 × 10 18 cm −3 grown at 490°C, (iii) 500-nm thick n-In 0.15 Ga 0.85 As MB with n = 3 × 10 16 cm −3 grown at 490°C, (iv) 3.0 monolayers (MLs) of self-assembled InAs QDs embedded in a 20-nm undoped In 0.15 Ga 0.85 As layer grown at 460°C, (v) 300-nm n-In 0.15 Ga 0.85 As upper capping layer with n = 3 × 10 16 cm −3 grown at 490°C, and (vi) 13-nm p + -doped In 0.15 Ga 0.85 As cap with p = 2 × 10 18 cm −3 grown at 490°C (Fig. 1). The growth rate was 1.0 ML/s, except for the QDs that were grown with a growth rate of 0.15 ML/s. The undoped layers are necessary to separate QDs from n-doped regions and, hence, to reduce the influence of non-radiative recombination centers, thus maximizing the QD light emission efficiency [30,46]. The standard InAs/ GaAs QD structures consist of (i) 300-nm n + -GaAs buffer layer with n = 5 × 10 18 cm −3 grown at 600°C, (ii) 500-nm thick n-GaAs MB with n = 3 × 10 16 cm −3 grown at 600°C, (iii) 3.0 MLs of InAs QDs embedded in a 20-nm undoped GaAs layer grown at 460°C, and (iv) 500-nm n-GaAs upper capping layer with n = 3 × 10 16 cm −3 grown at 600°C. The growth rate was 1.0 ML/s, except for the QDs that were grown with a growth rate of 0.15 ML/s. Atomic force microscopy (AFM) images of the uncapped structures are shown in Fig. 1. By analysis of AFM data on similar structures, most frequent values of QD sizes were estimated to be 20 nm (diameter) and 4.9 nm (height) for metamorphic QDs and 21 nm (diameter) and 5.0 nm (height) for standard QDs [30,31,45]. For photoelectric measurements, circular 500-μm thick mesas were etched up on the structures down to bottom buffer n + layers; Au rectifying top contacts with a diameter of 400 μm and a thickness of 70 nm were then evaporated on the top of mesas. To obtain ohmic contacts on the bottom n + -InGaAs and n + -GaAs buffer layers, respectively, Au 0.83 Ge 0.12 Ni 0.05 alloy was deposited at 400°C for 1 min in nitrogen atmosphere. Thick indium ohmic contacts were made on the bottoms of substrates in other pieces of the same samples, in order to have measurements also with current flowing through the GaAs buffer and si-GaAs substrate. The ohmicity of the contacts has been verified by the I-V measurements, contacting to a piece of substrate; the current-voltage characteristics were found to be linear (data not shown). Following the approach proposed in Ref. [47] and used in other works [48,49], the thin p + -InGaAs layer between the Au contact and the n-InGaAs layer was used to enhance the Schottky barrier height, since the structure obtained by the simple deposition of a metal on n-InGaAs exhibited a relatively low Schottky barrier height. Hence, the deposition of thin p + -InGaAs layer enlarges the Schottky barrier height to be similar with that of Au-GaAs contact, maintaining resemblance of barrier profile for both the metamorphic and InAs/GaAs structures. For structure and contact designing as well as understanding of the energy profile for both structures composed by the In 0.15 Ga 0.85 As or GaAs MBs, In(Ga)As QDs, undoped cap layer, and Au/AuGeNi contacts, the calculations were carried out using Tibercad software [50]. 
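Before turning to the full drift-diffusion band profiles discussed next, a zeroth-order textbook estimate of the depletion width under the Au Schottky contact gives a sense of the relevant length scales; the built-in potential (~0.7 V) and relative permittivity (~13) assumed below are illustrative values, not the TiberCAD inputs.

```python
# One-sided abrupt-junction depletion width under the Schottky contact on the
# n-In0.15Ga0.85As capping layer (n = 3e16 cm^-3 as quoted in the growth recipe).
import numpy as np

q     = 1.602e-19        # C
eps0  = 8.854e-12        # F/m
eps_r = 13.0             # assumed for In0.15Ga0.85As
n_d   = 3e16 * 1e6       # donor density, cm^-3 -> m^-3
v_bi  = 0.7              # assumed built-in potential, V

w = np.sqrt(2.0 * eps_r * eps0 * v_bi / (q * n_d))
print(f"depletion width ~ {w*1e9:.0f} nm")
```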
Band profiles were modeled in the drift-diffusion approximation, taking into consideration strain conditions, densities of traps related to defects at the InGaAs/ GaAs interface region, depletion layers near contacts, and appropriate Schottky barrier heights. For the calculation of the metamorphic QD band profiles, sizes from AFM data were considered and strain effects were included, following an approach already validated in Refs. [42,51]. The calculation of QD quantum levels is out of the scope of this paper, and QD modeling has been performed previously in Ref. [45]. In this work, however, we calculate band profiles of the whole heterostructure including the substrate. Vertical photocurrent and PV spectra were measured in the 0.6 to 1.8 eV range using normal incidence excitation geometry at room temperature (RT) (300 K) and same light source intensity (1.5 mW/cm 2 ). The photocurrent was measured using a current amplifier and direct current technique [10,[43][44][45], with 1 V bias. The current was measured as a voltage signal drop across a series load resistance of 100 kΩ (see the inset in Fig. 5). Photoluminescence (PL) excited at 532 nm was measured at 300 K. Some information concerning structures and methods is described in more detail in Ref. [45]. A. Photoelectric Characterization The PV spectra of both InAs/In 0.15 Ga 0.85 As and InAs/ GaAs samples are presented in Fig. 2. Contacted to only the MBE layers, thick n-InGaAs, or n-GaAs buffers, the features of the spectra have been described elsewhere [45]. The spectrum threshold of the InAs/In 0.15 Ga 0.85 As at 0.88 eV is related to the ground state absorption in the QD ensemble, which corresponds to the onset of the QD band in the PL spectrum at RT measured earlier [45] (Fig. 2a). The metamorphic QD emission spectrum shows a wide band at 0.94 eV which is in the telecom range at 1.3 μm (0.95 eV), while the QD PL demonstrates a good efficiency, as it has been noted in earlier papers [30,45,46,52]. The wide band of PV spectrum peaked at 1.24 eV and with edge at 1.11 eV is due to the carrier generation in the In 0.15 Ga 0.85 As MB and wetting layer (WL) including the way through the shallow levels. It should be added that the In 0.15 Ga 0.85 As bandgap calculated for MBE-grown layer is 1.225 eV [53], and the WL PL is observed at 1.2 eV [45]. Furthermore, a significant sharp fall above 1.36 eV is observed being caused likely by an indirect effect of the heavy doped GaAs buffer layer located outside the intercontact region that has been mentioned in Ref. [43]. The buffer layer is filled by numerous shallow levels and band non-uniformities originated from MBE growth defects and doping centers that redshift the interband absorption of GaAs [33,46,54,55]. For the conventional buffer-contacted InAs/GaAs nanostructure, the onset at 1.05 eV of the PV spectrum in Fig. 2b originates from the QD ground state, as confirmed by the PL spectrum, while the sharp step at 1.3 eV can be related to the transitions in the WL [56]. The feature after 1.39 eV is obviously related to absorption of the doped GaAs upper buffer layer. A mechanism for this effect will be discussed in detail below. As it is mentioned above, the sharp fall of PV signal above 1.36 eV in the InAs/In 0.15 Ga 0.85 As structure is related to n + -GaAs bottom layer capping the substrate. To clear effects of the layers beneath the bottom AuGeNi contact on the photoresponse, we have studied photoelectric properties of the structures using an indium contact at the substrate back. 
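For reference, the photon energies of the spectral features discussed in this section convert to wavelength as follows (E[eV] ≈ 1239.84/λ[nm]); the list of features simply restates values quoted in the text.

```python
# Quick photon energy <-> wavelength conversions for the quoted spectral features.
HC = 1239.84  # eV*nm

features = [("QD ground state", 0.88), ("telecom target", 0.95),
            ("In0.15Ga0.85As bandgap", 1.225), ("GaAs-related onset", 1.36)]
for label, ev in features:
    print(f"{label:>24s}: {ev:.3f} eV -> {HC/ev/1000:.2f} um")
```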
Unlike the previous Au and AuGeNi contact geometry that allows for the unipolar PV, the bipolar signal has been observed for the structures contacted to the sample top and substrate back (Fig. 2). It is necessary to note that the PV sign changes along the photon energy axis, and in Fig. 2, the spectra of both the samples are inverted by sign of voltage underneath 1.68 and 1.44 eV for the InAs/ In 0.15 Ga 0.85 As and InAs/GaAs QD structures respectively. Here, PV is considered to be positive when, as in the case of contact to the MBE layers, the positive potential is applied to the top Au contact while the negative one is applied to the bottom contact. All the optical transitions mentioned above contribute to the PV signal of the structures in the substrate-top contact geometry. However, when measuring PV through the substrate, the signal onset for the metamorphic and conventional structures occurs at about 0.72 eV. The onset at 0.72 eV is attributed to the transition from the EL2 defect center located in si-GaAs substrate and related interfaces near 0.75 eV below the GaAs conduction band [57], taking into account the possibility of transition through the shallow levels of defects [46,54,55]. The aspects related to their location as well as the EL2 PC onset redshift have been discussed in detail elsewhere [10,45]. As no signal underneath the QD-related bands was observed in the spectra of the samples contacted to the InGaAs or GaAs buffers (Fig. 2), we conclude that the substrate and related interfaces have no substantial influence on the properties of MBE-grown heterostructures. To understand the appearance of the PV signal in our samples, one should look at Fig. 3 where we show the calculated band profiles along the growth direction. Detailed explanation of PV origin between the Au and AuGeNi contacts is given in the previous paper [45]. Summing up, the light-excited electrons (holes) drift predominantly toward the substrate (surface), giving a positive potential at the Au contact and a negative one at the AuGeNi contact. Explaining the bipolar PV from the structures with the electrically active si-GaAs substrates, one can consider their calculated band structures in Fig. 3. Like before, the carriers generated in the top layers as well as in the QDs and WL might give "+" at the top and "−" at the substrate. The Fermi level in the semi-insulating substrate is located much lower than the one in the n-doped MBE layers. Therefore, the band bending near the n + -GaAs/substrate interface is opposite to that in the rest of the MBE-grown structure (see the Fig. 3). Hence, the excitation in the n + -GaAs layer and substrate (above 1.36 eV) gives an opposite PV signal to that from the QDs, WL, and buffers. The same applies to the excitation from EL2 defects (above 0.72 eV) of the GaAs substrate and especially EL2-like defects in n + -GaAs/GaAs strained region [46,57]. Contribution of the substrate and n + -GaAs to the total PV signal is essentially stronger than that of the upper MBE layers, and the negative signal of PV is generally observed at lower excitation energies, while the impact of InGaAs layers and nanostructures appears as valleys on the respective spectral curves in Fig. 2. This is clearly seen by comparing the QDs, WL, and buffer spectral bands on the PV curves of the structures contacted to MBE buffers with the valleys in spectra of the substrate-top-contacted samples. 
For the higher energies, however, the excitation is absorbed closer to the sample surface, not reaching the deeper MBE layers and substrate, which are the main source of the negative signal. Hence, the PV signal becomes positive at larger energies. So, the presence of the electrically active si-substrate leads to the competition between the spectral components related to the upper MBE-grown layers and those related to the substrate defects and the n+-GaAs absorption. Fig. 2 (color online): Room-temperature PV spectra of the (a) metamorphic InAs/In0.15Ga0.85As and (b) InAs/GaAs QD structures; PV was measured with contacts to only the MBE layers [45] (black curves) and through the semi-insulating si-GaAs substrate (blue). The PV spectra measured through the si-GaAs substrate are inverted in sign below 1.68 and 1.44 eV for (a) and (b), respectively. Low-energy parts of the curves are given in the insets; the QD PL bands measured previously [45] for both structures are shown to mark the QD ground-state energy (red). Otherwise, a similar characteristic feature above 1.35 eV has been observed by means of surface PV spectroscopy in a recent detailed study of p-doped InAs/GaAs QD and InAs/InGaAs dot-in-well structures based on si-GaAs substrates [58]. The drastic fall of the PV amplitude has been explained there, unlike in our case, by different charge carriers generated below and above 1.35 eV. However, taking into account the drastic differences between the structures of that study and the present ones, as well as the specifics of the applied methods, we keep our interpretation of our own results. Based on the concept of the band bending below the AuGeNi contact, one can explain the sharp fall of the PV signal in the buffer-contacted metamorphic InAs/InGaAs structure above 1.36 eV observed in Fig. 2a. This spectral feature is due to the effect of the substrate and the deepest MBE n+-GaAs layer. Indeed, the electrons generated there move under the intrinsic field to the AuGeNi contact, evoking an additional electric field there, while the barrier due to the band bending at the InGaAs/GaAs heterojunction is obviously too low to be an essential obstacle for the charge carriers. This additional field flattens the band bending in the upper layers, which directly contribute to the PV, and hence reduces the supply of the carriers photoexcited above the n+-GaAs layer and, as a consequence, the total PV signal. A small feature near 1.39 eV is observed in Fig. 2b in the spectrum of the pseudomorphic sample contacted to the MBE buffers, though a drastic fall of the signal like that in the metamorphic structure should be expected above 1.36 eV, taking into account the similar band bending near the n+-GaAs/substrate interface. Such a feature is not an attribute of only the substrate and n+-doped GaAs; such transitions were detected in In(Ga)As/GaAs QD structures based on p-doped [58] and undoped GaAs [10,55]. These transitions obviously occur also in the upper GaAs layers of our pseudomorphic structure, mostly compensating the negative effect of the near-substrate layers on the PV signal. As a result, only a negligible influence of the near-substrate layers can be observed on the black curve for the InAs/GaAs sample in Fig. 2b, rather than the fall, originating from the deeper GaAs layers, seen in the curve of the metamorphic one, despite a similar bipolar effect observed with direct participation of the substrate in PV formation.
The reason for the small feature after 1.39 eV in the spectrum of the InAs/GaAs sample contacted to the MBE buffers can be different from the one discussed above for the metamorphic InAs/InGaAs sample. In our opinion, it is due to the slight fall of the signal caused by the absorption edge of the upper MBE-grown 500-nm thick GaAs buffer shading the QDs and WL, which are more efficient contributors to the PV at those photon energies. Indeed, electrons and holes generated in the QDs and WL are carried to different sides and avoid recombination, unlike the case of volume generation, where recombination is much more probable. This is the main reason for the effective detection of photocarriers coming from even a single layer of QDs and WL. Photons of higher energies are absorbed band-to-band in the near-surface n-GaAs buffer layer and the electrons escape into the sample volume away from the holes, leading to the sharp rise of the PV above 1.4 eV. The correctness of the reason suggested for the 1.36 eV feature in the buffer-contacted InAs/GaAs structure, rather than that assumed for the metamorphic one, is confirmed by studies of solar cells based on InAs/GaAs structures with the bottom contacts on the n+-GaAs substrates [18,24,59], i.e., with a monotonic band bending throughout the whole sample from contact to contact. Their PV spectra reveal the same feature without a barrier related to the interface between the MBE layers and the substrate. Furthermore, a narrow dip was observed in the same spectral range in the PC spectra of InGaAs/GaAs structures with lateral contact geometry and no intrinsic field [10,55]. The PC spectra of the structures obtained at 1 V bias, directed like the intrinsic field in the upper layers of the structures ("−" at the top and "+" at the bottom contact), are presented in Fig. 4. The PC spectra for the structures contacted to the MBE layers are very similar to the PV ones in Fig. 2. The components from the QDs, WLs, InGaAs or GaAs buffers as well as the n+-GaAs layer are observed at the same energies. Concerning the structures with the bottom contact on the si-GaAs substrate, the PC spectra have thresholds near 0.72 eV related to the EL2 defect center absorption. The features of the PC spectra for the structures contacted to the MBE layers presented in Fig. 4 correspond mainly to those in the PV spectra in Fig. 2 considered above. Concerning the structures with the bottom contact on the si-GaAs substrate, with the EL2 center component, there is a competition between the signal from absorption in the MBE layers and that from the EL2-related levels, as discussed above. However, the shapes of the curves allow us to conclude that no charge carriers excited within the layers above the n+-GaAs participate in the PC; this is particularly relevant for the metamorphic QD structure spectrum. Obviously, the electrons do not reach the bottom because of the high potential barrier (see Fig. 3) induced by the si-substrate. The substrate has too high a resistance, and the main drop of the applied bias occurs across it; hence, no barrier lowering occurs. So, one can note that the PV and photocurrent are negatively affected by the substrate-related n+-GaAs layer: the absorption above 1.36 eV causes a drastic signal reduction. The main cause of the barrier below the AuGeNi contact is the si-GaAs substrate with a rather low position of the Fermi level, resulting in band bending opposite to that in the top of the structure.
This is the only effect of the substrate observed in the PV for this contact geometry, and it manifests itself even with a rather thick (400 nm) intermediate layer between the bottom contact and the substrate. B. Substrate-Heterostructure Intermediate Layer Design Solutions From a practical point of view, it can be concluded that such a design of an InAs/InGaAs structure with a si-GaAs substrate is not useful for vertical light-sensitive device engineering, especially together with a relatively thin n+-doped buffer, even when the contact configuration eliminates the current flow through the substrate. The space-charge area formed in the n+-GaAs/substrate interface region compels the charge carriers excited here to move oppositely to the ones excited in the metamorphic structure, as in Figs. 3 and 5a, thus generating an opposite PV signal and reducing the total quantum efficiency of the sample. Hence, for devices based on light absorption, a different structure design should be considered. We believe such an improvement needs to be suggested because many research groups consider the si-GaAs substrate as a basis for novel p-n-type QD infrared photodetectors [11][12][13] and solar cells [15]. Simple thickening of the n+-GaAs buffer under the metamorphic structure does not seem to be a very good idea. Though such a buffer could absorb more excitation quanta above 1.37 eV and shadow the interface and substrate below, its thickness would have to be very high, because the 800 nm of narrower-bandgap InGaAs material above is insufficient to completely suppress the negative bipolar effects. Moreover, even a very thick n+-GaAs buffer cannot exclude the negative effect of the EL2-like centers, which are located mainly in the substrate and near its interface to the MBE layers. Nevertheless, as the charge carriers have a limited mean free path, thickening of the n+-GaAs layer can still partially reduce the substrate contribution. A better improvement could be provided by growing a thin barrier layer for the electrons coming from the substrate, as shown in Fig. 5b. For the calculations, a 10-nm thin undoped Ga0.3Al0.7As barrier layer has been chosen. Such a barrier could strongly confine the electrons excited in the substrate within the n+-GaAs layer. Similar high-ohmic layers grown from wide-bandgap materials such as InAlAs, GaAlAs, and AlAs have been used in laser structures to avoid charge-carrier leakage from the active region of the optoelectronic device [60]. However, for the case of a GaAs-In0.15Ga0.85As based device, Ga0.3Al0.7As is the best match due to its wide bandgap and small lattice mismatch with the epitaxial layers. By decreasing the carrier-induced field at the AuGeNi contact, it can suppress the negative effect of the substrate region on the photoresponse, especially in combination with an increase in the n+-InGaAs layer thickness. Yet, a more optimal design of the vertical structures seems to be the use of a monotonic doping gradient, including an n+-doped GaAs substrate, as proposed in Refs [14,39,40]. This design is the most efficient and at the same time the simplest. If the substrate is doped similarly to the capping n+-layer or more heavily, this causes the band bending depicted in Fig. 5c. Furthermore, an essential advantage of such a substrate could manifest itself in solar cell design. A low-resistivity substrate allows for utilization of the configuration with the "−" contact on the sample bottom [24,[38][39][40]59], without shadowing the MBE structure from the sunlight.
Conclusions We have shown that photoelectric characterization provides evidence of a critical influence of the deep levels on the photoelectric properties of vertical metamorphic InAs/In0.15Ga0.85As and pseudomorphic (conventional) InAs/GaAs QD structures in the case of an electrically active si-GaAs substrate. Both nanostructures manifest a bipolar PV caused by a competition of the components originating from the oppositely sloped band profiles near the GaAs substrate and the bottom MBE n+-GaAs layer on one side and the rest of the MBE-grown structure on the other side. An alternative contact configuration, which avoids current flow through the bottom layers, demonstrates a unipolar PV. The latter configuration, together with thick buffers on the substrate, strongly suppresses the influence of the photoactive deep levels originating from the interfaces with the si-GaAs substrate on the photoelectric properties of the nanostructures. However, a notable negative indirect effect of the substrate on the photovoltage and photocurrent signal from the InAs/InGaAs structure is observed when the excitation is absorbed in the substrate and the near-substrate n+-GaAs MBE layer. Analyzing the obtained results and the performed calculations, we have been able to provide insights on the design of metamorphic QD structures, which can be useful for the development of novel efficient photonic devices. Fig. 5: Band profiles near the n+-GaAs/GaAs interfaces of the metamorphic structure grown on a si-substrate with an n+-GaAs layer thickness of (a) 100 nm (present sample), (b) 100 nm plus a 10-nm thin Ga0.3Al0.7As barrier layer, and (c) a structure like the present one but grown on an n+-substrate doped similarly to the 100-nm thick n+-GaAs layer above. The calculations were carried out using Tibercad software [50]
6,713
2017-10-05T00:00:00.000
[ "Materials Science" ]
Fidelity and Fisher information on quantum channels The fidelity function of quantum states has been widely used in quantum information science and frequently arises in the quantification of optimal performances for the estimation and discrimination of quantum states. A fidelity function on quantum channels is expected to have similarly wide applications in quantum information science. In this paper we propose a fidelity function on quantum channels and show that various distance measures on quantum channels can be obtained from this fidelity function; for example, the Bures angle and the Bures distance can be extended to quantum channels via this fidelity function. We then show that the distances between quantum channels lead naturally to a new Fisher information which quantifies the ultimate precision limit in quantum metrology; the ultimate precision limit can thus be seen as a manifestation of the distances between quantum channels. We also show that the fidelity on quantum channels provides a unified framework for perfect quantum channel discrimination and quantum metrology; in particular, we show that the minimum number of uses needed for perfect channel discrimination is exactly the counterpart of the precision limit in quantum metrology, and various useful lower bounds for the minimum number of uses needed for perfect channel discrimination can be obtained via this connection. I. INTRODUCTION Fidelity, as a measure of the distinguishability between quantum states [1][2][3], plays an important role in many areas of quantum information science; for example, it is related to the precision limit in quantum metrology [4], serves as a measure of entanglement preservation through noisy quantum channels [5] and of entanglement preservation in quantum memory [6], and it has also been used as a characterization method for quantum phase transitions [7] and as a criterion for successful transmission in formulating quantum channel capacities [8]. Unlike the fidelity of quantum states, which is defined directly on quantum states, most commonly used measures for the distinguishability of quantum channels are defined indirectly through the effects of the channels on the states.
For example, the diamond norm, defined as $\|K_1 - K_2\|_\diamond = \max_{\rho_{SA}} \|K_1 \otimes I_A(\rho_{SA}) - K_2 \otimes I_A(\rho_{SA})\|_1$ [9-11] (here $\|X\|_1 = \mathrm{Tr}\sqrt{X^\dagger X}$, $\rho_{SA}$ denotes a state on system+ancilla, and $I_A$ denotes the identity operator on the ancillary system), is induced by the trace distance on quantum states $\|\rho_1 - \rho_2\|_1$; another measure on quantum channels, defined as $\arccos F_{\min}(K_1, K_2) = \arccos \min_{\rho_{SA}} F_S[K_1 \otimes I_A(\rho_{SA}), K_2 \otimes I_A(\rho_{SA})]$ [12,13], is induced by the fidelity on quantum states $F_S(\rho_1, \rho_2) = \mathrm{Tr}\sqrt{\rho_1^{1/2}\rho_2\rho_1^{1/2}}$. These induced measures lack a direct connection to the properties of the quantum channels themselves, which severely restricts the insights that can be gained from them. A measure defined directly on quantum channels is expected to provide more insight and is thus highly desirable.

In this paper we provide a fidelity function defined directly on quantum channels, and show that this fidelity function, together with the classical fidelity on probability distributions and the fidelity on quantum states, forms a hierarchy of fidelity functions in terms of optimization. The fidelity function on quantum channels also leads to various distance measures defined directly on quantum channels; in particular, we show that the Bures angle and the Bures distance can be extended to quantum channels. We then show that the distance between quantum channels leads naturally to a new Fisher information on quantum channels which quantifies the ultimate precision limit in quantum metrology. We also show that this fidelity function provides a unified framework for perfect quantum channel discrimination and quantum metrology: the minimum number of uses needed for perfect channel discrimination is exactly the counterpart of the precision limit in quantum metrology, and various useful lower bounds on this minimum number can be obtained via this connection.

II. FIDELITY FUNCTION ON QUANTUM CHANNELS

We start by defining the fidelity function on unitary channels and then extend it to noisy channels. For an $m \times m$ unitary matrix $U$, we denote by $e^{-i\theta_j}$ the eigenvalues of $U$, where $\theta_j \in (-\pi, \pi]$ for $1 \le j \le m$, and we call the $\theta_j$ the eigen-angles of $U$. We define (see also [14-16]) $\|U\|_{\max} = \max_{1 \le j \le m} |\theta_j|$, and $\|U\|_g$ as the minimum of $\|e^{i\gamma}U\|_{\max}$ over equivalent unitary operators with different global phases, i.e., $\|U\|_g = \min_{\gamma \in \mathbb{R}} \|e^{i\gamma}U\|_{\max}$. We then define

$$C(U) = \|U\|_g. \qquad (1)$$

Quantitatively, $C(U)$ equals the maximal angle that $U$ can rotate a state away from itself [16,17,21], i.e., $\cos[C(U)] = \min_{|\psi\rangle} |\langle\psi|U|\psi\rangle|$. For mixed states it can be written as $\cos[C(U)] = \min_\rho F_S(\rho, U\rho U^\dagger)$. If $\theta_{\max} = \theta_1 \ge \theta_2 \ge \cdots \ge \theta_m = \theta_{\min}$ are arranged in decreasing order, then $C(U) = (\theta_{\max} - \theta_{\min})/2$ when $\theta_{\max} - \theta_{\min} \le \pi$ [16]. We then define $\Theta_{QC}(U_1, U_2) = C(U_1^\dagger U_2)$, where $U_1$ and $U_2$ are unitary operators on the same Hilbert space (we can expand the space if they are not). $\Theta_{QC}(U_1, U_2)$ thus corresponds to the maximal angle between the output states of $U_1$ and $U_2$ (note, however, that the definition of $\Theta_{QC}(U_1, U_2)$ is independent of the states). We then denote $F_{QC}(U_1, U_2) = \cos[\Theta_{QC}(U_1, U_2)]$ as the fidelity between $U_1$ and $U_2$. For unitary channels this is equivalent to the fidelity function proposed previously in [17].
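As an illustration of these definitions, the eigen-angle measure $C(U)$ and the angle $\Theta_{QC}$ between unitary channels can be computed numerically. The following is a minimal NumPy sketch under the conventions above (the global phase is optimized by centering the smallest arc covering the eigen-angles, and $C$ is capped at $\pi/2$, beyond which $\min_{|\psi\rangle}|\langle\psi|U|\psi\rangle| = 0$); the function names are ours, not from the referenced works.

```python
import numpy as np

def angle_measure(U):
    """C(U): half the width of the smallest arc containing the eigen-angles
    of U, after optimizing the global phase; capped at pi/2, since
    cos C(U) = min_psi |<psi|U|psi>| vanishes once the arc exceeds pi."""
    angles = np.sort(np.angle(np.linalg.eigvals(U)))
    # Largest circular gap between consecutive eigen-angles.
    gaps = np.diff(np.concatenate([angles, [angles[0] + 2 * np.pi]]))
    arc = 2 * np.pi - gaps.max()      # width of the smallest covering arc
    return min(arc / 2.0, np.pi / 2.0)

def theta_qc_unitary(U1, U2):
    """Theta_QC(U1, U2) = C(U1^dagger U2) between unitary channels."""
    return angle_measure(U1.conj().T @ U2)

# Example: two single-qubit rotations about the z axis.
def rz(phi):
    return np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])

theta = theta_qc_unitary(rz(0.0), rz(0.6))   # eigen-angle spread 0.6 -> C = 0.3
print(theta, np.cos(theta))                  # fidelity F_QC = cos(Theta_QC)
```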
We now generalize this to noisy quantum channels. A general quantum channel $K$, which maps from an $m_1$- to an $m_2$-dimensional Hilbert space, can be represented by Kraus operators as $K(\rho) = \sum_{j=1}^q F_j \rho F_j^\dagger$ with $\sum_{j=1}^q F_j^\dagger F_j = I$. Equivalently it can be written as $K(\rho) = \mathrm{Tr}_E[U_{ES}(|0\rangle_E\langle 0| \otimes \rho)U_{ES}^\dagger]$, where $|0\rangle_E$ denotes some standard state of the environment and $U_{ES}$ is a unitary operator acting on both system and environment, which we call a unitary extension of $K$. We define $\Theta_{QC}(K_1, K_2) = \min_{\{U_{ES1}, U_{ES2}\}} \Theta_{QC}(U_{ES1}, U_{ES2})$ and $F_{QC}(K_1, K_2) = \cos\Theta_{QC}(K_1, K_2)$, where the $U_{ESi}$ are unitary extensions of $K_i$, $i \in \{1, 2\}$. In Appendix A we show that the optimization can be performed by fixing one unitary extension and optimizing only over the other, i.e., $\Theta_{QC}(K_1, K_2) = \min_{U_{ES2}} \Theta_{QC}(U_{ES1}, U_{ES2})$ for any fixed unitary extension $U_{ES1}$ of $K_1$; in terms of $F_{QC}(K_1, K_2)$ this reads $F_{QC}(K_1, K_2) = \max_{U_{ES2}} F_{QC}(U_{ES1}, U_{ES2})$. This can be seen as the counterpart of Uhlmann's purification theorem on quantum states [22] (although the proof does not use Uhlmann's theorem [18]). In Appendix B we show that $\Theta_{QC}(K_1, K_2)$ is a metric and can be computed directly from the Kraus operators of $K_1$ and $K_2$ as [18]

$$\cos\Theta_{QC}(K_1, K_2) = \max_{\|W\| \le 1} \tfrac{1}{2}\lambda_{\min}(K_W + K_W^\dagger), \qquad K_W = \sum_{ij} w_{ij} F_{1i}^\dagger F_{2j},$$

where $F_{1i}$ and $F_{2j}$ denote the Kraus operators of $K_1$ and $K_2$ respectively, $w_{ij}$ denotes the $ij$-th entry of a $q \times q$ matrix $W$ with $\|W\| \le 1$ ($\|\cdot\|$ is the operator norm, which equals the maximum singular value), and $W$ arises from the non-uniqueness of the Kraus representations. We emphasize that $F_{QC}$ is defined directly on quantum channels without referring to states; such a direct connection, in contrast to the induced measures, is crucial when applying the fidelity to channel discrimination and quantum metrology, as we show later. Furthermore, the fidelity can be formulated as a semi-definite program over $W$ and computed efficiently.

In analogy with the Bures distance on quantum states, $B_S(\rho_1, \rho_2) = \sqrt{2 - 2F_S(\rho_1, \rho_2)}$, we can define a Bures distance on quantum channels as $B_{QC}(K_1, K_2) = \sqrt{2 - 2F_{QC}(K_1, K_2)}$. In Appendix A we prove an intriguing and useful connection between $B_{QC}(K_1, K_2)$ and the minimum distance between the Kraus operators of $K_1$ and $K_2$:

$$B_{QC}(K_1, K_2) = \min_{\{\tilde F_{1i}\}, \{\tilde F_{2i}\}} \Big\| \sum_i (\tilde F_{1i} - \tilde F_{2i})^\dagger (\tilde F_{1i} - \tilde F_{2i}) \Big\|^{1/2},$$

where $\{\tilde F_{1i}\}$, $\{\tilde F_{2i}\}$ run over all equivalent Kraus representations of $K_1$ and $K_2$ respectively. This connection is particularly useful in studying the scaling of the distance between quantum channels, as we show later.

In which sense do we call $F_{QC}(K_1, K_2)$ a fidelity function? It turns out that $F_{QC}(K_1, K_2)$ equals the minimum fidelity between the output states: it is proved in the supplemental material of Ref. [18] that $F_{QC}(K_1, K_2) = \min_{\rho_{SA}} F_S[K_1 \otimes I_A(\rho_{SA}), K_2 \otimes I_A(\rho_{SA})]$, which coincides with the Kraus-operator formula above. From this relationship it is also immediately clear that $F_{QC}(K_1, K_2)$ is stable, i.e., $F_{QC}(K_1 \otimes I, K_2 \otimes I) = F_{QC}(K_1, K_2)$. This result gives an operational meaning to $F_{QC}(K_1, K_2)$. We emphasize that although we have made connections between $F_{QC}(K_1, K_2)$ and the minimum fidelity of the output states, $F_{QC}(K_1, K_2)$ is defined directly on quantum channels and does not depend on the states. The definition and the operational meaning of $F_{QC}(K_1, K_2)$ play distinct roles in applications: the operational meaning provides a physical picture, while the direct definition brings insights that enable or ease proofs and computations, as demonstrated in the applications below.
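Since the fidelity is a semi-definite program over $W$, it can be computed with an off-the-shelf convex solver. The sketch below assumes a cvxpy build with complex SDP support; the function names are ours, and the diamond-norm helper anticipates the Fuchs-van de Graaf style bounds discussed below.

```python
import numpy as np
import cvxpy as cp

def fidelity_qc(F1, F2):
    """F_QC(K1, K2) = max_{||W||<=1} (1/2) lambda_min(K_W + K_W^dagger),
    with K_W = sum_ij w_ij F_1i^dagger F_2j; F1, F2 are lists of Kraus
    operators (padded with zero operators to a common length q)."""
    q = max(len(F1), len(F2))
    pad = lambda F: list(F) + [np.zeros_like(F[0])] * (q - len(F))
    F1, F2 = pad(F1), pad(F2)
    W = cp.Variable((q, q), complex=True)
    KW = sum(W[i, j] * (F1[i].conj().T @ F2[j])
             for i in range(q) for j in range(q))
    prob = cp.Problem(cp.Maximize(0.5 * cp.lambda_min(KW + KW.H)),
                      [cp.norm(W, 2) <= 1])   # spectral-norm constraint
    prob.solve()
    return prob.value                          # = cos Theta_QC

def diamond_norm_bounds(f_qc):
    """2(1 - F_QC) <= ||K1 - K2||_diamond <= 2 sqrt(1 - F_QC^2)."""
    return 2 * (1 - f_qc), 2 * np.sqrt(max(0.0, 1 - f_qc ** 2))
```

From the returned value, $\Theta_{QC} = \arccos F_{QC}$ and $B_{QC} = \sqrt{2 - 2F_{QC}}$ follow immediately.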
This is in analogy with how the fidelity of quantum states is connected to the classical fidelity, $F_S(\rho_1, \rho_2) = \min_{\{E_y\}} F_C[p_1(y), p_2(y)]$ with $p_i(y) = \mathrm{Tr}(E_y\rho_i)$ [3]: the fidelity between quantum states has the operational meaning of a minimum classical fidelity, yet it is defined directly on quantum states, independently of the measurements, and this direct definition has provided numerous insights that would be hindered by working with the classical fidelity alone.

It is known that the trace distance and the fidelity between quantum states satisfy [19]

$$1 - F_S(\rho_1, \rho_2) \le \tfrac{1}{2}\|\rho_1 - \rho_2\|_1 \le \sqrt{1 - F_S^2(\rho_1, \rho_2)},$$

from which it is straightforward to obtain the relationship between the diamond norm and the fidelity of quantum channels. Substituting the optimal input states gives

$$2\,[1 - F_{QC}(K_1, K_2)] \le \|K_1 - K_2\|_\diamond \le 2\sqrt{1 - F_{QC}^2(K_1, K_2)}.$$

Since $F_{QC}(K_1, K_2)$ can be computed directly from the Kraus operators, this also provides a way to bound the diamond norm using the Kraus operators. In [20] the Choi matrices of the quantum channels are used to compute a fidelity between the channels, which corresponds to the fidelity between the output states when the input is the maximally entangled state. As the maximally entangled state is in general not the optimal input state, the fidelity thus defined does not have the operational meaning of a minimum output-state fidelity and hence cannot be related to the ultimate precision limit in quantum metrology (it is instead related to the precision achieved when the probe state is the maximally entangled state).

III. A UNIFIED FRAMEWORK FOR QUANTUM METROLOGY AND PERFECT CHANNEL DISCRIMINATION

Next we demonstrate applications in quantum information science; in particular, we show how the fidelity provides a unified platform for the ultimate precision in quantum metrology and the minimum number of uses needed for perfect channel discrimination. The task of quantum metrology, or quantum parameter estimation in general, is to estimate a parameter $x$ encoded in some channel $K_x$. This can be achieved by preparing a quantum state $\rho_{SA}$ and sending it through the extended channel $K_x \otimes I_A$, with the output state $\rho_x = K_x \otimes I_A(\rho_{SA})$. By performing a POVM $\{E_y\}$ on $\rho_x$ one obtains the measurement result $y$ with probability $p(y|x) = \mathrm{Tr}(E_y\rho_x)$. According to the Cramér-Rao bound [24-27], the standard deviation of any unbiased estimator of $x$ is bounded below by $\delta x \ge 1/\sqrt{n J_C[p(y|x)]}$, where $\delta x$ is the standard deviation of the estimate, $J_C[p(y|x)]$ is the classical Fisher information, and $n$ is the number of times the procedure is repeated. The classical Fisher information can be further optimized over all POVMs, which gives $J_S(\rho_x) = \max_{\{E_y\}} J_C[p(y|x)]$; the optimized value is usually called the quantum Fisher information [4,24,25,28], and to distinguish it from what follows we will call it the quantum state Fisher information. We first recall established connections between the fidelity functions and the Fisher information.
Given $\rho_x$ and its infinitesimally displaced state $\rho_{x+dx}$, for a given POVM $\{E_y\}$ the classical fidelity between $p(y|x) = \mathrm{Tr}(E_y\rho_x)$ and $p(y|x+dx) = \mathrm{Tr}(E_y\rho_{x+dx})$ is $F_C = \sum_y \sqrt{p(y|x)\,p(y|x+dx)}$. The classical Fisher information is related to the classical fidelity as

$$F_C[p(y|x), p(y|x+dx)] = 1 - \tfrac{1}{8} J_C[p(y|x)]\,dx^2$$

up to second order in $dx$ [4]. Optimizing over $\{E_y\}$, the classical fidelity leads to the fidelity between quantum states, $\min_{\{E_y\}} F_C[p(y|x), p(y|x+dx)] = F_S(\rho_x, \rho_{x+dx})$ [4], and the classical Fisher information leads to the quantum state Fisher information $J_S(\rho_x) = \max_{\{E_y\}} J_C[p(y|x)]$, with, up to second order in $dx$ [4,28],

$$F_S(\rho_x, \rho_{x+dx}) = 1 - \tfrac{1}{8} J_S(\rho_x)\,dx^2.$$

The precision can be further improved by optimizing over the probe states, which leads to the ultimate local precision limit for estimating $x$ from $K_x$. Intuitively, this ultimate precision limit should be quantified by the distance between $K_x$ and its infinitesimally neighboring channel $K_{x+dx}$, in a way analogous to how the Bures distance between quantum states quantifies the precision limit of estimating $x$ from the state $\rho_x$ [4]. However, although much progress has been made on calculating the ultimate precision limit [29-37], such a clear physical picture had still not been established more than two decades after Braunstein and Caves's seminal paper [4], mainly for lack of proper tools on quantum channels. Here we show that the fidelity between quantum channels can be used to establish such a physical picture, which also leads naturally to a new Fisher information on quantum channels. Optimizing over the probe states leads to a quantum channel Fisher information $J_{QC}(K_x) = \max_{\rho_{SA}} J_S(\rho_x)$, which is similarly related to the distance on quantum channels as

$$F_{QC}(K_x, K_{x+dx}) = 1 - \tfrac{1}{8} J_{QC}(K_x)\,dx^2$$

up to second order in $dx$. The quantum channel Fisher information quantifies the ultimate precision limit upon optimization over both measurements and probe states, $\delta x \ge 1/\sqrt{n J_{QC}(K_x)}$. This connects the precision limit directly to the distance between quantum channels, providing a clear physical picture for the ultimate precision limit. The scaling of the ultimate precision limit can now be seen as a manifestation of the scaling of the distance between quantum channels, as we now show.

Two schemes for multiple uses of a quantum channel are usually considered in quantum parameter estimation: the parallel scheme and the sequential scheme, as shown in Fig. 1. We will show that for both schemes the distance between two quantum channels grows at most linearly with the number of uses, which underlies the scaling of the Heisenberg limit. For the parallel scheme with $N$ uses of a channel $K$, as shown in Fig. 2, the total dynamics can be described by $K^{\otimes N} \otimes I_A$. If we denote by $U_{ES}$ a unitary extension of $K$, then $U_{ES}^{\otimes N}$ is a unitary extension of $K^{\otimes N}$, as shown in Fig. 3. Given two channels $K_1$ and $K_2$, we choose unitary extensions $U_{ES1}$ and $U_{ES2}$ satisfying $\Theta_{QC}(K_1, K_2) = \Theta_{QC}(U_{ES1}, U_{ES2})$; we then have

$$\Theta_{QC}(K_1^{\otimes N}, K_2^{\otimes N}) \le \Theta_{QC}(U_{ES1}^{\otimes N}, U_{ES2}^{\otimes N}) \le N\,\Theta_{QC}(U_{ES1}, U_{ES2}) = N\,\Theta_{QC}(K_1, K_2).$$

For the sequential scheme we consider the general case in which controls can be inserted between sequential uses of the channel. Any measurement used in the control can be substituted by a controlled unitary with ancillary systems, so the controls interspersed between the channels can be taken as unitaries, as shown in Fig. 4. The parallel scheme can be seen as a special case of the sequential scheme obtained by choosing the controls as SWAP gates on the system and different ancillary systems [36]. We show that with $N$ uses of the channel the distance is still bounded above by $N\,\Theta_{QC}(K_1, K_2)$.
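Before turning to the proof, note that the relation $J_{QC}(K_x) = \lim_{dx\to 0} 8[1 - F_{QC}(K_x, K_{x+dx})]/dx^2$ suggests a direct numerical route to the channel Fisher information. A finite-difference sketch, reusing the `fidelity_qc` routine from the SDP sketch above (the step size and numerical accuracy are the user's responsibility):

```python
import numpy as np

def channel_fisher_info(kraus_at, x, dx=1e-4):
    """J_QC(K_x) ~ 8 [1 - F_QC(K_x, K_{x+dx})] / dx^2 by finite differences.
    `kraus_at(x)` returns the list of Kraus operators of K_x."""
    f = fidelity_qc(kraus_at(x), kraus_at(x + dx))
    return 8.0 * (1.0 - f) / dx ** 2

# Precision limit with n repetitions: delta_x >= 1 / sqrt(n * J_QC).
```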
We present the proof for $N = 2$; the same argument works for general $N$. For $N = 2$, one unitary extension of $U_2 K_1 U_1 K_1$ is $U_2 U_{E_2S1} U_1 U_{E_1S1}$, and similarly $U_2 U_{E_2S2} U_1 U_{E_1S2}$ is a unitary extension of $U_2 K_2 U_1 K_2$, where $U_{E_jSi}$ denotes a unitary extension of $K_i$, $i = 1, 2$, with $E_j$ as the environment. We can choose the $U_{E_jSi}$ such that $\Theta_{QC}(K_1, K_2) = \Theta_{QC}(U_{E_jS1}, U_{E_jS2})$. Here all operators are understood as acting on the whole space so that the multiplication makes sense; for example the control $U_1$, which acts only on the system and ancillas, is understood as $U_1 \otimes I_E$, an operator on the whole space including the environment. We then have

$$\Theta_{QC}(U_2 K_1 U_1 K_1,\; U_2 K_2 U_1 K_2) \le C\big[(U_2 U_{E_2S1} U_1 U_{E_1S1})^\dagger\, U_2 U_{E_2S2} U_1 U_{E_1S2}\big] \le 2\,\Theta_{QC}(K_1, K_2),$$

i.e., with two uses of the channel the distance is bounded above by $2\Theta_{QC}(K_1, K_2)$. The same line of argument shows that with $N$ uses of the channel the distance is bounded above by $N\Theta_{QC}(K_1, K_2)$. Substituting $K_1 \to K_x$ and $K_2 \to K_{x+dx}$, we have $\Theta_{QC}(NK_x, NK_{x+dx}) \le N\,\Theta_{QC}(K_x, K_{x+dx})$ for both schemes, where $NK$ denotes $N$ uses of the channel $K$. The ultimate precision limit is then bounded by

$$\delta x \ge \frac{1}{N\sqrt{n\, J_{QC}(K_x)}};$$

the scaling $1/N$ is called the Heisenberg scaling, which, as we have shown, is just a manifestation of the fact that the distance between quantum channels can grow at most linearly with the number of uses. For $N$ uses of the channels under the parallel scheme we can also obtain a tighter bound, in which $K_W = \sum_{ij} w_{ij} F_{1i}^\dagger F_{2j}$ as previously defined and the inequality holds for any $W$ with $\|W\| \le 1$ (see Appendix C). In the asymptotic limit, $N(N-1)\|I - K_W\|^2$ is the dominating term, in which case we would like to choose a $W$ minimizing $\|I - K_W\|$ to obtain the tightest bound; this choice can be formulated as a semi-definite program. If we let $K_1 = K_x$ and $K_2 = K_{x+dx}$, this bound constrains the scalings attainable in quantum parameter estimation, consistent with studies in quantum metrology [29,30,32,35,36] but in a more general context (see also Ref. [18]).

Given two quantum channels $K_1$ and $K_2$, they can be perfectly discriminated with one use if and only if there exists a $\rho_{SA}$ such that $K_1 \otimes I_A(\rho_{SA})$ and $K_2 \otimes I_A(\rho_{SA})$ are orthogonal, i.e., $\min_{\rho_{SA}} F_S[K_1 \otimes I_A(\rho_{SA}), K_2 \otimes I_A(\rho_{SA})] = 0$, which is the same as $\Theta_{QC}(K_1, K_2) = \pi/2$. When $K_1$ and $K_2$ cannot be perfectly discriminated with one use, a finite number of uses may still achieve the task [42]; this is in contrast to the perfect discrimination of non-orthogonal states, which always requires an infinite number of copies. The minimum number of uses needed for perfect channel discrimination must satisfy $\Theta_{QC}(NK_1, NK_2) = \pi/2$. Perfect channel discrimination is thus determined by the distance between quantum channels, and the scalings of $\Theta_{QC}(NK_1, NK_2)$ obtained above can be used to bound the minimum $N$. For example, from $\Theta_{QC}(NK_1, NK_2) \le N\,\Theta_{QC}(K_1, K_2)$ we obtain the lower bound

$$N \ge \left\lceil \frac{\pi}{2\,\Theta_{QC}(K_1, K_2)} \right\rceil,$$

where $\lceil x \rceil$ is the smallest integer not less than $x$. This bound is tighter than existing bounds for noisy channels [40], and for unitary channels it reduces to $N \ge \lceil \pi/(2\,C(U_1^\dagger U_2)) \rceil$, which is known to be tight [17]. For noisy channels under the parallel scheme we can also substitute $\Theta_{QC}(K_1^{\otimes N}, K_2^{\otimes N}) = \pi/2$ into the tighter bound above to obtain a sharper estimate.
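The semi-definite program for the asymptotically optimal $W$ mentioned above can be sketched in the same style (again assuming cvxpy's spectral-norm atom and complex SDP support; the function name is ours):

```python
import numpy as np
import cvxpy as cp

def optimal_w_gap(F1, F2):
    """Choose W with ||W|| <= 1 minimizing ||I - K_W||, the quantity that
    dominates the N-use parallel bound in the asymptotic limit."""
    q, d = len(F1), F1[0].shape[1]      # K_W acts on the input space
    W = cp.Variable((q, q), complex=True)
    KW = sum(W[i, j] * (F1[i].conj().T @ F2[j])
             for i in range(q) for j in range(q))
    prob = cp.Problem(cp.Minimize(cp.norm(np.eye(d) - KW, 2)),
                      [cp.norm(W, 2) <= 1])
    prob.solve()
    return prob.value, W.value
```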
The lower bound on the minimum $N$ can also be obtained via a connection to quantum metrology. Given two channels $K_1$ and $K_2$, let $K_x$, $x \in [a, b]$, be a path connecting $K_1$ and $K_2$. With $N$ uses of the channel under the parallel strategy, the triangle inequality gives

$$\frac{\pi}{2} = \Theta_{QC}(NK_1, NK_2) \le \int_a^b \frac{1}{2}\sqrt{J_{QC}(NK_x)}\,dx,$$

where $J_{QC}(NK_x)$ is the channel Fisher information of $N$ uses. This connects perfect channel discrimination to the ultimate precision limit. By choosing different paths, various useful lower bounds on the minimum number of uses for perfect channel discrimination can be obtained. For example, take $K_0(\rho) = e^{i\theta\sigma_1}\rho e^{-i\theta\sigma_1}$ and $K_1(\rho) = \frac{1+\eta}{2}\rho + \frac{1-\eta}{2}\sigma_3\rho\sigma_3$, where $\sigma_1$, $\sigma_2$ and $\sigma_3$ are the Pauli matrices, and assume $\theta = 0.3$, $\eta = 0.5$. For the parallel strategy the single-use lower bound gives $N \ge \lceil \pi/(2\Theta_{QC}(K_0, K_1)) \rceil = 3$. If we choose the simple path $K_x = (1-x)K_0 + xK_1$, $x \in [0, 1]$, a line segment connecting $K_0$ to $K_1$, then the metrology connection yields $N \ge 4$. Other paths may be explored to improve the bound further. Using the tighter parallel-scheme bound with the $W$ obtained from the semi-definite program minimizing $\|I - K_W\|$, we get $N \ge 5$. For any $N$ we can also choose $W$ to optimize the bound as $N$ increases; it turns out that the minimum $N$ such that $\Theta_{QC}(K_0^{\otimes N}, K_1^{\otimes N}) = \pi/2$ is actually 6. All computations here were done with the CVX package in Matlab [44].

IV. SUMMARY

We have provided a fidelity function defined directly on quantum channels, which leads to various distance measures defined directly on quantum channels as well as a new Fisher information on quantum channels. This forms a hierarchy of fidelity functions and Fisher informations:

probability distributions: $F_C$, $\Theta_C$, $J_C$;
quantum states: $F_S$, $\Theta_S$, $J_S$;
quantum channels: $F_{QC}$, $\Theta_{QC}$, $J_{QC}$;

where $\cos\Theta_i = F_i$ and $J_i = \lim_{dx\to 0} 8[1 - F_i]/dx^2$. In this hierarchy the functions on quantum states equal the corresponding functions on probability distributions optimized over all measurements, and the functions on quantum channels equal the corresponding functions on quantum states optimized over all probe states. This framework connects the ultimate precision limit quantitatively to the distance between quantum channels, providing a clear physical picture for the ultimate precision limit in quantum metrology. It also provides a unified framework for the continuous case of quantum parameter estimation and the discrete case of perfect quantum channel discrimination, so that progress in one field can readily stimulate progress in the other. We expect these tools to find wide application in many other areas of quantum information science.

Appendix A: Computing $\Theta_{QC}$ from the Kraus operators

We show that the distance between two quantum channels, $\Theta_{QC}(K_1, K_2) = \min_{\{U_{ES1}, U_{ES2}\}}\Theta_{QC}(U_{ES1}, U_{ES2})$, where the $U_{ESi}$ are unitary extensions of $K_i$, $i \in \{1, 2\}$, can be computed from the Kraus operators of $K_1$ and $K_2$ as $\cos\Theta_{QC}(K_1, K_2) = \max_{\|W\|\le 1}\frac{1}{2}\lambda_{\min}(K_W + K_W^\dagger)$. Here $\lambda_{\min}(K_W + K_W^\dagger)$ denotes the minimum eigenvalue of $K_W + K_W^\dagger$ with $K_W = \sum_{ij} w_{ij}F_{1i}^\dagger F_{2j}$; $F_{1i}$, $F_{2j}$ denote the Kraus operators of $K_1$ and $K_2$; $w_{ij}$ denotes the $ij$-th entry of a $q \times q$ matrix $W$ with $\|W\| \le 1$ ($\|\cdot\|$ is the operator norm, which equals the maximum singular value); and $q$ is the number of Kraus operators. Furthermore, the minimization over both $U_{ES1}$ and $U_{ES2}$ can be reduced to a minimization over just one of them. We start from a general unitary extension of a given channel $K(\rho) = \sum_{j=1}^q F_j\rho F_j^\dagger$ with $\sum_{j=1}^q F_j^\dagger F_j = I$, mapping from an $m_1$- to an $m_2$-dimensional Hilbert space. In block form, only the first $m_1$ columns of the extension are fixed by the Kraus operators,

$$U_{ES} = \begin{pmatrix} F_1 & * & \cdots & * \\ F_2 & * & \cdots & * \\ \vdots & \vdots & & \vdots \\ F_q & * & \cdots & * \\ 0 & * & \cdots & * \\ \vdots & \vdots & & \vdots \end{pmatrix},$$

up to an additional unitary $W_E \in U(p)$ which acts only on the environment and can be chosen arbitrarily; here $U(p)$ denotes the set of $p \times p$ unitary operators with $p \ge q$, as $p - q$ zero Kraus operators can be appended. The freedom of the remaining columns is indicated by the $*$ entries.
It is easy to see that $\|W\| \le 1$; conversely, any $W$ with $\|W\| \le 1$ can be embedded as the first $q \times q$ block of a unitary matrix [45]. Thus by varying $W_{E1}$ and $W_{E2}$ we can take $W$ to be any $q \times q$ matrix with $\|W\| \le 1$, and the minimization over the unitary extensions is reduced to an optimization over $V_1$, $V_2$ and $W$, where $V_1$ and $V_2$ denote the residual freedom in completing the unfixed columns. Next we optimize over $W$: we need the $W$ minimizing $\arccos\frac{1}{2}\lambda_{\min}[K_W + K_W^\dagger]$, which is equivalent to finding $\max_{\|W\|\le 1}\frac{1}{2}\lambda_{\min}[K_W + K_W^\dagger]$. Note that the freedom of the global phase, which takes $\|\cdot\|_{\max}$ to $\|\cdot\|_g$ (see the main text for definitions), is already included in the freedom of $W$. It is also clear that the freedom of $W$ can be realized by varying only $W_{E1}$ or only $W_{E2}$; the optimum can thus be attained by exploring the freedom of $V_1$ and $W_{E1}$ alone, or of $V_2$ and $W_{E2}$ alone. We then have $\Theta_{QC}(K_1, K_2) = \min_{U_{ES2}}\Theta_{QC}(U_{ES1}, U_{ES2})$ for any fixed unitary extension $U_{ES1}$.

Next we show that this distance measure is connected to the minimum distance between equivalent Kraus operators. Given two quantum channels (zero Kraus operators can be appended if the numbers of Kraus operators differ), by appending $p - q$ additional zero Kraus operators we can write the Kraus operators of $K_1$ and $K_2$ as $\{F_{11}, F_{12}, \ldots, F_{1q}, 0, \ldots, 0\}$ and $\{F_{21}, F_{22}, \ldots, F_{2q}, 0, \ldots, 0\}$ respectively. Equivalent Kraus operators for $K_1$ and $K_2$ can be represented as $\tilde F_{1i} = \sum_k u_{ik}F_{1k}$ and $\tilde F_{2i} = \sum_k v_{ik}F_{2k}$, where $u_{ik}$ and $v_{ik}$ are entries of $U, V \in U(p)$ respectively and $1 \le i \le p$. Then

$$\sum_i (\tilde F_{1i} - \tilde F_{2i})^\dagger(\tilde F_{1i} - \tilde F_{2i}) = 2I - (K_W + K_W^\dagger),$$

where $K_W = \sum_{ij}^q w_{ij}F_{1i}^\dagger F_{2j}$ and $w_{ij}$ is the $ij$-th entry of $W$, the first $q \times q$ block of $U^\dagger V$, which can be any $q \times q$ matrix with $\|W\| \le 1$ as $U$ and $V$ vary, i.e., as the equivalent representations of $K_1$ and $K_2$ vary. Thus

$$B_{QC}(K_1, K_2) = \min_{\{\tilde F_{1i}\}, \{\tilde F_{2i}\}}\Big\|\sum_i (\tilde F_{1i} - \tilde F_{2i})^\dagger(\tilde F_{1i} - \tilde F_{2i})\Big\|^{1/2}.$$

Appendix B: $\Theta_{QC}(K_1, K_2)$ defines a metric on quantum channels

We show that $\Theta_{QC}(K_1, K_2)$ defines a metric on quantum channels. First we show that $\Theta_{QC}(U_1, U_2) = C(U_1^\dagger U_2)$, where $C$ is defined in the main text, is a metric on unitary channels. We start by listing some useful properties of $C(U)$:

$$C(V^\dagger U V) = C(U), \qquad C(U_1 \otimes U_2) \le C(U_1) + C(U_2), \qquad C(U_1 U_2) \le C(U_1) + C(U_2),$$

where $V$ is any unitary operator. The first equality is obvious from the definition; the second inequality can be verified using the formula $C(U) = (\theta_{\max} - \theta_{\min})/2$ when $\theta_{\max} - \theta_{\min} \le \pi$, with equality when $C(U_1) + C(U_2) \le \pi/2$; the proof of the third inequality can be found in [47,48]. It is obvious that $\Theta_{QC}(U, U) = 0$ and $\Theta_{QC}(U_1, U_2) = \Theta_{QC}(U_2, U_1)$, and the triangle inequality follows from

$$\Theta_{QC}(U_1, U_3) = C(U_1^\dagger U_3) = C(U_1^\dagger U_2\, U_2^\dagger U_3) \le C(U_1^\dagger U_2) + C(U_2^\dagger U_3) = \Theta_{QC}(U_1, U_2) + \Theta_{QC}(U_2, U_3),$$

where we have used the property $C(U_1U_2) \le C(U_1) + C(U_2)$. This shows that $\Theta_{QC}(U_1, U_2)$ is a metric on unitary operators. For two general channels, $\Theta_{QC}(K_1, K_2) = \min\Theta_{QC}(U_{ES1}, U_{ES2})$, where $U_{ES1}$ and $U_{ES2}$ are unitary extensions of $K_1$ and $K_2$ respectively. It is easy to see that $\Theta_{QC}(K_1, K_2) = \Theta_{QC}(K_2, K_1) \ge 0$, with equality only when $K_1 = K_2$. $\Theta_{QC}$ also satisfies the triangle inequality: choosing $U_{ES1}$, $U_{ES2}$ to attain $\Theta_{QC}(K_1, K_2)$ and then, with this $U_{ES2}$ fixed, choosing $U_{ES3}$ to attain $\Theta_{QC}(K_2, K_3)$ (possible by the fixed-extension result of Appendix A), we get

$$\Theta_{QC}(K_1, K_3) \le \Theta_{QC}(U_{ES1}, U_{ES3}) \le \Theta_{QC}(U_{ES1}, U_{ES2}) + \Theta_{QC}(U_{ES2}, U_{ES3}) = \Theta_{QC}(K_1, K_2) + \Theta_{QC}(K_2, K_3).$$

$\Theta_{QC}$ thus defines a metric on the space of quantum channels.
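As a worked check of Sec. III, the example channels used there (a rotation and a dephasing channel) and the single-use lower bound can be reproduced with the sketches above; the expected outcome ($N \ge 3$) is taken from the text, not independently verified here.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.diag([1.0, -1.0]).astype(complex)

theta, eta = 0.3, 0.5
K0 = [np.cos(theta) * I2 + 1j * np.sin(theta) * s1]   # e^{i theta sigma_1}
K1 = [np.sqrt((1 + eta) / 2) * I2,                    # dephasing channel
      np.sqrt((1 - eta) / 2) * s3]

f = fidelity_qc(K0, K1)                    # SDP sketch from Sec. II
Theta = np.arccos(min(1.0, f))
N_lower = int(np.ceil(np.pi / (2 * Theta)))
print(f, Theta, N_lower)                   # N_lower expected to be 3
```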
Characterisation of interface states of Al/p-Si Schottky diode by current-voltage and capacitance-voltage-frequency measurements

In this study, the fabricated Al/p-Si Schottky diode is characterised at room temperature using current-voltage (I-V) and capacitance-voltage-frequency (C-V-f) techniques. The energy distribution profile of the diode's interface state density is generated using different diode parameters. In the I-V measurements, the variation in energy, charge, and density of the interface states is described in terms of the applied forward bias with respect to the zero Schottky barrier height. The capacitance measurements, on the other hand, are used to address a long-standing low-voltage capacitance peak in terms of the distribution of interface state charge. In general, the two techniques complement each other, indicating that the space charge region (SCR) starts to widen at a voltage of -0.66 V, after the compensation of interface states by majority carriers. The findings presented here are relevant to current and future research on junction-based devices for a variety of applications in which the SCR and bulk material properties must be examined separately from metal-semiconductor (m-s) interface states.

Introduction

Schottky diodes are the simplest metal-semiconductor (m-s) contact devices and are used in opto-electronic [1-6] and radiation-sensing applications [7-10]. The performance, reliability, and stability of the devices during operation are influenced by the interface layer [11-19], the distribution of interface states between metal and semiconductor [20,21], and the defects and dopants in the semiconductor [22-25]. Among other effects, the interface states degrade the quality of the devices, resulting in a high leakage current and an ideality factor higher than unity [21]. In addition, the interface states are responsible for the recombination of majority carriers, degrading the device's performance. The interface states form between the metal contact and the semiconductor either during surface preparation or during metal evaporation, and they arise from the interruption of the periodic lattice structure at the semiconductor surface [26]. Because the device fabrication process involves direct metal deposition on the semiconductor, interface states are unavoidable, and a thorough study of them is necessary to suppress (minimize) their impact and to understand the diode features associated with them alone. Such a study would improve device quality, since the bulk material and junction properties could then be studied in isolation for various applications.

Interface states at m-s contacts have been studied using I-V and C-V techniques. Even though studies have been ongoing for a long time, the complementary nature of the two techniques has not been fully understood or explained. The interface states are responsible for diode electronic properties such as series resistance, ideality factor, and Schottky barrier height. In addition, the low-voltage capacitance peak and negative capacitance, among other parameters, have long been a source of contention and are still poorly understood. These parameters have been presented in terms of the interface states, though without detailed information about their origin; they have been observed on Yb/p-Si [27] and on metal-insulator-semiconductor (MIS) devices [6], without their origins being reported.
The negative capacitance is ascribed to the injection of minority carriers into the bulk of the semiconductor, a property of the ohmic back contact [24,28-30]. Wu et al. [31,32], on the other hand, describe the parameter in terms of the interface charge at occupied states, contrary to Butcher et al. [33], where the capacitance is explained in terms of the instrument used to characterise the samples. Later, the results presented by McPherson [34] indicated that the parameters are due to defects generated in the bulk of the material. Recent data acquired on heavily irradiated silicon diodes also showed a very large negative capacitance that could not be explained [35]. The low-voltage capacitance peak, on the other hand, has been explained in terms of charge accumulation in the low-voltage range [30,36]. Studies of the interface are therefore still necessary to fully understand the properties of m-s contact devices and the interface states they form. A comprehensive explanation of the interface states based on complementary data from the I-V and C-V techniques would enable their suppression, allowing SCR properties to be studied on their own; currently, suppression of the interface states is achieved only by operating the diode at high frequencies. Since the capacitance also depends on voltage, it is important to know the voltage range in which the capacitance is due to the interface states.

In this work, the electrical properties of an Al/p-Si Schottky diode fabricated by the deposition method are studied by I-V and C-V-f techniques at room temperature. The data are analysed to obtain the ideality factor, Schottky barrier height, saturation current, and series resistance, parameters that are used to generate the energy distribution profile of the interface state density of the structure. The charge of the interface states is used to explain the low-voltage capacitance peak, a feature that has remained outstanding even though m-s contact devices have been studied for so long. In addition, the lack of interface state response in low-frequency capacitance measurements of the devices is explained. To the best of our knowledge, the ordering of the interface state charge in equilibrium with the semiconductor and with respect to the SCR is explained for the first time in this work. This work is important for junction-based devices, where the interface states are inevitable and affect the performance of the devices in various applications.

Experimental details

One-side-polished p-type Si (doped with boron) purchased from Semiconductor Wafer, Inc. was used to fabricate the Schottky diodes. Using I-V and C-V techniques, the diodes were characterised in a dark environment at room temperature. The meters for the measurement of current and capacitance were built in-house and rely on software to carry out their respective functions. Precise contact between the probe and the diode is established by magnifying the diode under a microscope. During the measurements the test diode is kept in a test fixture for the voltage sweep, and a metallic shield covers the fixture to isolate the measuring system from external electromagnetic fields. The I-V measurements were taken from -4.00 to 4.00 V to allow the tunnelling charge carriers to surpass the thermionic emission carriers.
C-V measurements, on the other hand, were taken in reverse bias from 0.00 to -4.00 V at frequencies ranging from 1.00 to 220 kHz; the measurements were unstable at frequencies outside this range. The layout of the fabricated diodes is shown in Fig. 1, and their fabrication process is detailed elsewhere [37] and is not repeated here. Si was chosen in this study because, as an elemental semiconductor with more industrial applications than other semiconductors, its surface is easily oxidized, resulting in a considerable interface state density that affects the electrical properties of junction-based devices.

Results and discussion

The forward-bias I-V characteristics show that the diode is well fabricated, with parameters acceptable for generating the trend of the interface state density needed to investigate the unexplained parameters. The current increases linearly with voltage at low forward voltages but deviates from linearity as the voltage increases, owing to the possible formation of an oxide layer on the Si surface, which gives rise to series resistance [5]. As a result, interface states are created between the layer and Si [16-19]. The diode current $I$ is then given in terms of the applied voltage $V$ [16] as

$$I = I_0\left[\exp\!\left(\frac{qV}{\eta kT}\right) - 1\right], \qquad I_0 = AA^*T^2\exp\!\left(-\frac{q\phi_B}{kT}\right),$$

where $A$ (= 2.83 × 10⁻³ cm²) is the diode active area, $A^*$ is the effective Richardson constant for p-type Si [17], $T$ is the temperature in kelvin, $k$ is the Boltzmann constant, $q$ is the electronic charge, $I_0$ is the saturation current, and $\phi_B$ is the zero-bias Schottky barrier height. The ideality factor $\eta$ is determined from the slope of the linear region of the forward-bias ln(I)-V characteristics at $V > 3kT/q$. The evaluated values of $\eta$ and $\phi_B$ for the diode are 2.52 and 0.61 eV, respectively. Values of $\eta$ greater than unity have also been reported for similar devices and are attributed to the voltage drop across the interfacial layer between the metal and the semiconductor [17,18,26]. The high value of $\eta$ suggests the involvement of additional conduction mechanisms, such as tunnelling, alongside thermionic emission. The $\phi_B$ of 0.61 eV is the same as that evaluated previously [19] for Ti/p-Si Schottky barrier diodes. Even after etching Si wafers in HF solution, a native oxide layer 1-3 nm thick is always found on the surface [38]; the interface layer is formed either during material preparation or during device fabrication. The values of $\eta$ and $\phi_B$ evaluated in this work are, however, consistent with those reported in the literature for similar diodes, indicating that the diode is well fabricated.

The double-logarithmic I-V plot presented in Fig. 2b is used to study the conduction mechanisms of the fabricated Al/p-Si Schottky diode. The current and voltage are related as $I \propto V^m$, where $m$ is the slope of each region. The slopes identified from the plot are 1.19, 1.34, and 1.54 for regions i, ii, and iii (region i: 0.01 < V < 0.12 V; region ii: 0.13 < V < 0.71 V; region iii: 0.72 < V < 4.00 V), respectively. The slopes of regions i and ii are close to unity, suggesting that these regions are dominated by ohmic conduction; here the effective density of injected carriers is less than the thermal carrier density [15]. In the last region the conduction is dominated by the space-charge-limited current (SCLC) mechanism, showing that the density of injected free carriers exceeds that generated thermally [39]. The values of $R_s$, $\eta$, and $\phi_B$ are calculated using Cheung's method and compared with those obtained from thermionic emission theory.
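As an aside, the thermionic-emission analysis just described amounts to a straight-line fit on the forward ln(I)-V data. A minimal sketch follows; the Richardson constant value is an assumption (the text does not quote one), and the variable names are ours.

```python
import numpy as np

k = 8.617e-5        # Boltzmann constant, eV/K
T = 300.0           # temperature, K
A = 2.83e-3         # diode active area, cm^2
A_star = 32.0       # assumed Richardson constant for p-Si, A cm^-2 K^-2

def te_parameters(V, I, lin):
    """Ideality factor and zero-bias barrier height from the linear region
    `lin` (boolean mask) of the forward ln(I)-V plot, using
    I = I0 [exp(qV / (eta k T)) - 1], I0 = A A* T^2 exp(-q phi_B / (k T)).
    Working in eV/V units, q drops out of the ratios."""
    slope, intercept = np.polyfit(V[lin], np.log(I[lin]), 1)
    eta = 1.0 / (k * T * slope)                 # from the slope
    I0 = np.exp(intercept)                      # saturation current, A
    phi_B = k * T * np.log(A * A_star * T ** 2 / I0)   # in eV
    return eta, phi_B, I0
```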
According to Cheung's method, $dV/d(\ln I)$ and $H(I)$ are given as

$$\frac{dV}{d(\ln I)} = IR_s + \frac{\eta kT}{q}, \qquad H(I) = V - \frac{\eta kT}{q}\ln\!\left(\frac{I}{AA^*T^2}\right), \qquad H(I) = IR_s + \eta\phi_B,$$

where the $IR_s$ term represents the voltage drop across the diode's series resistance. As expected, the $dV/d(\ln I)$-I and $H(I)$-I trends in Fig. 2c are linear, and the values of $R_s$ and $\eta$ are calculated from the slope and intercept of the $dV/d(\ln I)$-I plot. The value of $R_s$, 7.51 kΩ, is greater than the 3.38 kΩ reported for Al/p-Si [40] and less than the 13.96 kΩ reported for Cu/n-Si [21]. The evaluated $\eta$ is 1.92, greater than the 1.85 reported for Al/MEH-PPV/p-Si [41] but less than the 3.68 reported for Cu/n-Si [21]. The value of $\eta$ obtained from the first relation is inserted into $H(I)$ to obtain the $H(I)$-I plot, from whose slope and intercept the values of $R_s$ and $\phi_B$ are evaluated. The value of $\phi_B$, 0.80 eV, is greater than the 0.75 eV reported before [41] for Al/p-Si but the same as that reported for Al/MEH-PPV/p-Si [42] using the Cheung method, further confirming the presence of the layer between Al and p-Si. These parameters are within the range of those reported previously, confirming that the diode is well fabricated; they can therefore be used to generate the trend of the interface state density and so explain the low-voltage capacitance peak.

The device parameters evaluated using the thermionic emission and Cheung's methods differ (Table 1). This difference is attributed, among other factors, to the voltage dependence of the ideality factor and the Schottky barrier height caused by the interface states [18], expressed as

$$\eta(V) = \frac{qV}{kT\ln(I/I_0)}, \qquad \phi_e = \phi_B + \left(1 - \frac{1}{\eta(V)}\right)V,$$

where $\phi_e$ is the effective barrier height. The voltage dependence of $\eta$ and $\phi_B$ is shown in Fig. 3. In this case the ideality factor is greater than unity (2.52), and it is given [18] as

$$\eta(V) = 1 + \frac{\delta}{\varepsilon_i}\left[\frac{\varepsilon_s}{W_D} + qN_{ss}(V)\right],$$

where $\delta$ is the thickness of the interfacial layer, $\varepsilon_i$ is the dielectric permittivity of the interfacial layer, $\varepsilon_s$ is the dielectric permittivity of the semiconductor, $W_D$ is the width of the space charge region (SCR), and $N_{ss}(V)$ is the density of the interface states. The variation of the barrier height with voltage is attributed to the electric field present in the SCR and to the change in the interface state charge; this variation of the charge with voltage is explained later in the text. The energy of the interface states $E_{ss}$ (for a p-type semiconductor), measured with respect to the top of the valence band $E_V$ at the semiconductor surface, is related to the applied forward voltage [6,18] as

$$E_{ss} - E_V = q(\phi_e - V).$$

The energy of the interface states against voltage and the energy distribution profiles of $N_{ss}$ are presented in Fig. 4a and b, respectively. The interface states are bias-dependent because of barrier inhomogeneities. At voltages higher than the zero Schottky barrier height, the interface states lie at energies below the valence band, shown as region II in Fig. 4a. It is interesting to investigate the interface states at $E_{ss} - E_V > 0$ eV, where the voltage is lower than 0.67 V, shown in the inset of Fig. 4b. The oscillation of the $N_{ss}$ trend at $E_{ss} - E_V$ between 0.59 and 0.67 eV demonstrates that the majority carriers compensate interface states of different reactivity at energies above the intrinsic Fermi energy, the different reactivity possibly arising from a change in charge states with the applied voltage. This unusual trend of $N_{ss}$ has not been explained in the literature, though it was observed before on Au/n-GaN and Au/ZrO₂/n-GaN Schottky diodes [42]. The trend is now studied in terms of the interface charge in this work.
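The energy distribution profile of $N_{ss}$ can then be generated from the forward I-V data using the relations above. A sketch under the stated symbol conventions; all numerical inputs ($\delta$, $\varepsilon_i$, $\varepsilon_s$, $W_D$, and the fitted $I_0$, $\phi_B$) are the user's, and the unit bookkeeping is an assumption (F/cm and cm give $N_{ss}$ in eV⁻¹ cm⁻²).

```python
import numpy as np

def nss_profile(V, I, I0, phi_B, delta, eps_i, eps_s, W_D, kT=0.0259):
    """Interface state density vs. interface state energy from forward
    I-V data: eta(V) = qV / (kT ln(I/I0)) gives N_ss(V) through
    eta(V) = 1 + (delta/eps_i)[eps_s/W_D + q N_ss(V)], and the effective
    barrier height phi_e maps each bias to E_ss - E_V = q(phi_e - V).
    delta, W_D in cm; eps_i, eps_s in F/cm; kT in eV."""
    eta_V = V / (kT * np.log(I / I0))                  # ideality factor vs V
    phi_e = phi_B + (1.0 - 1.0 / eta_V) * V            # effective barrier, eV
    E_ss = phi_e - V                                   # (E_ss - E_V), in eV
    q = 1.602e-19                                      # C
    N_ss = (eps_i / delta * (eta_V - 1.0) - eps_s / W_D) / q
    return E_ss, N_ss                                  # N_ss in eV^-1 cm^-2
```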
The trend in Fig. 4 seems complex. It is, however, instructive to interpret it in terms of the schematic diagram shown in Fig. 5. Figure 5a shows the energy band diagram of the Al/p-Si structure with interface states between the interfacial layer and the Si interface; as shown in Fig. 5b, the states lie on the Si side of the SCR. To interpret the trend in Fig. 4b, five domains can be distinguished and discussed separately in Fig. 5b. These domains are used to explain the change in the interface state charge with the applied forward bias relative to the zero Schottky barrier height. The charge-neutrality condition at the m-s interface is satisfied because the negative ionised acceptors are compensated by the positive majority carriers on the semiconductor side of the SCR [7]. As a consequence of this charge neutrality, the interface state charge changes with bias, showing that the interface states are polarised.

Domain (1): interface state energies of 0.64-0.67 eV, i.e., the low forward voltage range 0.06-0.00 V, much lower than $\phi_B$. Here the charge of the interface states is dominated by metal electrons, resulting in a negative interface state charge. The high density of electrons could also be due to minority carriers generated by temperature, since this voltage range lies below the thermal energy $3kT/q$.

Domain (2): $E_{ss} - E_V$ of 0.59-0.64 eV (forward bias 0.11-0.07 V), lower than $\phi_B$. The interface state energy decreases and the charge of the states is positive, because the majority carriers dominate and compensate the ionised acceptors in this region.

Domain (3): $E_{ss} - E_V$ of 0.00-0.59 eV (forward bias 0.67-0.11 V), slightly lower than the zero Schottky barrier height. Here $N_{ss}$ is constant at ~0 eV⁻¹ cm⁻², indicating that the interface states are fully compensated in this region.

Domain (5): $E_{ss} - E_V < 0.00$ eV (forward bias > 0.67 V), higher than the zero Schottky barrier height. $N_{ss}$ increases gently from 0 at $E_{ss} - E_V = 0.00$ eV to ~23.0 × 10¹² eV⁻¹ cm⁻² at $E_{ss} - E_V = -9.00$ eV. This increase confirms the existence of the interface states responsible for the diode series resistance, as shown by the deviation of the ln(I)-V plot at high voltages in Fig. 2a. In this region the majority carriers are mobile and contribute freely to the measured current; not all of them do, however, since some compensate these interface states, giving rise to the series resistance.

The fabricated Al/p-Si structure is characterised by C-V-f measurements at room temperature to further investigate the interface states formed in the SCR. Figure 6, the C-V characteristics of the p-Si Schottky diode, shows a rapid decrease in capacitance at low reverse voltages followed by a gentle decrease as the SCR attains its full depletion width. The presence of the interface states is confirmed by the capacitance peak at low voltages. Usually the peak decreases with increasing measurement frequency [27,43]; however, the opposite is observed in Fig. 6a, where the peak appears only at high measurement frequencies (>50 kHz). The peak intensity increases with frequency and shifts gently to higher voltages (Fig. 6 shows the C-V characteristics of the Al/p-Si Schottky diode for various measurement frequencies at room temperature). This rare diode behaviour is explained by the interface state charge.
The peak is not observed at low measurement frequencies because the low-mobility majority carriers are then able to compensate the negatively charged interface states. As the frequency increases, however, the capacitance becomes dominated by the negatively charged interface states (electrons), since the majority carriers are inactive; hence the initial increase in the capacitance. The high density of mobile electrons results in negative charge states, so the capacitance increases with frequency. As the reverse voltage increases, the SCR extends from domain (1) to the other domains in Fig. 5b and the majority carriers are withdrawn from the SCR, resulting in a decrease in the capacitance. The low-voltage peak is therefore due to the negative interface state charge.

Figure 7a shows the capacitance-frequency measurements at various reverse voltages. A strong dependence of capacitance on frequency is evident from its drastic decrease at low measurement frequencies for all voltages. This decrease confirms the existence of the interface states [44,45], which are in equilibrium with the semiconductor as demonstrated in Fig. 5. The interface states follow the ac signal because the majority carriers are active enough to compensate the relatively high-mobility electrons at low frequencies. The well-known [27,46] frequency independence of the capacitance is observed at high measurement frequencies, from 20 to 150 kHz, showing that the interface states do not respond to the ac signal in this range, possibly owing to the full compensation of electrons by the majority carriers. As the frequency increases further, however, the majority carriers become immobile because of their relatively low mobility, leaving a high concentration of uncompensated electrons that contribute to the diode capacitance. Figure 7a also shows an increase in capacitance in the highest frequency range; this capacitance is more pronounced in the low-voltage trends because of the high density of electrons at low voltages, as demonstrated by domain (1) in Fig. 5.

The variation of the interface state charge is explained in terms of the interface state density as a function of reverse bias, shown in Fig. 7b. The plot is generated using the high- and low-frequency capacitances evaluated at each reverse voltage [46] as

$$N_{ss} = \frac{C_{LF} - C_{HF}}{qA},$$

where $C_{LF}$ is the capacitance at low frequency, which is the sum of the interface state capacitance and the SCR capacitance, and $C_{HF}$ is the capacitance at high frequency. Since $C_{HF}$ is the SCR capacitance, the $C_{LF} - C_{HF}$ term is the interface state capacitance. $C_{LF}$ and $C_{HF}$ were read at the maximum and minimum capacitance, respectively, at frequencies of 1 and 130 kHz (Fig. 7a shows the experimental capacitance-frequency characteristics of the Al/p-Si diode for different reverse biases at room temperature, and Fig. 7b the interface state density $N_{ss}$ as a function of reverse bias). Since reverse bias widens the SCR as the majority carriers are withdrawn from the region, the variation of $N_{ss}$ with voltage is used to analyse the interface state charge as the SCR width varies. It can be seen from Fig. 7b that at low reverse voltages the majority carriers compensate the interface state charge before being withdrawn from the SCR, indicating that the SCR starts to be depleted at a reverse voltage of -0.66 V.
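The high-low frequency evaluation above reduces to one line; a sketch, where the simplified form $N_{ss} = (C_{LF} - C_{HF})/(qA)$ is our reading of the method described in the text:

```python
def nss_high_low(C_LF, C_HF, A=2.83e-3, q=1.602e-19):
    """Interface state density from the high-low frequency capacitance
    method: N_ss = (C_LF - C_HF) / (q A), with C_LF read at low frequency
    (here 1 kHz) and C_HF at high frequency (130 kHz).
    With C in F and A in cm^2, N_ss comes out in eV^-1 cm^-2."""
    return (C_LF - C_HF) / (q * A)
```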
As a result, the increase in $N_{ss}$ at low voltages in Fig. 7b indicates that the interface state charge in this region is negative, possibly owing to the high-density electrons in domain (1) of Fig. 5b. As the reverse voltage increases further, the interface state density decreases, indicating a decrease in the density of electrons to be compensated as the majority carriers are withdrawn and the SCR extends into the other domains; hence the SCR width increases beyond this domain. Though not well explained, the trend in Fig. 7b has previously been reported for Ni/p-GaN [46] and Au/PVA(B-doped)/n-Si [47] Schottky diodes. Figure 7b shows that the SCR width starts to widen at -0.66 V, after the compensation of the negative interface state charge.

Conclusion

In this study, a well-fabricated Si-based Schottky diode is characterised using I-V and C-V-f techniques. The evaluated parameters are consistent with those reported in the literature for similar diodes and are used to analyse the interface states at the m-s interface of the diode. As a result of their different charges and densities, the interface states are in equilibrium with the material through electrons generated by temperature and injected from the metal contact, ionised acceptors, and majority-carrier holes. The energy, density, and charge of the states were found to change with forward bias. At voltages lower than the zero Schottky barrier height, the interface state energy lies above the valence band, and the density and charge states vary with the applied bias, explaining the ordering of the interface state charge. The density of the interface states increases with voltage, and their charge state is positive at energies below the valence band (voltages higher than the zero Schottky barrier height). In the voltage range 0.11-0.67 V, around the zero Schottky barrier height, the energy of the interface states is close to the valence band and their density is zero, as the majority carriers compensate the electrons and render the interface state charge neutral. A capacitance peak at low reverse voltages, and the change of its height with measurement frequency, confirm the existence of these interface states and their charge states. At high frequencies the negative interface states participate in the ac signal, resulting in the initial increase in the capacitance; the density of these negative charge states is high at low voltages, possibly owing to minority carriers. At low frequencies, however, the high density of majority carriers is active in compensating these negative interface states. After charge compensation of the interface states, the SCR widens at a reverse bias of -0.66 V. As a result, for SCR properties to be studied separately from the interface states, the devices should be operated at voltages greater than 0.66 V.
A revolution in sex education using sex robots

Abstract After more than four decades of school sex education programs, there is still a great deal of inadequate information around, together with mixed messages and general confusion. Topics covered have ranged from "no sex" to "safe sex", and there has been a tendency for pupils to drift into the "alternative" areas of pornographic media. This approach may have contributed to generations of women and men finding themselves, in adult life, living with unresolved sexual problems caused by a truncated and disorderly "education". The arrival of robots presents an opportunity to reconsider the way in which sexuality is taught and to introduce an innovative and ethical educational practice. This article proposes a conceptual framework in which medical simulation is extended toward sex education based on advanced sex robots, offering a dynamic, effective and ethical teaching method. This article considers why, how and where, as well as what is workable and technically feasible. The target is that, with the help of a new generation of robots, a fulfilling love life will be accessible to every man and woman. The engagement of both the simulation community itself and the political establishment is vital to ensure that a more enlightened and emotionally intelligent sex education can become a reality in the near future.

How is sex education currently delivered?

For a long time, society was rather reluctant to accept the need for sex education, because "sex is natural" [1, p. 10]. However, over the years, sex education has proven to be essential in coping with unwanted pregnancies and the transmission of AIDS, among other things [2, p. 2]. Sex education was introduced in European schools around four decades ago, but was rather limited to "no sex" or "safe sex", and was usually delivered in a conventional way [3, p. 14]. However, in 2006 the International Planned Parenthood Federation European Network suggested the addition of the notion of pleasure to the curriculum, positioning itself in these terms: "Comprehensive sexuality education seeks to equip young people with the knowledge, skills, attitudes and values they need to determine and enjoy their sexuality - physically and emotionally, individually and in relationships. It views 'sexuality' holistically and within the context of emotional and social development. It recognizes that information alone is not enough. Young people need to be given the opportunity to acquire essential life skills and develop positive attitudes and values" [4, p. 6]. In 2010, the World Health Organization (WHO), along with other prestigious stakeholders, invested in an intensification of initiatives in the field of sex education. In its publication "Standards for Sexuality Education in Europe", the WHO recommended "that all Member States should promote holistic sex education to guarantee sexual health and sexual well-being" [5]. From the mid-1990s on, pornography became easily accessible to all [6], thanks to video tapes, DVDs and, more recently, the Internet, without age restriction and without control. At the beginning, pornography was viewed as an unofficial way to learn about sex, but more often it was simply used to provoke sexual arousal. Most sexologists believe that if someone does not receive good sex education, he or she will not know how to become competent in lovemaking [7, p. 8; 8, p. 10].
Clearly, there is a need for a more balanced, efficient, insightful and sensitive approach to sex education, covering such relevant areas as human emotions, the human body, human relationships, erotic art and ethical concepts. Could the use of advanced robots then be a way to improve sex education? This article proposes a conceptual framework in which medical simulation, as used in medical training, is extended to sex education based on advanced sex robots. The target is that within a decade, with the help of true-to-life simulators, sex education could be accessible in Centers of Excellence to anyone who wishes to be supported on a path that leads to a more intimate and harmonious sex life as well as an improved overall sense of well-being.

Sexual experiences

There have been many studies of discomfort in sexual relations [9]. Spiegelhalter studied 15,000 British adults aged 16-74 years who participated in interviews about their sex lives over a 2-year period from 2010 to 2012 (Figure 1). "Overall, around 50% of women and 40% of men reported one or more sexual problems (dark line), these rates increasing with age. But the proportions seemed high even for 16-to 24-year-olds. Around 22% of females and 15% of males reported two or more problems (blue line), and perhaps the most remarkable feature of the graph is the way the blue lines are essentially horizontal: the incidence of multiple problems is similar for younger and older people. The problems identified included: lack of interest in or enjoyment of sex, physical pain as a result of intercourse, experiencing no excitement or arousal, difficulty in reaching climax and, for women, an uncomfortably dry vagina while, for men, trouble in achieving or maintaining an erection" [10, p. 230]. For 10 years, during a regular 2-hour program on French national radio, Brigitte Lahaie, a former sex worker, listened to the questions of her listeners and helped them shed some light on their love lives [11]. In 2011, she summarized her experience in a book covering a hundred frequently asked questions. Without taboo, she tackled such subjects as the usefulness of letting go in sexuality, lack of desire, erectile dysfunction and the use of toys and aphrodisiacs. She looked at how to build sexual knowledge as well as how to overcome certain fears. Her broadcasts were enormously popular. During my sexology sessions, I often ask the woman to draw her genital region. The drawings most often reveal a great lack of knowledge of this part of the body: the clitoris may be missing, and/or the vagina, the urethra or the anus. Concerning the female genital region, the results from males are generally no better. In cases such as these, how can we love and have fun with a body we do not know? How can we recognize ourselves as fully sexed human beings? It takes gentleness and time for a woman to truly get to know how to use her body. This must also be borne in mind and should be a factor in more enlightened types of sex education.

Evidence to support the use of novel applications for simulation technology

In 2015, in my capacity as a professor at the School of Health, specializing in the quality of care and patient safety, I attended the inaugural university session in the simulation laboratory. I was particularly impressed by this new way of teaching and, in parallel, as a qualified clinical sexologist, I was even more interested in the techniques used.
I have since studied the literature on simulation-based medical education and followed the related training courses at, among other places, the Centre of Excellence Ilumens of the IUT Paris Diderot. It seemed both highly appropriate and urgent to explore how to adapt and transfer these new medical teaching models to the field of sex education. Simulation is a technique, not a technology, to replace or amplify real experiences with guided ones that evoke or replicate substantial aspects of the real world in a fully interactive manner [12]. Simulation technologies broadly encompass diverse products, including computer-based virtual reality simulators, high-fidelity and static mannequins/plastic models, and robots [13]. Simulation is a newly evolving teaching methodology. At present, it is intended for all health professionals, to develop, maintain or even strengthen their skills in order to provide quality care while respecting all required safety criteria. Indeed, simulation makes a major contribution to the reinforcement of skills by staging, analyzing and adjusting them as much as possible before the time comes to implement them. All knowledge is mobilized, thus promoting integration and transferability. Simulation sessions have to meet a set of specifications in order to guarantee real efficacy. On the basis of scientific and ethical values, there are fundamental principles to be integrated: the use of a scenario with accurate and measurable objectives, an organized and structured briefing, and then a debriefing session which takes into account all emotional reactions [14]. Over the past 20 years, numerous simulation centers accredited as Centers of Excellence have been created to develop this way of teaching. These centers are generally connected to the medical faculties of universities and high schools. In 2019, the Bristol Medical Simulation Center listed 1,589 simulation centers worldwide [15], which means there is a huge potential resource available. In 2011, Cook et al., the leaders of simulation techniques at the Mayo Medical School, concluded their systematic review and analysis of technology-enhanced simulation by saying: "in comparison with no intervention, technology-enhanced simulation in health professional education is consistently associated with major positive effects on knowledge, skills, and behaviors plus moderate effects on patient-related outcomes" [16]. More recently, in 2017, Griswold et al. concluded that "a significant body of international research has begun to show how simulation-based medical education and competency-based medical education can improve patient care and patient outcome" [17]. This evidence suggests that a potential synergy between simulation using robots and sex education could and should be developed.

A conceptual framework for the successful delivery of simulation techniques and sexual education

Three elements are important to consider. First, the content of the curriculum must be based on scientific evidence and recent developments in the field of sexuality. There is much to be dealt with here, and the amount of published material is limited. In 2006, Komisaruk et al., as well as Linden in 2015, explained the organization of the sensory nerves innervating the pelvic region: "sensory information from the pelvis is carried to the brain by three spinal nerves, the pudendal, the pelvic and the hypogastric [18, p. 10; 19, pp. 98-99]. They enter at different levels along the spinal cord.
Sensations from the uterus and cervix are also conveyed via a cranial nerve called the vagus, which enters the brain stem directly. It is important to note that even a single nerve can convey information from a number of different skin sources. For the man, for example, the male pudendal nerve carries sensation from the penis, anus and scrotum" (Figure 2). This enables us to explain, for example, the three main types of orgasms for the woman: the one generated by clitoral stimulation in connection with the pudendal nerve, the one generated by the vaginal "G spot" region in connection with the pelvic nerve, and the one generated by the cervix region with the hypogastric and vagus nerves. These orgasms are of different intensities in the body, and there is also individual variation in the sensory nerve wiring of the genital region. As Leleu writes, we can learn about our individual tuning by "playing the whole keyboard of orgasms" [20, p. 307]. Access to correct information, skills and behavior provides the basic elements in the formation of stable, fulfilling sexual relationships and improved erotic competence. The second element is to translate this updated knowledge into comprehensive, sensitive and practical language, into "know how", in such a way as to arouse interest in acquiring an erotic skill. Two models in sexology appear to be compatible with the use of advanced human-like robots and the results of science: the Sexocorporel model and the MEBES. The Sexocorporel model, created by Professor J-Y Desjardins of the University of Montreal Sexology Department in the 1980s, is an encompassing view of human sexuality that considers all of the physiological, personal, cognitive and relational components involved in a sexual experience. In reality, these components closely interact, although for didactic purposes they are treated separately [21, p. 66] [22]. On the other hand, Britton's sexual health model, the MEBES, includes assessment and the creation of an action plan for particular sexual concerns at the relevant stage of life. MEBES is founded on the idea that helping the person with a sexual concern usually involves mind (M), emotions (E), body and body image (B), energy (E) and spirit (S) dimensions [23, p. 4]. These two models complement each other and could form the basis of the teaching curriculum. Third, a great deal of work needs to be done to improve people's understanding of the robots that exist. There are realistic simulators which can feel, interact and express emotions. It is essential to encourage discussion and debate on empathy with artificial intelligence and on the situations experienced in this context [24,25]. Participants can then go beyond the simple mechanics of intercourse and the techniques of sexual stimulation; they can invest in emotional experience. The emotions and the whole psychological climate in which sexual encounters occur are as important as, if not more important than, the sex itself. Consequently, simulation can also be framed within ethics, facilitating critical analysis.

Is this new approach technically feasible?

Sex education clearly needs to be improved and could be revolutionized by the introduction and use of advanced computerized human simulators for both adults and young people. We need to create a new generation of models, true-to-life simulators. It is important to distinguish this approach both from medical simulation (the idea is not to medicalize sex education) and from gender-based sex robots (since we are looking at sexuality in a holistic way).
These true-to-life simulators can be created thanks to the combination of the medical industry and the sex robot industry. Of course, we need to know exactly what we want in the construction of these true-to-life simulators, and what we want to transmit between the programmer and the learner [26, p. 210] (Figure 3).

Conclusion

Today, there is a need for innovative teaching methods in sex education. Medical education has been revolutionized by the use of simulation, and sex education can be too. The arrival of robots is an opportunity to give everyone better access to sex education in order to pursue a safe and pleasurable sexual and emotional life. Simulation, combined with advanced sex robots, can also revolutionize sex education by offering a modern, effective and ethical teaching method. The creativity and engagement of the simulation community itself are vital. We also need the political will, which is one of the biggest challenges facing the promotion of good sexual and loving health for all those who want to be responsible for their own well-being. All these elements will help us to ensure that this improved outlook in the world of sex education becomes a reality in the near future. Finally, let us, as Tisseron wrote in 2015, ask programmers to think of robots to whom we will be able to say: "Enable me to know exactly who I was and who I am so that I can take ownership of the person I will be in the future" [27, p. 186].
3,438.4
2020-01-01T00:00:00.000
[ "Education", "Sociology", "Computer Science" ]
An accurate integral equation method for Stokes flow with piecewise smooth boundaries

Two-dimensional Stokes flow through a periodic channel is considered. The channel walls need only be Lipschitz continuous, in other words they are allowed to have corners. Boundary integral methods are an attractive tool for numerically solving the Stokes equations, as the partial differential equation can be reformulated into an integral equation that must be solved only over the boundary of the domain. When the boundary is at least C^1 smooth, the boundary integral kernel is a compact operator, and traditional Nyström methods can be used to obtain highly accurate solutions. In the case of Lipschitz continuous boundaries, however, obtaining accurate solutions using the standard Nyström method can require high resolution. We adapt a technique known as recursively compressed inverse preconditioning to accurately solve the Stokes equations without requiring any more resolution than is needed to resolve the boundary. Combined with a periodic fast summation method we construct a method that is O(N log N), where N is the number of quadrature points on the boundary. We demonstrate the robustness of this method by extending an existing boundary integral method for viscous drops to handle the movement of drops near corners.

Introduction

The Stokes equations are used to model slowly moving, highly viscous, fluid flow. They can be thought of as the zero Reynolds number limit of the Navier-Stokes equations. Of the many applications of the Stokes equations, they are often used to model particle suspensions including solid particles [6,3], drops [21,26,27], or vesicles [28]. In addition they often describe very well the flow near a solid boundary, and can therefore be used to derive effective slip boundary conditions for problems at higher Reynolds numbers [1,7]. An advantage of using the Stokes equations over the Navier-Stokes equations is that the Stokes equations are linear and elliptic, allowing us to recast them as a boundary integral equation (BIE). BIEs have several nice properties. All the information needed to solve a BIE is confined to the boundary of the domain; this leads to an immediate dimension reduction. The Stokes equations can be represented as a second-kind Fredholm equation [23]. Assuming the boundary is sufficiently smooth, after discretization the condition number of the resulting linear system is independent of the number of discretization points used, meaning that very highly accurate solutions are obtainable. Traditionally one major drawback of BIEs was the need to solve dense linear systems. However, by using efficient iterative solvers such as GMRES [30], combined with fast matrix vector products [9,19], the cost to solve the N × N dense linear system can be reduced to O(N) or O(N log N), where N is the number of discretization points on the boundary of the domain. In this paper, we will be considering wall-bounded, periodic Stokes flow.
For particulate flows, such models are useful because they allow for the computation of various time averaged quantities over a relatively small reference cell, without the need to simulate an unfeasibly large domain. BIEs have been successfully applied to such problems [20,27,35]. Solving PDEs on Lipschitz domains to high accuracy everywhere in the domain is in general quite challenging, independent of the numerical method used; see for example the discussion in [8]. When solving boundary integral equations on domains with corners, the standard Nyström method fails to achieve optimal accuracy [5]. While the underlying equation has a unique solution, the layer density defined on the boundary can become weakly singular at corner points, thus reducing the accuracy of regular quadrature rules, such as composite Gauss-Legendre quadrature. One approach to solving this problem is to cluster additional quadrature points near the corners. This of course dramatically increases the number of unknowns, while at the same time the accuracy of such an approach is limited [5,12]. A recent paper [34] demonstrates an approach that automates both the spatial adaptivity and the order of the quadrature (similar to hp adaptivity) to achieve high accuracy for complicated domains containing several corners. Other methods have been developed, some of which involve elegant kernel-dependent custom quadrature rules [31,29]. We will use a kernel-independent method known as recursively compressed inverse preconditioning [12,13]. This method has the advantage of being relatively simple to implement on top of existing code, and does not introduce any additional unknowns to the linear system that must be solved. We will demonstrate the robustness of this method when applied to periodic Stokes flow by solving problems involving moving viscous drops.

Governing Equations

The governing equations are the steady incompressible Stokes equations (1), where µ is the viscosity of the fluid. In this paper we will restrict our attention to periodic channel flow as depicted in Figure 1. For boundary conditions, we will prescribe Dirichlet conditions on the velocity, and enforce periodicity in the x_1 direction on the velocity. In addition we impose a constant pressure drop across the reference cell, so that the pressure itself is not periodic, but the gradient of the pressure is.

Figure 1: Sketch of a periodic channel with a Lipschitz boundary. The period of the channel is L in the x_1 direction, and the minimum height of the channel is h. The normal vector on the channel walls points into the bulk fluid.

Boundary Integral Formulation

For clarity of exposition, in this section and the next we will present the boundary integral formulation, and the treatment for the corners on a nonperiodic domain. The periodicity will be addressed in Section 5. Consider the Stokes equations (1) defined inside a Lipschitz domain, along with Dirichlet boundary conditions on the velocity. For the non-periodic case, we will not be concerned with the pressure. The normal vector along the boundary Γ always points into the fluid. See Figure 2 for a sketch of such a domain. As given in [24] [Chapter 2], a solution to the forced Stokes equations is given by a velocity and pressure pair built from the Stokeslet and stresslet kernels introduced below. The stress tensor σ_j at a point x is given by the combination of the velocity and pressure. Here and in the remainder of this paper we have used the Einstein summation convention, where summation over repeated indices is implied.
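For reference, the steady incompressible Stokes system and the periodicity conditions described above can be written out as follows; this is the standard form, with our own rendering of the pressure-drop condition, and may differ in normalization from the authors' displayed equations (1)-(2):

-\nabla p(\mathbf{x}) + \mu\,\Delta\mathbf{u}(\mathbf{x}) = \mathbf{0}, \qquad \nabla\cdot\mathbf{u}(\mathbf{x}) = 0, \qquad \mathbf{x} \in \Omega,
\mathbf{u} = \mathbf{g} \text{ on } \Gamma, \qquad \mathbf{u}(x_1 + L, x_2) = \mathbf{u}(x_1, x_2), \qquad p(x_1 + L, x_2) = p(x_1, x_2) - \Delta p,

so that the pressure gradient, but not the pressure itself, is L-periodic in the x_1 direction.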
The letters i and k are reserved for later use and are therefore not used as indices. In R^2 the kernels have explicit closed forms; here we have used r to denote the Euclidean norm of r. The tensors G_j and T_jm are called the Stokeslet and stresslet respectively. From the Stokeslet and stresslet we can define the single- and double-layer potentials, S[q](x) and D[q](x), as a convolution of the Stokeslet and stresslet with a density function q(x) defined over Γ. Explicitly, these are integrals over Γ, where n is the unit normal vector pointing into the fluid. Following [11], we will express the solution to (1) as a combination of the single- and double-layer potentials (3), where η > 0 is an arbitrary constant which governs the relation between the single and double layer potentials. To obtain a well-conditioned system η cannot be too large; we will always take η = 1. This equation is valid for x ∈ Ω. To obtain a boundary integral equation (BIE) we will take the limit of (3) as x approaches a point x_0 ∈ Γ. To do this, we will need the limiting values of the single- and double-layer potentials. Applying these limits to (3) and matching it to the boundary condition g yields the BIE (4). If Γ is Lyapunov smooth, then the operator β is a compact operator, with eigenvalues accumulating at zero. In this case (4) can be analyzed using Fredholm theory. In particular the Fredholm alternative applies, and in [11] this is used to demonstrate the existence and uniqueness of solutions. If, as in our case, Γ is only Lipschitz smooth, then β is not compact so Fredholm theory cannot be applied. Furthermore, the double-layer potential involves the normal vector, meaning that the double-layer kernel cannot be evaluated pointwise. Nonetheless, it can be shown that (4) has a unique solution, even when Γ is only Lipschitz continuous [5]. In this case the density function q is singular at the corner points; however, (3) can still be evaluated for any x ∈ Ω, and (4) can be evaluated anywhere on Γ except at the corner points.

Numerical Methods

To evaluate (4), we use the Nyström method, as described in [4] [Chapter 4]. Let γ(s), s ∈ [0, 2π], parameterize Γ. The BIE (4) can then be written in the abstract form (5). The kernel K in this case is the kernel of β, i.e., it is given by (6). We will approximate the integral in (5) using a composite Gauss-Legendre quadrature scheme of n_pan panels and n_q quadrature points per panel, where N = n_pan n_q is the total number of quadrature points, and w_n is the quadrature weight corresponding to the quadrature point s_n. We will then enforce the discretized equation (7) at the quadrature points s_m, m = 1, ..., N, to get a linear system in which the point values of the density function q are the unknowns. When setting up the composite quadrature rule, care must be taken to ensure that each corner on Γ is at the intersection of two panels. In this way we avoid difficulties arising from trying to evaluate the normal vector on a corner, as the Gauss-Legendre quadrature points cluster near but are never located at panel endpoints.

Singular Quadrature

When t = s, the kernel of the double-layer potential T_jm(0) n_m(s) has a removable singularity. The Stokeslet however must be handled using a specialized quadrature technique. We will use the approach given in [22]. We begin by writing the single-layer potential in complex variables, where we have introduced a slight abuse of notation to denote by q(z) the density function q written as a complex number, i.e., z = x_1 + i x_2 and q(z) = q_1(x) + i q_2(x).
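As an illustration of the Nyström discretization just described (this is not the authors' code; the smooth kernel and the right-hand side below are placeholder choices), the panel-based composite Gauss-Legendre rule and the resulting dense linear system can be set up as follows.

# Minimal sketch: Nystrom discretization of a model second-kind equation
# q(s) + int_0^{2pi} K(s,t) q(t) dt = g(s) on composite Gauss-Legendre panels.
import numpy as np

n_pan, n_q = 20, 16                                    # panels, nodes per panel
t_ref, w_ref = np.polynomial.legendre.leggauss(n_q)    # reference rule on [-1, 1]

edges = np.linspace(0.0, 2*np.pi, n_pan + 1)
nodes, weights = [], []
for a, b in zip(edges[:-1], edges[1:]):                # map reference rule to each panel
    nodes.append(0.5*(b - a)*t_ref + 0.5*(a + b))
    weights.append(0.5*(b - a)*w_ref)
nodes = np.concatenate(nodes)                          # N = n_pan * n_q quadrature points
weights = np.concatenate(weights)

def K(s, t):                                           # placeholder smooth kernel (assumption)
    return 0.1*np.cos(s - t)

g = np.sin(nodes)                                      # placeholder boundary data (assumption)
A = np.eye(nodes.size) + K(nodes[:, None], nodes[None, :]) * weights[None, :]
q = np.linalg.solve(A, g)                              # point values of the density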
The only term in the complex formulation of the Stokeslet that does not have a finite limit as τ → z is the term with the logarithm. We will consider this integral over a single panel, Γ_k. The panel extends from α_1 to α_2, with α_1, α_2 ∈ C, and is parameterized by α ∈ [α_1, α_2]. Let α be parameterized in terms of t ∈ [−1, 1], with ψ = (α_2 − α_1)/2. The log term in the integral above can be rewritten in a form involving the preimage α_z, where τ(α_z) = z. Define t_z to be such that α(t_z) = α_z. This allows us to rewrite the log integral in (8) as a sum of two integrals. The first integral can be evaluated in closed form. To evaluate the second integral, we expand the density q(t) as a polynomial series, which allows us to write I in a compact form. The integrals in h can all be computed analytically using a recursion relation. The vector c = {c_j}, j = 0, ..., n_q − 1, can be computed by solving the Vandermonde system V c = q, where q = {q(t_0), ..., q(t_{n_q−1})} and V is the Vandermonde matrix. Thus I can be approximated by a weighted sum of the density values, where ω(t_z) is the solution to the linear system h(t_z) = V^T ω(t_z). The stability of these computations is discussed in [14]; here we simply note that the computations are stable. Note that this linear system is independent of q. For now we will need the values of I only at the quadrature points t_0, ..., t_{n_q−1}. Since we have rescaled and rotated the panel Γ_k to run from −1 to 1, the corrected weights ω(t_z) can be precomputed for each of the quadrature points t_0, ..., t_{n_q−1}. Quadrature points on adjacent panels will also need to be corrected. How many points to correct depends on accuracy considerations, but for n_q = 16, numerical results have shown that if all the panels are the same size (in parameter space), then correcting the four closest quadrature points in each adjacent panel is sufficient. Again, these corrections can be precomputed.

Recursively Compressed Inverse Preconditioning

As previously mentioned, in the case of Lipschitz domains, (4) has a unique solution. However, at the corners, the density function q becomes singular. Therefore Gauss-Legendre quadrature fails to accurately integrate the integrands in (4) near the corners. Local panel refinement around the corners is one way to mitigate this issue; however, this approach adds a potentially large number of new unknowns (see Figure 3). In addition, the condition number of the resulting locally refined linear system grows with the number of quadrature points. An alternative approach, recursively compressed inverse preconditioning (RCIP) [12,13], can achieve the same accuracy as local refinement by performing a simple precomputation. As the desired accuracy requirements increase, the precomputation grows linearly with n_sub; however, both the size and the condition number of the final linear system remain fixed. The main idea behind RCIP is to transform the density function q in (4) into a transformed density q̃ that is piecewise smooth everywhere on Γ. Then Gauss-Legendre quadrature can be effectively used. To create this transformation, the operator β is written as a split (9), where β° is compact away from the corners and β* describes the corner interactions. We then define the transformed density by (10), where I is the identity operator. Using the split (9) and the transformed density (10), we can convert (4) to the transformed BIE (11) for q̃, where R = (I + β*)^{-1}. If we assume g is piecewise smooth, it can be immediately seen that q̃ must also be piecewise smooth.
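The weight-correction step can be illustrated for the scalar model integral ∫ q(t) log|t − t_z| dt on the reference panel. The sketch below follows the same monomial-expansion idea, but it is a simplified stand-in, not the paper's complex-variable recursion: the moments h_j are computed numerically rather than by recursion, and the Vandermonde solve is only mildly ill-conditioned for n_q = 16.

# Sketch of corrected quadrature weights for I(t_z) = int_{-1}^{1} q(t) log|t - t_z| dt:
# expand q in monomials so that I = sum_j c_j h_j with V c = q, i.e.
# I ~ sum_n w_n(t_z) q(t_n) where V^T w(t_z) = h(t_z).
import numpy as np
from scipy.integrate import quad

n_q = 16
t, w_gl = np.polynomial.legendre.leggauss(n_q)

def corrected_weights(t_z):
    # moments h_j = int t^j log|t - t_z| dt, computed numerically here for simplicity
    h = np.array([quad(lambda s, j=j: s**j * np.log(abs(s - t_z)), -1.0, 1.0,
                       points=[t_z] if -1.0 < t_z < 1.0 else None, limit=100)[0]
                  for j in range(n_q)])
    V = np.vander(t, N=n_q, increasing=True)   # V[n, j] = t_n**j
    return np.linalg.solve(V.T, h)             # w(t_z) solves V^T w = h

# example: log integral of q(t) = exp(t) evaluated at a target point on the panel
q = np.exp(t)
t_z = 0.3
I_corrected = corrected_weights(t_z) @ q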
Since β° is a smoothing operator, β°Rq̃ will be smooth everywhere. Then, by contradiction, in order for (11) to hold, Iq̃ = q̃ must be piecewise smooth. It remains to discretize (11). One possibility for discretization is to use two meshes: a coarse mesh Γ_coa on which β is discretized, and a fine mesh Γ_fin, on which R is discretized. To get Γ_fin from Γ_coa we first designate the two panels on either side of corner j to be the subset Γ*_j ⊂ Γ_coa, and define Γ° to be Γ_coa \ ∪_{j=1}^{n_c} Γ*_j. The panels on either side of corner j in Γ*_j are then dyadically refined n_sub times to get Γ*_{j,fin}. The fine mesh is defined as Γ_fin = Γ° ∪ (∪_{j=1}^{n_c} Γ*_{j,fin}). An example of this is shown in Figure 4.

Figure 4: Panel discretization using two meshes. The boundary Γ is first discretized with a coarse mesh, Γ_coa. The two panels on either side of each corner are denoted Γ*, and the remaining panels are denoted Γ°. The panels in Γ* closest to the corners are dyadically refined n_sub times to obtain Γ_fin. Each panel, regardless of its size, contains n_q Gauss-Legendre quadrature points.

We will define β° and β* to be the operator β restricted to the domains Γ° and Γ* respectively. Note that β° is a compact operator. The operator β° can be discretized on Γ_coa to get the matrix B°. This matrix is equivalent to the discretization of β on Γ_coa, but with the entries for which both the source and the target lie outside Γ° set to zero. The operator R could be discretized on Γ_fin. This would however not be much use, since it would introduce a large number of unknowns. Instead we will exploit a forward recursion relation to construct R on a sequence of meshes covering larger and larger portions of Γ*_j. To define the recursion relation, we will need to provide some definitions of different types of meshes. For each corner j, define a sequence of meshes Γ*_{j,ℓ}, ℓ = 1, ..., n_sub, with Γ*_{j,n_sub} = Γ*_j. On each Γ*_{j,ℓ} we will have a six-panel type-b mesh, Γ*_{j,ℓb}, and a four-panel type-c mesh, Γ*_{j,ℓc}. The type-b and type-c panels will be related as shown in Figure 5. To interpolate between type-b and type-c meshes defined over the same interval, we introduce the prolongation matrix P_bc. In addition, defining W_b and W_c to be diagonal matrices containing the quadrature weights on the type-b and type-c meshes respectively, we can define the weighted prolongation matrix P^W_bc. This matrix will be of size 6n_q for scalar problems, or 12n_q for vector problems in R^2. We define R_{j,1} on the coarsest corner mesh, and then compute the sequence R_{j,2}, ..., R_{j,n_sub} using the recursion relation (12), where I°_b and B°_{j,ℓb} are, respectively, the identity matrix and B_{j,ℓb} with the entries in the two panels around the corner zeroed out. The operator F{·} zero-pads its argument to turn a matrix defined on Γ*_{j,ℓc} into one defined on Γ*_{j,(ℓ+1)b}. Once we have the matrices R_{j,n_sub}, we can construct R̂, the discretization of R on Γ_coa. Outside of Γ*, β* is zero, so from the definition of R we obtain that R̂ must be the identity matrix over Γ°. Over Γ*_j, the discretization of R is just R_{j,n_sub}. To handle the log singularity when assembling the matrices B_{j,1b} and B°_{j,ℓb}, the same techniques as described in Section 4.1 can be used. Some bookkeeping is needed to account for the fact that the panels are not equally sized in parameter space. In some cases adjacent panels will be double or half the size; however, the modifications for the quadrature weights can still be precomputed for each case.
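For reference, in the standard RCIP formulation the split (9), the transformed density (10), and the transformed equation (11) take the following form; this is our reconstruction based on the description above, and the exact scaling of the original BIE (4) may differ:

\beta = \beta^{\circ} + \beta^{*}, \qquad \tilde{q} = (I + \beta^{*})\,q, \qquad (I + \beta^{\circ} R)\,\tilde{q} = g, \qquad R = (I + \beta^{*})^{-1},

so that q = R\tilde{q}, and away from the corners, where β* vanishes, q and q̃ coincide.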
Fast multiplication

The fully discrete transformed BIE defined on Γ_coa is (13), which can be solved using GMRES. To accelerate the required matrix-vector products from O(N^2) to something computationally feasible, fast summation methods are a necessity. To do this, we will rewrite B and R̂ as block matrices, so that B° can be expressed in block form and (13) can be rewritten accordingly. The matrix-vector product R̂q̃ can be done directly, since R̂ is just the identity matrix with one 8n_q × 8n_q block for each corner. The remaining matrix-vector products BR̂q̃ and B*R̂q̃ can be computed with a fast summation method, reducing the cost from O(N^2) to O(N log N) or O(N), depending on the choice of fast summation method. We will use an O(N log N) method that can naturally handle periodic problems, as will be described in Section 5.

Post-processing

After we have computed q̃, we evaluate the velocity at a point x ∈ Ω by using the regular quadrature rule with q = R̂q̃, where K_j is defined in (6). This quadrature is as accurate as if we had access to the fine density defined on Γ_fin. From the definitions of q̃ and R, it is clear that outside of Γ*, q̃ and q are equivalent. On Γ*, the transformed density q̃ can be thought of as a weight-corrected density function. Both the double-layer potential and the single-layer potential contain integrals that become near-singular when t − s is small, i.e., the numerical errors grow large when evaluating the integrals at points close to any boundary. To handle this, the integrals are treated in the manner described in [27]. In short, the main idea is similar to that explained in Section 4.1, where the density is expanded as a polynomial series and the integrals are computed analytically using recursion relations.

Periodicity

We now address the periodicity in (2). The periodic single- and double-layer potentials, S^P[q](x) and D^P[q](x), x ∈ Ω, are defined in (14) as infinite sums of the single- and double-layer potentials defined in Section 3. With these operators we can define the periodic operator β^P[q]. Note that the periodic sum is over p ∈ Z^2; in other words we are imposing periodicity in both spatial dimensions, even though in our actual problems the periodicity will be only in one direction. The reason for this is that it allows us to exploit a more efficient fast-summation method. To enforce the periodicity in one dimension we will embed the channel, which is only periodic over a length L in the x_1 direction, in a doubly periodic box of size L × L. A similar approach is used in [35], where a flow that is periodic in one direction is embedded in a two-dimensional periodic box. As mentioned in Section 2, a constant pressure drop p_0 − p_1 in the x_1 direction is applied. In order to impose this we will add an unknown mean velocity ⟨u⟩ to the layer formulation (4), obtaining (15). Note that ⟨·⟩ denotes a volume average over a full periodic cell, including the parts outside Ω. This quantity has no physical meaning beyond imposing a pressure gradient. An alternative approach would be to add a predetermined background flow, as done in [27]. The mean pressure gradient ⟨∇p⟩ must balance the net effect of the wall friction [35]. From (15) the wall friction force is equal to η ∫_Γ q dΓ [11]. The system we have to solve is thus (15) together with this force balance. The splits for the two-dimensional Stokeslet and stresslet, as well as truncation estimates, are derived in [26]; here we list the results. To compute the infinite sums in (14) we will use the spectral Ewald method [18,19].
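The matrix-free GMRES solve described above can be organized as in the following sketch. This is an illustration only: the dense random matrix B stands in for the fast-summation matvec, and R̂ is taken as the identity for brevity; in the actual method R̂ contains one precomputed block per corner.

# Sketch: solving (I + B Rhat) qt = g with GMRES and a matrix-free matvec.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
N = 400
B = 0.01 * rng.standard_normal((N, N))     # placeholder for the discretized operator
Rhat = np.eye(N)                           # identity away from corners (corner blocks omitted)

def matvec(qt):
    # in practice B @ (Rhat @ qt) is applied with a fast summation method
    return qt + B @ (Rhat @ qt)

g = rng.standard_normal(N)
A = LinearOperator((N, N), matvec=matvec)
qt, info = gmres(A, g)
assert info == 0
q = Rhat @ qt                              # recover the original density for post-processing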
In the spectral Ewald method the infinite sums in S^P and D^P are split into two parts based on a so-called Ewald decomposition: one that decays exponentially fast in real space, and one that decays exponentially fast in Fourier space. The real space sum can be truncated in real space, while the Fourier sum can be truncated in Fourier space. The splits for the two-dimensional Stokeslet and stresslet, as well as truncation estimates, are derived in [27]. Here we list the results.

Ewald splits

The discretized periodic single- and double-layer potentials are given by sums over the images p and the quadrature points, where the * in the summations over p indicates that we are excluding any p that makes x − x_m − Lp = 0. The discretized single-layer potential can be rewritten as a split into a real space part and a Fourier space part. The k = 0 mode in the Fourier expansion has been shown to be 0 [25]. The decomposition parameter ξ > 0 is called the Ewald parameter and determines the relative sizes of the real and Fourier parts of u^G(x). Note that u^G(x) itself is independent of ξ. When applying the Nyström method to solve for the unknown pointwise density values, we do not wish to evaluate singular cases where x − x_n + Lp = 0. This contribution to u^G(x) can be skipped in the real space sum, but for the Fourier sum we will have to subtract off the limiting value, in which γ is the Euler-Mascheroni constant. For the discretized double-layer potential, we use an analogous split. For the k = 0 mode, a particular choice guarantees zero mean flow through the reference cell [2]. For the stresslet, the limit lim_{r→0} (T_jm(r) − T^R_jm(r)) q^n_m w_n = 0, so no limiting value needs to be subtracted when r = 0. To compute these sums, it is necessary to truncate them. Fortunately, both these sums decay exponentially fast, and they can be truncated to a desired tolerance following the estimates in [27]. With appropriate scaling of the decomposition parameter, the real space part is computed in O(N) time and the Fourier space part is accelerated to O(N log N) using the fast Fourier transform. In order to use FFTs, the source points are spread to a uniform grid where the computations for the Fourier space sum are carried out. The result is then gathered from the uniform grid to the target points. The spreading and gathering is done using truncated Gaussians whose shape parameter is selected to minimize the approximation error for a given support. For efficiency, fast Gaussian gridding [10] can be used in both the spreading and the gathering steps. For more details see [27]. A numerical difficulty in the periodic formulation lies in the fact that the Ewald representation of the stresslet is not translation invariant due to the zero mode (16). This means that the submatrices R_j are different for each corner, even if the corners have the exact same shape. To avoid roundoff error for large n_sub, we would like to have a local coordinate system for each corner centered at x = 0. To assemble these matrices, we first assemble the translation invariant part of the matrices R_j, and then add on (16) only at the end. Note that this is necessary only for quite large n_sub.

Numerical Examples

To test the periodization scheme, we can compare with an exact solution. Pressure-driven pipe flow creates the well-known parabolic flow profile in the x_2 direction. The exact solution for flow through a flat channel with top wall at x_2 = 1 and bottom wall at x_2 = 0 is the parabolic Poiseuille profile; note that this flow is constant in x_1 and therefore periodic in the x_1 direction.
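The truncation logic described above can be sketched as follows. The Gaussian-type decay factors exp(-ξ²r²) in real space and exp(-k²/(4ξ²)) in Fourier space are used here as generic stand-ins for the kernel-specific estimates in [27], and the balancing heuristic for ξ is an assumption, not the authors' choice.

# Sketch: choosing Ewald truncation radii from a tolerance, assuming generic
# Gaussian decay factors (the rigorous estimates in [27] are kernel specific).
import numpy as np

def ewald_cutoffs(xi, tol):
    r_c = np.sqrt(-np.log(tol)) / xi            # exp(-xi^2 r_c^2) = tol
    k_max = 2.0 * xi * np.sqrt(-np.log(tol))    # exp(-k_max^2 / (4 xi^2)) = tol
    return r_c, k_max

L, N = 1.0, 4096
xi = np.sqrt(np.pi * N) / L                     # heuristic balance of real/Fourier work (assumption)
r_c, k_max = ewald_cutoffs(xi, 1e-12)
print(f"xi = {xi:.2f}, r_c = {r_c:.3e}, k_max = {k_max:.1f}")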
As can be seen in Table 1, using only 4 panels allows us to achieve a relative error of 10^-7 everywhere up to a distance of 10^-3 from the boundary.

Table 1: Maximum relative errors in a pipe flow simulation along lines parallel to the top wall. As the target points get closer to the wall, the integrands become more challenging to evaluate accurately. The minimum error we are able to achieve is limited by the tolerances of our GMRES solver and Ewald summation.

Another test is to prescribe Dirichlet boundary conditions on the solid walls. We will prescribe a constant shear flow u = (x_2, 0) as the boundary condition on the top and bottom wall. The bottom wall is now only piecewise smooth. To recover a shear flow in the interior from the BIE solution, we must prescribe no pressure drop from inlet to outlet, i.e., ⟨∇p⟩ = 0. We will use this simple example to test the RCIP algorithm for various values of n_sub. Figure 6 shows the results. If we use the standard Nyström discretization without RCIP, we get quite large errors everywhere in the domain; the errors are not localized around the corner. For n_sub = 20 the RCIP algorithm allows us to achieve 11 digits of accuracy everywhere in the domain, except for close to the corners. To gain additional accuracy, it is possible to recover the actual fine density and use it along with the special quadrature described in Section 4.4 [12] [Section 10].

In every case 20 panels on both the top and the bottom wall are used. Without any special treatment, the recovered solution differs quite a bit from the exact solution everywhere. With n_sub = 20 the BIE can achieve 11 digits of accuracy everywhere except very close to the corners. If additional accuracy is required near the corners, the fine density function can be recovered and used in conjunction with the special quadrature described earlier.

The number of refinements needed to achieve a desired accuracy depends heavily on the geometry. For example, Figure 7 shows an example where n_sub = 20 achieves only a maximum of 7 digits of accuracy. For wedge-shaped corners, the domain segments Γ*_{j,ℓb} are self-similar for ℓ = 1, ..., n_sub and j = 1, ..., n_c. If the operator β is scale invariant, then B°_{j,ℓb} will be independent of ℓ, and the recursion relation (12) becomes a fixed-point iteration which can be iterated to find R_j to a desired tolerance without specifying n_sub [12] [Section 12]. Unfortunately, in our case, because the single-layer potential contains a logarithmic term, β is not scale invariant, so we cannot use this idea, and n_sub must be specified a priori. A formulation involving just the double-layer potential [23] would not suffer from this drawback and could be formulated as a fixed-point iteration. We have chosen the formulation in [11] because numerical experiments have shown it to be more stable for simulations of squeezing drops [36].

Adding Viscous Drops

Earlier work [27] has looked at modelling the movement of drops inside confined periodic geometries. We now demonstrate the robustness and usefulness of RCIP by extending the method in [27] to model the movement of drops near sharp corners. A drop is a packet of fluid that does not mix with the fluid surrounding it. Surface tension forces prevent the mixing of the drop with the bulk fluid. Both the fluid inside each drop, and the bulk fluid, satisfy the incompressible Stokes equations, where µ_ℓ denotes the viscosity inside the region bounded by Γ_ℓ. For convenience we will define the viscosity ratio λ = µ_ℓ/µ_0.
As λ increases the drop behaves more like a rigid particle. At the interfaces the velocity is continuous; however, in general the surface forces acting on the interface from the inside and the outside of the drop are not equal. We will denote the jump in the normal force across interface ℓ as δf_ℓ(x). This jump is related to the curvature κ_ℓ(x) and the surface tension σ_ℓ(x) by the stress balance given in [24] [Chapter 5], where e denotes the rate of strain tensor in Ω_ℓ and ∇_s is the gradient along the interface. To nondimensionalize this problem, we will define h to be the minimum height of the channel. We would like the maximum velocity of an empty channel to be 1. To do this, we will introduce a pressure scale h⟨∇p⟩/8, a length scale h, a velocity scale h^2⟨∇p⟩/(8µ_0), and a surface tension scale σ_0 to obtain the full nondimensional form of the problem, where the Laplace number Lp is the dimensionless quantity given by h^2⟨∇p⟩/(8σ_0) [17]. The time scale is now h/U = 8µ_0/(h⟨∇p⟩). The Laplace number serves the same purpose as the Capillary number in other drop simulations [27], but for pressure-driven flows it is more natural to nondimensionalize according to the pressure gradient as opposed to a maximum velocity. For the remainder of this paper we will assume a constant surface tension, so ∇_s σ = 0. Allowing for non-constant surface tension would not impact the RCIP algorithm in any way. In [26,27] chemical surface active agents are allowed to change the surface tension of the drops. A BIE formulation [36] for the problem is given in terms of S^P_{Γ_ℓ} and D^P_{Γ_ℓ}, the periodic single- and double-layer operators defined on Γ_ℓ. Taking the limit as x approaches the solid walls, and the corresponding limits on the drop interfaces, yields the coupled BIEs. As in Section 5 we close the system by relating the density function q, still defined only on the solid walls Γ_0, to the (nondimensionalized) average pressure gradient. After computing u(x) on the drop interfaces, the drops move and deform according to the velocity, dx/dt = u(x, t). To evolve this system of ODEs we will use the fourth-order adaptive time stepper described in [16]. Further details on time stepping, including a way to maintain consistent arclength spacing as the drop perimeter changes, can be found in [26,27]. It is important to note that since the solid walls are not moving, the RCIP matrices R_j, j = 1, ..., n_c, need only be computed at the start of the simulation. This means that after this precomputation the actual time needed each time step to solve the linear system is independent of n_sub. In fact, as using RCIP improves the accuracy of the simulation, an adaptive time stepper may find it easier to meet a desired tolerance, allowing larger time steps to be taken. Therefore the overall computation time may well be lower if RCIP is used.

Investigation of Accuracy

To test our method, we will perform a numerical experiment. Figure 9 shows snapshots of a simulation. We begin by creating a high-resolution reference solution with RCIP for λ = 1 and λ = 5. The reference solution will be discretized using n_pan = 140 on both the solid walls and the drop, and n_sub = 20. The time stepping tolerance for the high-resolution simulation is 10^-10.

Figure 9: Snapshots at t = 0, t = 0.5 and t = 1 of a drop moving inside a channel containing a corner. A pressure drop is applied from left to right.

By varying the number of points on the drop and the wall we can perform a numerical convergence study, the results of which are shown in Figure 10. We have run two simulations, one with λ = 1 (blue drop), and another with λ = 5 (red drop).
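The interface update described above can be sketched as follows. This is not the authors' implementation: the explicit RK4 step stands in for the adaptive fourth-order stepper of [16], the linear-interpolation resampling is a crude substitute for the FFT-based arclength redistribution of [26,27], and the shear velocity field is a placeholder for the BIE solve.

# Sketch: advancing drop-interface markers x' = u(x, t), then redistributing
# the markers to (approximately) uniform arclength.
import numpy as np

def redistribute_uniform_arclength(x):
    # x: (M, 2) points on a closed curve; return M points equispaced in arclength
    xc = np.vstack([x, x[:1]])
    seg = np.linalg.norm(np.diff(xc, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])              # cumulative arclength
    s_new = np.linspace(0.0, s[-1], len(x), endpoint=False)
    return np.column_stack([np.interp(s_new, s, xc[:, 0]),
                            np.interp(s_new, s, xc[:, 1])])

def rk4_step(x, t, dt, velocity):
    k1 = velocity(x, t)
    k2 = velocity(x + 0.5*dt*k1, t + 0.5*dt)
    k3 = velocity(x + 0.5*dt*k2, t + 0.5*dt)
    k4 = velocity(x + dt*k3, t + dt)
    return x + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0

def velocity(x, t):                                          # placeholder shear flow (assumption)
    return np.column_stack([x[:, 1], np.zeros(len(x))])

theta = np.linspace(0, 2*np.pi, 128, endpoint=False)
x = np.column_stack([0.3*np.cos(theta), 0.3*np.sin(theta) + 0.5])
for step in range(100):
    x = rk4_step(x, step*1e-2, 1e-2, velocity)
    x = redistribute_uniform_arclength(x)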
As expected, the drop with λ = 5 deforms less than the λ = 1 drop. Lp = 1 for all simulations. To compute an error, we will look at the ℓ∞ difference between a numerical solution and the computed reference solution. As described in [27], when updating the positions of the drops it is advantageous to work on a uniform grid, instead of the panel Gauss-Legendre points. Using a uniform grid also allows us to easily upsample the non-reference solutions using an FFT to obtain a solution at the same discretization points as the reference solution. As can be seen in Figure 10, without RCIP, as we refine the number of points on both the drop and the wall, the numerical solutions for both λ = 1 and λ = 5 converge very slowly towards the high-resolution reference solution. When we use RCIP with n_sub = 20, the numerical solutions are much more accurate, around the level of the time stepping tolerance for the low-resolution simulations of 10^-8.

Figure 10: Spatial error study with and without RCIP (n_sub = 0 and n_sub = 20, for λ = 1 and λ = 5) for the simulation shown in Figure 9. Using RCIP gives much lower errors, and these are near the time stepping tolerance of 10^-8. Lp = 1 for all simulations.

Multiple Drops

To demonstrate the robustness of the code, we will investigate the movement of multiple drops of different sizes confined in a periodic channel containing a narrow constriction. Snapshots of such a simulation are shown in Figure 11. As the snapshots make clear, there is a clear, visible difference in the modelled drops depending on whether or not we use RCIP. Further proof of the necessity of the proper handling of corners is shown in Figure 12. Since the fluid inside the drops is incompressible, the area of each drop should be conserved. Note that this is not enforced explicitly, nor is area conservation used in the criteria for the temporal adaptivity. Without RCIP, the area after each drop has passed one periodic channel length is conserved only up to around 10^-4. When applying RCIP, it is below 10^-10, a full factor of 10^6 improvement.

Figure 12: Area conservation error for the drops in Figure 11 as a function of time. The error eventually plateaus around 10^-4 without RCIP, but remains below 10^-10 when RCIP with n_sub = 50 is used.

Conclusion

Domains with sharp corners pose challenges for boundary integral methods. The layer density defined on the boundary becomes weakly singular at the corners, and therefore standard quadrature rules cannot be accurately applied. We have demonstrated that a technique known as Recursively Compressed Inverse Preconditioning (RCIP) can be used to accurately solve the Stokes equations on two-dimensional domains with corners. This method requires a small amount of precomputation; however, it does not add any additional unknowns to the problem, nor does it increase the condition number of the linear system. Numerical experiments have demonstrated its robustness and usefulness for post-processing the velocity anywhere in the domain. Additionally we have shown that it can be added to an existing drop model [27] to accurately model the movement of drops near corners. Future work could include extending this model to three dimensions. A boundary integral method for three-dimensional drops in free space has been developed [32,33]. In three dimensions Lipschitz domains admit both corners and edges, so the RCIP algorithm described in Section 4.2 must be further developed. In [15] such an approach is used to compute the capacitance of a cube.
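The area-conservation diagnostic used above can be computed directly from the interface markers with a discrete Green's-theorem (shoelace) formula; a minimal sketch (our own illustration, not the authors' code) follows.

# Sketch: monitoring drop-area conservation from interface markers, as in the
# diagnostic of Figure 12.
import numpy as np

def polygon_area(x):
    # x: (M, 2) marker points ordered around a closed interface
    x1, x2 = x[:, 0], x[:, 1]
    return 0.5 * np.abs(np.dot(x1, np.roll(x2, -1)) - np.dot(x2, np.roll(x1, -1)))

theta = np.linspace(0, 2*np.pi, 256, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
A0 = polygon_area(circle)                  # initial area (close to pi for a unit circle)
# during a simulation one would track abs(polygon_area(x_t) - A0) / A0 over time
print(abs(A0 - np.pi) / np.pi)             # discretization error of the check itself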
8,754.2
2019-11-12T00:00:00.000
[ "Computer Science" ]
ENERGY MINIMIZATION IN TWO-LEVEL DISSIPATIVE QUANTUM CONTROL: THE INTEGRABLE CASE

Abstract. The aim of this contribution is to refine some of the computations of [6]. The Lindblad equation modelling a two-level dissipative quantum system is investigated. The control can be interpreted as the action of a laser to rotate a molecule in gas phase, or as the effect of a magnetic field on a spin 1/2 particle. For the energy cost, normal extremals of the maximum principle are solutions of a three-dimensional Hamiltonian system with parameters. The analysis is focussed on an integrable submodel which, outside singularities, defines a pseudo-Riemannian metric in dimension five. Complete quadratures are given for this subcase by means of Weierstraß elliptic functions. Preliminary computations of cut and conjugate loci are also provided for a two-dimensional restriction using [9].

Introduction. We are concerned with the bilinear Lindblad equations describing the dynamics of a two-level dissipative quantum system,
ẋ1 = −Γ x1 + u2 x3, (1)
ẋ2 = −Γ x2 − u1 x3, (2)
ẋ3 = γ̃ − γ x3 + u1 x2 − u2 x1, (3)
where 2Γ ≥ γ ≥ |γ̃| are dissipation parameters modelling the interaction with the environment (e.g., molecular collisions). The state x ∈ R^3 represents in suitable coordinates the density matrix of the quantum system. The control u = (u1, u2) ∈ R^2 can be an electric or a magnetic field. The recent interest in such problems comes from several applications. Among them, we can mention molecular alignment in gas phase using a laser field, and control of the dynamics of spin 1/2 particles in liquid phase using nuclear magnetic resonance. The conservative case (that is, without dissipation) has been addressed in several papers (see, e.g., [7,10]). We focus here on the more complex dissipative situation.
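As a concrete illustration of the dynamics (1)-(3), the system can be integrated numerically for a given control; the sketch below is not part of the original study, and the constant control and parameter values are arbitrary choices satisfying 2Γ ≥ γ ≥ |γ̃|.

# Sketch: numerical integration of the controlled Lindblad system (1)-(3)
# for a constant, purely illustrative control u = (u1, u2).
import numpy as np
from scipy.integrate import solve_ivp

Gamma, gamma, gamma_t = 1.0, 0.8, 0.1      # dissipation parameters (assumption)
u1, u2 = 0.5, -0.3                         # constant control (assumption)

def rhs(t, x):
    x1, x2, x3 = x
    return [-Gamma*x1 + u2*x3,
            -Gamma*x2 - u1*x3,
            gamma_t - gamma*x3 + u1*x2 - u2*x1]

sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0, 1.0], rtol=1e-10, atol=1e-12,
                dense_output=True)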
While final time minimization is studied in [5], we consider the so-called energy criterion, without any bound on the control. Existence results for this problem are given in [6], as well as preliminary computations of optimal trajectories in a particular integrable subcase of the model. We propose here a straightforward algebraic derivation of the latter. This will lay the emphasis on the classification of optimal curves by a single integer, the genus of the complex algebraic curve behind the computations. It will also provide quadratures well suited for further studies, in particular estimates of cut and conjugate points in relation with global and local optimality of trajectories. In the first section, we recall the Pontryagin maximum principle and reduce the study to an integrable Hamiltonian submodel (with parameters) on the two-sphere. Quadratures of the resulting extremal flow are given in section 2. The last section is devoted to a preliminary analysis of cut and conjugate loci of the submodel.

1. Normal extremals. According to the Pontryagin maximum principle, optimal trajectories are projections on the state space of solutions (extremals) in T*R^3 of the following Hamiltonian. Here, p_0 ≤ 0 is a parameter, (x, p) are coordinates on the cotangent bundle, and H_i = ⟨p, F_i(x)⟩, i = 0, 1, 2, are Hamiltonian lifts of the vector fields defining the dynamics (1-3), ẋ = F_0(x) + u_1 F_1(x) + u_2 F_2(x). Moreover, the Hamiltonian has to be maximized almost everywhere with respect to u along the extremal. It is homogeneous in (p_0, p) and there are two situations: the normal case p_0 < 0, and the abnormal case p_0 = 0. Restricting to normal extremals (see [6] for the abnormal ones) and normalizing p_0 to −1/2, the maximization condition leads to u = (H_1, H_2), which allows us to express the control as a function of (x, p). Plugging this function into H defines the true Hamiltonian of the problem. We make a change of variables both on the state and on the parameters, passing to suitable spherical coordinates (x_1, x_2, x_3) = e^r (sin ϕ cos θ, sin ϕ sin θ, cos ϕ), and setting δ = Γ − γ. Then, H_0 = −(δ sin²ϕ + γ) p_r − δ cos ϕ sin ϕ p_ϕ − γ̃ e^{−r} (p_r cos ϕ − p_ϕ sin ϕ).

Proof. As both coordinates r and θ are cyclic, p_r and p_θ define two additional linear first integrals. The Hamiltonian is quadratic in (p_r, p_θ, p_ϕ, δ, γ, γ̃), and easily checked to be everywhere degenerate as a form in dimension six. Nevertheless, restricting to the integrable submodel, parameters δ and γ can be interpreted as duals to cyclic variables, and the following holds.

Proposition 2. The integrable submodel defines a (3, 2) pseudo-Riemannian metric in dimension five with a singularity at ϕ = π/2. The restriction to p_r = 0 is Lorentzian in dimension three outside the singularity.

Proof. The determinant of the quadratic form in (p_r, p_θ, p_ϕ, δ, γ) is equal to cos⁴ϕ.

2. Integration of the flow. On the level set H = h, integrability for γ̃ = 0 is also clear, as the system can be rewritten in a mechanical form with a potential. Setting X = sin²ϕ and Y = Ẋ, one has a parameterization by the complex algebraic curve (4). As the degree of the right-hand side is at most four, the genus is at most one, so ϕ is rational or elliptic. Using the bi-rational transform u = 1/(1 − X) to send the fixed root X = 1 (that is, ϕ = π/2, the equator) to infinity, one obtains a curve with ξ² = 2h + 2(δ + γ) p_r. The following is immediate.

Lemma 2.1. The points (0, δ²) and (1, −p_θ²) belong to the elliptic curve (4).

Proposition 3. Assume δ > 0 and p_θ = 0.
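For reference, with the energy cost and the normalization p_0 = −1/2 described above, the pseudo-Hamiltonian and the maximized (true) Hamiltonian take the standard form below; this is our reconstruction of the missing displays and should be checked against [6]:

H = p_0\,(u_1^2 + u_2^2) + H_0 + u_1 H_1 + u_2 H_2, \qquad H_i = \langle p, F_i(x) \rangle,

and, since the maximization condition gives u_i = H_i,

H = H_0 + \tfrac{1}{2}\,(H_1^2 + H_2^2).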
(ii-a) When ξ² > 0, the curve is elliptic and parameterized by the unbounded component of the elliptic curve. (ii-b) When ξ² < 0, the curve is elliptic and parameterized by the bounded component of the imaginary elliptic curve.

Proof. Inspecting the graph of the cubic in the right-hand side of (4), it is clear using the previous lemma that for ξ² > 0 (resp. < 0) the unbounded component of the real elliptic curve alone (resp. the bounded component of the imaginary elliptic curve) is admissible since u ≥ 1 (u = 1/(1 − X) with X = sin²ϕ). Disregarding the simpler rational situation, we assume ξ ≠ 0 and use affine coordinates for the homogeneous parameter [ξ : p_r : p_θ : δ : γ], where w = Y u²/ξ and the remaining parameters are rescaled by ξ (p_r/ξ, etc.). In Weierstraß form, we finally obtain (5)-(6), with invariants g_2, g_3 rational in the parameters.

Proposition 4. Trajectories of the normal flow in the integrable case are the following, with Weierstraß invariants and a defined according to (5)-(6). Let 2ωZ + 2ω′Z denote the real rectangular lattice of periods, and let τ (resp. T) denote the period of X = sin²ϕ (resp. of ϕ).

3. Conjugate and cut loci. We recall the following standard notions of Riemannian geometry [1] and optimal control. A cut point is the first point (if any) along an extremal such that the extremal ceases to be minimizing. Given an initial condition x_0, the cut locus is the set of such points on extremals departing from x_0. A point x(t_c) on an extremal z = (x, p) is conjugate to x_0 if there exists a Jacobi field δz = (δx, δp), solution of the linearized system along the extremal, which is non-trivial (δx not identically zero) and vertical at t = 0 and t_c, that is δx(0) = δx(t_c) = 0. The conjugate locus is the set of such first points on extremals departing from x_0. Conjugacy is classically related to local optimality of extremals in the relevant topologies. We focus on the restriction to p_r = 0 of the integrable case. According to Proposition 2, the resulting Hamiltonian defines a three-dimensional Lorentzian metric with a singularity at ϕ = π/2, and describes the control system when the r-coordinate is not taken into account. The metric is actually Riemannian on S² when δ = 0 (with the same equatorial singularity) and has been studied in [2,4], so that (7) can also be interpreted as a Zermelo-like deformation (presence of a drift) of this Riemannian situation when δ > 0. In the Riemannian case δ = 0, the following holds. (i) When ϕ_0 ≠ π/2, the cut locus is a single antipodal branch and the conjugate locus is astroidal with two horizontal and two vertical cusps. (ii) When ϕ_0 = π/2, the cut locus is the equator minus the initial point and the conjugate locus is double-heart shaped with four vertical cusps. For δ > 0, we exclude the singular case ϕ_0 = π/2 and provide some preparatory numerical insight into the structure of cut and conjugate loci. The following is clear (see Fig. 3). Besides intersections of small extremals, another new phenomenon compared to the Riemannian case (δ = 0) is the existence of extremals intersecting with the same cost (and time) belonging to different Hamiltonian level sets, as illustrated by Fig. 4. In the Riemannian situation, one can restrict to the level H = 1/2 and so parameterize geodesics by arc length. Equivalently, one may fix the final time and obtain geodesics by varying the level set. In the Lorentzian case, the second approach still makes sense. We normalize the final time to t_f = 1 and consider the h-curves generated by varying the level set.
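Conjugate times can be detected numerically by integrating the extremal flow together with a variation of the initial covector and monitoring when the corresponding finite-difference Jacobi field vanishes. The sketch below is a generic illustration, not the computation of [9]: it uses the round-sphere Hamiltonian H = (p_ϕ² + p_θ²/sin²ϕ)/2 as a stand-in for (7), chosen because its first conjugate time along any geodesic is known to be π.

# Sketch: locating the first conjugate time along an extremal by monitoring the
# derivative of the exponential map with respect to the initial covector angle.
import numpy as np
from scipy.integrate import solve_ivp

def hamilton(t, y):
    theta, phi, p_theta, p_phi = y
    s = np.sin(phi)
    return [p_theta / s**2,
            p_phi,
            0.0,
            p_theta**2 * np.cos(phi) / s**3]

def endpoint(alpha, t):
    y0 = [0.0, np.pi/2, np.cos(alpha), np.sin(alpha)]    # unit speed at the equator
    sol = solve_ivp(hamilton, (0.0, t), y0, rtol=1e-11, atol=1e-12)
    return sol.y[:2, -1]                                  # (theta, phi) at time t

alpha, d_alpha = 0.2, 1e-5
ts = np.linspace(2.5, 3.6, 60)
jacobi = [np.linalg.norm((endpoint(alpha + d_alpha, t)
                          - endpoint(alpha - d_alpha, t)) / (2*d_alpha))
          for t in ts]
t_conj = ts[int(np.argmin(jacobi))]
print(t_conj)    # close to pi: the antipodal point is conjugate on the round sphere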
More precisely, restricting to p_r = 0 and having fixed δ > 0 and ϕ_0 ≠ π/2, we parameterize T*_{(0,ϕ_0)} S² (θ_0 is set to zero) according to
p_θ0 tan ϕ_0 + i (p_ϕ0 − δ cos ϕ_0 sin ϕ_0) = ρ e^{iα}, 2h + δ² cos²ϕ_0 sin²ϕ_0 = ρ². (8)
To any direction α of the initial adjoint vector is associated an h-curve, h → exp_{ϕ_0,δ}(h, α), where the exponential is the projection on the (θ, ϕ)-space of the integral curve of H with initial condition (0, ϕ_0, p_θ0, p_ϕ0), the initial adjoints being defined by (h, α) in accordance with (8). These h-curves are evaluated numerically, and conjugate points occurring at t_f are computed along them using [9]. We are thus able to obtain the section at t = t_f of the conjugate locus, as well as the isocost lines or wavefront. The results displayed in Figs. 5-9 provide a first insight into the structure of cut and conjugate loci for positive δ. In particular, antipodal cut points labeled I.a, analogous to those of the Riemannian case, are preserved. The same symmetry on small extremals generates cut points I.b, provided δ is big enough (compare δ = 4 and δ = 5.6 in the aforementioned figures). The results for larger values of δ indicate a more intricate structure of both cut and conjugate loci (see Figs. 8-9) that will be the subject of future investigation.

Figure 2. Cut and conjugate loci, p_r = 0, δ = 0, ϕ_0 = π/2. Geodesics in blue, isocost lines green, cut locus black, conjugate locus red. For the initial condition at the singularity, the cut locus is the whole equator minus the starting point. The conjugate locus has four vertical cusps (double-heart shaped locus on the sphere, see [4]).

Figure 3. Intersecting extremals, ±ϕ̇(0) symmetry. Pair of tall extremals (ξ² > 0) on the left, small extremals (ξ² < 0) on the right, both intersecting with the same cost for t_f equal to the period of X = sin²ϕ (tall ones are obtained by symmetrically unfolding small ones reaching ϕ = π/2). In both cases, intersecting extremals belong to the same Hamiltonian level. These intersections generate cut points I.a (tall ones) and I.b (small ones), see Figs. 5-9. These points belong respectively to the antipodal line ϕ = π − ϕ_0 and to ϕ = ϕ_0.

The conjugate locus (red) is the envelope of h-curves (blue). The four cusps (two horizontal and two vertical) of the conjugate locus are preserved (astroidal part, compare Fig. 1, δ = 0), while a new smile-shaped component with two cuspidal singularities of the locus appears. Small parts of the second conjugate locus are also portrayed.

Left, the antipodal part I.a of the cut locus (black) is preserved (compare Fig. 1, δ = 0). The new component II with extremities located at the singularities of the smile-shaped part of the conjugate locus appears. Right, isocost lines (green) defining the wavefront are portrayed. Self-intersections of the front define cut points II (black), while its swallowtail singularities run along the conjugate locus (red) obtained as a caustic.

Left, the astroidal and smile-shaped parts of the conjugate locus are preserved (compare Fig. 6, δ = 4). Two new components (symmetric with respect to θ = 0) of the conjugate locus appear (detail on the rightmost picture), slightly deforming the smile-shaped part in their neighbourhood. Right, detail of h-curves (blue) generating a new component (the right one, θ ≥ 0) of the conjugate locus (red) with two horizontal cusps. The corresponding new right component of the cut locus (black) has its extremities located at these singularities. It is a single branch included in the parallel ϕ = ϕ_0 formed by cut points I.b (intersections of symmetric small extremals, see Fig. 3). The same is observed for θ ≤ 0.

Figure 9. H-curves, conjugate locus and wavefront, p_r = 0, δ = 6, ϕ_0 = π/4 (detail). The smile-shaped part of the conjugate locus (red) now self-intersects and has three additional cusps on its right part (the same is observed for θ ≤ 0), suggesting a more complicated structure of the cut locus in its neighbourhood. The right new component of the conjugate locus observed for δ = 5.6 now just has one cusp. The swallowtail singularities of the wavefront (green) suggest that part of the cut points of type I.b persist on ϕ = ϕ_0.
3,714.2
2011-10-01T00:00:00.000
[ "Physics" ]
Probing the Role of Cysteine Residues in the EcoP15I DNA Methyltransferase*

Chemical modification using thiol-directed agents and site-directed mutagenesis has been used to investigate the role of cysteine residues of EcoP15I DNA methyltransferase. Irreversible inhibition of enzymatic activity was provoked by chemical modification of the enzyme by N-ethylmaleimide and iodoacetamide. 5,5′-Dithiobis(2-nitrobenzoic acid) titration of the enzyme under nondenaturing and denaturing conditions confirmed the presence of six cysteine residues without any disulfides in the protein. Aware that relatively bulky reagents inactivate the methyltransferase by directly occluding the substrate-binding site or by locking the methyltransferase in an inactive conformation, we used site-directed mutagenesis to sequentially replace each of the six cysteines in the protein at positions 30, 213, 344, 434, 553, and 577. All the resultant mutant methylases except for the C344S and C344A enzymes retained significant activity as assessed by in vivo and in vitro assays. The effects of the substitutions on the function of EcoP15I DNA methyltransferase were investigated by substrate binding assays, activity measurements, and steady-state kinetic analysis of catalysis. Our results clearly indicate that the cysteines at positions other than 344 are not essential for activity. In contrast, the C344A enzyme showed a marked loss of enzymatic activity. More importantly, whereas the inactive C344A mutant enzyme bound S-adenosyl-L-methionine, it failed to bind to DNA. Furthermore, in double and triple mutants where two or three cysteine residues were replaced by serine, all such mutants in which the cysteine at position 344 was changed were inactive. Taken together, these results convincingly demonstrate that Cys-344 is necessary for enzyme activity and indicate an essential role for it in DNA binding.

EcoP15I DNA methyltransferase (EcoP15I DNA MTase) catalyzes the transfer of a methyl group from S-adenosyl-L-methionine (AdoMet) to the second adenine nucleotide in the canonical site 5′-CAGCAG-3′ (1) to form N6-methyladenine. The enzyme is part of the type III restriction-modification (R-M) system (2). Type III R-M enzymes are multifunctional proteins that exert both methylation and restriction activities (2). Type III R-M systems contain two subunits, the Res subunit encoded by the res gene and the Mod subunit encoded by the mod gene. Although the Mod subunit alone can catalyze the methylation reaction, both the Res and Mod subunits are necessary for DNA cleavage (2). The enzymes have an absolute requirement for ATP for restriction, and recently we and others (3,4) showed that ATP hydrolysis was required for DNA cleavage. It has been shown that only the Mod subunit is involved in DNA sequence recognition in both the restriction and modification reactions (5). We had earlier shown by gel mobility shift assays that EcoP15I DNA MTase binds about 3-fold more tightly to DNA containing its recognition sequence 5′-CAGCAG-3′ than to nonspecific sequences in the absence or presence of cofactors. Interestingly, in the presence of ATP, the discrimination between specific and nonspecific sequences increased significantly (6,7). Based on the type of methylation catalyzed and amino acid sequence analysis, DNA MTases are divided into three classes (8).
m4C-MTases are enzymes that methylate the exocyclic amino group of cytosine to form N4-methylcytosine, and m6A-MTases methylate the exocyclic amino group of adenine to form N6-methyladenine. The third class contains the m5C-MTases, which methylate cytosine residues at the C-5 position to form C5-methylcytosine. Comparative analyses have shown that m5C-MTases share an ordered set of sequence motifs that alternate with non-conserved regions (9-13). Among the well conserved motifs, motif I (FXGXG) can be seen in all three classes of DNA MTases as well as in protein and RNA MTases. All methyltransferases utilize AdoMet as the methyl donor, and it was proposed that motif I is involved in AdoMet binding (9,14). The tertiary structures of the HhaI and HaeIII DNA MTases (belonging to the m5C-MTases) and TaqI DNA MTase (a member of the m6A-MTases) bound to AdoMet (15-17) clearly indicate that motif I forms part of the AdoMet binding pocket. Structural analysis has found striking similarity between DNA MTases of the two classes, namely the m5C- and m6A-MTases. This suggested that many AdoMet-dependent MTases may share a common catalytic domain structure. Guided by this common catalytic domain structure, a multiple sequence alignment of 33 m6A- and 9 m4C-MTases revealed that these two classes of MTases were more closely related to one another and to the m5C-MTases than was expected (18). Based on this analysis, m4C- and m6A-MTases do not group separately from one another. The amino MTases belong to three groups distinguished by differences in the linear orders of conserved motifs in their primary sequences. The three groups are named α, β, and γ (18). To date only two DNA amino MTases have been structurally characterized, the group γ m6A-MTase M.TaqI (17) and the group β m4C-MTase M.PvuII (19). EcoP15I DNA MTase, as mentioned earlier, is an N6-adenine MTase and belongs to the β group of amino MTases. We have recently demonstrated that altering amino acid residues in motif I of EcoP15I DNA MTase resulted in loss of AdoMet binding but left DNA target recognition unaltered (20). A second motif characteristic of m6A-MTases and m4C-MTases, (N/D/S)PP(Y/F) (motif IV) (21), is well conserved in EcoP15I DNA MTase. Substituting the tyrosine in motif IV of EcoP15I DNA MTase by site-directed mutagenesis resulted in loss of enzyme activity, although we observed enhanced cross-linking of AdoMet and DNA. These results reinforce the importance of motif IV in catalysis (20). Cysteine residues are particularly useful for studying the structure and function of enzymes. In m5C-MTases, motif IV, consisting of the amino acids FPCQ, has been shown to be the catalytic center. The invariant Pro-Cys dipeptide is known to be involved in methyl group transfer (22-24). In the catalytic mechanism of m5C-MTases, it has been shown that the cysteine residue of motif IV and C-6 of the target cytosine form a covalent intermediate during the methyl group transfer from AdoMet (22). Mutation of this cysteine in many C5-MTases abolishes the enzyme activity without affecting DNA recognition and cofactor binding (23-25). Although it has been firmly established that it is the thiol of the only conserved cysteine among m5C-MTases that carries out the attack at C-6 of cytosine and is important for the methylation reaction (22,25), very little is known about the roles of cysteine residue(s) in any N6A-MTase-catalyzed reaction.
Rubin and Modrich (26) demonstrated that EcoRI MTase rapidly lost activity upon cysteine modification by exposure to N-ethylmaleimide (NEM). Everett et al. (27) showed that NEM modification of cysteine 223 in EcoRI MTase was responsible for the loss of enzyme activity. Initial experiments done in our laboratory suggested that the absence of reducing agents in buffers used during purification of EcoP15I DNA MTase resulted in loss of enzyme activity; when purified enzyme preparations were dialyzed against buffers not containing any reducing agent, the enzyme lost activity on storage. These results suggested that cysteine residues in the protein could have a role in stabilization or in catalysis. EcoP15I DNA MTase, a 645-amino acid protein, contains six cysteine residues at positions 30, 213, 344, 434, 553, and 577 as deduced from the DNA sequence of the mod gene (28) (Fig. 1). By using oligonucleotide-directed site-specific mutagenesis, each of the six cysteines was substituted with another amino acid, and the mutant enzymes were purified and characterized. As a part of an investigation to study structure-function relationships in this enzyme, we were interested in assessing the role(s) of cysteine residues, if any, in DNA recognition, cofactor binding, or in catalysis. In the present investigation we have used chemical modification and site-directed mutagenesis studies to elucidate the role of the cysteines in the activity of EcoP15I DNA MTase. Our findings indicate that cysteine at position 344 is required for enzyme activity. EcoP15I restriction enzyme (R. EcoP15I) was purified from Escherichia coli cells harboring a pBR322-based plasmid containing the res-mod genes of p15B, a resident plasmid of E. coli 15 T⁻. All other chemicals used were of the highest purity reagent grade. Sources for all other chemicals used in this study have been described earlier (6,20). Bacterial Strains and Plasmid Vectors-E. coli JM109 was used as a transformation host for plasmid pDN8 carrying the M. EcoP15I gene under the lambda phage PL promoter (29). E. coli B strain BL21(DE3), with a phage lysogen (imm21 int) that contains the phage T7 RNA polymerase gene under the lac UV5 promoter, was used as a host for propagating the plasmids pGEM3Zf(−) M. EcoP15I-C344A and C344S. Plasmid pDN8 was used for overexpression of wild-type M. EcoP15I (29). The EcoRI-HindIII fragment from pDN8, carrying the entire M. EcoP15I gene, was subcloned into plasmid pGEM3Zf(−). This construct is referred to as pGEM3Zf(−) M. EcoP15I (20). E. coli CJ236 (dut⁻ ung⁻) was used as a host for preparation of single-stranded DNA templates for mutagenesis. JM109 was used as a host for transformation of plasmid constructs derived from pUC18 as well as for overexpression and purification of mutant methylases. JM109 cells harboring the plasmid pSHI182 (carrying the gene for the EcoP15I restriction enzyme) were used for in vivo restriction assays. General Recombinant Techniques-Restriction enzymes, T4 DNA ligase, Klenow fragment of DNA polymerase I, and T4 polynucleotide kinase were purchased and used according to the manufacturers' recommendations. Digestions with type II restriction enzymes, ligations, transformations, and DNA electrophoresis were done as described by Sambrook et al. (30). Plasmid DNA (pUC18, pUC19, pBR322, or pGEM3Zf(−)) was prepared as described by Sambrook et al. (30).
Construction of Cysteine Substitution Mutants and Purification of Mutant EcoP15I DNA MTases-Site-directed mutagenesis was done to replace the cysteine residues at positions 30, 213, 344, 434, 553, and 577 by serine; asparagine or serine; tryptophan, alanine, or serine; tyrosine, alanine, or serine; serine; and serine, respectively, using suitable primers A to K. The sequence of primer A was designed to change the cysteine at position 30 to serine. Single-stranded DNA template containing uracil residues was made from E. coli strain CJ236 that harbored pGEM3Zf(−) M. EcoP15I. Primer A was hybridized to this single-stranded DNA, and oligonucleotide-directed mutagenesis was performed essentially according to the method of Kunkel (31). The resultant plasmid was termed pGEM3Zf(−) M. EcoP15I-C30S. The mutants were then identified by dideoxy chain termination sequencing. The sequence of primer B was designed to change cysteine 213 to asparagine and to create an HpaI restriction site. By using this primer, mutagenesis was carried out as described above. The resultant plasmid was called pGEM3Zf(−) M. EcoP15I-C213N. The mutants were then identified by digesting the plasmid DNA with HpaI. Another round of mutagenesis was done using single-stranded DNA from pGEM3Zf(−) M. EcoP15I-C213N as a template and primer C as the mutagenic oligonucleotide. The resultant plasmid pGEM3Zf(−) M. EcoP15I-C213S lost the HpaI site, and therefore mutants could be easily screened. The sequence of primer D was designed to change cysteine 344 to tryptophan and to create an NcoI restriction site. Mutagenesis was carried out as described earlier using primer D. The resultant plasmid was termed pGEM3Zf(−) M. EcoP15I-C344W, and mutants were scored by digesting the plasmid DNA with NcoI. Two separate rounds of mutagenesis were done using single-stranded DNA from pGEM3Zf(−) M. EcoP15I-C344W as a template and primers E and F. The resultant plasmids pGEM3Zf(−) M. EcoP15I-C344S and pGEM3Zf(−) M. EcoP15I-C344A lost the NcoI sites, and hence screening of mutants was easy. Similarly, the sequence of primer G was designed to change the cysteine at position 434 to tyrosine and to create an NdeI restriction site. Mutagenesis, as described above, was carried out, and the resultant plasmid was pGEM3Zf(−) M. EcoP15I-C434Y; mutants were screened by digesting the plasmid DNA with NdeI. Two separate rounds of mutagenesis were performed using single-stranded DNA from pGEM3Zf(−) M. EcoP15I-C434Y as a template and primers H and I. The resultant plasmids pGEM3Zf(−) M. EcoP15I-C434S and pGEM3Zf(−) M. EcoP15I-C434A lost the NdeI site, and this was used for scoring mutants. Primer J was designed to change the cysteine at position 553 to serine. By using this primer and single-stranded DNA from wild-type pGEM3Zf(−) M. EcoP15I, mutagenesis was carried out, and the resultant plasmid was pGEM3Zf(−) M. EcoP15I-C553S. In order to alter the cysteine at position 577 to serine, primer K was designed so that an AccI restriction site could also be created. This enabled us to screen for mutants. The resultant plasmid was referred to as pGEM3Zf(−) M. EcoP15I-C577S. All the mutants were identified by DNA sequencing. The double mutants were constructed using single-stranded DNA from plasmid pGEM3Zf(−) M. EcoP15I-C213N as a template. Mutagenesis reactions were carried out using two primers in each case. In five separate mutagenesis reactions primer C was used; the second primer used in each was primer A, E, H, I, or J, respectively.
As mentioned earlier, primer C was designed such that the asparagine at position 213 was changed to serine and, in addition, the HpaI restriction site would be lost. The second primers used in the five mutagenesis reactions were designed to change the cysteine residues at positions 30, 344, 434, 553, and 577 to serine and either introduced a new restriction site or resulted in loss of an existing restriction site. Fragments from pGEM3Zf(−) M. EcoP15I-C553S and pGEM3Zf(−) M. EcoP15I-C577S carrying the C553S and C577S mutations were separately cloned into the EcoRV-HindIII sites of pGEM3Zf(−) M. EcoP15I-C30S/C213S. All mutations were confirmed by dideoxy chain termination sequencing, and the entire mutant mod genes were sequenced by automated DNA sequencing. DNA fragments containing the individual mutations were released from the respective pGEM3Zf(−) constructs (excluding the double and triple mutants) using suitable restriction sites on either side of the mutations (Fig. 1), and these fragments were separately swapped into the pUC18 vector containing the wild-type M. EcoP15I gene. The resultant plasmids, for instance pC30S, pC213N, pC344S, etc., were used for expression and purification of mutant EcoP15I DNA MTases. pGEM3Zf(−) M. EcoP15I-C344A and pGEM3Zf(−) M. EcoP15I-C344S were used to purify the C344A and C344S mutant enzymes. Overexpression and Purification of Wild-type and Mutant EcoP15I DNA Methyltransferases-Wild-type and mutant EcoP15I DNA methyltransferases were purified according to the method of Rao et al. (29) to near homogeneity. The wild-type enzyme and all cysteine mutants, except C344A and C344S, were purified from the same host. A different host had to be used for the purification of the C344A and C344S mutant enzymes. Plasmids pGEM3Zf(−) M. EcoP15I-C344A and pGEM3Zf(−) M. EcoP15I-C344S were transformed into BL21(DE3) cells. Cells were grown to an A600 of 0.7 and then induced by adding isopropyl-1-thio-β-D-galactopyranoside to a final concentration of 0.5 mM. The cells were harvested after induction for 3 h. Sulfhydryl Modification-Aliquots (1 ml) of EcoP15I DNA MTase were dialyzed overnight (16 h) at 4°C against nonreducing buffer (10 mM potassium phosphate (pH 7.0), 10 mM NaCl, 0.1 mM EDTA, and 10% glycerol). Unless otherwise mentioned, all experiments were conducted in nonreducing buffer. Following exchange into nonreducing buffer, the MTase concentration was determined using a Bradford protein assay (32), standardized with known amounts of MTase. Dialyzed wild-type EcoP15I DNA MTase was incubated separately with DTNB (5 mM), NEM (5 mM), and iodoacetamide (5 mM) at room temperature for 30, 5, and 30 min, respectively, and the percent activity remaining was measured. NEM was dissolved in absolute alcohol and kept at −20°C as a stock solution (200 mM). A given amount of the purified dialyzed enzyme was incubated at 25°C for 5 min with various amounts of NEM in 10 mM potassium phosphate buffer (pH 7.0) containing 1 mM EDTA. After incubation, the reaction mixture was transferred to the methylation assay buffer, which contained excess 2-mercaptoethanol to quench the unreacted NEM, and the residual enzyme activity was determined. These experiments were performed three times using different enzyme preparations. The variation in the values was in the range of 3-5%. Determination of Sulfhydryl Groups-The amount of free sulfhydryl groups in the wild-type M. EcoP15I was determined spectrophotometrically at 412 nm using 5,5′-dithiobis-(2-nitrobenzoic acid) (33).
The enzyme (final concentration 1.3 μM subunits) was incubated at 25°C with 12.1 mM DTNB in 0.1 M potassium phosphate buffer (pH 7.0). In separate experiments under the same assay conditions, the reaction with DTNB was followed more directly by continuous monitoring of the absorbance at 412 nm over 3 h. The stoichiometry of the reaction was calculated by using the extinction coefficient of 14,150 M⁻¹ cm⁻¹ for the thionitrobenzoate (TNB²⁻) anion in the absence of any denaturant and a value of 13,700 M⁻¹ cm⁻¹ in the presence of guanidinium chloride or SDS. In yet another experiment, wild-type EcoP15I DNA MTase was incubated with either 6 M guanidinium chloride or 1% SDS for 4 h at 40°C, DTNB titration was carried out as described above, and the change in absorbance at 412 nm was monitored. In order to determine the number of sulfhydryl groups in the reduced denatured enzyme, the enzyme was first incubated with 1% SDS for 4 h at 40°C in the presence of 100 mM DTT. Subsequently, the DTT was removed rapidly by passing the mixture twice through a PD-10 gel filtration column. Each assay was run 2 to 4 times, and the averages are presented. Assays for Methylation Activity-To monitor the methylation activity of wild-type and mutant EcoP15I methylases, three types of assays were used. In Vivo Restriction Assay-Modification in vivo by mutant MTases was assessed by the effectiveness with which they protected lambda (λ) phage from the EcoP15I restriction-modification system. The efficiency of plating (EOP) of these phages on an r⁺ m⁺ strain (JM109 cells transformed with pSHI182, a plasmid encoding the EcoP15I restriction enzyme) relative to an r⁻ m⁻ strain (JM109 cells) reflects the level of in vivo methylation. Sensitivity to Restriction Endonuclease-Plasmid DNAs carrying the wild-type or mutant MTase genes were isolated using the alkaline lysis method (34) and then digested with the EcoP15I restriction enzyme. Typically 1.0 μg of plasmid DNA was digested with purified EcoP15I restriction enzyme for 60 min at 37°C followed by Proteinase K/SDS treatment at 56°C for 60 min. The digestion products were analyzed by 0.8% (w/v) agarose gel electrophoresis in the presence of ethidium bromide. In Vitro Methylation Activity-MTase activity was monitored by incorporation of tritiated methyl groups into pUC18 DNA, and the specific activity of the enzyme was measured as described (6). All assays were repeated at least three times. Initial rate data were fitted by nonlinear regression to the Michaelis-Menten equation. Steady-state kinetics of methyl transfer was performed using saturating concentrations of pUC18 DNA. Reaction rates were determined at different AdoMet concentrations. Analysis of kinetic data was done using methods described (29). The enzyme concentrations refer to the amount of subunits based on a molecular mass of 75 kDa. Circular Dichroism Measurements-Circular dichroism (CD) measurements were taken using a Jasco J20C spectropolarimeter. All experiments were done at 25°C in 20 mM potassium phosphate buffer (pH 7.0). The protein solutions were incubated for 10 min in 1-mm path length quartz cells in a final volume of 400 μl prior to recording the CD spectrum at the wavelengths indicated. The protein samples were dialyzed extensively against 20 mM potassium phosphate (pH 7.0) before recording the measurements. The observed ellipticities were converted to the mean residue ellipticity, [θ]MRW, by using Equation 1 (50):
[θ]MRW = (θobs × mrw)/(10 × c × l) (Eq. 1), where θobs is the observed ellipticity in degrees; mrw is the mean residue molecular weight based on a molecular mass of 75 kDa and 645 amino acids; c is the protein concentration in grams/ml, and l is the path length of the cell in centimeters. Electrophoretic Mobility Shift Analysis-Binding reactions were performed in 50 mM Tris-HCl (pH 7.5), 20 mM NaCl, 10 mM MgCl2, 7 mM 2-mercaptoethanol, 10% (v/v) glycerol, and 1 mM EDTA. Typical reactions of 10 μl were incubated for 10 min on ice and loaded onto a 6% polyacrylamide gel. Electrophoresis was performed as described (6). The gels were dried on Whatman 3MM paper and subjected to autoradiography using Kodak XAR film. Chemical Cross-linking of Wild-type and Mutant EcoP15I DNA MTases-Cross-linking reactions of the proteins with glutaraldehyde were carried out by incubating the enzymes (2 μg) in 0.1 M phosphate buffer (pH 8.0) containing 0.25 mM EDTA. Glutaraldehyde was added to the above mixture to a final concentration of 0.1%. Cross-linking was carried out at 4°C for 5 min. Reactions were stopped by adding SDS loading dye and boiling the samples for 2 min. The reactions were analyzed on a 2.5-8% gradient polyacrylamide gel containing 0.1% SDS. The gel was silver-stained to visualize the protein bands. Immunoblotting-Polyclonal antibodies to the denatured wild-type EcoP15I DNA MTase were raised in a rabbit. For Western blot analyses, E. coli lysates or the purified recombinant enzyme were subjected to polyacrylamide gel electrophoresis (PAGE) after solubilization with 1% SDS in the presence or absence of 1% 2-mercaptoethanol. The proteins were electrophoretically transferred to a poly(vinylidene difluoride) membrane (Millipore, Bedford, MA). The membranes were immunostained with polyclonal rabbit anti-EcoP15I DNA MTase antibodies and horseradish peroxidase-conjugated goat anti-rabbit IgG antibody. Miscellaneous Methods-Protein quantity was estimated using the method described by Bradford (32), with bovine serum albumin as a standard. For analysis of purity, proteins were separated on 10% polyacrylamide gels containing 0.1% SDS according to the method described by Laemmli (36). The proteins were detected with Coomassie Brilliant Blue R-250 (Sigma). AdoMet was purified using a Bio-Rex 70 cation-exchange column and stored at −20°C in 0.1 M HCl (37). Effect of Thiol Reagents on EcoP15I DNA Methyltransferase Activity-Incubation of purified EcoP15I DNA MTase with the thiol reagents N-ethylmaleimide and 5,5′-dithiobis(2-nitrobenzoic acid) inactivated the enzyme. More than 95% of the activity was inhibited by these reagents. Incubation of the enzyme with iodoacetamide (5 mM) failed to significantly inhibit the activity. However, higher concentrations (100 mM) of iodoacetamide did result in loss of activity (data not shown). These results suggest that sulfhydryl groups in EcoP15I DNA MTase may be necessary for enzyme activity. Titration of Wild-type EcoP15I DNA MTase with DTNB-The free sulfhydryl content of EcoP15I DNA MTase was quantified, initially without denaturation, with the objective of determining whether the cysteines were exposed on the surface of the protein. A solution of the purified EcoP15I DNA MTase (1.3 μM) was placed in a 3-ml spectrophotometer cell and kept at 25°C. To this solution, 100 μl of 20 mM DTNB solution was added and mixed rapidly. The concentration of DTNB (625 μM) in the reaction mixture was an 80.1-fold excess over the total concentration of sulfhydryl groups (7.8 μM sulfhydryl groups).
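As a rough check of the numbers quoted above, the dilution and stoichiometry arithmetic can be reproduced in a few lines of R. The 3.2-ml final reaction volume and the A412 reading below are assumed, illustrative values; only the extinction coefficient, the enzyme concentration, and the DTNB addition are taken from the text:

## DTNB dilution and molar excess
vol_ml  <- 3.2                     # assumed final volume (3-ml cell plus additions)
dtnb_M  <- 0.1 * 20e-3 / vol_ml    # 100 ul of 20 mM DTNB diluted -> ~6.25e-4 M
sh_M    <- 1.3e-6 * 6              # 1.3 uM subunits x 6 cysteines = 7.8e-6 M -SH
dtnb_M / sh_M                      # ~80-fold molar excess, as stated

## Thiols titrated per subunit from the change in A412
eps_tnb <- 14150                   # M^-1 cm^-1 for TNB2- under nondenaturing conditions
dA412   <- 0.074                   # assumed absorbance change, 1-cm path
(dA412 / eps_tnb) / 1.3e-6         # ~4 thiols reacted per subunit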
The concentration of TNB²⁻, which is released when DTNB reacts with sulfhydryl groups, increased rapidly in the first 30 min and then remained almost constant through 60 min. From the concentration of TNB²⁻ released over 150 min, the number of sulfhydryl groups titrated with DTNB was calculated (Table I). About 4 mol of DTNB reacted with 1 mol of subunit of the EcoP15I DNA MTase; that is, of the six sulfhydryl groups present in a subunit, only four reacted with DTNB. EcoP15I DNA MTase activity decreased rapidly over the first 30 min of the reaction with DTNB and was completely lost by 60 min (data not shown). The total number of thiols present in the enzyme was quantified by reaction with DTNB under nonreducing conditions in the absence and presence of 6 M guanidinium chloride or 1% SDS at pH 7.0. It is clear that under nondenaturing conditions, four of the six cysteines in the protein reacted with DTNB (Table I). In the presence of 6 M guanidinium chloride, the remaining two cysteines react with DTNB (Table I). These results strongly suggest the absence of disulfides in the enzyme and also corroborate the amino acid sequence data of the protein, which show the presence of six cysteine residues (28). To examine whether disulfide bonds were present in M. EcoP15I and, if present, were important for activity, we tested the effects of DTT, a strong disulfide bond reducing agent. It was found that the enzyme did not lose more than 2% activity when incubated for 5 h with 10 mM DTT (data not shown). Treatment of the enzyme with DTT had no effect on the electrophoretic mobility of the protein, clearly indicating the absence of disulfide bonds. These experiments were carried out at least three times using different enzyme preparations with essentially the same results. SDS-PAGE analysis in the absence or presence of 2-mercaptoethanol revealed no differences in the electrophoretic mobility of the wild-type enzyme (data not shown). These results suggest that EcoP15I DNA MTase contains no disulfide bonds. Treatment of the DTNB-treated enzyme with DTT (100 mM) to remove the thiol-modifying reagent restored the enzyme activity (>80%), indicating that the changes caused by DTNB were reversible (data not shown). NEM Modification of EcoP15I DNA Methyltransferase-Of the available sulfhydryl reagents, N-ethylmaleimide has consistently been used for cysteine modification because of its high selectivity for −SH groups. The reaction with NEM involves nucleophilic attack upon its olefinic bond by the reactive sulfhydryl (−SH) group of a cysteine in the active site of the enzyme. This leads to the formation of a covalent adduct. To study the effect of NEM on the activity of EcoP15I DNA MTase, the enzyme was first dialyzed against 10 mM potassium phosphate buffer (pH 7.0) containing 0.1 mM EDTA, 10% glycerol, and 10 mM sodium chloride. This is because NEM is known to modify lysine residues at pH greater than 8.0, although the reaction is very slow. The absence of 2-mercaptoethanol in this buffer resulted in inactivation of the enzyme upon storage. As mentioned earlier, for M. EcoP15I it was necessary to add 2-mercaptoethanol to all buffers to stabilize the enzyme during purification and storage (29). Inactivation kinetics were carried out with freshly dialyzed enzyme at 0.5, 1.0, 2.0, and 3 mM NEM. The modification reaction was arrested by the addition of an excess of 2-mercaptoethanol.
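The pseudo-first-order treatment applied to these inactivation data in the following paragraph can be sketched in R as follows. The residual-activity values here are invented for illustration; only the analysis steps (log activity versus time for each [NEM], then log k(app) versus log[NEM]) follow the text:

nem  <- c(0.5, 1.0, 2.0, 3.0)                    # mM NEM, as in the text
time <- c(0, 2, 4, 6, 8, 10)                     # min (assumed sampling times)
act  <- rbind(c(100, 95, 91, 87, 83, 80),        # assumed residual activities (%),
              c(100, 91, 83, 75, 68, 62),        # one row per NEM concentration
              c(100, 83, 69, 57, 48, 40),
              c(100, 76, 57, 43, 33, 25))

## slopes of log(activity) vs time give the apparent first-order constants
k_app <- apply(act, 1, function(a) -coef(lm(log(a) ~ time))[2])

## the slope of the secondary log-log plot estimates the number of cysteines
## whose modification inactivates the enzyme (a value near 1 = one residue)
coef(lm(log10(k_app) ~ log10(nem)))[2]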
The inactivation curves show that only concentrations as high as 3 mM NEM brought about significant inactivation. This suggested that a slowly reacting cysteine residue was probably involved in catalysis. The linear plots of the logarithm of residual enzyme activity against the reaction time indicate that the time-dependent decrease in activity displayed first-order kinetics (Fig. 2A). The apparent first-order rate constants k(app) were calculated from the slopes of the lines obtained from the first-order plot. The logarithms of these values were plotted against log[NEM] to obtain a straight line. The slope of this line gave the number of cysteine residues modified. For M. EcoP15I, the value obtained was 1.0, which suggested that a single species of cysteine was modified by NEM (Fig. 2B). The linearity of the secondary plot of log k(app) against log[NEM], with the data from the primary plot of Fig. 2A, indicated that NEM binding takes place through a two-step mechanism of inactivation, in which rapid reversible binding of NEM to the enzyme precedes covalent modification to an inactive enzyme-inhibitor complex (38). The decrease in activity could be either due to the modification of a cysteine residue involved in methyl group transfer or because of modification at the substrate-binding sites. To investigate the latter possibility, the enzyme was incubated with AdoMet or DNA prior to modification with NEM. In both cases, there was no significant protection offered by the substrates against NEM inactivation (data not shown). Substrate protection was also investigated through a binding assay as described under "Experimental Procedures." For AdoMet cross-linking experiments, the formation of a stable adduct was first demonstrated for the MTase dialyzed against phosphate buffer devoid of 2-mercaptoethanol (Fig. 3A, lane 2). When this enzyme was treated with 5 mM NEM prior to cross-linking, there was a drastic decrease in the intensity of the adduct formed (Fig. 3A, lane 4). However, upon preincubation of the enzyme with the radioactive substrate, adduct formation was partially restored (Fig. 3A, lane 3). To test whether NEM modification eliminates M. EcoP15I binding to its recognition sequence, we tested the inactivated protein for its ability to bind an oligonucleotide duplex (duplex I) containing the EcoP15I recognition sequence. Gel mobility shift assays indicated that M. EcoP15I that had been preincubated with NEM formed a retarded complex with DNA just like the untreated or mock-treated enzyme (Fig. 3B). Circular dichroism spectra (200-250 nm) were collected for both the native and the NEM-modified enzyme and were used to calculate the secondary structure of the enzyme (Fig. 4A). The data suggested that the secondary structure of the enzyme was not significantly perturbed by NEM modification. The oligomeric nature of the NEM-modified enzyme was determined by glutaraldehyde cross-linking. As can be seen from Fig. 4B, treatment of both the unmodified and NEM-modified enzyme with glutaraldehyde clearly indicates the dimeric nature of these proteins. Activities of Mutant EcoP15I DNA Methyltransferases-Two tests were done to assess the functional ability of the M. EcoP15I mutants to modify DNA. First, modification in vivo by mutant MTases was assessed by the effectiveness with which they protected non-modified lambda phages from restriction by the EcoP15I R-M system.
Cells harboring pGEM3Zf(−)-derivative plasmids (described above) were infected with phage λvir at a titer high enough (10⁵ plaque-forming units/ml) to give confluent lysis on plating. Phage lysate was prepared from these plates by a standard protocol (30). The titer of the resulting lysate was determined on an r⁻ m⁻ strain. The EOP of these phages was calculated as the ratio of plaque-forming units/ml obtained on the r⁺ m⁺ strain to that obtained on the r⁻ m⁻ strain. It is evident from Fig. 5A that an EOP of 0.8 was obtained in the case of the wild-type MTase, indicating almost complete protection against restriction. Although five mutants (C30S, C213S, C434S, C553S, and C577S) had an EOP value similar to the wild type, four mutants, C213N, C344A, C344S, and C434Y, had an EOP of 1 × 10⁻⁷. These results indicated that EcoP15I DNA MTase activity was lost when Cys-344 was changed to serine or alanine. Replacement with an amino acid carrying a larger side chain (asparagine) or a bulky amino acid (tyrosine) at position 213 or 434, respectively, also led to loss of enzyme activity. Among the three double mutants, namely C30S/C213S, C213S/C344S, and C30S/C344S, only C30S/C213S gave an EOP value similar to that of the wild-type construct (Fig. 5B), clearly suggesting that the cysteines at positions 30 and 213 are not required for enzyme activity. More importantly, it can be seen from the analysis of the double mutants that if the cysteine at position 344 was changed, methylation activity was almost completely abolished. All three triple mutants, namely C30S/C213S/C434S, C30S/C213S/C553S, and C30S/C213S/C577S, were active (EOP values were similar to the wild-type value) (Fig. 5B). Second, plasmid DNA from cells expressing the wild-type or mutant MTases was isolated, and its sensitivity to digestion by the EcoP15I restriction enzyme was determined in vitro. Active MTases should methylate the plasmid in vivo, thereby protecting it from digestion in vitro by R. EcoP15I. Fig. 6 shows that the plasmid DNA from cells expressing the mutant M. Characterization of Mutant EcoP15I DNA MTases-Nine of the 11 mutant MTases were purified as described under "Experimental Procedures." The purified enzymes were analyzed using SDS-PAGE (Fig. 8A) and by Western blotting (Fig. 8B) for alterations in the electrophoretic mobilities, and no apparent changes were detected vis-à-vis the wild-type protein. As determined by SDS-PAGE of lysates under reducing conditions, all mutant enzymes were efficiently expressed in E. coli, and a single immunoreactive protein with anti-M. EcoP15I antibodies at a position of 75,000 daltons was seen (data not shown). The M. EcoP15I-C344S enzyme proved difficult to purify. Although overexpression of this mutant MTase was seen (data not shown), the protein degraded rapidly during DEAE-Sephacel ion-exchange chromatography, and only a 50-kDa protein could be observed. Western blot analysis confirmed that the proteolyzed protein was indeed the mutant EcoP15I DNA MTase (data not shown). On the other hand, M. EcoP15I-C344A could be only partially purified (see below). Methylation activity of the wild-type and mutant MTases was also determined by measuring their ability to transfer the 3H-labeled methyl group from AdoMet to pUC18 DNA. In vitro methylation activity measurements clearly showed that purified preparations of all mutant MTases except the C344A enzyme (see below) had levels of activity comparable to that of the wild-type MTase (data not shown).
Effect of NEM on Mutant EcoP15I DNA Methyltransferases-Each purified mutant enzyme (0.1 mg in 1 ml of 0.1 M phosphate buffer, pH 7.0) was incubated with 3 mM NEM at 25°C. All purified mutant EcoP15I DNA MTases except the C344A mutant enzyme were inactivated at the wild-type rate in the first 10 min (data not shown). These results clearly suggest that the mutant MTases tested were NEM-sensitive. Interaction of Mutant EcoP15I DNA Methyltransferases with Oligonucleotides Containing the EcoP15I Recognition Sequence-Although most of the mutant MTases were as active as the wild-type enzyme, it was necessary to establish whether the DNA binding properties of these mutant enzymes were different from those of the wild-type enzyme. All mutant MTases that were active bound DNA to the same extent as the wild-type enzyme (data not shown). The ability of these cysteine substitution mutants to bind DNA indicates that these cysteine residues are unlikely to be directly involved in MTase-DNA interactions. Photolabeling of Mutant EcoP15I DNA Methyltransferases-In order to determine whether the lack of MTase activity of M. EcoP15I-C344A correlated with an inability to bind AdoMet, AdoMet cross-linking experiments were performed. When purified M. EcoP15I was incubated at 4°C with [methyl-3H]AdoMet and then subjected to short wavelength UV irradiation for 60 min, the enzyme was labeled as detected by SDS-PAGE followed by fluorography and autoradiography (Fig. 9). Exposure of the purified mutant MTases to UV light in the presence of labeled AdoMet resulted in cross-linking of radioactivity to these enzymes, including the C344A enzyme. Kinetic Properties of the Mutant EcoP15I DNA MTases-To compare the catalytic capabilities of mutant and wild-type EcoP15I DNA MTases more fully, some kinetic parameters were evaluated for the enzymes. At saturating DNA concentrations, linear primary plots (data not shown) were obtained for variation of the AdoMet concentration (eight concentrations over a 20-fold range) for both the wild-type and the mutant enzymes. Initial velocities were plotted against AdoMet concentration, from which Vmax and Km were determined. There was no evidence of nonlinearity in the dependence of the activity on pUC18 DNA concentration and thus no suggestion that the three methylation sites on this substrate were not equivalent. It is possible that differences between rates of methylation of, or binding to, each of the three sites might be too subtle to be detected by a kinetic analysis. Specific activity values for wild-type and mutant enzymes were in the range of 45-60 pmol of methyl groups transferred per min per mg. The substitution of the cysteines at positions 30, 213, 434, 553, and 577 by serine in the EcoP15I DNA MTase did not significantly affect the Vmax value. The Michaelis constants (Km) for AdoMet were not significantly altered, suggesting that the mutant enzymes were able to bind AdoMet almost as efficiently as the wild-type enzyme. When compared with the wild type, the C213S enzyme displayed a Km for AdoMet about half that of the wild-type enzyme. Although the kcat values for the other mutant enzymes were similar to that of the wild-type enzyme, the C30S enzyme had a value one-fifth that of the wild type (Table II). The other parameter generally used to compare engineered mutant enzymes is the "specificity constant," kcat/Km. From Table II, it is clear that the specificity constant for AdoMet is the same in the case of all mutant enzymes except the C30S mutant enzyme, which has a 5-fold lower value.
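A minimal sketch of this steady-state analysis in R, assuming illustrative rate data (the AdoMet concentrations and velocities below are not values from the paper); the nonlinear fit and the conversion of Vmax to kcat use the 75-kDa subunit mass given under "Experimental Procedures":

adomet <- c(0.5, 1, 1.5, 2, 3, 4, 6, 10)        # uM AdoMet: eight concentrations over a 20-fold range (assumed)
v      <- c(12, 20, 26, 30, 36, 40, 44, 48)     # pmol CH3 transferred per min per mg (assumed)

fit  <- nls(v ~ Vmax * adomet / (Km + adomet), start = list(Vmax = 50, Km = 1))
Vmax <- coef(fit)[["Vmax"]]                     # pmol min^-1 mg^-1
Km   <- coef(fit)[["Km"]]                       # uM

kcat <- Vmax * 1e-12 * 1e3 * 75000              # pmol->mol, mg->g, times 75,000 g/mol -> min^-1
c(kcat = kcat, specificity = kcat / Km)         # kcat and the specificity constant kcat/Km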
Physical Behavior of EcoP15I DNA Methyltransferase Mutants-Cysteine residues are known to affect protein folding and conformational stability due to the formation of disulfide bonds. SDS-PAGE analysis of mutant MTases revealed that they all behave indistinguishably from wild-type EcoP15I DNA MTase (Fig. 8A). We had earlier reported that the wild-type enzyme exists as a dimer of molecular mass 150,000 Da in solution (6,7). We determined the oligomeric nature of the mutant enzymes by employing glutaraldehyde cross-linking of the subunits. The purified mutant methylases exist as the same molecular species as the wild-type enzyme, as is evident from glutaraldehyde cross-linking of the proteins (data not shown). All purified mutant MTases were subjected to PAGE under nondenaturing conditions without prior treatment with reducing agent. All of them behaved exactly as the wild-type enzyme suggesting that the oligomeric nature of the enzymes did not change as a result of amino acid substitution (data not shown). During purification each of the cysteine mutants, except C344S mutant MTase, behaved predictably in all the steps of the purification protocol normally employed for the isolation of the wild-type enzyme. Characterization of EcoP15I-C344A Mutant Enzyme-Several attempts were made to purify EcoP15I-C344A and EcoP15I-C344S mutant enzymes using the same strategy that was employed to purify the other mutant MTases. However, each time the proteins rapidly degraded either during the dialysis step or during chromatography on DEAE-Sephacel matrix. We therefore expressed these mutant genes in another host background, E. coli B strain BL21(DE3) instead of JM109. The former is a lon Ϫ strain, and we therefore assumed that proteolysis could be minimized. However, we could not purify the C344S mutant enzyme because the protein degraded during the dialysis step. As can be seen from Fig. 10A, the C344A mutant enzyme preparation appeared to be substantially pure (about 70% homogeneous). Western blot analysis (Fig. 10B) confirmed the presence of the enzyme protein in this preparation. This enzyme preparation did not show any significant methylation activity as compared with the wild-type enzyme (Fig. 10C). In order to find out if loss of activity was due to loss of AdoMet or DNA binding, we performed UV cross-linking of AdoMet and the C344A mutant enzyme and studied its DNA binding properties. It is evident from Fig. 11A that the extent of UV cross-linking of AdoMet to wild-type and mutant C344A enzyme was similar ( compare lanes 1 and 3). Interestingly, there was very little difference in AdoMet cross-linking, when NEM-treated C344A mutant enzyme was used (Fig. 11A compare lanes 3 and 4). On the other hand, when NEM-treated wild-type enzyme was used, there was no significant crosslinking (Fig. 11A, compare lanes 1 and 2). The C344A mutant enzyme failed to bind to DNA containing the EcoP15I recognition sequence (Fig. 11B, compare lanes 3 and 6). In order to find out if replacement of Cys-344 might have led to a structural abnormality in the MTase, limited proteolysis was used to probe the accessibility of glutamic acid residues to V8 protease in the wild-type and C344A enzymes. The results gave no indication of a major structural change; both enzymes displayed essentially identical rates and patterns of degradation when incubated at a V8 protease:protein ratio of 1:100 for 0 -30 min (data not shown). 
DISCUSSION Cysteines can serve as specific points for covalent labeling by radioactive, fluorescent, and spin-labeled ϪSH-reacting compounds because of their reactivity. These can then be used to probe three-dimensional structures and to detect intramolecular conformational changes. With this aim in mind, the current study combines results from protein chemistry and mutagenesis in order to elucidate the role of cysteine residues in the EcoP15I DNA methyltransferase. Whereas preincubation of M. EcoP15I DNA MTase with DTNB resulted in the loss of enzymatic activity, addition of DTT to DTNB-labeled EcoP15I DNA MTase under native conditions removed about 90% of the attached probe with concomitant recovery of MTase activity. Treatment of the enzyme with DTT showed that no disulfides were essential to maintain its activity. These observations clearly demonstrated that covalent modification of a cysteine residue was directly responsible for the observed inactivation. In the present work, the involvement of cysteine residues in the catalytic function of EcoP15I DNA MTase was also indicated by the inactivation of the enzyme by the thiol-specific reagent NEM. Kinetic analysis of the reaction of NEM demonstrates that modification of only 1 cysteine/single subunit resulted in loss of activity (Fig. 2). This comprehensive inhibition with a range of thiol reagents of different size and complexity argues strongly in favor of a significant structural or functional role for cysteine residues in EcoP15I DNA MTase. Inactivation could be caused by the steric obstruction of the active site of the enzyme, conformational alterations of the enzyme, or modification of a cysteine residue essential for the catalytic process. It is possible that the introduction of the large relatively hydrophobic N-ethylsuccinimidyl group of NEM and not the loss of the sulfhydryl group was the cause of the inactivation that we observed. Our results clearly show that although the NEMmodified enzyme binds specifically to DNA (Fig. 3B), AdoMet binding was significantly decreased (Fig. 3A). There are a number of instances where it has been shown that the modification or oxidation of cysteine residues in DNA-binding proteins affects the ability of these proteins to bind DNA and therefore enzymatic activity. For instance, Aiken et al. (39) have shown that NEM inactivated RsrI endonuclease by producing a modified enzyme that was unable to bind its recognition sequence. The sensitivity of T4 DNA-[N 6 -adenine]methyltransferase (40), M. Eco Dam (41), and M. EcoRI (27) to NEM indicated that one or more cysteine residues was important for activity. It was observed that M.BspRI, a C5-Mtase, was able to accept the methyl group from AdoMet in the absence of DNA. Self-methylation was, however, inhibited by sulfhydryl reagents, and two cysteines were identified that bind the methyl group in form of S-methylcysteine (42). Chemical modification studies on EcoP15I DNA MTase using thiol reagents suggest that cysteine(s) is the likely target of this reaction. Modification results in a fast inactivation process even though cysteine(s) may not be involved in the catalytic mechanism. Inactivation of an enzyme as a result of modification accompanied by protection against inactivation by competitive inhibitors or substrates for the enzyme is generally used as a criterion to assess whether modification is active sitedirected. We were unable to do substrate protection experiments because at high concentrations of AdoMet, M. 
EcoP15I exhibits substrate inhibition (29). We therefore carried out site-directed mutagenesis to define more precisely the role of the six cysteine residues in substrate binding and catalysis. It has been observed in some instances that replacement of cysteine by site-directed mutagenesis led to conclusions different from those reached by chemical modification studies, especially when a bulky thiol reagent such as NEM was used (43). This turned out to be true in our case (see below). Although cysteine and serine are chemically similar in the sense that they are both nucleophilic, they possess significant differences in nucleophilicity and polarity. The hydroxyl group of serine is highly polar, and the sulfhydryl group of cysteine is relatively non-polar. Assuming that cysteine occupies a position in a hydrophobic environment, the introduction of a polar residue like serine could significantly perturb the active enzyme structure. In order to address this possibility, a second mutation was generated that replaced cysteine at position 344 by the non-polar residue alanine. Mutation to alanine was chosen as this is the most conservative change in terms of size and charge, and any resulting functional change can be largely attributed to the loss of the thiol group. The six cysteine residues of EcoP15I DNA MTase located in positions 30, 213, 344, 434, 553, and 577 were each individually and in combination mutated to serine. All substitutions except at position 344 resulted in an active phenotype as assessed both by in vivo and in vitro assays (Figs. 5-7). Replacement of cysteine at position 344 either with serine or alanine resulted in an inactive enzyme. This therefore suggested that all other cysteines were relatively benign. As in the case of the C344S mutant, the C344A mutant enzyme was inactive clearly indicating the importance of cysteine at position 344 in maintaining the activity of EcoP15I DNA MTase. Replacing the thiol of Cys-344 by hydrogen (C344A) or substitution of a hydroxyl group for the thiol (C344S) had a drastic effect, resulting in loss of activity. These observations therefore suggest that EcoP15I DNA MTase appears to be sensitive to the change in polarity and/or differences in hydrogen bonding properties caused by the substitution of serine or alanine residue at position 344. Gabbara et al. (44) have shown that replacement of the catalytic cysteine in EcoRII methyltransferase, a m5C-MTase, by serine resulted in a mutant enzyme with a catalytic efficiency about 10,000 times less than that of wild-type. NEM inhibited each of the purified mutant enzymes except the C344A enzyme in the same manner as the unaltered enzyme, demonstrating that these altered proteins contain a residue modifiable by NEM. The observation that the NEMtreated C344A mutant enzyme behaves identically to the NEM-modified wild-type enzyme in terms of AdoMet binding convincingly demonstrates that Cys-344 is modified by NEM. Although the activity of C344A mutant enzyme was almost negligible (Fig. 10C), it can be argued that the insignificant amount of activity could be possible due to occasional misreading of serine codon by a cysteinyl tRNA or could be the result of contamination by the wild-type enzyme during purification. This possibility can be eliminated by the fact that the C344A mutant enzyme behaves almost identically to the wild-type when both are subjected to NEM modification followed by AdoMet cross-linking (Fig. 11A). 
The C344A mutant enzyme could be catalytically inactive for a variety of reasons, including those associated with conformational changes. If AdoMet or DNA did not bind, or if the dimeric nature was not maintained, an inactive enzyme would be formed. It is clear from Fig. 11 that the C344A mutant enzyme binds to AdoMet but not to DNA, and therefore the loss of activity is due to the inability of the enzyme to bind one of its substrates. The observation that the C344A mutant enzyme was unable to bind DNA (Fig. 11) is in contrast to the observation that the NEM-modified enzyme was able to bind DNA (Fig. 3). However, these two contrasting effects can be explained if we assume that the cysteine at position 344 is crucial for the DNA-MTase interaction. Modification of this residue did not alter this interaction, but replacement of this residue with alanine abolished the interaction. The hydrogen bonding capabilities of the side chains of cysteine and alanine are different, and therefore substitution of alanine in place of cysteine affects DNA-MTase interactions. The dimeric nature of the mutant enzyme was not altered, as is evident from both the limited proteolysis patterns and the elution profile on gel filtration chromatography. There was, therefore, no evidence for a global change in the mutant enzyme. Thus, it appears that the activity of the enzyme depends on the functions of Cys-344 at or near the DNA-binding site. Both wild-type and mutant enzymes that were active exhibited comparable Km and kcat values. The specificity constant (kcat/Km), which is a measure of enzyme efficiency, was not drastically different for the wild-type and the mutant enzymes (Table II) except in the case of the C30S mutant enzyme. Although the C30S mutant enzyme was catalytically active, it is clear from Table II that its specificity constant was 5-fold lower than that of the other mutant enzymes. This was mainly due to a 5-fold decrease in the turnover number of the enzyme. The kinetic data confirm the nonessential character of the cysteines at positions 30, 213, 434, 553, and 577 in the catalytic mechanism. The striking loss of activity as measured by both in vivo and in vitro assays argues strongly for a specific role for cysteine 344. To confirm this conclusion further, double and triple mutants were generated, some of which had the cysteine at position 344 replaced with serine. Again, all double and triple mutants that had a serine at position 344 lacked wild-type activity, whereas all other combinations retained methylation activity (Fig. 7). Collectively, these results strongly suggest that cysteine 344 plays a significant role in enzyme function. Substitution of a single amino acid residue can sometimes result in a decrease of enzyme activity, even if the residue is not involved in the active site. This could be due to changes in the higher order protein structure. Such critical residues play a basic role in protein folding and/or in supporting correct protein structure. Replacement of such residues could lead to decreased protein stability and also to a decreased level of expression due to higher accessibility of the mutant proteins to proteases. The similar results obtained for the oligomeric nature of the wild-type and C344A mutant enzymes suggest that the loss of methylation activity in the case of the latter was not due to changes in the higher order protein structure.
The structural similarity among the active sites of M.PvuII, M.TaqI, and M.HhaI reveals that the catalytic amino acids essential for cytosine N-4 and adenine N-6 methylation coincide spatially with those for cytosine C-5 methylation, suggesting a mechanism for amino methylation. Based on the chemical and structural similarity of the DNA-adenosyl and AdoMet-adenosyl moieties and the structural similarity of the AdoMet binding and catalytic regions of the MTase, Malone et al. (18) have proposed analogous MTase-adenosine interactions in the two regions. Chemical model studies suggest that methyl transfer reactions with amine nucleophiles and methyl sulfonium compounds require significant activation (45,46). Malone et al. (18) have proposed a model that suggests that methylation of the exocyclic amino group results from a direct attack of the activated adenine N-6 on the AdoMet methyl group, in analogy with the previously proposed mechanism for DNA adenine methylation (47-49). It has been suggested that the N-6 amino nitrogen of the target adenine is the donor in a hydrogen bond to the side chain of the aspartic acid in motif IV and possibly to one of the main chain oxygens of the adjacent two proline residues. This would negatively polarize N-6, activating it for direct transfer of the CH₃⁺ group from AdoMet. The relative positions of the activating hydrogen bond acceptor, target amino group, and AdoMet methyl group must be precisely maintained. Although it has been suggested that increasing the nucleophilic nature of the exocyclic amine of adenine (N-6) with cysteine may be a common strategy for MTases, cysteine residues might still have a role as a general base in catalysis of the methylation. It was earlier observed that a few adenine MTases contain a cysteine flanked by asparagine, akin to the PC doublet seen in m5C-MTases (27). Whereas it is firmly established that the PC motif is the catalytic center in m5C-MTases, no such role has been identified for the CN or NC motif seen in a few m6A-MTases. In any case, such dipeptide sequences are not present in the case of EcoP15I DNA MTase. The key findings of the present work are the identification of the single cysteine in M. EcoP15I involved in DNA binding and the demonstration that Cys-344 is modified by NEM. Taken together, our experiments unambiguously show the consequences of replacing the cysteine residue at position 344 with alanine or serine. Whether this residue actually constitutes part of the active site or influences that site indirectly through conformational mechanisms will be the subject of future studies.
11,893
1998-09-11T00:00:00.000
[ "Biology", "Chemistry" ]
Identifying Multiple Potential Metabolic Cycles in Time-Series from Biolog Experiments Biolog Phenotype Microarray (PM) is a technology allowing simultaneous screening of the metabolic behaviour of bacteria under a large number of different conditions. Bacteria may often undergo several cycles of metabolic activity during a Biolog experiment. We introduce a novel algorithm to identify these metabolic cycles in PM experimental data, thus increasing the potential of PM technology in microbiology. Our method is based on a statistical decomposition of the time-series measurements into a set of growth models. We show that the method is robust to measurement noise and captures accurately the biologically relevant signals from the data. Our implementation is made freely available as a part of an R package for PM data analysis and can be found at www.helsinki.fi/bsg/software/Biolog_Decomposition. Introduction Biolog Phenotype Microarray (PM), not to be confused with RNA expression microarrays, is a commercially available 96-well format test system capable of multiple parallel testing of the bacterial growth responses to different nutrients and/or supplements [1]. The standard Biolog PM plates contain a variety of different substrates, such as carbon and nitrogen sources, heavy metals, antibiotics, etc. The substrates are pre-dispensed and dried, requiring only inoculation with bacteria and a buffer containing a dye (usually tetrazolium violet). Bacterial metabolism during growth leads to the irreversible reduction of the dye in the well with production of a purple colour which can be read as the change in absorbance over time [2]. The level of colouration is generally determined by a scanner at 15 minute intervals during the experiments, which are usually carried out over 48-72 hours depending on the studied bacterial species. The measurements of the colouration are recorded in arbitrary units. The levels of colouration measured in a single well during one experiment are here referred to as signal. Signals are normally consistent between experimental replicates ( Fig 1A) and depend on bacterial adaptation to a substrate and experimental conditions. Preferred or utilisable substrates support active metabolism which is reflected by a rapid signal growth (Fig 1, substrates B03, A03). In contrast, toxic substrates inhibit bacterial growth or lead to cell death in which case only a small amount of colour is produced (Fig 1, substrate H04). Bacteria may often undergo several cycles of metabolic activity, seen as differences in the rates of colour accumulation during growth (Fig 1, substrate A03). Multiple cycles may represent different metabolic pathways sequentially used by bacteria as they switch between Fig 1. Kinetics of accumulation of the colour production may represent metabolic cycles in bacteria. Panel A-colour production of three replicates of the bacterial E. coli strain IMT17887 during growth on plate PM1 in the substrates H03 (Tyramine), B03 (Glycerol) and A03 (N-Acetyl-D Glucosamine). Panel B-lagged difference L of the colouration of these signals. Panels C-growth rate S (smoothed lagged difference of the colouration) of these signals. The smoothing coefficient is set to b = 0.5. nutrients, undergo depletion of substrates, or after excretion of end-products followed by reutilisation. They may also represent different subpopulations in the growing cultures. 
A number of methods have been used for analysing the Biolog metabolic signals and comparing the metabolic activity triggered by different substrates. The simplest approaches describe metabolic signals with a single summary statistic, e.g. the maximum intensity reached or the area under the curve [3,4]. Some methods split signals into growth or no-growth curves by using an arbitrary cut-off or comparison to a reference signal [5,6]. However, describing a time-series with only a single summary statistic leads to a loss of information and may introduce bias in the results [7]. If more than two samples are provided, the differences in summary statistics can be tested, e.g. by using a t-test or ANOVA. In addition to simple summary statistics, model-based methods are widely applied to Biolog data [7-10]. They are able to utilize more information by fitting growth models, such as the logistic, Gompertz, and Richards models, to the metabolic profiles. The R package opm is a widely used tool for reading in, processing, and visualizing Biolog data [9]. It fits the Gompertz and Richards models by using the grofit R package, and enables the comparison of the curves based on the 95% confidence intervals of the model parameter estimates. The most recent software by Gerstgrasser et al. [7] fits several models at once and chooses the most suitable one, utilizing Bayesian inference for parameter estimation and model selection. The signals are then compared against each other by defining the maximum colour change, steepest slope, and length of the lag phase based on the fitted models. However, none of these growth models, or other methods [5,6,11-14], is able to capture more than one potential metabolic cycle at a time. To address the problem, we propose an algorithm for identifying multiple potential metabolic cycles of bacteria by decomposing the PM well signal into multiple growth models. In addition, we propose a method for comparing signals with each other using summary statistics derived from the growth models. We show that the method is robust to measurement noise and captures accurately the biologically relevant information from the data, thus increasing the potential of PM technology in microbiology. To illustrate the proposed algorithm, we use Biolog metabolic signals from three E. coli strains: IMT17887, PCV17887 and T17887. Three biological replicates per strain were tested on plate PM1. Strain number IMT (Institut für Mikrobiologie und Tierseuchen) 17887 was isolated from a horse with a wound infection. It is an extended-spectrum beta-lactamase (ESBL)-producing E. coli of sequence type (ST) ST648. ESBL-plasmid extraction using a heat technique resulted in the ESBL-plasmid-"cured" variant PCV17887 [15]. Transformant T17887 contains the ESBL-plasmid, which was transferred into PCV17887 via electroporation. The data are provided in S1 File. Signal decomposition The proposed algorithm is applied separately to each of the 96 wells on the PM plates. Here, the time-series R = (R_1, R_2, ..., R_T) contains the raw signal, i.e. the sequence of T integers between 0 and 400 representing the measured intensity of the colouration in one particular well at successive time points. In theory, as the reduction of the dye that produces the purple colour is irreversible, R would be an increasing sequence. In practice, R is subject to low-frequency observational noise and can show a decreasing pattern due to measurement errors.
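For comparison with the decomposition developed below, the single-number summaries mentioned at the start of this section (maximum intensity and area under the curve) can be computed directly from a raw well signal; the function names, the 15-minute time step, and the demonstration signal are our own choices, not part of the published method:

max_intensity <- function(R) max(R)                                       # maximum colouration reached
auc <- function(R, dt = 0.25) sum((head(R, -1) + tail(R, -1)) / 2) * dt   # trapezoidal rule, dt in hours

R_demo <- c(5, 6, 9, 20, 45, 80, 110, 125, 130, 132)                      # synthetic colouration readings
c(max = max_intensity(R_demo), auc = auc(R_demo))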
As the metabolic activity is represented not by the absolute level of colouration but rather by its change due to growth of the cultures, we are not interested in the values in R as such, but in the increments of the process. However, the lagged difference of the signal, L = (R_2 − R_1, R_3 − R_2, . . ., R_T − R_{T−1}), is typically noisy (Fig 1B), which calls for a statistical approach to analyse the curves. To filter the noise, the lagged difference is smoothed with a Gaussian kernel with a predefined smoothing coefficient b (Fig 1C); the resulting smoothed lagged difference defines the target signal S (Eq 1).

To identify metabolic cycles, the target signal S is approximated with the sum of n components, S ≈ Σ_{i=1..n} C^(i), where each component C^(i) = (C^(i)_1, C^(i)_2, . . ., C^(i)_T) represents one period of colour accumulation (e.g. a potential metabolic cycle). We focus on three basic types of components based on the following growth models (Fig 2): a Gaussian sequence, a brick sequence and a slope sequence. Here A, μ, t_0, t_1 and v are the component parameters; each component is defined by exactly three parameters. To compensate for the smoothing, all sequences C are also processed by the Gaussian kernel with the same smoothing coefficient b. Gaussian and brick sequences represent a logistic and a linear growth of the colouration respectively, while slope sequences represent a dynamics with initial growth slowing down with time (Fig 2). The various types of components are not intended to represent strictly different biological processes, but are used to increase the capability of matching the data patterns well.

Our decomposition algorithm consists of the following three steps: pre-processing converts the raw signal into the target signal; initial decomposition specifies the number of components required for optimal decomposition; calibration minimizes the distance between the components and the target signal and separates the components. To retain biological interpretability and prevent overfitting we set the constraint Σ_t C_t ≥ δ (where δ is a predefined threshold).

Pre-processing. During the pre-processing step the target signal S is obtained by smoothing the lagged difference of the raw signal (see Eq 1). The smoothing is required to remove high-frequency observation noise.

Initial decomposition. During the initial decomposition step a crude decomposition is proposed using a greedy algorithm. The first component C^(1) is fitted to the whole signal S, the second component is fitted to the residuals S − C^(1), the third to the residuals S − C^(1) − C^(2), and so on. The fitting is done by optimizing the ith component's type and parameters to minimize the squared error between the target signal and the sum of the proposed components. Optimization could in principle be done with any optimization algorithm capable of finding a global minimum in the restricted space of the parameters. In our implementation we use a grid method combined with the built-in R function optimize [16]. To choose the component type, we fit the three different types separately and choose the one with the minimal squared error. The iterations continue while the fitted components satisfy the condition Σ_t C^(i)_t ≥ δ. When the first small component with Σ_t C^(i)_t < δ is encountered, the initial decomposition step is stopped and the i − 1 components are set as the initial decomposition.
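A compact sketch of the component models and the greedy initial decomposition is given below. It is written in Python rather than the authors' R; the exact functional forms of the Gaussian, brick and slope sequences are assumptions based only on the qualitative descriptions above, the candidate parameter grid is arbitrary, and the re-smoothing of candidate components with the same Gaussian kernel is omitted for brevity.

```python
import numpy as np

# Assumed functional forms for the three component types; only the qualitative
# shapes described in the text are guaranteed, not these exact equations.
def gaussian_component(T, A, mu, v):
    t = np.arange(T)
    return A * np.exp(-(t - mu) ** 2 / v)            # bell-shaped growth rate

def brick_component(T, A, t0, t1):
    t = np.arange(T)
    return np.where((t >= t0) & (t <= t1), A, 0.0)    # constant rate => linear colouration

def slope_component(T, A, t0, v):
    t = np.arange(T)
    return np.where(t >= t0, A * np.exp(-(t - t0) / v), 0.0)  # fast start, slowing down

def fit_one_component(S):
    """Grid-search one component (type + parameters) minimising squared error."""
    T = len(S)
    best = (np.inf, None, None)
    for mid in range(T):
        for width in (1.0, 2.0, 5.0, 10.0, 20.0):
            for A in np.linspace(0.1 * S.max(), 1.2 * S.max(), 12):
                candidates = [
                    ("gaussian", gaussian_component(T, A, mid, width ** 2)),
                    ("brick", brick_component(T, A, mid, min(T - 1, mid + width))),
                    ("slope", slope_component(T, A, mid, width)),
                ]
                for name, C in candidates:
                    err = np.sum((S - C) ** 2)
                    if err < best[0]:
                        best = (err, name, C)
    return best[1], best[2]

def initial_decomposition(S, delta):
    """Greedy step: fit components to successive residuals until one is too small."""
    residual = np.asarray(S, dtype=float).copy()
    components = []
    while True:
        name, C = fit_one_component(residual)
        if C is None or C.sum() < delta:
            break
        components.append((name, C))
        residual = residual - C
    return components
```

Applied to a target signal S (as computed in the previous sketch), initial_decomposition(S, delta=20) returns a list of (type, sequence) pairs that the calibration step described next would then refine.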
If the first proposed component is small (Σ_t C^(1)_t < δ), we conclude that no periods of colour accumulation have occurred (for example, there was no growth, or the initial bacterial inoculum contained no live bacteria); the decomposition is not continued and a stale process is reported as the result.

Calibration. The initial decomposition may be imprecise, as the first components are obtained ignoring the later ones. If two or more components are found during the initial decomposition (n > 1), the components are calibrated to achieve concordance between them. Components are calibrated sequentially, C^(1), C^(2), . . ., C^(n), C^(1), C^(2), . . ., until a pre-determined stopping condition is reached; it could be based on the number of iterations or on the change in the distance between the components and the target signal. When a component C^(i) is calibrated, its type and parameters are updated by minimizing an objective function with two parts. The first part is the squared error between the proposed components and the target signal. The second part penalizes the correlation between the calibrated component and the rest of the components, and is scaled with the correlation weight γ. The same optimization method as in the initial decomposition step is used. Calibrating may change the type of a component. If a component ceases to satisfy the constraint Σ_t C^(i)_t ≥ δ after the calibration, it is removed and the rest of the components are recalibrated.

As an example of the procedure: during the pre-processing the raw signal is converted to the target signal; during the initial decomposition three components (putative cycles of metabolic activity) are revealed, two Gaussian sequences and a slope sequence; during the calibration these components are refined, with the first component changing its type to a brick sequence while the second and the third are slightly adjusted.

Using decomposition to compare signals. The summary statistics of the identified components can be used to measure the similarity between two signals, and thus among replicates or different strains. We suggest three summary statistics: max(C), which reflects the peak growth speed caused by the component, together with size(C) and center(C), which summarise the overall size of the component and its location in time. Using these summary statistics we define a similarity measure sim(A, B) between two components A and B, where δ^(max), δ^(size) and δ^(center) are coefficients scaling the importance of the differences in the summary statistics. sim(A, B) varies between 0 and 1 and does not depend on the components' types. Finally, we propose a similarity measure for two decompositions A = (A^(1), A^(2), . . ., A^(m)) and B = (B^(1), B^(2), . . ., B^(n)) consisting of m and n components, respectively. This similarity metric varies between 0 and 1 and depends on the summary statistics of the components. Since components in decompositions A and B may be in a different order, and since decompositions may contain incorrectly identified small false components, it is important to check all possible pairings between the decompositions and choose the best one. If the decompositions have no components to compare, the metric is either set to 0 (one signal is active and one is non-active) or 1 (both are non-active). Similarity between two decompositions can be used to cluster signals; a worked sketch of this comparison is given below.

Fig 5 shows an example of the decomposition for signals of the three E. coli strains (IMT17887, PCV17887 and T17887). The decompositions are consistent between experimental replicates and vary between strains. Fig 6 shows the similarity measures computed for the same data based on the Euclidean distance (Panel A) and the similarity between decompositions (Panel B). In the second case, the difference between the three E.
coli strains is more pronounced. We recommend comparing the decompositions obtained with the same parameter values. Performance analysis Sensitivity test. We tested the algorithm's ability to identify the correct number and type of components and related summary statistics using a synthetic data set. First, we generated one component or a pair of negatively correlated components C Ã(i) using randomly sampled parameters. The raw signal was constructed as a cumulative sum of components: Non-normal noise was added to the raw signal: The non-normal noise was chosen to reflect the apparent non-normality of the Biolog observational noise. We then estimated the component (or components) C (i) from S. We repeated the simulations 1000 times and measured a probability of correctly guessing the number of components n. If n was correct, we measured a probability identifying the component type and the mean absolute difference in the summary statistics max(C), size(C) and center(C). The results are shown in Table 1. The code used for testing is available at www.helsinki.fi/bsg/software/ Biolog_Decomposition. We used T = 50 hours, blur strength b = 1, threshold δ = 20, correlation weight γ = 2 and the observation noise λ = 10. Single components were almost always (in 97-99% cases) identified correctly. In 20% of the cases brick and slope components were misidentified because smoothing during the pre-processing step blurs the distinctions between the types. Narrow components are especially susceptible for this. The errors in the summary statistics were insignificant. Identifying two components correctly was more challenging: the number of components was correctly identified in 56-74% of the cases. If the components were located close to each other, decomposition algorithm often mistook them as one or separated them in an incorrect position. The type of the components was correctly estimated in 53-79% of the cases. The errors in the summary statistics were larger in this setting as well. Robustness to parameter choice. To assess the robustness of the decomposition algorithm to measurement errors and parameter choices with simulations, we applied the same protocol Discussion The proposed algorithm decomposes Biolog Phenotype Microarray data into potentially biologically meaningful components, i.e. components that could be interpreted directly as bacterial metabolic cycles and/or population changes. Identification of these components could be useful for further investigations, such as identifying sub-populations within bacterial cultures. Different signals (metabolic cycles) may arise after the initial death or growth stasis of a subpopulation of bacteria followed by growth of a second sub-population. Also, metabolites generated during growth on the initial substrate might result in a second decomposition signal in a later phase of the experiment. Among the most promising future applications would be a direct link to concurrent RNA-sequencing data to detect different metabolic pathways. The decomposition of growth kinetics and comparison of similarity among replicates and different strains is a meaningful tool for analysing the growth of different bacteria in a manner of high resolution, in contrast to methods only analysing the respiration kinetics data as endpoint assays. Performance analysis revealed that the presented method has a lowered sensitivity if there are several correlated components. Due to the observational errors, it is only possible to identify evident metabolic cycles. 
While the probability of correctly inferring the component type was low, the summary statistics were estimated accurately. Therefore any further analysis should rely on the summary statistics rather than on the component types. The algorithm requires several pre-defined options to determine the sensitivity and level of smoothing. A user-specified tuning may be required to obtain an optimal fit for a particular data set. For some parameters, such as the component size threshold δ prior knowledge may also be used. We have investigated different modifications of the basic algorithm. We tested to use an L1 norm instead of the L2 norm (Euclidean distance) to identify the components and using a sliding mean smoothing instead of a Gaussian kernel. Our analysis suggested (data not shown) that the presented version of the method provides the best sensitivity, specificity and robustness among the considered alternatives. We also considered including a fourth component type: a right half of a Gaussian bell (C t = 0 for t < μ, C t ¼ A e À ðmÀ tÞ 2 v for t ! μ). However, this half-Gaussian sequence was almost never observed in a sample data sets, as similar patterns are better described with a slope sequence. The time-series generated by the Biolog PMs are inevitably subject to measurement noise. In addition to the signal-level noise (which is handled by the smoothing), there are plate-level biases, such that two plates with the same substrates in the same conditions may produce different amount of colouration. To handle the plate-level noise all time-series representative of the same array may be analysed concurrently and normalized a priori. The presented algorithm does not require extensive computational resources. The runtime depends on the number of components identified and takes typically about 20 minutes to complete for a single plate in a standard single CPU desktop computing environment. The code is written in R and can be downloaded at www.helsinki.fi/bsg/software/Biolog_Decomposition. It is a part of a pipeline for analysing Biolog PM data [8] (www.helsinki.fi/bsg/software/R-Biolog) built upon the opm package [9]. Supporting Information S1 File. Sample data. Biolog metabolic signals from E. coli IMT17887, PCV17887 and T17887 tested on plate PM1. Three biological replicates per strain. (ZIP)
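To make the comparison procedure from the Methods concrete, here is a hypothetical Python sketch of the component summary statistics and the pairing-based similarity between decompositions. The exact formulas are not given in the recovered text, so the functional forms below (an exponential of scaled absolute differences, and an intensity-weighted mean time as the center) are illustrative assumptions; d_max, d_size and d_center play the role of δ(max), δ(size) and δ(center).

```python
import itertools
import numpy as np

def summary_stats(C):
    """max, size and center summary statistics of a non-negative component C."""
    C = np.asarray(C, dtype=float)
    t = np.arange(len(C))
    center = float((t * C).sum() / C.sum()) if C.sum() > 0 else 0.0
    return {"max": float(C.max()), "size": float(C.sum()), "center": center}

def component_similarity(A, B, d_max=1.0, d_size=10.0, d_center=5.0):
    """Similarity in (0, 1]; a hypothetical functional form, not the paper's exact one."""
    a, b = summary_stats(A), summary_stats(B)
    penalty = (abs(a["max"] - b["max"]) / d_max
               + abs(a["size"] - b["size"]) / d_size
               + abs(a["center"] - b["center"]) / d_center)
    return float(np.exp(-penalty))

def decomposition_similarity(dec_a, dec_b, **kw):
    """Check all pairings between two decompositions and keep the best average similarity."""
    if not dec_a and not dec_b:
        return 1.0                 # both signals non-active
    if not dec_a or not dec_b:
        return 0.0                 # one active, one non-active
    small, large = sorted((dec_a, dec_b), key=len)
    best = 0.0
    for idx in itertools.permutations(range(len(large)), len(small)):
        sims = [component_similarity(small[i], large[j], **kw) for i, j in enumerate(idx)]
        best = max(best, float(np.mean(sims)))
    return best

c1 = np.array([0, 1, 3, 6, 3, 1, 0], dtype=float)
c2 = np.array([0, 0, 1, 2, 5, 2, 1], dtype=float)
print(decomposition_similarity([c1], [c2]))
```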
4,284
2016-09-27T00:00:00.000
[ "Biology", "Computer Science" ]
Simulation of the Mineração Serra Grande Industrial Grinding Circuit Mining Mineração Increasing throughput during the mining cycle operation frequently generates significant capital gains for a company. However, it is necessary to evaluate plant capacity and expand it for obtaining the required throughput increase. Therefore, studies including different scenarios, installation of new equipment and/or optimization of existing ones are required. This study describes the sampling methodology, sample characterization, modeling and simulation of Mineração Serra Grande industrial grinding circuit, an AngloGold Ashanti company, located in Crixás, State of Goiás, Brazil. The studied scenarios were: (1) adding a third ball mill in series with existing two ball mills, (2) adding a third ball mill in parallel with existing mills, (3) adding a vertical mill in series with existing mills and (4) adding high pressure grinding rolls to existing mills. The four simulations were carried out for designing the respective circuit, assessing the interference with existing equipment and installations, as well as comparing the energy consumption among the selected expansion alternatives. Apart from the HPGR alternative, all other three simulations resulted in the required P80 and capacity. Among the three selected simulations, the Vertimill alternative showed the smallest installed power. Introduction Mineração Serra Grande is a gold mining operation located in Crixás, State of Goiás, Brazil.The beneficiation plant processes gold ore from three underground and one open pit mines.The current process includes multi-staged crushing, followed by ball milling in closed configuration with hydrocyclones.A gravity concentration circuit is fed by part of the circulating load, while the grinding circuit product is thickened and leached with sodium cyanide.After leaching, the pulp is filtered, clarified and precipitated with zinc (Merrill Crowe process).The solid tailings are pumped to the tailings dam.Gold is thus produced from both Merrill Crowe and gravimetric circuits.Figure 1 shows the current Serra Grande plant flow sheet.Mineração Serra Grande (MSG) started its operation in October 1989 with a single ball mill, processing 1,200 t of ore per day.Currently, plant capacity is approximately 3,600 t/day. In 2008, the circuit was expanded by installing new equipment, together with various other actions, such as employing a better pumping system, hydrocyclone optimization, adequate ball charge, installing grates in the existing ball mill, as well as automation in the circuit.Further production increase was then focused on installing new equipment. Figure 2 shows plant production and gold grade from 1990 to 2015.The chart shows a step change in gold production when the second ball mill was installed ( 2009), followed by a steady increase in following years resulting from optimization,together with a declining gold feed grade. Figure 2 Plant production and gold grade history of Mineração Serra Grande. MSG is currently studying alternatives for increasing current plant capacity from 1.3 MTPY to 2.0 MTPY.Apart from a 54%, increase in the current production, such an expansion would also result in further performance improvement by reducing operating costs. 
Sampling and data collection This study began with a literature review to perform a survey campaign on the existing grinding circuit.The aim of sampling was to reduce the mass of a lot, without assigning significant changes to its properties.Data collection followed the sampling rules as proposed by Gy (1982). Each selected stream was sampled for two hours during a steady-state period of the grinding plant.In some streams, automatic sampling systems were used, while manual sampling was carried out at all remaining selected points, as shown in Table 1.Table 3 shows the equipment main characteristics as currently installed at MSG industrial grinding circuit. Table 3 Equipment main characteristics. Further information about sampling of this work can be found in Leite (2016). Ore characterization Samples obtained in the survey campaigns were sent to the Laboratory of Simulation and Control (LSC) of the University of São Paulo for screening, as well as for specific gravity assess-ment and comminution testing, which included the Bond Work Index, Drop Weight Test, Piston Press Test and Jar Mill Grinding Test. The Bond Work Index (BWI) was performed to estimate energy requirements for ball milling using the Bond equation shown below, together with the Rowland (1982) efficiency factors -EF. Jar Mill Grinding Test (JMGT) was performed to estimate energy consump-tion for an industrial vertical mill.The energy calculation for the JMGT was carried out through equation 3, following Metso procedures described by Wills, 2016.(1) (2) (3) Equipment and process models The Nageswararao (2004) model was used for modeling the industrial hydrocyclones.The model includes both operation and design data, together with partition curve parameterization.Calibration constants were back calculated for model fitting exercises. The adapted Perfect Mixing Model proposed by Whiten (1976) was used to model industrial ball milling. The grinding kinetic parameter (r/d*) was determined for each ball mill during the model fitting exercises, as described by Napier-Munn (1996). The HPGR model proposed by Morrell/Tondo/Shi (1997) includes three break-age zones i.e. the pre-crusher zone, the edge effect zone and the compression zone.The throughput model component uses a standard plug flow model version that has been used extensively by manufacturers and researchers.Power consumption is based on throughput and specific comminution energy input.(Morrell et al., 1997). Ore characterization The BWI test performed in the surveyed grinding circuit feed sample resulted in 11.6 kWh/sht.Such a value was used to estimate the overall grinding circuit energy consumption.The combination between such an energy consumption and the stipulated 2.0 MTPY resulted in 624 kW power to be installed in the additional parallel ball mill. The appearance function and breakage parameters as obtained from DW T, carried out on surveyed samples are shown in Tables 4 and 5. 6 and 7. JMGT was carried out for 3, 5 and 10 min grinding periods.Table 8 shows the results obtained in terms of specific energy and resulting product P 80 . Model calibration The obtained sample values are similar to the calculated data and resulted in a consistent mass balance, as well as adequate fitted models. Figure 3 shows experimental and calculated size distributions obtained for each individual stream around the MSG industrial grinding circuit. 
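The Bond equation referred to above (Eq. 1) has a standard textbook form that can be sketched in code. The snippet below is a hedged Python illustration only: the feed size F80, the conversion from kWh/sht to kWh/t, and the use of the full plant throughput are assumptions for demonstration, the Rowland EF factors are left at 1, and the result is not intended to reproduce the 624 kW quoted for the additional parallel mill.

```python
import math

def bond_specific_energy(work_index_kwh_per_t, f80_um, p80_um, ef_product=1.0):
    """Bond's equation: E = 10 * Wi * (1/sqrt(P80) - 1/sqrt(F80)), optionally
    scaled by the product of Rowland EF efficiency factors."""
    return ef_product * 10.0 * work_index_kwh_per_t * (1.0 / math.sqrt(p80_um)
                                                       - 1.0 / math.sqrt(f80_um))

# Illustrative numbers only: Wi converted from 11.6 kWh/sht to kWh/t,
# F80 is hypothetical and not the surveyed circuit value.
wi_kwh_per_t = 11.6 * 1.1023              # 1 metric tonne = 1.1023 short tons
energy = bond_specific_energy(wi_kwh_per_t, f80_um=2000.0, p80_um=109.0)
throughput_tph = 2_000_000 / (365 * 24)   # 2.0 MTPY as t/h, ignoring availability
print(f"{energy:.2f} kWh/t -> {energy * throughput_tph:.0f} kW for the whole feed")
```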
Simulations Four circuit alternatives were assessed through simulations for increasing the current 1.3 MTPY capacity to the stipulated 2.0 MTPY for the expansion project.Each alternative was simulated to obtain the respective mass balance and equipment design, together with the installed power and energy consumption. Simulations were carried out with calibrated models using JKSimMet 6.0 software. Each simulated alternative is described as follows. Alternative 1 -Additional ball milling line in series The first alternative consisted in simulating an additional ball mill in the existing grinding circuit.The third ball mill would regrind the product of the two existing ball mills, as shown in the Figure 4 The two existing ball mill lines were thus simulated for the 2.0 MTPY increased throughput, therefore producing a relatively coarser product, in this case a P 80 equals to 165 µm.The third ball mill was thus designed to grind such an intermediary product to the stipulated P 80 of 109 µm. The designed ball mill showed 3.2 m in diameter and 4.6 m in length, operating at 35% ball charge, 70% critical speed and 60 mm steel ball top size.The calculated ball mill installed power was 618 kW-. Alternative 2 -Addition ball milling line in parallel The second alternative comprised of simulating an additional ball milling line in parallel with the two existing ones, as shown in the Figure 5 Alternative 3 -Additional vertical mill The third alternative consisted in simulating a vertical mill to regrind the product from the existing two ball mills.Figure 6 shows the simulated circuit flow sheet.As per Alternative 1, the existing ball mill circuit product showed a P 80 of 165µm for processing 2.0 MTPY. In order to calculate the required energy for a vertical mill in reducing the P80 from 165µm (feed) to 109 µm (product), the graph showed in Figure 7 was used.Such a graph resulted from the JMGT carried out specifically for such a purpose.According to Figure 7, the required energy for such an operation was calculated as 1.71 kWh/t, which resulted in 416 kW for a 243 t/h throughput.A Metso VTM-800 was selected considering safety factor suggested by the manufacturer Wills, 2016. Alternative 4 -Additional HPGR The fourth alternative included a HPGR in a single pass (open circuit) for providing a finer size distribution to the existing ball mills.Such a finer size distribution would thus increase the installed ball milling capacity to the required 2.0 MTPY. Figure 7 shows the simulated circuit flow sheet.Based on simulation results, the selected equipment was one that had 1200 mm in roll diameter by 750 mm in roll length, with a 6.35 mm working gap, 324 ts/m³h specific throughput (m dot) and 1.48 m/s roll speed. Even though the simulations indicated that the existing grinding circuit would only achieve the required capacity of 2.0 MTPY for a finer feed, HPGR benchmarking indicated that a realistic product would not be finer than a 2500 µm P 80 .For such a feed size distribution, the existing grinding circuit product would show P 80 of 141 µm, therefore coarser than the required value (P 80 of 109 µm). Alternative comparison A summary is shown in Table 11of the equipment selected for the simulated alternatives with required power, installed power and P80 of the product for each case. 
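For Alternative 3, the quoted mill power follows directly from the JMGT specific energy and the regrind throughput given above. The one-line check below (Python) uses only those two stated numbers and ignores the manufacturer's safety factor applied when selecting the VTM-800.

```python
def required_power_kw(specific_energy_kwh_per_t, throughput_tph):
    """Mill power draw implied by a specific energy and a throughput."""
    return specific_energy_kwh_per_t * throughput_tph

# Figures quoted above for Alternative 3 (vertical mill regrind)
print(required_power_kw(1.71, 243))   # ~416 kW before the safety factor
```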
Nomenclature for Eq. 3: E = specific energy consumed during the JMGT; D = mill internal diameter; Vp = mill volume fraction filled with grinding media; Cs = fraction of the mill critical speed; t = jar operation time; mb = media mass; mm = ore mass.

Mass balancing was carried out using experimental data obtained during the sampling period. This procedure included estimating the best-fit flow rates and size distributions around the entire grinding circuit. Table 2 shows the grinding circuit operating data as obtained during the sampling period for mass balance calculations. Additional fitted parameters are given in Table 9, while Table 10 shows the parameters obtained from ball mill modeling.

Figure 3: Experimental and simulated size distributions as obtained for individual grinding circuit streams.
Figure 4: Additional ball milling stage flow sheet.
Figure 5: Additional ball milling line flow sheet. The ball mill designed for this alternative resulted in the same dimensions as obtained in Alternative 1, i.e. 3.2 m in diameter and 4.6 m in length.
Figure 6: Vertical mill flow sheet.
Figure 7: Estimated P80 as a function of the specific energy for different grinding times (JMGT).
2,408
2017-09-01T00:00:00.000
[ "Materials Science" ]
On the Influence of Soret and Dufour Effects on MHD Free Convective Heat and Mass Transfer Flow over a Vertical Channel with Constant Suction and Viscous Dissipation The present paper investigates the combined effects of Soret and Dufour on free convective heat and mass transfer on the unsteady one-dimensional boundary layer flow over a vertical channel in the presence of viscous dissipation and constant suction. The governing partial differential equations are solved numerically using the implicit Crank-Nicolson method. The velocity, temperature, and concentration distributions are discussed numerically and presented through graphs. Numerical values of the skin-friction coefficient, Nusselt number, and Sherwood number at the plate are discussed numerically for various values of physical parameters and are presented through tables. It has been observed that the velocity and temperature increase with the increase in the viscous dissipation parameter and Dufour number, while an increase in Soret number causes a reduction in temperature and a rise in the velocity and concentration. Introduction The phenomenon of coupled heat and mass transfer by free convection in a fluid saturated porous medium occurs in many engineering and technological and manufacturing industries such as hydrology, geosciences, electronic devices cooled by fans, geothermal energy utilization, petroleum reservoirs, and design of steel rolling and nuclear power plants. A comprehensive account of the available information is provided in the recent books Neild and Bejan [1] and Ingham and Pop [2]. In recent years, considerable attention has been devoted to study the MHD flows of heat and mass transfer because of the applications in geophysics, aeronautics, and chemical engineering. Palani and Srikanth [3] studied the MHD flow of an electrically conducting fluid over a semi-infinite vertical plate under the influence of the transversely applied magnetic field. Makinde [4] investigated the MHD boundary layer flow with heat and mass transfer over a moving vertical plate in the presence of magnetic field and convective heat exchange at the surface. Additionally, Duwairi [5] analyzed viscous and joule-heating effects on forced convection flow from radiate isothermal surfaces. The effect of viscous dissipation is usually characterized by the Eckert number and has played a very important role in geophysical flow and in nuclear engineering that was studied by Alim et al. [6]. It also plays an important role in free convection in various processes on large scales or for large planets. The effects of suction on boundary layer flow also have greater influence over the engineering application and have been widely investigated by numerous researchers. Various authors have studied the effects of viscous dissipation and constant suction in different surface geometries. Uwanta [7] studied the effects of chemical reaction and radiation on heat and mass transfer past a semi-infinite vertical porous plate with constant mass flux and dissipation. Mansour et al. [8] described the influence of chemical reaction and viscous dissipation on MHD natural convection flow. The effect of chemical reaction and heat and mass transfer along a wedge with heat source and concentration in the presence of suction or injection has been examined by Kandasamy et al. [9]. Govardhan et al. 
[10] presented a theoretical study on the influence of radiation on a steady free convection 2 International Scholarly Research Notices heat and mass transfer over an isothermal stretching sheet in the presence of a uniform magnetic field with viscous dissipation effect. Sattar [11] analyzed the effect of free and forced convection boundary layer flow through a porous medium with large suction. Similarly, Mohammed et al. [12] investigated the effect of similarity solution for MHD flow through vertical porous plate with suction. Jai [13] presented the study of a viscous dissipation and chemical reaction effects on flow past a stretching porous surface in a porous medium. In another article, a detailed numerical study on the combined effects of radiation and mass transfer on a steady MHD two-dimensional marangoni convection flow over a flat surface in presence of joule-heating and viscous dissipation under influence of suction and injection is studied by Ibrahim [14]. Khaleque and Samad [15] described the effects of radiation, heat generation, and viscous dissipation on MHD free convection flow along a stretching sheet. When heat and mass transfer occur simultaneously in a moving fluid affecting each other causes a cross diffusion effect, the mass transfer caused by temperature gradient is called the Soret effect, while the heat transfer caused by concentration effect is called the Dufour effect. Soret and Dufour effects are important phenomena in areas such as hydrology, petrology, and geosciences. The Soret effect, for instance, has been utilized for isotope separation and in a mixture between gases with very light molecular weight (He, H 2 ) and of medium molecular weight (N 2 , air). The Dufour effect was recently found to be of order of considerable magnitude so that it cannot be neglected, Eckert and Drake [16]. Many researchers studied Soret and Dufour effects; for example, Postelnicu [17] analyzed the effect of Soret and Dufour on heat and mass transfer. Chamkha and El-Kabeir [18] presented a theoretical study of Soret and Dufour effects on unsteady coupled heat and mass transfer by mixed convection flow over a vertical cone rotating in an ambient fluid in the presence of a magnetic field and chemical reaction. Usman and Uwanta [19] have considered the effect of thermal conductivity on MHD heat and mass transfer flow past an infinite vertical plate with Soret and Dufour effects. Similarly, Uwanta et al. [20] have analyzed MHD fluid flow over a vertical plate with Dufour and Soret effects. The effects of Soret and Dufour on an unsteady MHD free convection flow past a vertical porous plate in the presence of suction or injection have been investigated by Sarada and Shankar [21]. A numerical approach has been carried out for the study of Soret and Dufour effects on mixed convection heat and mass transfer past a vertical heated plate with variable fluid properties by Nalinakshi et al. [22]. Subhakar and Gangadhar [23] investigated the combined effects of the free convective heat and mass transfer on the unsteady two-dimensional boundary layer flow over a stretching vertical plate in the presence of heat generation/absorption and Soret and Dufour effects. In another article, Srinivasacharya and Reddy [24] examined Soret and Dufour effects on mixed convection in a non-Darcy porous medium saturated with micro polar fluid. Additionally, Sivaraman et al. 
[25] considered Soret and Dufour effects on MHD free convective heat and mass transfer with thermopheresis and chemical reaction over a porous stretching surface. Recently, Srinivasacharya and Upendar [26] analyzed the flow and heat and mass transfer characteristics of the mixed convection on a vertical plate in a micropolar fluid in the presence of Soret and Dufour effects. Most recently, a boundary layer analysis has been presented to study heat and mass transfer in the laminar, viscous, and incompressible fluid past a continuously moving plate saturated in a non-Darcy porous medium in the presence of Soret and Dufour effects with temperature dependent viscosity and thermal conductivity by El-Kabeir et al. [27]. Finally, Olanrewaju et al. [28] have investigated Dufour and Soret effects on convection heat and mass transfer in an electrically conducting power law flow over a heated porous plate. In view of the above studies, the purpose of current investigation is to examine the influence of Soret and Dufour effects on MHD free convective heat and mass transfer flow over a vertical channel with constant suction and viscous dissipation. Mathematical Formulation Consider the flow of an unsteady laminar coupled free convective heat and mass transfer of an incompressible fluid past a vertical channel in a porous medium under the influence of a uniform transverse magnetic field and constant suction with viscous dissipation in the presence of Soret and Dufour effects. The -axis is taken on the finite plate and parallel to the free stream velocity which is vertical and the -axis is taken normal to the plate. All fluid properties are assumed to be constant. The magnetic field of small intensity is induced along the direction. The fluid is assumed to be slightly conducting; hence, the magnetic Reynolds number is much less than unity and therefore the induced magnetic field is neglected in comparison with the applied magnetic field. Under the above assumptions, the general equations governing the flow can be expressed as follows: International Scholarly Research Notices The corresponding initial and boundary conditions are prescribed as follows: The geometry of the problem is shown in Figure 1. From continuity equation, it is clear that the suction velocity is either a constant or a function of time. Hence, on integrating (1), the suction velocity normal to the plate is assumed in the form where V 0 is a scale of suction velocity which is nonzero positive constant. The negative sign indicates that the suction is towards the plate and V 0 > 0 corresponds to steady suction velocity normal at the surface. The fourth and fifth terms on the right hand side of (2) denote the thermal and concentration buoyancy effects, respectively, and V are the velocity components in the -and -directions, respectively, is the time, ] is the kinematic viscosity, is the acceleration due to gravity, is the coefficient of volume expansion, is the density, * is the volumetric coefficient of expansion with concentration, ( ) is the thermal conductivity, is the specific heat capacity at constant pressure, * is the permeability of the porous medium, is the coefficient of mass diffusivity, 0 is the thermal conductivity of the ambient fluid, is a constant depending on the nature of the fluid, is the coefficient of molecular diffusivity, is the coefficient of temperature diffusivity, * is the dimensionless joule-heating parameter, is the electric conductivity, and 0 is the magnetic field of constant strength. 
and 0 are the temperature of the fluid inside the thermal boundary layer and the fluid temperature in the free stream, respectively, and 0 are the corresponding concentrations. On introducing the following nondimensional quantities: Applying (8), the set of (2), (3), (4), (5), and (6) reduces to the following: with the following initial and boundary conditions: where Ec is the Eckert number, Pr is the Prandtl number, Sc is the Schmidt number, Sr is the Soret number, Du is the Dufour number, is the Magnetic field parameter, Gr is the thermal Grashof number, Gc is the Solutal Grashof number, is the porous parameter, 1 is the joule-heating parameter, is the variable thermal conductivity, and is the variable suction parameter while and V are dimensionless velocity components in -and -directions, respectively, and is the dimensionless time. International Scholarly Research Notices The skin friction, Nusselt number, and Sherwood number are important physical parameters for this type of boundary layer flow and are given by Numerical Solution Procedure The set of coupled nonlinear governing boundary layer equation (9) together with boundary conditions (10) are solved numerically by using the implicit finite difference method of Crank-Nicolson type. The finite difference approximations equivalent to (9) are as follows: The initial and boundary conditions take the following forms: where corresponds to 1. Equations (12) are simplified as follows: The index corresponds to space and corresponds to time . Δ and Δ are the mesh sizes along -direction and time -direction, respectively. The finite difference equations Knowing the values of , , and at a time = , calculate and at time = + 1 using the finite difference equations (15) and (16) and solving the tridiagonal system of equations by using Thomas algorithm as discussed by Carnahan et al. [29]. Knowing the values of and at time = and = + 1 and the values of at time = , solve (14) using tridiagonal matrix inversion, to obtain the values of at time = + 1. This process is repeated for various levels. Thus the values of , , and are known at all grid points in the rectangular region at ( + 1)th time level. Computations are carried out until the steady state is reached. The Implicit Crank-Nicolson method is a second order method (Δ ) 2 in time and has no restrictions on space Δ and time step Δ ; that is, the method is compatible. Hence the finite difference scheme is unconditionally stable and therefore compatibility and stability ensures the convergence of the scheme. Computations are carried out for different values of physical parameters involved in the problem. Results and Discussion In this paper, numerical values are assigned physically to the embedded parameters in the system in order to report on the analysis of the fluid flow structure with respect to velocity, temperature, and concentration profiles. Numerical results for velocity, temperature, and concentration profiles are presented on graphs, while the skin-friction coefficient, Nusselt number, and Sherwood number are shown in tabular form. The Prandtl number is taken to be (Pr = 0.71, 7.0) which corresponds to air and water. 
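The Crank-Nicolson discretisation described above yields tridiagonal linear systems at each time level, which are solved with the Thomas algorithm. Below is a generic Python sketch of that solver; the coefficient arrays in the toy example are hypothetical and do not correspond to the paper's discretised momentum, energy or concentration equations.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with the Thomas algorithm.
    a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal (c[-1] unused),
    d: right-hand side; all of length n."""
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Toy system with hypothetical coefficients
a = np.array([0.0, -1.0, -1.0, -1.0])
b = np.array([4.0, 4.0, 4.0, 4.0])
c = np.array([-1.0, -1.0, -1.0, 0.0])
d = np.array([5.0, 5.0, 5.0, 5.0])
print(thomas_solve(a, b, c, d))
```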
The value of Schmidt number is taken as (Sc = 0.22, 0.66, 0.94, 2.62) representing diffusing chemical species of most common interest in air for hydrogen, oxygen, carbon dioxide, and propyl benzene, while for Soret numbers (2.50) and (6.89) corresponding to thermal diffusion ratio of (H 2 -CO 2 ) and (He-Ar) are chosen, respectively, while other parameters in the flow are chosen arbitrarily. The influence of thermal Grashof number Gr and solutal Grashof number Gc on the velocity is presented in Figures 2 and 3. The thermal Grashof number signifies the relative effect of the thermal buoyancy force to the viscous hydrodynamic force in the boundary layer, while the solutal Grashof number defines the ratio of the species buoyancy force to the viscous hydrodynamic force. As expected the fluid velocity increases due to the enhancement of thermal and species buoyancy forces. The velocity distribution increases rapidly near the porous plate and then decreases smoothly to the free stream value. For different values of magnetic parameter and porous parameter , the velocity profiles are plotted in Figures 4 and 5, respectively. It can be seen that as increases, the velocity decreases. This results agrees with the expectations since the magnetic field exerts a restraining force on the fluid which tends to impede it is motion. Figure 5 depicts the effect of the porous parameter on the velocity profile. An increase in increases the resistance of the porous medium, which will tend to accelerate the flow and therefore increase the velocity. For various values of suction parameter , the velocity, temperature, and concentration profiles are plotted in Figures 6(a)-6(c). It is found out that an increase in the suction parameter causes a fall in the velocity and concentration profiles throughout the boundary layer, while increase in the temperature profiles. This is due to the fact that suction parameter stabilizes the boundary layer growth. The effect of Soret number Sr on velocity, temperature, and concentration profiles is illustrated in Figures 7(a)-7(c) respectively. The Soret number defines the effect of the temperature gradients inducing significant mass diffusion effects. It can be seen that the velocity and concentration profiles increase with an increase in Sr, while a rise in Sr causes a fall in the temperature profiles within the boundary layer. These behaviors are evident from Figures 7(a)-7(c). The influence of viscous dissipation parameter, that is, Eckert number Ec, on the velocity profiles is depicted in Figure 8. The Eckert number expresses the relationship between the kinetic energy in the flow and the enthalpy. It embodies the conversion of kinetic increase in the velocity and temperature profiles. For different values of joule-heating parameter 1 and thermal conductivity parameter , the temperature profiles are plotted in Figures 12 and 13, respectively. It is observed that increasing the joule-heating parameter and thermal conductivity parameter produces significant increase in the thermal conduction of the fluid, which is physically true because as the thermal conductivity increases the temperature within the fluid increases. Figure 14 describes the behavior of various values of Scmidt number Sc on the concentration profiles. The Schmidt number characterizes the ratio of thicknesses of viscous to the mass diffusivity. The Scmidt number quantifies the relative effectiveness of momentum and mass transport by diffusion in the velocity and concentration boundary layers. 
It is observed that increase in the values of Sc causes the species concentration and its boundary layer thickness to decrease significantly. The effects of various governing parameters on skinfriction coefficient , Nusselt number Nu, and Sherwood number Sh are shown in Tables 1 and 2. In order to highlight the contributions of each parameter, one parameter is varied while the rest take default fixed values. It is observed from Table 1 that an increase in any of the parameters and causes reduction in the skin-friction, while increasing any of the parameters, Gr, Gc, , and resulted in corresponding increase in the skin-friction coefficient. It is also seen that as and increase, there is a rise in Nusselt number and Sherwood number, respectively. From Table 2, it is observed that an increase in Ec, 1 , and Du leads to a rise in the skinfriction coefficient, Nusselt number, and Sherwood number, respectively, while an increase in Pr leads to a fall in the skinfriction coefficient, Nusselt number, and Sherwood number, respectively. It is also seen that as Sr and Sc increase there is a fall in the skin-friction coefficient and a rise in Nusselt and Sherwood numbers, respectively. Conclusions The present paper analyzes the influence of Soret and Dufour effects on MHD free convective heat and mass transfer flow over a vertical channel with constant suction and viscous dissipation. The resulting partial differential equations With fixed values of (Gr are nondimensionalised, simplified, and solved by implicit finite difference method of Crank-Nicolson type. From the present numerical study the following conclusions can be drawn. (1) Velocity profiles increased due to increase in thermal Grashof number, solutal Grashof number, porous parameter, Eckert number, Soret number, Dufour number, and dimensionless time while it decreased due to increase in magnetic parameter and suction parameter. (2) An increase in temperature profiles is a function of an increase in thermal conductivity, suction parameter, Eckert number, joule-heating parameter, Dufour number, and dimensionless time while it decreased due to increase in Soret number. (3) Concentration profiles decreased due to increases in Schmidt number and suction parameter while it increased due to increase in Soret number and dimensionless time. (4) There is a rise in the skin-friction coefficient, Nusselt number, and Sherwood number due to increase in Eckert number, thermal conductivity parameter, joule-heating parameter, and Dufour number while a fall is observed in skin-friction coefficient with increase in Soret number and Prandtl number. Subscripts : Condition at wall.
4,314.8
2014-10-28T00:00:00.000
[ "Engineering", "Physics" ]
Foreign Exchange Exposure of Korean Firms The purpose of this study is to examine the relationship between the movements of exchange rate and value of Korean firms, so-called foreign exchange rate exposure using newly devised model to find the strong evidence. I use weekly data on Korean Firms that are listed on Korea Stock Exchange (KSE) for the period from January 1997 to December 2000. I find that about 70% Korean Firms are actually exposed to Won-dollar exchange rate movement at 10% significance level and these results are substantially different from the previous empirical study where little statistical significance was found. In comparing the foreign exchange exposures with three different exchange rates, in Won-dollar and Won-yen exchange exposures, value of Korean firms is positively related to depreciation of Korean Won and negatively related to depreciation of Korean Won with Won-euro exchange exposure. With magnitude of three exposures, results can be interpreted that Dollar exposure seems to be the most significant among three foreign exchange exposures and Korean Firms' value is more sensitive to Won-dollar exchange rate. I also find that exchange exposure is strongly related to firm size and industry especially Electricity & Gas industry is most significantly related. I. Introduction In the early 1970s, the U.S. government abandoned the fixed exchange rate system and adopted floating exchange rate regime. Since that time, there have been tremendous changes and fluctuations in the foreign exchange market and in international financial market. 1 As the degree of exchange rate fluctuation was getting increased in the globally integrated financial capital market, many countries concerned about change of their countries' return which is affected by the fluctuation. So seeking ways to hedge the foreign exchange rate risk became a main issue and many researchers started to study the relationship between the exchange rate and return of companies, which is so-called foreign exchange rate exposure. According to Chung (1997), the KRW-USD exchange rates were allowed to fluctuate freely through the 1990s the exchange rate has increased accordingly. Through financial crisis in 1997, the volatility of the exchange rate proved itself to be so severe as to lead to major crises or even to defaults of some economies, and the importance of estimating the foreign exchange exposure came up to the surface again. For the past decade, several researchers like Adler and Dumas (1984), Jorion (1991), Banda & Gentry (1993), and Campa (1997) have been empirically investigating the foreign exchange exposure of corporations. Up to date, it is widely believed that the movements of exchange rate affect value of companies, which means their returns are significantly exposed to exchange rate movements; however, there has been weak or low statistical evidence. The statistical inactivity is because, first, most of the previous empirical studies estimating the foreign exchange exposure focused on economy-leading countries, which have small portion of foreign operations. Second, most of researchers used the uniform or similar Capital Asset Pricing Model (CAPM) regression model that includes market return as an explanatory variable, and single currency in their empirical studies. Third, in actual capital market, market return is correlated with the movement of exchange rate, which is a point many researchers connived at. 
It is contrary to the fundamental that market return should not have correlations with independent variables in any kind of models, and it, after all, reduces statistical significance. Inclusion of the market portfolio return variable allows researchers to control market value-relevant factors and to improve the precision of the exposure estimates, but it is faulty since market return is correlated with the exchange rate over the estimation period. 2 it is relevant to compare the three foreign exchange exposures. However, in using Euro per dollar exchange rate, due to data availability, German Mark-dollar exchange rate was used for the first two years out of the 1997 to 2000 period instead of Euro-dollar rate. Lastly, to identify the determinants of foreign exchange exposure, foreign exchange exposures were classified into twenty-one industry categories and firm size. Definition and classification of foreign exchange rate exposure opens the section II. In section III, available and relevant data set for empirical study are introduced. Section IV presents empirical study including regression model of previous study and newly devised econometric model and its empirical findings that are estimated exchange exposure of Korean firms and three different exchange exposures. Section V reports the related factors' statistical significance in the explanation of exchange exposure. Section VI includes summary and concluding remarks. II. Defining Exchange Rate Exposure Exchange exposure, defined as the sensitivity of corporation's value to a change of exchange rate, is classified into three categories; Transaction, Translation and Economic Exposure. 4 (1) Transaction Exposure Transaction Exposure originates from the possibility when future income, which is expected to be earned by foreign currency denominated contract, changes during the time period of commitment to a transaction and an actual transaction. However, this kind of exposure usually is well defined and it can be hedged quite easily using derivatives. (2) Translation Exposure Translation exposure or accounting exposure is the difference between assets and liabilities that are exposed to the fluctuation of a certain currency. Generally, to evaluate the balance sheet of subsidiaries that are operating in foreign countries in the 4 See Jorin (1990) and Stefan Nydahl (1999) foreign currencies, some constant exchange rates would have to be applied to each item in the balance sheet. At this moment, the value of subsidiaries varies on account of applying current or historical exchange rate. (3) Economic Exposure Economic exposure measures the degree to which exchange rate movements affect a firm's value. So, economic exposure depends on the operations of the firm, but is much more important and complicated than transaction exposure or translation exposure in terms of long-term management of firms. However, it is very difficult and complex to distinguish the difference between transaction exposure, translation exposure and economic exposure. 5 So in this paper, economic exposure will be regarded as the combination of transaction exposure and translation exposure. 6 III. Data Set The data for the empirical research in this paper contains five sets of variables: weekly is structural change before and after the crisis. 
For that purpose, each period is designed to have three sub-periods that are pre-crisis (Dummy crisis variable equals to zero), in-crisis (Dummy crisis variable equals to one) and post-crisis (Dummy crisis variable equals to two), respectively. Firm size: Large firms are expected to be more significantly exposed to exchange rate movements, so firm size was chosen as an explanatory variable. Total market value was calculated with the data from KSE by multiplying the number of outstanding shares with market price, and the companies' size were sorted by total market value. We define the top 10% companies of total market value as a large firm and the bottom 10% as a small firm. Industry variables: To identify the determinant of exchange exposure, industry variables were considered with the expectation that all the industry does not have the same level of exposure. Each company was put into twenty-one industries classification, 10 and the industry codes are presented in Table 6. The economic exposure is a coefficient (β 1 ) of exchange rate and can be obtained from following regression model, where R t is the return on the individual firm's rate of return, ΔS t is the percentage change of exchange rate, and R mt is the return on market portfolio and e t is the error term. β 1 refers to the economic exposure coefficient explaining relationships between change of exchange rate and value of firm. However, in this regression model, it raises interaction problem between the market return and the exchange rate and it reduces statistical significance. 11 The result of the Usual regression Table 1 and Figure 3 summarize the sign and magnitude of the KRW-USD exchange exposure profile using usual regression model. 79 firms out of 790 (10%) are significantly exposed to movements of KRW-USD exchange rate at the 10% level. And among the firms with significant coefficients, 59 firms (75%) (2) Newly devised econometric model To mitigate this interaction problem between market return and exchange rate, the exposure coefficient β 1 was estimated from newly devised regression model. In the new econometric model, ∧ ε it is used as an independent variable. The below shows the process of deriving newly adjusted regression model. First process is the estimation of coefficients through simple but intuitive Ordinary Least Squares (OLS). The next is the calculation of the residual ( ∧ ε it ) from the below numerical formula, where ∧ ε it is the remainder that exclude foreign exchange rate factors from the factors that have effect on market return. The final step is to put the calculated error terms into the model as dependent variables and regress them using OLS. And the coefficient of exchange rate change can be said to be the degree of exchange rate exposure. The result of newly devised regression For the KRW-euro exchange exposure, 55 of 791 firms (7%) are significantly exposed to exchange rate movements at 1%, 112(14%) at 5% level and 171(22%) at 10% significance level. Compared to KRW-USD exposure and KRW-JPY exposure, the number of significant coefficients of KRW-euro exchange rate is small and also the magnitude of exposure is relatively small. Totally different thing is that most of KRW-euro exposures have positive signs. Figure 4b shows that most exposure coefficients are concentrated on positive signs. That means appreciation of Korean won against euro leads to increase of Korean firms' value. 
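The two-step estimation described above can be sketched as follows. This is a hedged Python illustration of one common reading of the procedure, in which the market return is first purged of its correlation with the exchange rate and the residual is then used alongside the exchange rate change in the firm-level regression; the return series are simulated, and the exact specification (including the ambiguous wording about dependent versus independent variables) may differ from the author's.

```python
import numpy as np

def ols(y, X):
    """OLS with an intercept; returns the coefficient vector (intercept first)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def fx_exposure(firm_ret, mkt_ret, fx_change):
    """Two-step exposure estimate: orthogonalise the market return against the
    exchange rate change, then regress the firm return on the FX change and the
    orthogonalised market factor. Returns the FX exposure coefficient."""
    g = ols(mkt_ret, fx_change)                        # step 1: R_m ~ dS
    resid_mkt = mkt_ret - (g[0] + g[1] * fx_change)    # epsilon-hat: market purged of FX
    b = ols(firm_ret, np.column_stack([fx_change, resid_mkt]))  # step 2
    return b[1]                                        # coefficient on the FX change

# Hypothetical weekly returns for illustration
rng = np.random.default_rng(0)
fx = rng.normal(0, 0.02, 200)
mkt = -0.5 * fx + rng.normal(0, 0.02, 200)
firm = 0.8 * mkt - 0.6 * fx + rng.normal(0, 0.03, 200)
print(round(float(fx_exposure(firm, mkt, fx)), 3))
```

With data simulated this way the estimate recovers the firm's total sensitivity to the exchange rate (about -1.0 here), which is the quantity the exposure coefficient is meant to capture.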
Even though 22% significance is not really small, compared to previous results, value of Korean firms is less affected by euro. It might be trade volume and portion of investment in EURO is increasing but still small. are significantly exposed to the rate's movement at 1% significance level. 255(32%) firms and 353(45%) were proven to be significant at the 5% and 10% significance level, respectively. It also has negative sign on exposure coefficients, but as shown in Figure 4c, magnitude of exposure is less severe than KRW-USD exposure. With the results focusing only on the number of companies having significant exposure, Dollar exposure can be said to be the most significant among the three exchange exposures and are negatively affected by the depreciation of the KRW against USD. Search for the extent and sign of significant exposure coefficient was done, but it is relevant to consider total and insignificant exposure coefficients altogether. The figures indicate that the magnitude of exposure -6.11429 ~ 7.46443 in KRW-USD, -1.12613 ~ 1.175248 in KRW-euro, and -2.02008 ~ 3.38385 in KRW-JPY. Korean and other Asian Crises, it is out of interest in this paper. We more focus on structural change before and after the crisis on exchange exposures. In this section the question "Is there any structural change before and after crisis in exchange exposure?" will be answered. To find out structural change before and after crisis, dummy variable was put into the newly adjusted model and get new regression model. where the dummy variable D equals to 0 for pre-crisis period (from January 1997 to October 1997), and 1 for in-crisis period (from November 1997 to December 1998) and 2 for post-crisis period (January 1999 to December 2000). The full period was divided into three sub-periods on the basis of change of exchange rate and market return. In Figure 2a and 2c, KRW-USD, euro and JPY start to fluctuate abruptly from November 1997 and Figure 5 also shows the lowest KOSPI in In-crisis period, thus in-crisis period start in that month. Result of newly devised model with crisis variable. Table 3 and Figure 6 report the estimates of KRW-USD exchange exposure for the three sub-periods and distribution of exposure coefficients, respectively. The first thing to note is the change of sign on exposure coefficients and its implication that there actually was some structural change before and after crisis. Before the crisis, number of firms with positive exposure coefficients was 304(40%). However, after the crisis, the number went down to 159(21%). That can be interpreted that before the crisis, depreciation of KRW affected the value of 40% firms negatively, and after crisis most value of Korean firms are affected positively by depreciation of Korean won. This can also be explained by numerical evidence in Table 3b But there is some difference in number due to the size of sample selection. Table 3a Estimates of KRW-USD Exposure Coefficients β 3 with Crisis Variable 581(79%) Parenthesis is percentage of positive and negative. V. Determinant of Exchange Exposure In the previous section, it has been proved that the estimated exposure coefficients varied substantially across companies. The purpose of this section is to identify whether exchange rate exposure is related to the size of firms and industries that the firms are in. Many previous researchers 13 empirically studied the link between exchange exposure and firm size and industry. Some study found systematic relationship but some didn't. 
But we expect that most Korean industries that depend on exports and imports would be highly exposed to exchange rate movements. Each firm was divided by size into two groups, small and large, with total market value as the criterion. Since the industry code of Korea was revised on November 6, 2000 by the KSE, the market price and the number of listed shares outstanding on November 3, 2000 were used to keep consistency. Large firms are the companies whose market value lies in the top 10 percent, and small firms are the companies in the bottom 10 percent band. In Table 4 and Table 5, all exposure coefficients are sorted by firm size and industry level.
Even though small firms and large firms have the same negative sign and similar magnitude, that is, both are negatively affected by depreciation of the KRW against the USD, size still plays a significant role in explaining the KRW-USD exchange exposure. Table 5 reports that 55 out of 70 large companies (83%) are exposed to KRW-USD exchange rate movements, compared with 26 out of 70 small companies (37%), at the 10 percent significance level. This can be interpreted to mean that the bigger the firm, the more exposed it is to exchange rate movements, so exchange exposure has a positive relationship with firm size. To verify that larger firms are more exposed to exchange rate movements, we conduct a two-tailed t-test using the absolute mean of the exposure coefficients. As the simple mean offsets positive and negative exposure coefficients, it is appropriate to use the absolute mean to examine the relationship between the magnitude of exposure and firm size. The null hypothesis (H0) is that the mean for small firms equals the mean for large firms, and the alternative hypothesis (H1) is that they are not equal. If μS is the small firms' mean absolute foreign exchange exposure and μL is the large firms' mean absolute foreign exchange exposure, the hypotheses can be restated as H0: μS = μL versus H1: μS ≠ μL. The test rejects the null hypothesis; therefore, foreign exchange exposure differs by firm size. This result is contrary to earlier work that did not find a systematic relationship between foreign exchange exposure and firm size.
The industry-level results reveal that as much as 25% of the fishing industry is exposed to KRW-USD exchange rate movements and, interestingly, that the communications industry is entirely unexposed to exchange rate movements, while the electricity & gas industry is wholly exposed to exchange rate movements owing to its huge foreign debt and imports.
Figure: Year versus amount invested (Mill USD) and number of firms with exposure, 1990-2000.
13. Dominguez, Chang-Young Chung, Byung-Joo Lee, Gordon M. Bodnar, and M. H. Franco Wong.
14. "We find that exposure is not systematically related to firm size."
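The size comparison described above reduces to a two-sample t-test on absolute exposure coefficients; a minimal sketch, using placeholder coefficients rather than the paper's estimates, follows.

```python
# Two-sample t-test on the absolute exposure coefficients of small vs. large
# firms, mirroring the test described above (H0: μS = μL, H1: μS ≠ μL).
# The coefficient values are simulated placeholders, not the paper's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
small_beta = rng.normal(0.0, 0.8, 70)   # exposure coefficients of 70 small firms
large_beta = rng.normal(0.0, 1.6, 70)   # exposure coefficients of 70 large firms

t_stat, p_value = stats.ttest_ind(np.abs(small_beta), np.abs(large_beta),
                                  equal_var=False)
print(f"t = {t_stat:.2f}, two-tailed p = {p_value:.4f}")
# A small p-value rejects H0 and indicates that the magnitude of exchange
# exposure differs between small and large firms.
```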
Incremental cardiovascular costs and resource use associated with diabetes: an assessment of 29,863 patients in the US managed-care setting Background Patients with type 2 diabetes are at increased risk of cardiovascular events, and there is an associated economic burden attached to this risk. We conducted a retrospective claims database analysis to evaluate incremental cardiovascular costs in diabetic versus non-diabetic patients hospitalized for a cardiovascular event. Methods Patients hospitalized for a cardiovascular event between January 1, 2001 and June 30, 2005 were identified from a large US managed-care population. Diabetic patients were identified by evidence of type 2 diabetes in the 12 months prior to the index hospitalization. Direct medical costs and resource use - including inpatient expenditures (for the index and first recurrent hospitalizations), as well as outpatient, laboratory, and pharmacy expenditures (during the 3-year follow-up period) - were determined for patients with or without diabetes. Results Of the 29,863 patients identified with a cardiovascular hospitalization, 5,501 patients (18.4%) had a history of diabetes in the pre-index period (mean age, 57.8 years; 42.1% female). The overall mean follow-up period was 22.8 months. The incidence of subsequent cardiovascular events in the first year of follow-up was significantly higher for patients with diabetes compared with non-diabetic patients for all types of cardiovascular events except angina. Compared with non-diabetic patients, patients with diabetes had similar mean direct medical costs per patient for the index cardiovascular hospitalization ($17,435 versus $16,917; P = 0.09), and the first recurrent cardiovascular hospitalization ($18,488 versus $17,481; P = 0.2), yet higher mean total direct medical costs per patient for cardiovascular events during follow-up years (Year 1: $8,805 versus $6,982; Year 2: $13,860 versus $10,056; Year 3: $16,149 versus $12,163; all P ≤ 0.0002). The cost difference between diabetic and non-diabetic patients remained significant after adjusting for age, gender and other potential confounders in multivariate regression analysis. The mean (SD) total period of inpatient cardiovascular hospitalization after 3 years of follow-up was 3.3 (12.4) days for patients with diabetes compared with 1.8 (5.8) days for non-diabetic patients (P < 0.0001). Conclusion Diabetic patients hospitalized for a cardiovascular event incur higher costs for cardiovascular care than their non-diabetic counterparts. This analysis of the incremental cardiovascular cost and resource use provides the basis for greater accuracy and precision when modeling the economic value of initiatives aimed at reducing cardiovascular morbidity in patients with diabetes mellitus. Introduction Approximately 17.5 million people in the United States have recognized diabetes, with an estimated 1 million new cases diagnosed each year [1]. Diabetes is a wellestablished risk factor for future cardiovascular events [2], including coronary heart disease (CHD), ischemic stroke, and peripheral vascular disease. Of the ~284,000 deaths in 2007 attributed to diabetes and its associated complications, nearly 65% were due to cardiovascular or cerebrovascular causes [1]. Indeed, the presence of diabetes can increase the risk of CHD and stroke by as much as 5-fold [3][4][5][6], in addition to conferring a worse prognosis for survival from a cardiovascular event [7][8][9][10]. 
As such, diabetes is currently regarded as a CHD risk equivalent [11,12] and recommendations for aggressive risk factor management in the diabetic population reflect this high-risk status [13]. Given the prevalence of diabetes and the morbidity and mortality associated with it, the burden the disease imposes on the US economy is considerable: the total cost of diabetes in 2007 was estimated at $116 billion in medical expenditures and an additional $58 billion in lost productivity [1]. Patient-level estimates of the cost of cardiovascular care in diabetic populations are critical for pharmacoeconomic modeling and guiding decisions on disease management in this cohort. Although previous studies have investigated the impact of cardiovascular disease (CVD) on health-care costs for diabetes [14][15][16][17][18][19][20], few contemporary analyses have determined incremental cardiovascular costs among diabetic versus non-diabetic patients with pre-existing CVD, particularly those costs associated with specific types of cardiovascular events such as coronary artery bypass graft (CABG) procedures, myocardial infarction (MI), and ischemic stroke. Therefore, using claims data representative of a large US managed-care population, we assessed the direct medical costs and resource use associated with initial and subsequent cardiovascular episodes in patients with or without diabetes hospitalized for a cardiovascular event. Data source and claims This retrospective cohort analysis assessed transactional billing records from the PharMetrics Patient-Centric Database, which contains fully adjudicated medical and pharmaceutical claims for > 50 million unique patients from > 90 health-care plans across the United States. The database includes both inpatient and outpatient diagnoses (based on International Classification of Diseases, 9th Revision, Clinical Modification [ICD-9-CM] codes) and procedures (based on Current Procedural Terminology 4 and Health Care Financing Agency Common Procedure Coding System [HCPCS] codes), in addition to both retail and mail-order prescription records (including Generic Product Identifier [GPI] codes). Both paid and charged amounts are available for all services rendered, as well as dates of service for all claims. Additional data elements include demographic variables (eg, age, gender, geographic region), product type (eg, Health Maintenance Organization, Preferred Provider Organization), payor type (eg, commercial, selfpay), provider specialty, and start and stop dates for plan enrollment. Only health-care plans that submit data for all plan members are included in the database, ensuring complete data capture and representative samples. Contributions are also subjected to a series of data quality checks to ensure a standardized format and minimal error rates. Study period The overall study period was from January 1, 2001 to June 30, 2006. The patient identification period was from January 1, 2001 to June 30, 2005. The first hospitalization with relevant diagnosis or procedure codes for a cardiovascular event during the identification period was defined as the index hospitalization. The pre-index period was defined as the 12 months prior to the index hospitalization admission date. The post-index (follow-up) period began 1 day after the discharge date of index hospitalization. The follow-up period was stratified to define 3 cohorts of patients with 1, 2, or 3 years of continuous enrollment after the index hospitalization discharge date. 
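To make the cohort definitions concrete, the following sketch shows how an index hospitalization and its 12-month pre-index window might be derived from claims rows. The column names and toy records are illustrative only and do not reflect the PharMetrics schema.

```python
# Minimal sketch of deriving the index hospitalization (first complete CV
# hospitalization) and flagging diabetes from claims in the 12-month
# pre-index window. All column names and rows are hypothetical examples.
import pandas as pd

claims = pd.DataFrame({
    "patient_id":     [1, 1, 1, 2, 2],
    "admit_date":     pd.to_datetime(["2000-07-01", "2001-03-10", "2002-01-05",
                                      "2001-06-20", "2003-02-11"]),
    "discharge_date": pd.to_datetime(["2000-07-03", "2001-03-14", "2002-01-09",
                                      "2001-06-25", "2003-02-15"]),
    "is_cv_event":       [False, True, True, True, True],
    "is_diabetes_claim": [True, False, False, False, False],
})

# Index hospitalization = first CV hospitalization per patient.
index_hosp = (claims[claims.is_cv_event]
              .sort_values("admit_date")
              .groupby("patient_id", as_index=False)
              .first())

# Diabetes flag: any diabetes claim in the 12 months before the index admission.
merged = claims.merge(index_hosp[["patient_id", "admit_date"]],
                      on="patient_id", suffixes=("", "_index"))
pre_index = (merged.is_diabetes_claim
             & (merged.admit_date < merged.admit_date_index)
             & (merged.admit_date >= merged.admit_date_index - pd.DateOffset(months=12)))
print(merged.loc[pre_index, "patient_id"].unique())   # -> [1]
```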
Follow-up terminated at the earliest of patient disenrollment or June 30, 2006. Patients who died during the study were included in the analysis regardless of length of follow-up.
Identification of study cohorts
The patient selection procedure is shown in Figure 1. Patients with a complete hospitalization (admission and live discharge within the identification period) for a cardiovascular event between January 1, 2001 and June 30, 2005 were identified from the claims database on the basis of ICD-9-CM diagnosis codes and CPT-4 procedure codes (see Additional file 1). Inclusion in the final study cohort required patients to be continuously enrolled for the entire pre-index and follow-up periods, to not have a cardiovascular-related claim in the pre-index period, to have an index hospitalization stay ≤ 27 days, and to be ≥ 18 years of age at index hospitalization. Diabetic patients were identified by evidence of type 2 diabetes in the pre-index period (diagnosis of type 2 diabetes or antidiabetic medication use; Additional file 1).
Cost analyses and resource utilization
All medical, laboratory, and pharmacy claims were assessed for the identified patients for the period from January 1, 2000 to June 30, 2006. Direct medical costs included any cost incurred during an inpatient hospitalization. Subsequent cardiovascular events following discharge from the index hospitalization were assessed by identifying complete hospitalizations during the follow-up period with the relevant diagnosis or procedure codes for a cardiovascular event (Additional file 1) in order to calculate the incidence of subsequent cardiovascular events during the first year of follow-up.
Figure 1. The study cohort identification procedure, with the final study cohort stratified by history of type 2 diabetes in the 12 months prior to hospitalization; the figure gives an overview of the patient identification procedure with reasons for inclusion/exclusion through the selection process. CV, cardiovascular.
Statistical methods
Mean and SD were calculated for continuous variables; frequency was calculated for categorical variables. Continuous outcome variables were compared using a t-test of means, and categorical outcome variables were compared using a chi-square test. Multivariable regression analysis was performed using generalized linear models (GLM) with total cardiovascular-related costs after 1 year of follow-up as the dependent variable, adjusting for age, gender, geographic region, insurance type, physician specialty, and relevant comorbidities, including diabetes status. Statistical analyses were conducted using the SPSS® (SPSS Inc, Chicago, IL) and SAS® (SAS Institute Inc, Cary, NC) software suites.
Study cohorts
A total of 198,961 patients were identified as having a hospitalization for a cardiovascular event between January 1, 2001 and June 30, 2005 (Figure 1); of these, 29,863 patients were eligible for inclusion in the final study cohort. Exclusion from the study was due primarily to the requirement of continuous enrollment for the entire pre-index period and for 360 days in the post-index period, and of having no cardiovascular-related claim in the pre-index period. Within this overall study population, 5,501 patients (18.4%) were identified as having a history of type 2 diabetes in the 12 months prior to hospitalization (Figure 1).
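The multivariable cost model described under Statistical methods above was fitted in SPSS/SAS; as a hedged illustration, an analogous GLM can be sketched in Python on simulated data. The Gamma family with a log link used below is a common choice for right-skewed cost data and is an assumption, not the authors' reported specification.

```python
# Illustrative GLM of 1-year cardiovascular costs adjusted for diabetes status
# and demographics, on simulated data. Family/link are assumed (Gamma, log);
# the covariate list is abbreviated relative to the paper's model.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "diabetes": rng.integers(0, 2, n),
    "age":      rng.normal(58, 10, n),
    "female":   rng.integers(0, 2, n),
})
mu = np.exp(8.8 + 0.25 * df.diabetes + 0.01 * df.age)
df["cv_cost_year1"] = rng.gamma(shape=2.0, scale=mu / 2.0)

glm = smf.glm("cv_cost_year1 ~ diabetes + age + female", data=df,
              family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(glm.params)   # log-scale coefficients; exp(coef) gives cost ratios
```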
Baseline demographics and clinical characteristics
The baseline demographics and clinical characteristics of the study cohorts are shown in Table 1. Overall, the patient population primarily consisted of commercially insured patients belonging to either a Health Maintenance Organization or a Preferred Provider Organization health-care plan. The mean follow-up period was 22.8 months. Compared with non-diabetic patients and the study population overall, the cohort of patients with diabetes was slightly older, comprised more women, and had a higher incidence of comorbidity on the basis of the Charlson Comorbidity Index score and the prevalence of hypertension and dyslipidemia (Table 1). As such, patients with diabetes were more likely to be taking lipid-lowering and antihypertensive medications. The cardiovascular event type at index hospitalization was similar across the study cohorts, with the exception of MI (lower incidence in diabetic patients) and heart failure (higher incidence in diabetic patients).
Impact of diabetes on incidence of subsequent cardiovascular events
The proportion of patients in each study cohort experiencing a subsequent cardiovascular event during the first year of follow-up is shown in Table 2. Patients with diabetes had a higher incidence of each specific cardiovascular event at 1 year of follow-up than patients without diabetes (Table 2). The incidence of subsequent cardiovascular events in the first year of follow-up was significantly higher for patients with diabetes compared with non-diabetic patients for all types of cardiovascular event excluding angina (all P ≤ 0.03).
An analysis of the specific type of cardiovascular event underlying the index hospitalization showed that, compared with those without diabetes, patients with diabetes had higher costs for hospitalizations associated with CABG procedures, MI, angina, other ischemic heart disease (e.g. coronary atherosclerosis), and peripheral vascular disease (Table 3). Mean total cumulative direct medical costs per patient for cardiovascular events across 3 years of follow-up are shown in Table 5 and Figure 2. With the exception of costs associated with CABG procedures in the second year of follow-up, diabetic patients incurred higher costs for follow-up care for all cardiovascular event types assessed than patients without diabetes (Table 5). The highest follow-up costs for both study cohorts were incurred for MI and other cardiovascular procedures. Follow-up costs were generally driven by inpatient hospitalization costs (length of stay, LOS) as well as costs associated with outpatient care (eg, physician office visits, imaging tests), particularly in diabetic patients.
Impact of diabetes on health-care resource use
Patients with diabetes had a mean (SD) LOS for the index hospitalization of 4.8 (3.5) days compared with 4.3 (3.0) days for non-diabetic patients (P < 0.0001) (Table 3). For both study cohorts, the longest periods of index hospitalization were associated with CABG procedures (diabetics: 10.0 [3.9] days; non-diabetics: 9.0 [3.6] days), followed by hospitalizations with a diagnosis of peripheral vascular disease (Table 3). For the first recurrent cardiovascular hospitalization, mean (SD) LOS was 5.6 (12.6) days for patients with diabetes, compared with 4.6 (7.3) days for non-diabetics (P = 0.0008) (Table 4).
Recurrent hospitalizations with the longest periods of inpatient stay for both study cohorts were those with a diagnosis of transient ischemic attack or other cerebrovascular accident and those involving CABG procedures (Table 4). Mean LOS for cardiovascular hospitalizations across 3 years of follow-up is shown in Table 5. With the exception of inpatient stays with a diagnosis of peripheral vascular disease in the third year of follow-up, where data on only 2 patients with diabetes were available, diabetic patients had longer periods of follow-up hospitalization for all cardiovascular event types assessed than patients without diabetes (Table 5). The longest periods of follow-up hospitalization for both study cohorts were those associated with a diagnosis of heart failure (Table 5). Overall, the mean (SD) total period of inpatient cardiovascular hospitalization after 3 years of follow-up was 3.3 (12.4) days for patients with diabetes compared with 1.8 (5.8) days for non-diabetic patients (P < 0.0001).
Discussion
In this retrospective claims database analysis, we provide quantitative documentation of the differential costs associated with managing patients with diabetes hospitalized for a cardiovascular event within US managed-care settings. Our observation that the diabetic population incurs higher direct medical costs for cardiovascular care during the initial hospitalization and follow-up period than their non-diabetic counterparts is not new; however, in light of advances in medical care and the pertinence of its setting (within a US managed-care population), this contemporary assessment is relevant. We confirm that patients with diabetes also experience longer periods of inpatient cardiovascular hospitalization than those without diabetes. The higher costs and resource use in the diabetic cohort likely reflect the combination of a higher incidence of subsequent cardiovascular events observed in this population compared with patients without diabetes as well as a higher overall comorbidity index. In addition to evaluating the incremental costs and resource use associated with cardiovascular events overall, this study is the first of its kind to provide an economic assessment of this nature for specific cardiovascular event types such as CABG procedures, MI, and ischemic stroke. Excess medical costs for the initial cardiovascular hospitalization in the diabetic cohort were primarily driven by large cost differentials for those hospitalizations associated with CABG procedures, other ischemic heart disease (eg, coronary atherosclerosis), and peripheral vascular disease, CVD types commonly associated with diabetes. During the follow-up period, diabetic patients experienced a higher incidence of subsequent cardiovascular events than the non-diabetic cohort for each event type (Table 2). As such, medical costs for cardiovascular care during the follow-up period were consistently higher in the diabetic cohort. All cardiovascular event types examined in this analysis contributed to excess follow-up costs in patients with diabetes, with cost differentials between the diabetic and non-diabetic cohorts increasing with each successive year of follow-up. Inpatient hospitalization costs and costs associated with outpatient care, such as physician office visits and imaging tests, were the primary drivers of follow-up cardiovascular costs irrespective of diabetic status. Taking into account expenditures for the initial cardiovascular hospitalization plus all cardiovascular events during follow-up, our findings are broadly consistent with previous analyses of cardiovascular costs in diabetes [14-16,19,20].
For example, a recent model of the lifetime costs of complications associated with diabetes found that macrovascular disease accounted for 85% of cumulative costs in the first 5 years and 52% of costs over 30 years [19]. Furthermore, we and others [14,15,18] have shown that the cost of CVD is notably higher in diabetic patients compared with similar patients without diabetes, likely due to the severity of cardiovascular events experienced by diabetic individuals, as indicated by a worse prognosis for survival [7-10]. A recent study [18] assessing annual medical-care costs in diabetic patients from a large managed-care organization found that patients with both diabetes and CVD had higher medical-care costs than non-diabetic patients with CVD ($10,172 versus $6,396 per patient per year). These estimates are somewhat lower than the costs obtained in our analysis, where annual costs per patient for follow-up cardiovascular events were up to $16,149 for diabetic patients and $12,163 for non-diabetic patients in the third year of follow-up. The higher costs reported in our analysis in part reflect the fact that costs are from 2006 (versus 1999 costs in the previous study). Also, our analysis is based on actual costs paid by health-care plan providers in the PharMetrics database, as opposed to applying standard unit costs to resource-use profiles in a not-for-profit health maintenance organization, which may have the effect of underestimating the cost of care of these patients relative to our setting. Another study estimating direct medical costs associated with an initial event and 1 year of follow-up care for specific cardiovascular complications in diabetic patients found that the total event cost was $30,364 for MI; $6,024 for angina; $40,209 for ischemic stroke; and $3,874 for TIA [20]. Although closer to our assessment of cardiovascular costs in diabetic patients, these costs are still generally lower than those obtained in our analysis, again likely due to the use of standard unit costs from the year 2000.
Figure 2. Total mean direct medical costs per patient over 3 years of follow-up for all cardiovascular events (left-hand graph) and for selected cardiovascular event types (right-hand graph).
As a retrospective analysis of a health-care plan claims database, this study is not without limitations. The use of diagnosis, procedure, and medication codes to identify patients and assess downstream costs and resource use relies on the accurate assignment of these codes to patient records in order to faithfully capture an individual's medical history. Contributions to the database used in this analysis are subjected to data quality checks to ensure minimal error rates. Furthermore, only health-care plans that submitted data for all plan members are included in the database, ensuring complete data capture and representative samples. Another potential limitation of this study is that the analysis of direct medical costs restricts the interpretation of the results to a payor's perspective and to US managed-care populations. However, the records in this database are representative of the national, commercially insured population on a variety of demographic measures including age, gender, and plan type. The data are also longitudinal, with an average member enrollment time of 2 years. Hence, the results of this analysis may be generalized to similar managed-care populations at a nation-wide level.
Conclusion The results of the current analysis demonstrate that the incidence of subsequent events in the first year following hospitalization for an initial cardiovascular episode is significantly higher for patients with diabetes compared with non-diabetic patients. As a consequence of this increased cardiovascular burden, diabetic patients hospitalized for a cardiovascular event incur higher costs for cardiovascular care than their non-diabetic counterparts. The real-world cost estimates described here will aid the development of future economic models that assess the impact of healthcare initiatives aimed at this growing diabetic population.
DRPPM-EASY: A Web-Based Framework for Integrative Analysis of Multi-Omics Cancer Datasets Simple Summary With the influx of multi-omics profiling, effective integration of these data remains the bottleneck for omics-driven discovery. Thus, we developed DRPPM-EASY, an R Shiny framework for integrative multi-omics analysis of cancer datasets. Our tool enables the exploration of multi-omics data by providing a simple user interface that minimizes the need for computational experience. Furthermore, the interface can be deployed locally or on a webserver to facilitate scientific collaboration and discovery. Abstract High-throughput transcriptomic and proteomic analyses are now routinely applied to study cancer biology. However, complex omics integration remains challenging and often time-consuming. Here, we developed DRPPM-EASY, an R Shiny framework for integrative multi-omics analysis. We applied our application to analyze RNA-seq data generated from a USP7 knockdown in T-cell acute lymphoblastic leukemia (T-ALL) cell line, which identified upregulated expression of a TAL1-associated proliferative signature in T-cell acute lymphoblastic leukemia cell lines. Next, we performed proteomic profiling of the USP7 knockdown samples. Through DRPPM-EASY-Integration, we performed a concurrent analysis of the transcriptome and proteome and identified consistent disruption of the protein degradation machinery and spliceosome in samples with USP7 silencing. To further illustrate the utility of the R Shiny framework, we developed DRPPM-EASY-CCLE, a Shiny extension preloaded with the Cancer Cell Line Encyclopedia (CCLE) data. The DRPPM-EASY-CCLE app facilitates the sample querying and phenotype assignment by incorporating meta information, such as genetic mutation, metastasis status, sex, and collection site. As proof of concept, we verified the expression of TP53 associated DNA damage signature in TP53 mutated ovary cancer cells. Altogether, our open-source application provides an easy-to-use framework for omics exploration and discovery. Introduction Multi-omics profiling of cancer patient samples and cell lines is becoming a staple of cancer research [1]. These technologies have a high potential for advancing our understanding of tumor biology and, in turn, reveal novel targets for treatment and diagnosis [2,3]. To date, a brief survey of the existing database reveals more than 500K cancer samples from GEO [4,5] and 90K pre-computed cancer expression data from recount3 [6]. Additionally, there are close to 4K mass spectrometry profiling of cancer patient samples from the Clinical Proteomic Tumor Analysis Consortium (CPTAC) data [7]. Large consortium projects, such as the Cancer Cell Line Encyclopedia (CCLE), have also generated many high-throughput datasets, such as transcript expression, RNA splicing, proteome profiling, drug response, and genetic screening data [8]. With the influx of multi-omics profiling, effective integration of these data remains the bottleneck for omics-driven discovery. The development of a simple user interface that minimizes the need for computational experience is of high interest to the community [9]. Several web-based tools are now available to perform general expression analysis of proteomics (e.g., POMAShiny [10]) and transcriptome data (e.g., TCC-GUI [11], START App [12], and GENAVi [13]). Multi-omics approaches for network analysis (e.g., MiBiOmics [14] and JUMPn [15]) are also available as a Shiny app. 
Web tools also exist for analyzing large datasets from the Gene Expression Omnibus (GEO) (e.g., shinyGEO [16], ImaGEO [17]) and the cancer dependency map (e.g., shinyDepMap [18]). However, these applications tend to have limited features for analyzing complex heterogeneous phenotypes in cell lines and patients, such as mutation of genomic drivers, cell line characteristics, sex, or metastasis status. Additionally, none of these tools provides a streamlined pipeline to assess similarities and differences between omics datasets, such as transcriptome and proteome comparisons, or comparisons between mouse and human cancer models. To address these challenges, we have developed DRPPM-EASY, a Shiny app built with the open-source R programming language that can be run as a local instance or deployed online. Here, our app is divided into two major modules: (1) a one-stop module for gene expression analysis and (2) an integrative framework for comparing omics data. As a proof of concept, we further implemented an app for querying and automating the extraction of sample groupings from CCLE data for downstream analysis. The source code of our application can be downloaded from https://github.com/shawlab-moffitt/DRPPM-EASY-ExprAnalysisShinY (accessed on 1 February 2022).
Module 1. DRPPM-EASY App Implementation
The DRPPM-EASY app is a Shiny web app built with the open-source R programming language (v4.1.0). The Shiny framework leverages existing RNA-seq analysis packages to put together a one-stop analysis framework (Figure 1A) for data exploration (Table 1), differential expression analysis (Table 2), and gene set enrichment analysis (Table 3). The data exploration section allows the user to perform unsupervised and supervised hierarchical clustering. Clustering can be further evaluated with different linkage methods (i.e., Ward, average, complete, centroid) or variable-gene ranking strategies (mean absolute deviation or variance). Relative gene expression can be examined across sample groups with a boxplot or scatter plot, for example to check the expression of a positive control associated with the experimental design. Differential gene expression is performed by LIMMA [19] and can be visualized as a volcano plot and MA-plot. The list of differentially expressed genes can be further examined by pathway enrichment analysis (Figure 1A). Finally, the user can perform gene set enrichment analysis (GSEA), which ranks the genes based on the signal-to-noise ratio between the user-selected phenotypes to examine enriched genes associated with a gene set signature (Figure 1A). A complementary strategy to estimate enrichment scores for individual samples is single-sample GSEA (ssGSEA), implemented in the GSVA library [20]. Finally, these single-sample enrichment scores can be downloaded as a tab-delimited table or visualized as a boxplot.
Figure 1. (A) The pipeline takes as input an expression matrix, a sample meta-file specifying the sample grouping, and a gene set database for GSEA. A GSEA enriched-signature table is generated as a preprocessing step and used as input to the R Shiny app. The app provides two modes of exploring the data: (1) general differential gene expression analysis and (2) gene set enrichment analysis. The results of the analysis can be downloaded as output tables. (B) Schematic of the integrative analysis with three major features for pathway signature comparison; the app has three modes of integrative analysis: (1) scatter plot mode, (2) correlation plot mode, and (3) paired multi-omics analysis.
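As a rough illustration of the data-exploration step in Module 1 (variable-gene ranking followed by hierarchical clustering of samples), the following Python sketch mirrors the equivalent R operations on a simulated matrix; the gene counts and group structure are invented for the example, and the app itself performs these steps in R.

```python
# Rank genes by a variability measure (MAD here), keep the top genes, and
# cluster samples with a chosen linkage method, as described for Module 1.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import median_abs_deviation

rng = np.random.default_rng(4)
expr = rng.normal(size=(1000, 10))      # genes x samples (log-scale, illustrative)
expr[:50, 5:] += 3                      # a block of genes separating two groups

# Variable-gene ranking: median absolute deviation across samples.
mad = median_abs_deviation(expr, axis=1)
top = expr[np.argsort(mad)[::-1][:500], :]

# Hierarchical clustering of samples; 'ward', 'average', 'complete', or
# 'centroid' can be swapped in, mirroring the options in the app.
Z = linkage(top.T, method="ward")
print(fcluster(Z, t=2, criterion="maxclust"))   # sample group labels
```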
Module 2. The DRPPM-EASY-Integration App Implementation
The DRPPM-EASY-Integration app provides an explorer for the user to upload normalized RNA expression, proteomic quantification, or ssGSEA scores to evaluate the potential relationships between these features (Figure 1B). These can be evaluated by either a 1:1 scatter plot or a 1:n ranking of Spearman correlation rho values (Table 4). The integrative app also allows the user to perform concurrent differential expression analysis and integration of two expression matrices, for example to compare RNA and protein expression matrices. The fold change can be compared between the two datasets (Table 4), and differentially expressed genes can be compared by reciprocal GSEA or ssGSEA. The direct overlap between the differentially expressed genes is shown as a Venn diagram and further compared to existing gene set databases by Fisher's exact test, Cohen's kappa score, and the Jaccard index.
Installation and User Guide
The source code and user guide are available for download on the project's GitHub page. The GitHub page includes the list of individual R packages and their versions, along with an installation script for all package dependencies.
RNA Sequencing Analysis
USP7 samples were prepared as described in Shaw et al. [21]. Briefly, the human T-ALL cell line Jurkat (ATCC) was transduced with USP7 shRNA lentivirus and sorted for GFP-positive cells or selected with puromycin. RNA samples were isolated using the RNeasy Mini Kit (QIAGEN) and subjected to paired-end 2 × 151 base-pair RNA-seq (Illumina). Ten Jurkat samples, of which 6 were treated with shRNA and 4 were treated with a scrambled RNA, were profiled by RNA-seq. RNA-seq data were processed by a custom pipeline (WRAP, https://github.com/gatechatl/DRPPM_Example_Input_Output/tree/master/WRAP:Wrapper-for-my-RNAseq-Analysis-Pipeline, accessed on 1 August 2021). RNA-seq reads were aligned using the STAR 2.7.1a aligner [22] in two-pass mode to the human hg38 genome build using gene annotations provided by the Gencode v31 gene models. The read count for each gene was obtained with HTSeq [23]. Reads were normalized to fragments per kilobase of transcript per million mapped reads (FPKM) for each gene.
Whole Proteomics Mass Spectrometry and Data Analysis
The 10-plex TMT-labeled mass spectrometry experiment was performed with a previously published protocol with slight modification [24,25] (see Supplementary Method and Supplementary Figure S3 for the experimental design). Protein from each sample was digested with trypsin (Promega). The TMT-labeled samples were mixed equally, desalted, and fractionated on an offline HPLC (Agilent 1220) using basic pH reverse-phase liquid chromatography (pH 8.0, XBridge C18 column, 4.6 mm × 25 cm, 3.5 µm particle size, Waters). In total, 20 fractions were derived, and the eluted peptides were ionized by electrospray ionization and detected by an inline Orbitrap Fusion mass spectrometer (Thermo Scientific, Waltham, MA, USA). The MS/MS raw files were processed by the tag-based hybrid search engine JUMP [26]. The data were searched against the UniProt human database concatenated with a reversed decoy database for evaluating the false discovery rate.
Searches were performed using a 25 ppm mass tolerance for precursor ions and 25 ppm mass tolerance for fragment ions, fully tryptic restriction with two maximal missed cleavages, three maximal modification sites, and the assignment of a, b, and y ions. TMT tags on lysine residues and N-termini (+229.162932 Da) were used for static modifications, and Met oxidation (+15.99492 Da) was considered as a dynamic modification. MS/MS spectra were filtered by mass accuracy and matching scores to reduce the protein false discovery rate to approximately 1%. Proteins were quantified by summing up reporter ion counts across all matched PSMs using the JUMP software suite [25,26]. Pre-Processing of the GSEA Analysis To optimize the user experience, we provided a script to pre-generate a GSEA result table (Supplementary Figure S1). The GitHub page contains "Getting Started Scripts", which allows the user to pre-process GSEA results for downstream table visualization. Enriched signature tables can take a long time to process depending on the number of samples or the size of the GMT file provided by the user. At the top of the script, there are key input parameters, such as file path and name to the expression matrix, metadata, and gene set file, as well as the preferred output file path of the output table(s). Additionally, the getting started scripts include a script to generate an R Data list of the ssGSEA analysis. Large gene sets may require several minutes, so pre-computing can facilitate a better user experience. DRPPM-EASY Analysis of RNA-seq and Proteomics Data Use Case 1 We previously identified that USP7 knockdown in T-ALL reduces the activity of E-proteins in a TAL1 dependent manner [21]. To highlight the functions of the DRPPM-EASY application, we re-examined the RNA sequencing profiling data of Jurkat cells after USP7 shRNA silencing. RNA-seq sample grouping was assessed by unsupervised hierarchical clustering (Figure 2A). Notably, altering the clustering methods and the number of (selected) top variables did not change the clustering result, suggesting robust grouping of our data (Supplementary Figure S2). Differential gene expression was then performed by LIMMA and visualized as a Volcano and MA plot. As expected, differential gene expression analysis found downregulated USP7 expression after silencing ( Figure 2B,C). Notably, MYC, NOTCH1, TRIB2, and EOMES were upregulated after USP7 knockdown ( Figure 2B). In the pathway analysis view, enriched pathways can be examined with preloaded gene sets from MsigDB, cell marker, and L1000 drug response. By GSEA and single-sample GSEA, we found USP7 knockdown upregulated with MYC and TAL1 associated targets ( Figure 2D,E) and found downregulated apoptotic gene signature from the Hallmark database ( Figure 2F). Overall, the RNA-seq analysis supports our previous finding that USP7 is implicated in the negative regulation of TAL1-dependent leukemia growth [21]. Next, tandem-mass-tagged proteomics profiling was performed on the same set of samples with RNA-seq profiling ( Figure 3A; Supplementary Figure S3). A joint analysis of the transcriptome and proteome data was carried out by the DRPPM-EASY-Integration pipeline, identifying genes with altered protein abundance and unaltered mRNA levels, such as TRIM27, NOTCH2, UBR3, and USP22 ( Figure 3B). Consistent with our previous observation, TRIM27, a known target of USP7 [27], observed decreased protein abundance in T-ALL cell lines with a haploinsufficient USP7 [21]. 
The altered abundance of UBR3 and USP22 suggests an altered ubiquitin ligase network. Furthermore, our results suggest that USP7 loss of function alters NOTCH2 protein abundance. Of note, NOTCH1 [28] protein abundance was unaltered after USP7 knockdown (Figure 3B). Thus, the precise mechanism by which USP7 drives the NOTCH-associated leukemia signature will need to be carefully examined in future studies.
The DRPPM-EASY-Integration app includes features for assessing the consistency between two datasets. Using the RNA-seq and proteomic data as proof of concept, DRPPM-EASY-Integration found 987 genes consistently upregulated and 622 genes consistently downregulated in both datasets (Figure 3C-E). A connectivity map-inspired strategy [29,30] was applied to compare the consistency between the two datasets using reciprocal enrichment. Specifically, the differentially expressed genes in one dataset were used to derive a gene signature for GSEA to test in the other dataset. For example, differentially expressed proteins (Figure 3F) were applied as a GSEA gene set and tested for enrichment in the transcriptome data (Figure 3G). Similarly, gene sets derived from differentially expressed transcripts (Figure 3C) were tested for enrichment in the proteome data (Figure 3H). We then compared the significance of the overlapping differentially expressed genes against other pathway databases, such as Hallmark and KEGG. The overlap was evaluated by Fisher's exact test, Cohen's kappa, and the Jaccard index. Consistently, the RNA- and protein-derived gene sets overlapped most significantly with each other compared with other gene sets. Moreover, the spliceosome and ubiquitin-mediated proteolysis pathways from KEGG and the unfolded protein response and MYC pathways from Hallmark were consistently enriched in both datasets (Supplementary Figure S3B,C; Supplementary Tables S1 and S2).
DRPPM-EASY-CCLE Use Case 2
To further illustrate the DRPPM-EASY functionality, we developed DRPPM-EASY-CCLE, an extended app with features to select samples from the Cancer Cell Line Encyclopedia (CCLE) data. The app is preloaded with 1379 CCLE samples spanning 37 lineages, 96 lineage sub-types, and 33 diseases. For the genetic characterization, 299 cancer drivers [31] were selected and further divided based on damaging and non-damaging variant status from DepMap [32] (see Supplementary Table S3 for the complete phenotype categories). As an example, we extracted ovarian cancer cell lines and performed an expression analysis comparing TP53-mutated cell lines to their wild-type counterparts (Figure 4A).
In TP53-mutated ovarian cancer cells, we found a decreased DNA damage response gene signature (Figure 4B), thereby supporting the role of TP53 loss of function in regulating the DNA damage response in these ovarian cancer cells. Previously, KRAS was found to be frequently mutated in non-small cell lung cancer (NSCLC) and to be associated with drug resistance [33]. Thus, we analyzed NSCLC cell lines and compared KRAS-mutated lines to their wild-type counterparts (Figure 4C). By pathway analysis, the MSigDB-defined KRAS signature was consistently upregulated in the KRAS-mutated samples (Supplementary Figure S4A). Interestingly, the top pathways enriched in the KRAS-mutated samples are associated with an anti-apoptosis signature (Supplementary Figure S4B). By ssGSEA, KRAS-mutated NSCLC cells were enriched for genes that negatively regulate apoptosis (Figure 4D) and for genes associated with stress granule assembly and disassembly (Figure 4E), a dynamic process fundamental to survival under stress [34]. Interestingly, oncogenic KRAS-driven stress granules were previously identified in pancreatic and colorectal adenocarcinoma [35]; thus, our result suggests a similar stress response in NSCLC cells. To further expand our functionality for exploring these large project data, we have also implemented features that enable users to upload their own expression matrix to perform an integrative analysis with the CCLE and lung squamous cell carcinoma CPTAC datasets (https://github.com/shawlab-moffitt/DRPPM-EASY-LargeProject-Integration, accessed on 1 February 2022) (Supplementary Figures S5A-C). Altogether, our framework provides a user-friendly environment to categorize samples for downstream analysis with a high potential for novel discovery.
Discussion
An effective method for visualization and data analysis is key to the analysis of multi-omics data that capture the molecular processes of cancer initiation and progression. Several Shiny apps have been published to date and can be categorized into the following three categories: (1) tools that focus on pairwise differential expression and biomarker discovery (e.g., POMAShiny [10], TCC-GUI [11], and START App [12]), (2) tools that perform pathway and network analysis (e.g., MiBiOmics [14] and JUMPn [15]), and (3) tools that facilitate the query of large datasets, such as public repositories or consortium-deposited expression data (e.g., shinyGEO [16], ImaGEO [17], and GENAVi [13]). While numerous web tools have been developed thus far, there is a lack of tools that directly address the challenges associated with multi-omics data integration, such as evaluating the consistency between omics datasets.
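As a concrete illustration of such a consistency check, the overlap statistics that the integration module reports (Fisher's exact test, Cohen's kappa, and the Jaccard index) can be sketched as follows on toy gene lists; this Python sketch is only an analogue of the R implementation, not the app's code.

```python
# Overlap statistics between two differentially expressed gene lists against
# a common background: Fisher's exact test, Cohen's kappa, and Jaccard index.
# Gene lists and background here are toy data.
from scipy.stats import fisher_exact
from sklearn.metrics import cohen_kappa_score

background = [f"gene{i}" for i in range(1000)]
rna_hits = set(background[:120])        # DEGs from the transcriptome
prot_hits = set(background[60:150])     # DEGs from the proteome

both = len(rna_hits & prot_hits)
only_rna = len(rna_hits - prot_hits)
only_prot = len(prot_hits - rna_hits)
neither = len(background) - both - only_rna - only_prot

odds, p = fisher_exact([[both, only_rna], [only_prot, neither]])
jaccard = both / len(rna_hits | prot_hits)
kappa = cohen_kappa_score([g in rna_hits for g in background],
                          [g in prot_hits for g in background])
print(f"Fisher p = {p:.2e}, Jaccard = {jaccard:.2f}, kappa = {kappa:.2f}")
```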
Here, we developed an interactive software tool, DRPPM-EASY, that allows users to perform complex omics data integration in both small (pairwise comparison) and large (consortium) projects. DRPPM-EASY puts together an interactive flexible interface that enables the exploration of biomarkers and enriched pathways across multiple datasets. DRPPM-EASY can perform routine gene analysis, such as hierarchical clustering, differential gene expression, pathway analysis, GSEA, and ssGSEA. Additionally, DRPPM-EASY can perform a joint analysis of two expression datasets. As an example, we have highlighted the application's ability to evaluate the consistency between transcriptome and protein datasets. This is made possible by deriving a gene set feature in one dataset (i.e., transcriptomics), which is applied in the GSEA analysis of the other dataset (i.e., proteomics). DRPPM-EASY can be easily adapted for large consortium data, which we highlight as an example in CCLE cancer cell lines and lung squamous cell carcinoma CPTAC proteome data. Finally, to further expand the utility of our tool, the user can upload their own expression data and use it to compare against CCLE cell lines and lung squamous cell carcinoma proteome data. One major limitation of our application requires the user to normalize their gene expression matrix prior to using our application. Existing pipelines are available to streamline the normalization procedure, such as Shiny-Seq [36]. A normalization procedure will be included in future updates of our application. Finally, the ability to run the application with a user interface on a local desktop reduces the need for computational domain knowledge of expression analysis. The DRPPM-EASY application can be set up on the server in real-time, enabling collaborative discussion on potential hypotheses derived from the high-throughput data. Our tool also ensures reproducibility of the data analysis, which is one of the most significant issues in omics research [37]. While the current application is highlighted to work in RNA-seq and proteomics data, our framework could easily be adapted to incorporate drug response, genetic screening, or splicing associated features in future versions of our application. Thus, we believe DRPPM-EASY will be a useful and valuable tool for the biomedical research community. Data Availability Statement: The developed software and processed data can be downloaded from the following GitHub page https://github.com/shawlab-moffitt/DRPPM-EASY-ExprAnalysisShinY (accessed on 1 February 2022).
Novel Target Exploration from Hypothetical Proteins of Klebsiella pneumoniae MGH 78578 Reveals a Protein Involved in Host-Pathogen Interaction The opportunistic pathogen Klebsiella pneumoniae is a causative agent of several hospital-acquired infections. It has become resistant to a wide range of currently available antibiotics, leading to high mortality rates among patients; this has further led to a demand for novel therapeutic intervention to treat such infections. Using a series of in silico analyses, the present study aims to explore novel drug/vaccine candidates from the hypothetical proteins of K. pneumoniae. A total of 540 proteins were found to be hypothetical in this organism. Analysis of these 540 hypothetical proteins revealed 30 pathogen-specific proteins essential for pathogen survival. A motifs/domain family analysis, similarity search against known proteins, gene ontology, and protein–protein interaction analysis of the shortlisted 30 proteins led to functional assignment for 17 proteins. They were mainly cataloged as enzymes, lipoproteins, stress-induced proteins, transporters, and other proteins (viz., two-component proteins, skeletal proteins and toxins). Among the annotated proteins, 16 proteins, located in the cytoplasm, periplasm, and inner membrane, were considered as potential drug targets, and one extracellular protein was considered as a vaccine candidate. A druggability analysis indicated that the identified 17 drug/vaccine candidates were “novel”. Furthermore, a host–pathogen interaction analysis of these identified target candidates revealed a betaine/carnitine/choline transporters (BCCT) family protein showing interactions with five host proteins. Structure prediction and validation were carried out for this protein, which could aid in structure-based inhibitor design. INTRODUCTION Klebsiella pneumoniae is a Gram-negative, encapsulated, non-motile bacterium belonging to the Enterobacteriaceae family. It is commonly present in soil, water, and animals, including humans. This organism is a part of normal gut flora in human where it does not cause any infection. However, in healthcare environments, it can colonize in medical devices (viz., ventilators and intravenous catheters) and opportunistically infect immunocompromised patients admitted in the intensive care unit (CDC.gov., 2020). Indeed, this bacterium causes several infections, such as urinary tract infection, bacteraemia, pneumonia, and liver abscesses in hospitalized patients (Chung, 2016). Patients infected with Klebsiella can transmit the pathogen via direct contact or indirectly through contaminated medical devices (CDC.gov., 2020). This opportunistic pathogen is able to form biofilms in various biotic and abiotic surfaces like other pathogenic bacteria, including Pseudomonas aeruginosa, Escherichia coli, and Acinetobacter baumannii (Vuotto et al., 2014;de Campos et al., 2016;Riquelme et al., 2018). The formation of biofilms assists the pathogen in withstanding the host defense mechanism and antimicrobial agents. Several outbreaks of K. pneumoniae infections have been reported in hospital settings from different countries, including China, Israel, Poland, Italy, Colombia, and the United States (Kaye and Dhar, 2016;Ocampo et al., 2016;Baraniak et al., 2017;Krapp et al., 2018;Sotgiu et al., 2018;Liu and Guo, 2019). Reportedly, K. pneumoniae accounts for ∼12% of all hospital-acquired pneumonia in the world (Ashurst and Dawson, 2019). In addition to the presence of the beta-lactamase enzyme, which makes K. 
pneumoniae antibiotic resistant, an alteration in the upregulation of efflux pumps is reportedly making this opportunistic pathogen resistant to multiple drugs, including the last-resort treatment regimen carbapenems. This leads to high mortality rates among the patients (∼50%) (Xu et al., 2017). According to the Center for Disease Control and Prevention (CDC), ∼80% of the reported carbapenem-resistant Enterobacteriaceae infections in 2013 were due to K. pneumoniae (Ashurst and Dawson, 2019). Thus, the present scenario demands the development of novel therapeutic intervention for treating such bacterial infections. In many organisms, the molecular functions of more than 30% proteins are unknown; these proteins are termed as "hypothetical proteins". The functional annotation of hypothetical proteins can enable us to understand their roles in different metabolisms as well as to identify previously unexplored drug targets in an organism (Shahbaaz et al., 2016). Several bioinformatics resources, such as databases and tools, are available for functional annotation of hypothetical proteins. These resources have been successfully used to annotate the functions of hypothetical proteins in different bacterial pathogens, including Borrelia burgdorferi , Chlamydia trachomatis (Turab Naqvi et al., 2017), Helicobacter pylori (Naqvi et al., 2016), Haemophilus influenzae (Shahbaaz et al., 2013), Mycobacterium tuberculosis (Yang et al., 2019), Vibrio cholerae (Islam et al., 2015), and Staphylococcus aureus N315 (Prava et al., 2018). Out of the available proteome of K. pneumoniae MGH 78578, ∼11% is made up of HPs, which can be potential resources to be studied both functionally and structurally. Although bioinformatics studies on a few hypothetical proteins, such as KPN_00953(YcbK) (Teh et al., 2014), KPN_02809 (a Zinc-Dependent Metalloprotease) (Wong et al., 2012), and KPN_00728, KPN_00729 (Chain C and D of Succinate Dehydrogenase, respectively) (Choi et al., 2009), are available, mining and analysis of all hypothetical proteins to shortlist drug/vaccine targets in this pathogen is yet unexplored. In the present study, a series of in silico analyses of 540 hypothetical proteins encoded by the K. pneumoniae genome were carried out to explore novel drug/vaccine candidates in this organism. The annotated target proteins can be further utilized to design and develop novel inhibitors for the treatment of Klebsiella infections. Sequence Retrieval The whole proteome of K. pneumoniae subsp. pneumoniae MGH 78578 (NC_009648.1) was retrieved from the National Center for Biotechnology Information (NCBI), a comprehensive web portal providing access to genomic and biomedical information (Sayers et al., 2019). The hypothetical proteins (HPs) from the whole proteome of K. pneumoniae were obtained using an in-house Perl script (https://gist.github.com/pranavathiyani). Essentiality and Non-homology Analysis The collected HPs of K. pneumoniae were subjected to a similarity search using protein BLAST (BLASTp) (Altschul, 1990) against the essential proteins of bacteria present in the database of essential genes (DEG 15.2) with an e-value ≤ 0.0001 and bit-score ≥100 as cut-off (Jadhav et al., 2014). DEG is a repository of experimentally determined essential genes of various bacteria, archaea, and eukaryotes (Luo et al., 2014). The query genes/proteins that showed a similarity with genes/proteins of DEG were regarded as possible essential genes or proteins. 
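A minimal sketch of the essentiality and non-homology filtering described above is given below. The locus tags, hit identifiers, and scores are toy values; the thresholds follow the text (e-value ≤ 0.0001 and bit score ≥ 100 for DEG hits, with any human hit at the e-value cut-off removing a protein).

```python
# Filter hypothetical proteins to essential, non-homologous (ENH) candidates
# from two BLASTp result tables (vs. DEG and vs. the human proteome).
# All identifiers and values are illustrative toy data.
import pandas as pd

cols = ["query", "subject", "pident", "evalue", "bitscore"]
deg_hits = pd.DataFrame(
    [["KPN_001", "DEG_essential_1", 42.1, 1e-30, 180.0],
     ["KPN_002", "DEG_essential_2", 28.5, 5e-03, 60.0]], columns=cols)
human_hits = pd.DataFrame(
    [["KPN_003", "HUMAN_protein_1", 35.0, 1e-20, 150.0]], columns=cols)

# Essential: at least one DEG hit passing the cut-offs.
essential = set(deg_hits.loc[(deg_hits.evalue <= 1e-4)
                             & (deg_hits.bitscore >= 100), "query"])
# Non-homologous: no hit against the human proteome at the e-value cut-off.
human_like = set(human_hits.loc[human_hits.evalue <= 1e-4, "query"])

enh = sorted(essential - human_like)
print(enh)   # -> ['KPN_001']
```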
The HPs that had at least one hit in DEG-BLAST were considered as essential proteins in the current study. Furthermore, a similarity search using BLASTp was carried out between identified essential proteins and the human proteome with a cut-off e-value ≥0.0001 (Jadhav et al., 2014). The essential pathogen proteins that showed no hit with the human proteome were considered as non-homologous to human proteins and used for further analysis. Function Prediction Function prediction of essential non-homologous (ENH) proteins of K. pneumoniae, which resulted from the previous analysis, was carried out using bioinformatics resources like InterPro (Mitchell et al., 2019), Pfam (Finn et al., 2016), and NCBI-BLASTp (Altschul, 1990). The ENH protein sequences were submitted to InterPro and Pfam with default parameters to identify their motifs/domain families. InterPro is an integrated online resource of protein databases that provides detailed information about the protein families, domain, and motifs. Pfam uses a Hidden Markov Model-based method to identify the domain families of the proteins. Protein BLAST (BLASTp) search was performed for the ENH proteins to identify homologous sequences with known functions from the NCBI protein database (non-redundant). Moreover, the predicted function of the ENH proteins from InterPro, Pfam, and NCBI-BLASTp search was cross-checked with the function of DEG-hits obtained from an essentiality analysis. The predicted function was also compared and verified using gene ontology (GO) analysis and a proteinprotein interaction analysis. The GO analysis was performed by submitting the annotated ENH proteins to CELLO2GO and GO FEAT. CELLO2GO is a web server that performs a similarity search (BLAST) for a given protein sequence to obtain its homologous sequences with GO annotation (molecular function, biological process, and cellular location) (Yu et al., 2014). GO FEAT is an online platform for functional annotation for genomic as well as transcriptomic data based on similarity search (Araujo et al., 2018). Furthermore, the ENH proteins were given as query to STRING database (version 11.0) with medium confidence (0.40) in order to identify functions based on the homolog hits and the interactions among the proteins. STRING database is an integrated resource of experimental and predicted protein-protein interactions (PPI). Currently, STRING comprises more than 2,000 million interactions of 24.6 million proteins from 5,090 organisms (Szklarczyk et al., 2019). Physicochemical Characterization and Virulence Prediction The physicochemical properties, such as molecular weight, theoretical isoelectric point (pI), instability index, aliphatic index, and grand average of hydropathicity (GRAVY), of the annotated ENH proteins were computed using the ProtParam tool of Expasy. Expasy is a Swiss Institute of Bioinformatics resource portal for tools and databases used in diverse areas of life sciences, including genomics, proteomics, genetics, systems biology, molecular evolution, and transcriptomics (Gasteiger et al., 2005). Virulence proteins among the annotated ENH proteins were identified using a similarity search against the core dataset of virulent proteins from Virulence Factor Database (VFDB) with a cut-off e-value ≤ 0.0001. VFDB is a database for bacterial virulence factors primarily curated from the scientific literature . 
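Returning to the physicochemical characterization step described earlier in this section, the ProtParam-style properties can also be computed programmatically. The sketch below uses Biopython on a toy peptide; the aliphatic index, which Biopython does not expose directly, is computed here from the standard Ikai formula, and the sequence is not a K. pneumoniae protein.

```python
# Illustrative computation of ProtParam-style properties (molecular weight,
# theoretical pI, instability index, GRAVY) with Biopython, plus the
# aliphatic index from the Ikai formula. The sequence is a toy example.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQFEVVHSLAKWKR"
pa = ProteinAnalysis(seq)

percent = pa.get_amino_acids_percent()   # fraction of each residue
aliphatic_index = 100 * (percent["A"]
                         + 2.9 * percent["V"]
                         + 3.9 * (percent["I"] + percent["L"]))

print(f"MW (Da):           {pa.molecular_weight():.1f}")
print(f"Theoretical pI:    {pa.isoelectric_point():.2f}")
print(f"Instability index: {pa.instability_index():.2f}")
print(f"GRAVY:             {pa.gravy():.3f}")
print(f"Aliphatic index:   {aliphatic_index:.2f}")
```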
The annotated ENH proteins were submitted to VICMpred, an SVM-based functional classification server for Gram-negative bacterial proteins that classifies proteins into different categories based on amino acid composition (Saha and Raghava, 2006). Further, the MP3 tool was used for the identification of pathogenic proteins among the functionally annotated ENH proteins. The MP3 tool utilizes an integrated SVM-HMM approach to improve efficiency and accuracy in predicting pathogenic proteins (Gupta et al., 2014).

Druggability and Subcellular Localization Analysis
To assess the druggability of the annotated ENH proteins, the DrugBank and ChEMBL databases were utilized. DrugBank is a comprehensive cheminformatics database comprising detailed information about drugs and their corresponding targets (Wishart et al., 2018). ChEMBL is a curated database of bioactive chemical compounds maintained by the EMBL. It includes manually curated data from the scientific literature on drug-like compounds, along with their bioactivity determined in assays (Gaulton et al., 2012). A similarity search was performed between the annotated ENH proteins and the known targets of DrugBank and ChEMBL with a cut-off e-value ≤ 0.00001. Subcellular localization of the proteins was predicted using CELLO (v.2.5), a multi-class SVM-based classification server that uses amino acid sequence features to predict subcellular localization. In the case of Gram-negative bacteria, the average accuracy of CELLO in localization prediction is 89% (Chen et al., 2006).

Prediction of Host-Pathogen Interactions
The host-pathogen interactions of the annotated ENH proteins were predicted using the interlog method. This method relies on a homology search of the query sequence against known host-pathogen interaction data: if pathogen protein "A" interacts with host protein "B", and a protein "X" is homologous to protein "A", then there is a high chance that protein "X" will also interact with protein "B" (Yu et al., 2004). Based on this principle, a homology search of the annotated ENH proteins was performed against the full database of HPIDB with default parameters (identity > 50%, query coverage > 50%, and e-value = 0.00001) to obtain the proteins that interact with the human host. HPIDB 3.0 is a comprehensive database of curated host-pathogen interaction data (Kumar and Nanduri, 2010).

Structure Prediction
The annotated ENH proteins were searched against the Protein Data Bank (PDB) using PSI-BLAST (Altschul, 1990) from NCBI. PDB is a repository of three-dimensional (3D) structure data for different biological macromolecules (Berman et al., 2000). Determining the 3D structure of a protein is important for understanding its molecular function at the atomic level, and it also facilitates the process of structure-based drug design. Herein, the structure prediction for the selected protein was carried out using the I-TASSER server (Yang and Zhang, 2015), owing to the low similarity with available 3D structures. The predicted structure was validated using the SAVES server (https://servicesn.mbi.ucla.edu/SAVES/). The final 3D structure was visualized using the PyMOL tool (version 2.3.3, Schrödinger, LLC).

RESULTS AND DISCUSSION
In the present study, HPs encoded by the K. pneumoniae MGH 78578 genome were analyzed using an in silico approach to shortlist proteins that can be potential drug and vaccine targets. The workflow adopted in the current study is illustrated in Figure 1.
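Before turning to the results, the interlog transfer described under "Prediction of Host-Pathogen Interactions" can be sketched in a few lines. The data structures below are hypothetical stand-ins for parsed BLAST output against HPIDB and for its curated interaction records.

```python
# Interlog inference: if query X is homologous to pathogen protein A
# (BLAST hit passing the identity/coverage cut-offs) and A is known to
# interact with host protein B, infer a putative X-B interaction.
def interlog_hpi(blast_hits, known_hpi, min_identity=50.0, min_coverage=50.0):
    """blast_hits: iterable of (query, subject, pct_identity, pct_coverage);
    known_hpi: dict mapping pathogen protein -> set of host partners."""
    inferred = {}
    for query, subject, ident, cov in blast_hits:
        if ident >= min_identity and cov >= min_coverage:
            for host_protein in known_hpi.get(subject, ()):
                inferred.setdefault(query, set()).add(host_protein)
    return inferred

# Toy usage with made-up identifiers:
hits = [("WP_073549749.1", "Q8D0R9", 75.6, 92.0)]
hpi = {"Q8D0R9": {"von Willebrand factor", "fibrillin-1"}}
print(interlog_hpi(hits, hpi))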
Identification of Essential Pathogen-Specific Proteins
The complete genome of K. pneumoniae subsp. pneumoniae MGH 78578 is a single circular chromosome of 5.31 Mb in size. The genome comprises 5,115 protein-coding genes, and the corresponding protein sequences were retrieved for the analysis. Out of 5,115 proteins, 540 were found to be HPs in this organism at the time of retrieval (Supplementary File 1). The identified HPs were subjected to essentiality and non-homology analyses in order to identify essential pathogen-specific proteins. The idea behind the essentiality analysis is to shortlist proteins that are crucial for the survival of the pathogen, such that targeting those proteins would be lethal for the pathogen. The non-homology analysis yields proteins that are exclusively present in the pathogen and absent in the human host. This approach can aid in designing drugs that target only pathogen proteins without interfering with the host system, thereby minimizing side effects. The present study utilized experimentally determined essential proteins of 48 bacterial species deposited in DEG to predict essential proteins among the K. pneumoniae HPs, on the premise that bacterial HPs similar to known essential proteins in DEG are themselves likely to be essential. Out of the 540 HPs, 40 were found to be similar to known essential proteins and were thus considered possible essential proteins of this pathogen. Subsequently, the non-homology analysis of the 40 essential proteins revealed that 30 proteins did not have any hit against the human proteome and were therefore non-homologous to humans. These 30 essential non-homologous (ENH) proteins (Supplementary File 2) are essential for the survival of the pathogen and are pathogen-specific (present in the pathogen but absent in the human). Theoretically, all the shortlisted ENH proteins have the potential to become good drug/vaccine targets in K. pneumoniae. However, insight into the functional annotation, physicochemical properties, virulence, druggability, and subcellular localization of these 30 shortlisted proteins would provide an additional layer of refinement for target identification, and those analyses were thus performed subsequently.

Function Prediction of ENH Proteins
The function prediction of the 30 ENH proteins using bioinformatics secondary databases and tools, such as InterPro, Pfam, and NCBI BLASTp, yielded 17 annotated proteins (Table 1), along with 10 proteins with uncharacterized domains of unknown function and three proteins with no hits/similarity (Supplementary File 3). It is worth mentioning that our previous work on the functional annotation of essential HPs from Staphylococcus aureus N315 assessed the performance of these tools using receiver operating characteristic (ROC) curve analysis and found that they annotated the functions of HPs with high accuracy (Prava et al., 2018). In the present study, the majority of the 17 annotated proteins fell into the categories of enzymes, lipoproteins, transporters, and stress-induced proteins, among others. The remaining 10 ENH proteins belonged either to uncharacterized domain/family proteins (YaeP family protein, YccJ-like protein, YfdX-like protein) or to different families of hypothetical proteins (YheO-like PAS domain protein). Previous studies of these uncharacterized proteins have suggested their essentiality in bacteria, and some are reported to show condition-dependent expression in several organisms (Goodacre et al., 2014).
For instance, the YfdX protein, which has been reported to be present in several bacterial pathogens, including Escherichia coli and Salmonella enterica serovars Typhi and Typhimurium, is under the control of a regulator protein, EvgA, responsible for expression under environmental stress (McClelland et al., 2001; Masuda and Church, 2002). A recent study on Salmonella infection demonstrated that YfdX is involved in virulence and antibiotic susceptibility, and in modulating the pathogen's growth/survival strategies (Lee et al., 2019). The hypothetical protein WP_002920130.1, with a YheO-like PAS domain, was predicted to have putative DNA-binding activity apart from its transcription regulatory activity (InterPro entry: IPR013559). The predicted functions of the 17 proteins were verified by a DEG BLAST search that yielded similar functional annotations in other organisms, thereby supporting our results. Of the 17 annotated proteins, eight were found to have orthologs in Escherichia coli, a close relative of Klebsiella; orthologs of the remaining annotated proteins were found in other bacterial species, including Mycoplasma pulmonis, Salmonella, Pseudomonas aeruginosa, and Haemophilus influenzae (Supplementary File 4). A GO analysis of the 17 annotated proteins revealed that these proteins are involved in various molecular functions, namely peptidase activity, hydrolase activity, isomerase activity, enzyme regulator activity, ion binding, transmembrane transporter activity, and kinase activity (Supplementary File 5). This is in accordance with the function prediction results obtained from the InterPro, Pfam, and NCBI BLASTp searches, signifying the reliability of our annotation. Catabolic processes, protein folding, cell differentiation, transport, response to stress, and small-molecule metabolic processes were the major biological processes in which these annotated proteins were found to be involved. In addition, the PPI analysis of the 17 annotated proteins mapped 14 proteins onto the interaction data, and, of these, 11 proteins were found to have annotated functions consistent with our function prediction results (Supplementary File 6). In the PPI network, it was observed that the PTS system protein (cellobiose-specific IIA component) interacted with 4-deoxy-L-threo-5-hexosulose-uronate ketol-isomerase, and that the secretion protein (chaperone lipoprotein YacC) interacted with the TraB protein. A detailed discussion of the predicted molecular functions of the 17 annotated ENH proteins can be found in the subsequent sections under different functional categories.

Enzymes
Enzymes are a class of proteins that act as catalysts in biochemical reactions, converting substrate(s) to product(s). Among the 17 annotated ENH proteins, four belong to different enzyme classes. For instance, the protein WP_002890284.1 was found to be involved in pyrimidine/purine nucleoside phosphorylase activity. In the nucleoside salvage pathway, phosphorylase enzymes catalyze the reversible phosphorolytic cleavage of the glycosidic bond of pyrimidine/purine nucleosides. They also play an important role in activating prodrugs (as analogs) and as inhibitors for antiparasitic and anticancer agents (Bzowska, 2015). The protein WP_004176857.1 was found to harbor a cyclophilin-like domain, which is involved in binding cyclosporine A (an immunosuppressive drug).
The proteins belonging to this family comprise a beta-barrel domain, which is the core of cyclophilin-type peptidyl-prolyl cis-trans isomerase activity. This domain accelerates protein folding by catalyzing the cis-trans isomerization of proline imidic peptide bonds in oligopeptides (Takahashi et al., 1989). The protein WP_002911528.1 is a sugar phosphate isomerase, a group that includes sugar isomerases such as ribose-5-phosphate isomerase B (RpiB) and galactose-6-phosphate isomerase subunits A (LacA) and B (LacB). The enzyme galactose-6-phosphate isomerase is induced by the presence of galactose or lactose in the cell. RpiB has a Rossmann-type alpha/beta/alpha sandwich topology and forms a homodimer (Takahashi et al., 1989; Zhang et al., 2003b). This protein catalyzes the interconversion of D-ribose 5-phosphate and D-ribulose 5-phosphate in the non-oxidative branch of the pentose phosphate pathway (Zhang et al., 2003a). The protein WP_002914983.1 was identified as being involved in the phosphoenolpyruvate-dependent sugar phosphotransferase system (PTS). The role of the PTS is to serve as a carbohydrate transport system in bacteria. It catalyzes the phosphorylation of incoming sugar substrates along with their translocation across the cell membrane, making the PTS a link between the uptake and metabolism of sugars (Postma et al., 1993).

Lipoproteins
Bacterial lipoproteins are a group of membrane proteins that are involved in diverse functions, such as cellular physiology, cell division, virulence, adhesion to host cells, and virulence factor translocation. The proteins WP_002888808.1, WP_004222859.1, and WP_012068456.1 were found to be similar to the chaperone lipoprotein YacC, an NlpC/P60 family lipoprotein, and a YbfN family lipoprotein, respectively. Apart from the novel addition of the protein YacC, the PulS/OutS-like chaperone lipoprotein family comprises the pullulanase secretion protein (PulS) of K. pneumoniae (UniProt ID: P20440), the lipoprotein OutS of Erwinia chrysanthemi (UniProt ID: Q01567), and a functionally uncharacterized type II secretion protein, EtpO (UniProt ID: Q7BSV3), of E. coli O157:H7. Reportedly, the interactions of PulS and OutS facilitate the insertion of secretins into the outer membrane, indicating a chaperone-like role in bacterial systems (InterPro entry: IPR019114). In various bacterial lineages, NlpC/P60 proteins belong to the cell wall peptidase family, which hydrolyses the D-γ-glutamyl-meso-diaminopimelate or N-acetylmuramate-L-alanine linkage in the cell wall (Xu et al., 2009).

Stress-Induced Proteins
The present study predicted three proteins (WP_002898708.1, WP_004150795.1, and WP_004143718.1) as stress-induced proteins with a highly conserved KGG repeat. In E. coli, the YciG protein from the yciGFE operon was reported to have a similar motif. The protein YciG, under the regulation of the general stress response controller RpoS, showed significant resistance to thermal and acid stress (Robbe-Saule et al., 2007).

Transporters
Among the 17 annotated proteins, two (WP_004151327.1 and WP_073549749.1) were predicted as transporters. The former was predicted as a conjugal transfer protein, TraB. TraB contains nucleotide-binding motifs, suggesting a potential energy-providing role in plasmid DNA/Tra protein transport (Chandler and Dunny, 2004). The latter was predicted as an inner-membrane metabolism molecule and showed similarity with the BCCT family of transporter proteins (InterPro entry: IPR000060).
BCCT stands for betaine/carnitine/choline transporters, which have 12 transmembrane regions and four conserved tryptophan residues in the central region. This protein family is specific to compounds containing quaternary nitrogen atoms (Ziegler et al., 2010).

Toxin, Skeletal, and Two-Component System Proteins
The protein WP_041937616.1 belongs to a putative bacterial toxin YdaT superfamily, which corresponds to a toxin-antitoxin system. These genetic modules are found in plasmids as well as in chromosomes, and they encode a toxin and its cognate antidote. They are reported to be important in maintaining multi-resistant plasmids and in the evolution of antibiotic resistance (Yamaguchi and Inouye, 2011). Several evolutionary theories have been proposed to explain these plasmid-stabilizing toxin-antitoxin systems. Targeting the toxin or the antitoxin would lead to an accumulation of proteins, which can be lethal. The bactofilin A/B family (WP_015959101.1) covers a diverse range of functional roles in cytoskeletal polymer formation, which is conserved among bacterial species. The unique subcellular distribution and dynamics of bactofilins in different bacterial species suggest their roles as versatile structural elements adopting a range of cellular functions (Kühn et al., 2010). WP_023288894.1 was predicted as YcgZ, a two-component-system connector protein that plays a major role in biofilm formation and provides an additional input signal into the two-component signaling pathway. YcgZ is a substrate of the Lon protease and regulates the expression of an outer membrane protein, OmpF, which serves as a passive diffusion pore (Duval et al., 2017). Soo et al. (2011) reported that YcgZ is associated with resistance against several antibiotics.

Other Proteins
The protein WP_002918629.1 was predicted as a barstar-like superfamily protein. Barstar proteins are small single-chain proteins that counter the lethal effect of the active barnase enzyme, an extracellular ribonuclease. Barstar inhibits the activity of this enzyme by sterically blocking the active site with a helix and an adjacent loop segment (Hartley, 1988; Buckle et al., 1994). Another protein, WP_002890061.1, which was identified to have a PapD-like superfamily domain, acts as a chaperone during pilus and flagellar assembly. This protein, reportedly found in several pathogenic bacteria, also helps mediate adhesion to host cell surfaces (Barnhart et al., 2000).

Physicochemical Characterization and Virulence Prediction
In the present study, the molecular weights of the 17 annotated proteins were found to range from 5,953.2 to 35,235.6 Da. The highest and lowest theoretical isoelectric points (pIs) of the proteins were 10.15 and 4.28, respectively. Molecular weight and pI help in designing experimental setups for protein purification and crystallization. The instability index measures the stability of a protein in a test tube; a protein with an instability index < 40 is believed to be stable (Gill and von Hippel, 1989). Among the annotated ENH proteins, the index of 11 proteins was found to be < 40, and these are thus likely to be stable proteins. The aliphatic index of a protein is calculated on the basis of the number of aliphatic residues in the protein; the higher the value, the higher the thermostability (Ikai, 1980). The aliphatic index of the annotated ENH proteins varied from 9.84 to 109.31.
The average GRAVY (grand average of hydropathicity) of the ENH proteins was −0.45, with a maximum of 0.587 and a minimum of −1.738. The GRAVY is calculated as the sum of the hydropathy values of all amino acids divided by the length of the protein. The calculated physicochemical properties of the annotated ENH proteins are provided in Supplementary File 7. All these calculated physicochemical properties could be useful for further experimental studies of these proteins. A homology search against the core set of virulent proteins from VFDB resulted in the identification of two virulence proteins, WP_004222859.1 and WP_002890061.1, with similarity to proteins from Listeria monocytogenes and E. coli, respectively. WP_004222859.1 was identified as homologous to iap/cwhA, which encodes the P60 protein, a major extracellular virulence protein in L. monocytogenes. This protein has been reported to be involved in pathogen survival and host invasion (Cabanes et al., 2002). From the function prediction, it was found that this protein has an endopeptidase NlpC/P60 domain. The protein WP_002890061.1 was found to be similar to yagV/EcpE, a part of the E. coli common pilus (ECP). ECP is an extracellular adhesive fiber found in both pathogenic and commensal strains and is involved in biofilm formation and host cell recognition (Rendón et al., 2007). Targeting the virulence factors of a pathogen would hinder the progress of pathogenesis. VICMpred categorizes proteins into four classes: virulence factors, information molecules, cellular process, and metabolism molecules. The prediction showed that, of the 17 annotated ENH proteins, seven were involved in cellular processes, eight in metabolism, and two in information and storage. Furthermore, the MP3 server identified eight proteins as pathogenic and the remaining nine as non-pathogenic. Two proteins (WP_004222859.1 and WP_002890061.1), which were identified as pathogenic by both VFDB and MP3, were considered virulence factors in the present study. The comprehensive results of the predictions are given in Table 2.

Druggability and Subcellular Localization Analysis
Druggability prediction of the ENH proteins using DrugBank and ChEMBL revealed that the proteins were not similar to the available known targets. Thus, they can be considered "novel targets" to be validated further experimentally. Determining protein subcellular localization is vital for understanding the role of proteins in a cell, and it also aids in the process of drug discovery and delivery. The current study utilized the subcellular localization tool CELLO, which has high accuracy in predicting the subcellular localization of proteins in Gram-negative bacteria. CELLO predicted 10 proteins as cytoplasmic, four as periplasmic, two as inner membrane proteins, and one as an extracellular protein (Table 2). The proteins predicted to be in the cytoplasm, periplasm, and inner membrane can be considered drug targets, and extracellular proteins can be considered vaccine targets (Barh et al., 2011). Thus, in total, 16 drug targets and one vaccine candidate were identified from the analysis, and these can be further validated experimentally.
Host-Pathogen Interactions
Predicting host-pathogen interactions among the hypothetical/annotated proteins of an organism can shed light on pathogen biology, i.e., pathogenesis as well as the host response during infection/invasion. In the present study, among the 17 ENH proteins, the protein WP_073549749.1, annotated as a BCCT family transporter protein, was found to interact with five human host proteins, including von Willebrand factor, fibrillin-1, protein YIPF2, and zinc finger protein Aiolos (Figure 2). This prediction was based on the interlog method, in which proteins similar to orthologs with known interactions have an increased likelihood of interacting with the same partners. The K. pneumoniae BCCT protein was found to be similar (75.6% sequence identity) to the putative quaternary ammonium transport protein (UniProt ID: Q8D0R9) from Yersinia pestis. The putative quaternary ammonium transport protein is encoded by the beT2 gene, which reportedly interacts with the abovementioned five human proteins (Supplementary File 8). The functions of these human proteins range from binding, catalytic activity, and transcription regulation to biological adhesion. A phylogenetic analysis of the 17 annotated proteins was performed to find the interrelationships among these proteins. The analysis indicated that the BCCT protein, predicted to be involved in host-pathogen interaction, was grouped with biofilm-forming proteins, although they belong to diverse functional classes (Figure 3A). However, further study is needed to evaluate the relationship between these two proteins. Moreover, the orthologs of the K. pneumoniae BCCT protein were collected from UniProt using a BLASTp search, and a phylogenetic analysis was carried out for those BCCT proteins. The K. pneumoniae BCCT protein was found to be closely related to those of Klebsiella aerogenes and E. coli (Figure 3B). Herein, the phylogenetic trees were built using the maximum likelihood method with 500 bootstrap replicates in MEGA X, a molecular analysis tool for constructing phylogenetic trees (Kumar et al., 2018). Additionally, a phylogenetic tree was generated using PATRIC for 1,018 genomes belonging to the order Enterobacterales, to gain insight into the evolutionary relationship of K. pneumoniae with the other members of the Enterobacteriaceae family (Supplementary File 9). PATRIC is a bioinformatics resource platform that provides multi-omics data and analysis tools for biomedical research (Davis et al., 2020).

Structure Prediction and Validation
Protein structure determination helps in understanding the functional domains responsible for a protein's activity, which is invaluable for novel inhibitor development. Sequences with high similarity tend to have high structural similarity as well; a similarity search of the identified 17 drug/vaccine targets against the 3D structures of proteins in the PDB was thus performed, which identified 10 homologous structures from E. coli, S. enterica, and V. parahaemolyticus. Sequence identity varied from 24.26 to 79.36%, with seven proteins showing > 30% identity (Table 3). The homologous structures can be used as templates for building 3D models of these proteins, which is underway in our laboratory. Here, however, we report the structure of an important host-pathogen interacting protein (WP_073549749.1) that showed interactions with the human host.
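The study built its trees by maximum likelihood with 500 bootstrap replicates in MEGA X. As a lightweight stand-in for that workflow (not the method used here), a neighbor-joining tree can be drawn from a multiple alignment with Biopython; the alignment file name and format are assumptions.

```python
# Neighbor-joining tree from a pre-computed multiple alignment, as a quick
# exploratory substitute for the MEGA X maximum-likelihood analysis.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("bcct_orthologs.aln", "clustal")  # hypothetical file
dm = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(dm)
Phylo.draw_ascii(tree)
```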
Due to the low structural similarity with available 3D templates, the protein structure was modeled using the I-TASSER server. Among the five generated models, the best model was chosen based on the C-score (C-score = 1.24), which represents the confidence of the predicted model. The estimated TM-score and RMSD were 0.88 ± 0.07 and 3.8 ± 2.6 Å, respectively. The predicted 3D model of the BCCT protein was trimeric in nature, comprising large helical structures. The model was validated using the SAVES server. It was observed that 89.4% of the residues of the predicted structure fell within the most favored regions of the Ramachandran plot, while 8.1 and 2.1% were in the additionally and generously allowed regions, respectively (Figure 4). This supports the validity of our model, and molecular dynamics studies can additionally be carried out using it.

CONCLUSION
Understanding the functions of hypothetical proteins is important, since it facilitates further comprehension of their roles in biochemical/physiological pathways and the identification of novel classes of therapeutic targets. The present study utilized an in silico approach to identify drug and vaccine targets among the hypothetical proteins of K. pneumoniae. The study first predicted 30 pathogen-specific essential proteins, for which a functional analysis was carried out. The methodology utilized herein enabled the annotation of the functions of hypothetical proteins with high confidence. It was found that the proteins have various functional roles as enzymes, lipoproteins, stress-induced proteins, and virulence proteins. However, functional annotation of some of the proteins was not possible owing to insufficient information. Subcellular localization analysis revealed 16 proteins as drug targets (cytoplasmic, inner membrane, and periplasmic proteins) and one extracellular protein as a vaccine candidate. The structure prediction of one protein predicted to be involved in host-pathogen interaction is reported in this study and can be utilized for further experimental work in this area. In addition, the structural analyses of the identified target proteins and the screening of potential inhibitors are underway in our laboratory.

DATA AVAILABILITY STATEMENT
The datasets generated for this study can be found in the article/Supplementary Material.

AUTHOR CONTRIBUTIONS
GP and AP conceived and designed the study. GP performed the experiments and analyzed the results. GP, AP, JP, and AR critically reviewed the analysis and contributed to the preparation of the final version of the manuscript.
Fully Noncontact Wave Propagation Imaging in an Immersed Metallic Plate with a Crack
This study presents a noncontact sensing technique with an ultrasonic wave propagation imaging algorithm for damage visualization of liquid-immersed structures. An aluminum plate specimen (400 mm × 400 mm × 3 mm) with a 12 mm slit was immersed in water and in glycerin. A 532 nm Q-switched solid-state laser is used at a pulse energy of 1.2 mJ to scan an area of 100 mm × 100 mm. A laser Doppler vibrometer is used as a noncontact ultrasonic sensor, which measures the guided wave displacement at a fixed point. The tests are performed on the specimen in different conditions: in air, immersed in water, and immersed in glycerin. Lamb wave dispersion curves for the respective cases are calculated to investigate the velocity-frequency relationship of each wave mode. The experimental propagation velocities of the Lamb waves for the different cases are compared with the theoretical dispersion curves. This study shows that the dispersion and attenuation of the Lamb wave are affected by the surrounding liquid, and comparative experimental results are presented to verify this. In addition, it is demonstrated that the developed fully noncontact ultrasonic propagation imaging system is capable of damage sizing in submerged structures.

Introduction
Lamb waves are useful for the detection of damage in thin sheet materials and tubular structures. Extensive developments in the application of Lamb waves provide a foundation for the inspection of many products in the aerospace, pipe, pipeline, and transportation industries. Lamb waves are composed of a combination of two fundamental mode families: symmetric and antisymmetric. For each of these modes, the velocity (phase or group) varies with frequency; in other words, they are all dispersive, and their energy spreads in time and space as it propagates. Hence, as the propagation distance increases, the signal duration increases and the peak amplitude decreases. Attenuation is also of concern in specimens immersed in liquid, because of leaky Lamb waves. When guided Lamb waves propagate in a plate that is placed in vacuum or in air, both plate surfaces can be considered traction-free. But if one or both of the surfaces are in contact with liquid, the guided plate waves become leaky Lamb waves, because the energy of the wave leaks into the adjacent liquid.

Ultrasonic waves are increasingly being investigated for nondestructive evaluation (NDE) and structural health management (SHM) of engineering systems, because they can propagate over long distances and cover relatively large areas of thin plates. They can travel comparatively large distances with little attenuation and offer the advantage of exploiting one or more of the phenomena associated with transmission, reflection, scattering, and mode conversion. A few studies have been reported concerning the use of ultrasonic waves for underwater structures.

Na and Kundu [1] investigated the feasibility of flexural cylindrical guided waves for inspecting mechanical defects in underwater pipes, using a transducer holder and its coupling mechanism. Mijarez et al. [2] developed a system composed of a waterproof transmitter and a seawater-activated battery package to monitor the tubular crossbeam members used in offshore steel structures. Chen et al. [3] proposed a damage identification approach capitalizing on the fundamental antisymmetric Lamb wave. Aristégui et al.
[4] presented the wave propagation characteristics of pipes with fluid loading both inside and outside the pipe, which was affected by the viscosity of the media. A noncontact approach has also been used for the generation and detection of Lamb waves, depending on the circumstances of the target to be inspected. Rizzo et al. [5] presented an SHM technique using a hybrid laser/immersion transducer system for the detection of damage in submerged structures. Xu et al. [6] presented a comparison between theoretical predictions and experimental results to consistently reveal the propagation properties of Lamb waves in a specimen that was in contact with different liquids on both of its surfaces, using laser generation and laser Doppler vibrometer (LDV) sensing.

In this paper, the variation of a certain mode is analyzed to evaluate the size and shape of the damage in a test specimen under different boundary conditions: a free plate and a plate immersed in liquid. A laser ultrasonic propagation imaging (UPI) system capable of fully noncontact inspection of immersed structures is developed through the modification of the sensor-contact UPI system based on piezoelectric sensing. In addition, this study visualizes how the Lamb wave propagation characteristics, such as dispersion and attenuation, are affected by the liquid, and comparative experimental results are presented to verify this.

Lamb Wave Propagation in Immersed Thin Plates
It is stipulated that the symbols Si and Ai (i = 0, 1, ...) stand for the symmetric and antisymmetric modes, respectively, where the subscript i denotes the order of the mode. Symmetric Lamb waves move in a symmetric fashion about the median plane of the plate. Wave motion in the symmetric mode is most efficiently produced when the exciting force is parallel to the plate. The antisymmetric Lamb wave mode is often called the "flexural mode" because a large portion of the motion is in the direction normal to the plate, and little motion occurs in the direction parallel to the plate. With the laser pulse excitation used in this paper, the magnitude of the Si modes (in-plane motion) is normally smaller than that of the Ai modes.

Wave Propagation with Attenuation in Submerged Plates
When a plate is submerged in an infinite liquid, the Lamb wave propagation energy leaks into the liquid. This wave is called a leaky Lamb wave. For example, when a plate is immersed in a liquid such as water or glycerin, the symmetric modes will mostly be retained in the plate, because it is difficult for in-plane particle motion to cross the plate-liquid interface. However, as the antisymmetric modes mostly have out-of-plane displacements, they will leak into the fluid. Leaky Lamb waves behave differently from Lamb waves in a free solid. For instance, the dispersion equations associated with the first antisymmetric mode (A0) show a large discrepancy between the free solid and the fluid-coupled solid [8].
In this research, the sample geometry of a three-layer flat and thin plate system was considered, as shown in Figure 1. According to [7], the partial waves are assembled by matching the boundary conditions at each layer. At a certain combination of frequency, wavenumber, and attenuation, these partial waves combine to form a guided Lamb wave, which propagates in the longitudinal direction. Here, L+−, SV+−, and SH+− stand for longitudinal waves, shear vertical waves, and shear horizontal waves, respectively; + denotes the downward direction and − the upward direction in the plate. We did not consider the SH+− waves, because they are difficult to generate by laser excitation.

Dispersion Curves in Different Boundary Conditions
If the plate is surrounded by liquid or solid, wave attenuation also occurs due to the leakage of bulk waves into the medium surrounding the waveguide. In this study, to investigate the acoustic properties of the leaky Lamb wave, dispersion curves for a 3 mm thick aluminum plate were calculated for three different cases: a free plate and plates immersed in water and in glycerin, as presented in Figure 2. In an aluminum plate immersed in water or glycerin, the A0 mode disappeared below about 100 kHz, and its energy was converted into a Scholte wave. The group velocity of the A0 mode became slower than that in the free aluminum plate. The Scholte wave is an interface wave between a liquid medium and a solid medium, and it decreases exponentially away from the surface into the liquid medium [9].

Ultrasonic Propagation Imaging System for Submerged Structural Inspection
A photo and schematic of the ultrasonic propagation imaging (UPI) system used to inspect submerged structures are shown in Figures 3(a) and 3(b). The system was constructed with a laser Doppler vibrometer (LDV), an in-line signal conditioner with filters and amplifiers, a personal computer (PC) with a data acquisition and signal processing platform, and a Q-switched solid-state diode-pumped laser (QL). The laser pulses were generated by the Q-switching technique in the QL at a pulse repetition rate of 200 Hz. The laser beam, with a wavelength of 532 nm and an energy of 1.2 mJ, was directed to a laser mirror system (LMS). The laser beam was reflected toward the target specimen by a pair of laser mirrors in the LMS. As shown in Figure 3(c), an aluminum plate (400 mm × 400 mm × 3 mm) with a 12 mm long and 2 mm deep artificial crack in the opposite surface was used as the specimen. The scanning area and interval were 100 mm × 100 mm and 0.5 mm, respectively, and thus it took 202 seconds to scan the area. The 633 nm sensing laser beam of the LDV was impinged at a point 10 mm above the square scanning area. No reflective film was used on the plate surface.

Ultrasonic sensing using the LDV is based on the detection of the Doppler shift of the laser light. The Doppler shift refers to the frequency shift of the light that is reflected back from the vibrating object to the source. The signal processing platform visualizes the wave propagation in the immersed specimen using the basic ultrasonic wave propagation imaging (UWPI) algorithm. UWPI is a technology for visualizing the propagation of in-plane guided waves or through-the-thickness waves, in the time or frequency domain, based on 3D data processing [10].
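For the free-plate case of Figure 2(a), the phase-velocity branches can be located by scanning the Rayleigh-Lamb characteristic functions for sign changes. The sketch below does this with nominal bulk-wave speeds for aluminum (assumed handbook values); the immersed cases require the fluid-loaded, leaky dispersion relation instead, and the group-velocity curves of Figure 2 follow from numerically differentiating each branch.

```python
import numpy as np

def lamb_phase_velocities(f_hz, d, cl, ct, cp_grid=None):
    """Phase-velocity roots [m/s] of the Rayleigh-Lamb equations at one
    frequency for a traction-free plate of thickness d with bulk
    longitudinal/shear speeds cl and ct. Returns (symmetric, antisymmetric)."""
    h = d / 2.0
    w = 2.0 * np.pi * f_hz
    if cp_grid is None:
        cp_grid = np.linspace(300.0, 12000.0, 8000)

    def char(cp, symmetric):
        k = w / cp
        p = np.sqrt(complex((w / cl) ** 2 - k ** 2))
        q = np.sqrt(complex((w / ct) ** 2 - k ** 2))
        if symmetric:
            f = ((q**2 - k**2) ** 2 * np.sin(q * h) * np.cos(p * h)
                 + 4 * k**2 * p * q * np.cos(q * h) * np.sin(p * h))
        else:
            f = (4 * k**2 * p * q * np.sin(q * h) * np.cos(p * h)
                 + (q**2 - k**2) ** 2 * np.cos(q * h) * np.sin(p * h))
        # f is purely real or purely imaginary depending on whether cp lies
        # above or below the bulk speeds, so this sum is a continuous
        # real-valued characteristic function
        return f.real + f.imag

    roots = ([], [])
    for idx, symmetric in enumerate((True, False)):
        vals = np.array([char(cp, symmetric) for cp in cp_grid])
        flips = np.where(vals[:-1] * vals[1:] < 0.0)[0]
        roots[idx].extend(0.5 * (cp_grid[i] + cp_grid[i + 1]) for i in flips)
    return roots

# 3 mm aluminum plate at 100 kHz: expect only the S0 and A0 fundamental modes
print(lamb_phase_velocities(100e3, 3e-3, cl=6320.0, ct=3130.0))
```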
In the experiments, when the laser beam impinged on the surface of the immersed target specimen, an ultrasonic wave was created at the affected point. The wave propagated over the specimen and reached the LDV. The time-domain ultrasonic wave measured by the LDV was amplified, band-pass filtered between 40 kHz and 140 kHz, and then stored in the PC. The experimental case studies were performed for three different cases: a free plate and plates immersed in water and in glycerin, as shown in Figure 4.

Ultrasonic Wave Propagation Imaging in Submerged Plates
The proposed system for submerged structural inspection was able to generate UWPI video clips within 1 s after scanning. Figure 5 shows the freeze-frame at 53.5 µs extracted from the video clip as the UWPI result for the free plate. Figures 6 and 7 show the freeze-frames at 61 µs and 63 µs extracted from the videos as the UWPI results for the water- and glycerin-immersed plates, respectively. Since the proposed system was designed with both the excitation and sensing laser beams delivered remotely and capable of penetrating the liquid, the UWPI results were successfully obtained. All the freeze-frames in Figures 5-7 were taken at the moments when the maximum ultrasonic amplitudes appeared at the crack location. As a result, it was verified that the velocity of wave propagation was reduced in the immersed plates, due to the high densities of the surrounding fluids (ρair = 1 kg/m³, ρwater = 1,000 kg/m³, and ρglycerin = 1,258 kg/m³). In addition, as shown in Figures 6 and 7, which include the results for different liquid depths (d) of 10 mm, 40 mm, and 70 mm, the propagation time was not affected by the tested depth of the liquid. In other words, even a 10 mm deep surrounding liquid can be considered an infinite surrounding medium.

In addition to the successful UWPI in the submerged specimens, the back surface crack in the water- or glycerin-immersed specimen was visualized in the form of a sudden phase change and a high peak-to-peak amplitude at the crack location, (50, 50), in the freeze-frames of Figures 7 and 8, respectively. However, in contrast to the free plate, a low signal-to-noise ratio (SNR) problem was identified in the immersed conditions, because of the effect of the surrounding liquid. As studied in Section 2.2, the surrounding liquid causes leaky Lamb waves. As concluded from Figures 6 and 7, the energy losses of the excitation and sensing laser beams within the liquid were negligible, because the SNR did not change considerably when the depth of the liquid was increased. For this reason, repeat scanning technology [11] was implemented in the proposed system for submerged structural inspection, to increase the SNRs to a level similar to that of the structure in air. The repeat scanning technique can play an important role in real-world submerged structural inspection. In the experiments, the scanning area was repeatedly scanned 10 times. The ten signals obtained at each scanning grid point were averaged, and the averaged waveforms were used as the input data for the UWPI algorithm [12] to generate the ultrasonic wave propagation video clips.

Comparing Figures 6(a) and 8(b), and Figures 7(a) and 8(c), the respective SNRs under the water- and glycerin-immersed plate conditions were greatly improved. The SNR changes before and after the repeat scanning are summarized in Table 1, where the SNR was evaluated in the way presented in Figure 9.
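The repeat-scanning step amounts to coherent averaging of the ten scans followed by an amplitude-ratio SNR evaluation. A minimal sketch follows, assuming the 20·log10 amplitude-ratio definition indicated in Figure 9 and an array layout of (repeats × grid points × time samples), both of which are assumptions of this sketch.

```python
import numpy as np

def average_repeat_scans(scans):
    """Coherent averaging of repeated scans: `scans` has shape
    (n_repeats, n_grid_points, n_time_samples). Averaging n realizations of
    incoherent noise improves the amplitude SNR by roughly sqrt(n)."""
    return np.mean(scans, axis=0)

def snr_db(waveform, signal_slice, noise_slice):
    """SNR in dB from peak amplitudes in a signal window and a noise window."""
    a_sig = np.max(np.abs(waveform[signal_slice]))
    a_noise = np.max(np.abs(waveform[noise_slice]))
    return 20.0 * np.log10(a_sig / a_noise)

# Toy usage: 10 repeats, 1 grid point, 2000 samples of noise plus a pulse
rng = np.random.default_rng(0)
scans = rng.normal(0.0, 1.0, (10, 1, 2000))
scans[:, 0, 1000:1050] += 5.0
avg = average_repeat_scans(scans)
print(snr_db(avg[0], slice(1000, 1050), slice(0, 500)))
```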
On the other hand, the SNR improvement in the free plate by repeat scanning was not considerable, because the original raw signals already had a low noise level. As compared among Figures 5, 8(b), and 8(c), the repeat scanning made the SNRs in the water- and glycerin-immersed plate conditions reach levels similar to that in the free plate.

Figure 10 shows the amplitude distributions along the wavefront (s-axis) at the moments of maximum ultrasonic amplitude at the crack location, as indicated in Figure 8. The s-axis is formed along a locus of constant distance from the sensing point. The artificial crack length of 12 mm was estimated at 11.54 mm in the free plate, as shown in Figure 10(a). As also presented in Figures 10(b) and 10(c), the 12 mm long crack was evaluated by the proposed system as 11.29 mm in the water-immersed plate and 10.87 mm in the glycerin-immersed plate. These estimates were based on a threshold of two standard deviations above the noise mean.

Figure 8 shows the spatial-domain freeze-frames of the UWPI videos, while Figure 11 shows the time-domain signals. Since the UWPI freeze-frames of Figures 5-7 show the moments of the maximum ultrasonic amplitudes at the crack location, the freezing times also imply the relative ultrasonic times-of-flight in the free plate and the water- and glycerin-immersed plates, respectively. This information was used to calculate the experimental group velocities for the three different cases, which were determined to be 2.35 m/ms, 1.82 m/ms, and 1.72 m/ms, respectively. These results were comparable to the theoretical values from the wave dispersion curves depicted in Figure 2, where the central frequency for each case was estimated based on the fast Fourier transform and the Hilbert transform, as shown in Figure 11(b). The theoretical group velocities at 44 kHz for the free plate, 60 kHz for the water immersion, and 44 kHz for the glycerin immersion were 2.32 m/ms, 1.90 m/ms, and 1.69 m/ms, respectively.
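The crack-sizing rule described above (the amplitude along the s-axis exceeding the noise mean by two standard deviations) can be sketched as follows; the amplitude profile, noise window, and 0.5 mm spacing are inputs taken from the experiment description, not the study's actual code.

```python
import numpy as np

def crack_length(amplitudes, ds_mm, noise):
    """Estimate crack length from a peak-to-peak amplitude profile sampled
    along the wavefront (s-axis). The damage extent is the contiguous region
    around the maximum where the amplitude exceeds the noise mean plus two
    standard deviations, as in Figure 10."""
    threshold = np.mean(noise) + 2.0 * np.std(noise)
    above = np.asarray(amplitudes) > threshold
    i = int(np.argmax(amplitudes))
    if not above[i]:
        return 0.0
    lo = i
    while lo > 0 and above[lo - 1]:
        lo -= 1
    hi = i
    while hi < len(above) - 1 and above[hi + 1]:
        hi += 1
    return (hi - lo + 1) * ds_mm

# ds_mm = 0.5 for the 0.5 mm scan interval used in the experiments
```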
Conclusion
In this paper, a noncontact laser UPI system for submerged structural crack visualization was proposed. A 532 nm Q-switched solid-state laser and a 633 nm laser Doppler vibrometer were integrated into the system for remote excitation and sensing, respectively. The tested specimen was an aluminum plate with dimensions of 400 mm × 400 mm × 3 mm, which encompassed a back surface crack of 12 mm × 1 mm × 2 mm in the middle of the plate. Three cases were studied: a free plate, a water-immersed plate, and a glycerin-immersed plate. First, theoretical wave dispersion curves were plotted to understand the theoretical difference between the free and immersed plates. Then, wave propagation imaging of the submerged plates was successfully performed by the proposed system, in which a laser mirror scanner and an LDV were used for excitation laser scanning and noncontact sensing. The detected waves in the immersed plates showed delays in arrival time and reductions in amplitude compared to the free plate, because of the surrounding liquid. In addition, the SNR deteriorated in the submerged plates because of the leaky Lamb wave. Therefore, the repeat scanning technique was incorporated into the system to increase the SNR up to a level similar to that of a single scan of the free plate, and to prepare for real-world applications that would involve more complex and thicker immersed structures. Finally, the proposed system successfully visualized the wave propagation and demonstrated damage evaluation capability for the submerged structural back surface crack. The 2 mm deep and 12 mm long back surface crack was evaluated by the proposed system as 11.29 mm in the water-immersed plate and 10.87 mm in the glycerin-immersed plate.

Figure 1: Geometry of a three-layer flat plate system showing the partial waves in each layer (L+−, SV+−, and SH+−) that combine to produce a guided wave [7].
Figure 2: Group velocity dispersion curves of a 3 mm thick aluminum plate: (a) free plate, (b) immersed in water, and (c) immersed in glycerin.
Figure 3: Experimental setup: (a) photo and (b) schematic of the UPI system for submerged structural inspection and (c) specimen with a back surface crack for immersion.
Figure 4: Experimental model in different conditions: (a) free plate in air and (b) immersed plate.
Figure 6: Ultrasonic wave fields at 61 µs in the water-immersed plate according to the water depth: (a) d = 10 mm, (b) 40 mm, and (c) 70 mm (the same color scale for (a) to (c)).
Figure 9: Typical time-domain waveforms extracted at (20, 25) in Figures 6(a) and 8(b) for SNR evaluation (waveform comparison after single scan and 10× repeat scan for the water-immersed plate condition) and the definition used for the SNR calculation, SNR = 20 log10(Asignal/Anoise).
Figure 10: Crack length evaluation with the wave fields obtained in (a) the free plate (single scan), (b) the water-immersed plate (10× scan), and (c) the glycerin-immersed plate (10× scan).
Figure 11: Signals extracted at the impinging point at the center of the crack: (a) time domain and (b) frequency domain.
Table 1: SNR comparison related to immersion and repeat scanning.
Slow slip event in the focal area of the 1975 Kurile tsunami earthquake inferred from unusual long-term seismic quiescence
In subduction zones, slow slip events (SSEs) have been observed in the portion deeper than the downdip edge of the seismogenic zone. However, because the region near the trench axis is far offshore from geodetic networks on land, shallow SSEs there are hardly ever observed. Although less quantitative than seafloor geodetic observation, a method for inferring shallow SSEs based on seismic quiescence is presented in this study. An unusual decrease in the occurrence rate of M ≥ 5.0 earthquakes was found in the southwestern Kurile Islands. The occurrence rate was ∼1.3 events/year between 1977 and around 2004, and no earthquake was observed during the 16 years after 2004. The spatial pattern of the seismic quiescence can be explained qualitatively by the Coulomb failure stress change due to a shallow SSE whose fault plane is on the upper boundary of the subducting Pacific plate in the focal area of the 1975 tsunami earthquake.

Introduction
Two megathrust earthquakes occurred in 1969 and 1975 at almost the same place in the southwestern Kurile Islands subduction zone. The 1969 earthquake was a typical high-frequency earthquake in this region, whereas the 1975 earthquake was a tsunami earthquake that caused a tsunami much higher than predicted from the magnitude determined from short-period seismic waves (Sapporo District Meteorological Observatory 1976; Abe 1989). The tsunami source area of the 1975 event is closer to the trench axis than that of the 1969 event (Hatori 1975), and the aftershock distribution according to the earthquake catalog of the Japan Meteorological Agency also supports the near-trench location of the 1975 event. Fukao (1979) presented a hypothesis that shear stress was loaded on the shallower plate interface after the coseismic faulting of the 1969 event and that the 1975 event was triggered trenchward by this steep accumulation of stress. Ioki and Tanioka (2016) conducted a tsunami waveform inversion and found that the large slip area of the 1975 event is closer to the trench axis than that of the 1969 event. This result supports Fukao's hypothesis.

A slow slip event (SSE) occurred in September and October 2014 at the Hikurangi subduction margin offshore New Zealand (Wallace et al. 2016). Using data from absolute pressure gauges (APGs), they found that the fault plane of the SSE was very close to the trench axis and extremely shallow (within 2 km of the seafloor). In subduction zones, many previous studies have reported deep SSEs at the downdip limit of the seismogenic zone, i.e., at depths of 30-40 km (Schwartz and Rokosky 2007). Compared with the deep SSEs, the Hikurangi SSEs are on an obviously shallow plate boundary with low temperature and low pressure. Moreover, a tsunami earthquake occurred in the SSE area in 1947 (Bell et al. 2014).

In the Hikurangi region, SSEs occur in the focal area of the tsunami earthquake, but what about the Kurile Islands? Since there is no APG network, it is not possible to directly measure seafloor deformation around the focal area of the earthquakes of 1969 and 1975. Instead of geodetic measurement, I inferred SSEs based on the following idea: when SSEs occur, stress is transferred to the surrounding faults, and consequently the long-term seismic activity is expected to change. There are previous studies that estimate SSEs from changes in seismic activity (e.g., Ogata 2007; Kumazawa et al.
2010). Immediately after a megathrust interplate earthquake, the extensional stress in the downdip direction of the slab decreases within the descending oceanic plate near the downdip edge of the megathrust seismic fault, and downdip-extension-type intraslab earthquakes decrease (Lay et al. 1989; Taylor et al. 1996). In this study, first, a decrease in the occurrence rate of earthquakes, i.e., seismic quiescence, was searched for. Second, a fault model of the SSE was constructed so that the spatial pattern of the change in the Coulomb failure stress (ΔCFS) matches that of the seismic quiescence. Finally, I show that an earthquake swarm was observed near the SSE fault and that it is consistent with the SSE model.

Data
Earthquakes that occurred between 1 January 1964 and 30 September 2019, with body wave magnitude mb ≥ 5.0 and hypocentral depth h ≤ 60 km, were selected from the earthquake catalog published by the International Seismological Centre (ISC) (Fig. 1). Following Katsumata and Zhuang (2020), the study area is a circle with a radius of 882 km centered at 46.71°N and 154.33°E. Katsumata and Zhuang (2020) confirmed that earthquakes with mb ≥ 5.0 and h ≤ 60 km were detected and located without fail in the study area between 1964 and 2019. Although the ISC is in the process of rebuilding the catalog (Storchak et al. 2017), I used the old data in the present study to maintain the temporal homogeneity of the earthquake catalog.

Method
In this study, the PMAP method (Katsumata and Zhuang 2020) was used to investigate the seismic quiescence. Since Katsumata and Zhuang (2020) described the PMAP method in detail, only an outline of the method is given here. Suppose that N earthquakes were observed during a period of T years, and let ti (1 ≤ i ≤ N) be the origin time of the i-th earthquake. Assuming that the earthquake occurrence follows a stationary Poisson process, the probability PN(0) that no earthquake occurs between ti and time t (ti < t < ti+1) is

PN(0) = exp[−(N/T)(t − ti)].    (1)

In the actual calculation, the procedure is as follows. First, the study area is divided into grids and N earthquakes are selected around each node. Second, PN(0) is calculated and its minimum value is searched for while changing N from 5 to 40. Accordingly, the P-value at node (x, y) at time t is defined as

P(x, y, t) = min{5 ≤ N ≤ 40} PN(0),    (2)

where x ranges from 143 to 163°E with an interval of 0.1°, y ranges from 40 to 54°N with an interval of 0.1°, and t ranges from 1977.7 through 2019.7 with an interval of 0.1 years. When selecting the N earthquakes around the node (x, y), the earthquakes inside a circle with a radius of r km centered on the node are selected. The higher the seismicity, the smaller the r. Since r designates the spatial resolution in the search for seismic quiescence, this circle with radius r is defined as the resolution circle. If r > 50.0 km for N = 5, the P-value was not calculated at that node.

Results
The P-value was calculated in the study area at 1,426,348 nodes, i.e., 3,388 spatial × 421 temporal nodes. From the P-value maps calculated every 0.1 years from 1977.7 to 2019.7, four representative times are shown in Fig. 2.
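Under the reconstruction of Eqs. (1) and (2) above, the per-node P-value can be sketched as below. The spatial selection is simplified to sorting events by epicentral distance from the node, with the 50 km cap mirroring the resolution-circle rule; this is an illustrative reading of the method, not the authors' code.

```python
import numpy as np

def pmap_p_value(dists_km, times_yr, t_now, T, n_min=5, n_max=40, r_max=50.0):
    """P-value at one node: for each N, take the N epicentrally nearest
    events, estimate a stationary Poisson rate N/T, and evaluate the
    probability of zero events in the current quiet interval (Eq. 1);
    the P-value is the minimum over N (Eq. 2)."""
    order = np.argsort(dists_km)
    d = np.asarray(dists_km)[order]
    t = np.asarray(times_yr)[order]
    p_best = 1.0
    for n in range(n_min, min(n_max, len(t)) + 1):
        if d[n - 1] > r_max:          # resolution circle would exceed 50 km
            break
        t_sel = np.sort(t[:n])
        t_sel = t_sel[t_sel <= t_now]
        if t_sel.size == 0:
            continue
        gap = t_now - t_sel[-1]        # time since the most recent event
        p_best = min(p_best, np.exp(-(n / T) * gap))
    return p_best
```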
Small P-values per se are not rare (see Additional file 1: Fig. S1): for example, there are more than 30,000 nodes with a P-value of 0.01 or less, and they are observed frequently. The number of nodes with a P-value of 0.00002 or less, however, decreases to 295, and these are divided into three clusters. As a result, I found two nodes, 1 and 1', with P = 1.57 × 10⁻⁶, which is the smallest value among the P-values calculated (see Additional file 1: Table S1). Since nodes 1 and 1' are close to each other, it is appropriate to recognize them as one seismic quiescence; hereafter, only node 1 will be discussed. The number of earthquakes included in the resolution circle of node 1 is N = 36, and the distribution of their epicenters is plotted on the map in Fig. 3. The seismic quiescence area is defined as the area within the resolution circle. The 36 earthquakes occurred between 2 March 1978 and 10 February 2004, and the occurrence rate is 1.4 events/year. No earthquake was observed between 10 February 2004 and 30 September 2019, and this period was recognized as the seismic quiescence period (see Additional file 1: Figs. S2, S3). The epicenters in the seismic quiescence area were compared with the coseismic slip distributions of past great earthquakes presented by Ioki and Tanioka (2016). In the case of the 1975 tsunami earthquake, the epicenters in seismic quiescence area 1 are distributed around the downdip edge of the subfaults that have a large slip during the main shock. The second smallest P-value was 4.79 × 10⁻⁶, observed at node 2, and the third smallest P-value was 1.67 × 10⁻⁵, observed at node 3 (see Additional file 1: Table S1). Compared with the coseismic slip distribution of the largest aftershock of the 1963 Kurile earthquake presented by Ioki (2013), both seismic quiescence areas 2 and 3 are included in the focal area of the 1963 aftershock. It is noteworthy that both the 1963 aftershock and the 1975 earthquake were tsunami earthquakes.

Statistical significance of seismic quiescence
The statistical significance of the seismic quiescence was estimated by a numerical simulation using earthquake catalogs created with the ETAS model. The simulation procedure is as follows. First, a synthetic earthquake catalog including background and cluster activities is produced by assuming the ETAS parameters obtained in this study. Second, after declustering, the P-values are calculated using the same analyses as those applied to the actual earthquake catalog. Finally, the minimum P-value is searched for among the calculated P-values. This simulation procedure was repeated 1,000 times, and the distribution of the minimum P-value was obtained (see Additional file 1: Fig. S4). When the entire Kurile Islands region is observed for 42 years, a seismic quiescence of P ≈ 0.0001 is not unusual according to the simulation result. However, log10 P = −5.804 for node 1, and the number of simulated cases with log10 P ≤ −5.804 is 39; therefore, the by-chance rate is 39/1000 = 3.9%. Since the by-chance rate is smaller than 5% at node 1, seismic quiescence 1 is not likely to have occurred by chance. On the other hand, the by-chance rate is rather large at nodes 2 and 3, so seismic quiescences 2 and 3 are statistically less significant than seismic quiescence 1.
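The logic of this by-chance test can be illustrated with a much cruder stand-in for the ETAS simulation: plain stationary Poisson catalogs with no clustering or spatial grid, asking how often the longest inter-event gap alone yields a P-value as small as the observed one. The numbers below (36 events, 42 years) are taken from the text; everything else is an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def by_chance_rate(n_events=36, T=42.0, p_obs=1.57e-6, n_sims=1000):
    """Fraction of synthetic stationary Poisson catalogs whose longest gap
    (including the open interval at the catalog end) produces a P-value
    at or below the observed one."""
    hits = 0
    for _ in range(n_sims):
        times = np.sort(rng.uniform(0.0, T, n_events))
        gaps = np.diff(np.concatenate(([0.0], times, [T])))
        p_min = np.exp(-(n_events / T) * gaps.max())
        if p_min <= p_obs:
            hits += 1
    return hits / n_sims

print(by_chance_rate())
```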
Inferences about a slow slip event near the trench axis
Hypocenter relocation using HYPODD
For constructing SSE models based on the seismic quiescence, it is very important to use reliable hypocenter locations. Therefore, the hypocenters in seismic quiescence area 1 were relocated by a double-difference earthquake location method. Since seismic quiescences 2 and 3 have low statistical significance, only seismic quiescence 1 is considered here. The location method, known as HYPODD, was developed by Waldhauser and Ellsworth (2000). There are previous studies that used HYPODD to determine hypocenters using data from worldwide seismograph networks (Waldhauser and Schaff 2007; Pesicek et al. 2010). A subroutine of HYPODD calculates traveltimes assuming a horizontally layered velocity structure. In this study, this subroutine was replaced by the iaspei-tau package (Kennett and Engdahl 1991; Snoke 2009), which calculates traveltimes assuming a radially stratified velocity structure of the Earth. The P-wave arrival times in the ISC bulletin were used, as observed at 273 seismographic stations within 5,000 km of seismic quiescence area 1 (see Additional file 1: Fig. S5). The maximum separation of event pairs was 30 km, and 4,678 double-difference data sets were produced from the hypocenters and origin times in the ISC bulletin. The standard 1-D Earth model iasp91 was assumed as the P-wave velocity structure. As a result, the rms of the residuals of the 36 earthquakes was reduced by 21%, from 1.67 to 1.32 s, and the hypocenters shifted slightly (see Additional file 1).

Figure 3 caption (partial): a ... (Ioki and Tanioka 2016). Twelve squares between 148 and 152°E are subfaults of the largest aftershock of the 1963 Kurile earthquake (Ioki 2013). The subfaults in grey have a coseismic slip larger than 1 m. Three circles numbered 1, 2, and 3 are the seismic quiescence areas listed in Additional file 1: Table S1. Red crosses show the epicenters of earthquakes that occurred during times other than the seismic quiescence period within each area. The solid lines along the Kurile Islands indicate the trench axis (Bird 2003). b The coseismic slip of the 1969 Kurile earthquake (Ioki and Tanioka 2016) was compared with the seismic quiescence; the notation of symbols is the same as in a.

Two SSE fault models were constructed based on the hypocenters relocated by HYPODD. The fault parameters of the two models are listed in Additional file 1: Table S2. As mentioned in the introduction, many previous studies have reported deep SSEs at the downdip limit of the seismogenic zone, and some authors have suggested the existence of shallow SSEs near the trench axis. The deep and shallow SSE models are compared in the present study. Model 1 represents a shallow SSE on the plate boundary near the trench axis. The fault area of model 1 corresponds to subfaults 6 and 7 of the 1975 tsunami earthquake defined by Ioki and Tanioka (2016), in which they found a large coseismic slip. Model 2 represents a deep SSE on the plate boundary, an extension of the 1969 earthquake fault in the downdip direction. The fault slip was assumed to be 1 m for both models 1 and 2. Since the purpose is to compare the positive and negative spatial patterns of ΔCFS, any amount of slip can be used here. The frictional coefficient was assumed to be 0.4 for models 1 and 2.
Earthquakes that are affected by ΔCFS are called receivers. Christova (2015) conducted a stress inversion using focal mechanisms of intraslab earthquakes shallower than 60 km within the Pacific plate in the southern Kurile Islands and found a σ1-axis with strike = 127° and dip = 20°, and a σ3-axis with strike = 288° and dip = 69°. Here σ1 and σ3 are the maximum and minimum principal stresses, respectively. In this region, σ3 is predominantly oriented in the downdip direction along the subducting plate. When SSEs take place, σ1 and σ3 either decrease, increase, or remain unchanged, depending on the location. To estimate the change in σ1 and σ3, I assumed that the σ1- and σ3-axes match the P- and T-axes of the focal mechanism of the receiver, respectively. Consequently, the focal mechanism of the receiver for calculating ΔCFS was assumed to be strike = 31°, dip = 65°, and rake = 93°. Assuming the auxiliary plane of the focal mechanism instead, the result is almost the same. The centroid moment tensor (CMT) solutions for 16 of the 36 earthquakes are shown in Additional file 1: Fig. S7. These CMT solutions were determined by the Global CMT project (Dziewonski et al. 1981; Ekström et al. 2012).

Estimation of the SSE fault model
For model 1, the area of seismic quiescence 1 corresponds to that of negative ΔCFS (Fig. 4). This result means that, in seismic quiescence area 1, earthquakes had been occurring at an almost constant rate before 2004, the area then experienced negative ΔCFS due to the SSE, and no earthquake has occurred since 2004. This characteristic pattern matches not only in the plan view but also in the cross-sectional view. For model 2, as for model 1, the area of seismic quiescence 1 corresponds to that of negative ΔCFS. However, another negative ΔCFS area appears around the deep edge of the SSE fault, where no seismic quiescence was detected. Therefore, seismic quiescence 1 is better explained by model 1 than by model 2.

Earthquake swarm, increased seismicity, and seismic quiescence following the 1975 tsunami earthquake
Some previous studies reported that SSEs were accompanied by earthquake swarms (e.g., Vallée et al. 2013; Ozawa et al. 2019), and there have been attempts to estimate SSEs based on swarm activity (Llenos and McGuire 2011; Marsan et al. 2013). According to the earthquake catalog of the Japan Meteorological Agency, I found that 43 earthquakes with M ≥ 2.5 occurred in February 2003 in a rectangular area near the deep edge of the SSE fault of model 1 (Fig. 5). After that, the seismicity in the rectangular area gradually decayed, and 15 earthquakes occurred again within a single month in December 2019. A northwestward migration of epicenters was observed between 2004 and 2006, as shown in Fig. 5e. Considering the swarm activity, it is reasonable to think that the SSE started around February 2003. However, if so, there needs to be a reason why the start of the seismic quiescence was delayed by about one year. One possibility is that the rupture velocity of the SSE was very slow, and it took about a year for the stress to fall below the threshold of earthquake occurrence.
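For reference, the quantity mapped in Fig. 4 combines the shear and normal stress changes resolved on the receiver mechanism described above; a minimal sketch of that final step is given below, assuming the stress changes from the SSE dislocation model (e.g., an elastic half-space calculation, not reproduced here) are already available. The numerical values are placeholders.

```python
import numpy as np

def delta_cfs(delta_tau, delta_sigma_n, mu_prime=0.4):
    """Coulomb failure stress change on a receiver fault.

    delta_tau     : shear stress change resolved in the receiver slip direction (Pa)
    delta_sigma_n : normal stress change on the receiver plane, positive = unclamping (Pa)
    mu_prime      : effective friction coefficient (0.4 in this study)
    """
    return delta_tau + mu_prime * delta_sigma_n

# Hypothetical stress changes (Pa) at two receiver locations produced by the
# SSE fault model (receiver mechanism: strike 31, dip 65, rake 93 degrees).
delta_tau = np.array([-1.2e4, 0.8e4])
delta_sigma_n = np.array([-0.5e4, 0.3e4])
print(delta_cfs(delta_tau, delta_sigma_n))  # negative values inhibit failure
```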
An increase in seismicity rate was systematically searched for based on the same ISC data as described in the Section "Data" of this paper. As a result, I found that the seismicity rate increased in the red-colored area, where ΔCFS is positive, at the same time as the decrease in 2004 (see Additional file 1: Fig. S8). The seismicity rate is 0.590 events/year before 2004.0 and 0.754 events/year after 2004.0, indicating a factor of 1.28 increase. Although this increase is not statistically significant, the result is not inconsistent with the SSE hypothesis presented in this paper.

If the seismic quiescence was caused by the SSE near the trench axis, the 1975 tsunami earthquake should also have been followed by long-term seismic quiescence. To confirm this hypothesis, I searched for seismic quiescence systematically based on the same ISC data as described in the Section "Data" of this paper. As a result, I found that no earthquake occurred around the deeper edge of the seismic fault after the 1975 tsunami earthquake for approximately 5 years, from 1975.5 to 1979.8 (see Additional file 1: Fig. S9). However, note that this quiescence is not statistically significant.

Discussion
In general, the trench axis of a subduction zone is far from GNSS networks on land; therefore, it is difficult to detect SSEs that occur near the trench axis. There are only a few cases in which SSEs near the trench axis have been revealed by geodetic observations on the seafloor (Davis et al. 2015; Wallace et al. 2016; Araki et al. 2017). Instead of direct measurement on the seafloor, Yamashita et al. (2015) argued that earthquake swarms provide evidence for SSEs near the trench axis. Compared to geodetic observation, since the quantitative relationship between ΔCFS and the occurrence rate of earthquakes is not clear, estimation of SSEs based on seismic quiescence cannot quantitatively constrain some model parameters, e.g., the amount of fault slip. Despite this limited quantitative power, it has the advantage of being simple and inexpensive.

From a physical point of view, there is a fundamental question of whether shallow SSEs are allowed to occur near the trench axis. Yoshida and Kato (2003) conducted a numerical simulation based on rate- and state-dependent friction laws, assuming a simple model in which an unstable area with (a−b) < 0 and a conditionally stable area with (a−b) > 0 are adjacent along the upper boundary of the subducting plate. When seismic slip occurs in the unstable area, seismic slip also occurs in the conditionally stable area almost at the same time. In the unstable area, no aseismic slip is observed during the interseismic period. On the other hand, in the conditionally stable area, episodic aseismic slips are observed during the interseismic period, once the same amount of stress has accumulated as the stress drop during the main shock. This timing of SSE occurrence is consistent with the SSE model proposed in this study. If the 2003 earthquake swarm marks the beginning of the SSE, then 28 years elapsed since the 1975 tsunami earthquake. In this area, the Pacific plate is subducting beneath the North American plate at a rate of 0.08 m/year (DeMets et al. 1994), and thus a slip deficit of at most 2.24 m accumulates in 28 years. This amount of slip deficit is comparable with the slip of 1.6-2.2 m during the 1975 tsunami earthquake estimated by Ioki and Tanioka (2016). A model in which there is a conditionally stable area near the trench axis and an unstable area adjacent to it in the deeper part along the plate boundary is presumed to be quite common in subduction zones (Lay et al.
2012).Therefore, it is likely that SSEs occurring near the trench axis are common phenomena in subduction zones. Conclusions In the present study, the background seismicity including neither aftershock nor earthquake swarm has been investigated in the Kurile Islands subduction zone and unusual long-term seismic quiescence was found.I argue that the stress release associated with the shallow SSE near the trench axis is the most likely model to explain the seismic quiescence.Many previous studies found such kind of the long-term seismic quiescence which preceded megathrust earthquakes and has been interpreted as a reliable precursor (e.g., Katsumata 2011).According to a retrospective forecast experiment based on the seismic quiescence, when the alarm is issued to the 40% area of the whole area, the prediction rate is about 80% and this means that most of the seismic quiescence area will not experience subsequent large earthquakes (Katsumata and Nakatani 2021).I suggest that the seismic quiescence found in this study is also not to be a precursor of great earthquakes, but to be the one caused by SSEs, which frequently occur near the trench axis during the interseismic period. Fig. 1 Fig.1Earthquakes considered in the present study.a, c Earthquake distributions before and after the declustering process, respectively.Earthquakes were selected from the ISC catalog and occurred from 1 January 1964 to 30 September 2019, with m b ≥ 5.0 and depth ≤ 60 km.The study area is in a large circle with a radius of 882 km centered at 46.71°N and 154.33°E, and the solid line along the Kurile Islands indicates the trench axis(Bird 2003).b, d Space−time plots before and after the declustering process, respectively Fig. 2 Fig. 2 Time slices of the P−value distribution at year of (a) 2010.0,(b) 2015.0,(c) 2018.0, and (d) 2019.7.The nodes are not colored if the radius of the resolution circle is larger than 50 km.The smaller the P − value, the more significant the seismic quiescence.Three seismic quiescence areas are found and numbered as 1, 2, and 3 in (c) and listed on Additional file 1: TableS1.The solid line along the Kurile Islands indicates the trench axis(Bird 2003) Fig. 3 Fig. 3 Seismic quiescence areas and the past large earthquakes.a Eight squares between 146 and 149°E are subfaults of the 1975 tsunami earthquake(Ioki and Tanioka 2016).Twelve squares between 148 and 152°E are subfaults of the largest aftershock of the 1963 Kurile earthquake(Ioki 2013).The subfaults in grey have a coseismic slip larger than 1 m.Three circles numbered as 1, 2, and 3 are the seismic quiescence areas on Additional file 1: TableS1.Red crosses show the epicenter of earthquake that occurred during times other than the seismic quiescence period within each area.The solid lines along the Kurile Islands indicate the trench axis(Bird 2003).b The coseismic slip of the 1969 Kurile earthquake (Ioki and Tanioka 2016) was compared with the seismic quiescence.The notation of symbols is the same as a Fig. 4 Fig. 4 The Coulomb failure stress change (ΔCFS) caused by the fault motion of slow slip event (SSE).Rectangles indicate the SSE faults of (a) model 1 and (c) model 2. 
Crosses depict the 36 earthquakes in seismic quiescence area 1, which have been relocated using HYPODD. a, c are plan views. Colors represent ΔCFS at a depth of 40 km. Solid lines along the Kurile Islands indicate the trench axis (Bird 2003). 6 and 7 in (a) are the subfault numbers defined by Ioki and Tanioka (2016). b, d are vertical cross-sectional views along the broken line AB on the plan view. Gently curved black lines in (b) and (d) represent the upper boundary of the subducting Pacific plate (Nakanishi et al. 2004). Bold lines on the upper boundary of the subducting Pacific plate show the SSE faults assumed in this study. Fig. 5 Earthquake swarm activity. a Thin lines show the northeastern corner of the SSE fault assumed in Fig. 4a. A rectangle drawn with a bold line indicates the area where the earthquake swarm activity was observed. Dots within the rectangle are the epicenters determined by the Japan Meteorological Agency between 2000 and 2022. b The cumulative number of earthquakes within the rectangle in (a). c Magnitude-time plot of the earthquakes within the rectangle in (a). d Space-time plot along AB in (a). e Space-time plot along CD in (a). Broken red lines show the migration of earthquakes from 2004 to 2006.
5,594.4
2023-09-06T00:00:00.000
[ "Geology", "Physics" ]
Higgs decay mediated by top-quark with flavor-changing neutral scalar interactions
We explore the flavor-changing parameters mediated by a Higgs boson within the THDM-III context. In particular, we study the $h\to t^* c$ processes and check the strong suppression of the FC in the THDM-III context for low $t_\beta$ values. Our exploration of the $\chi_{ij}^{u}-\chi_{ij}^{d}$ parameter space shows the allowed regions for different $t_\beta$ values. We explored different modes for Higgs decays and considered the experimental constraints to obtain scatter plots for the FC parameters and some relevant decay modes. We expect future results to clarify the FC and its implications in the scalar sector.

Introduction
Different experiments have examined the Standard Model observables, which show good agreement with the theoretical results. Currently, the LHC probes nature at energy scales of order TeV, and the SM works very well to analyze the structure of matter. However, we still have some questions to solve: the matter-antimatter asymmetry, CP violation, and flavor-changing neutral currents (FCNC) mediated by gauge and scalar bosons. In this document, we shall discuss the last one, in which the neutral scalar boson can change the fermion flavor. This letter is organized as follows: section 2 describes the models and methods in the THDM-III context, section 3 presents the results for the h → t*c process, and section 4 contains the conclusions.

Models and Methods
We shall consider the THDM-III, whose most general potential is given in ref. [7]. In a general way, the Yukawa sector for the THDM-III is given by the Lagrangian density L_THDM-III of Eq. (2) [8], which describes the fermion-fermion-φ interactions, with φ = A0, H0, h0 being the pseudoscalar, the heavy Higgs, and the Standard Model Higgs; the superscript (zero) labels the mass eigenstates. One can rewrite the Yukawa couplings in terms of the fermion masses m_i and m_j and the dimensionless parameters χ^f_ij, which probe the flavor changing mediated by scalar bosons; we explore a simple model where the flavor changing is due to the χ^f_ij parameters, and the model stays simple if we reduce the number of those parameters, which are associated with the Yukawa couplings. We explore the THDM-III because it allows mixing between the fermion flavors at tree level. For the purposes of this paper, we introduce the B → X_s γ processes considering Γ(B → X_s γ) ≈ Γ(b → sγ), since the non-perturbative effects are small [9]. We consider the constraints coming from t → cV, b → sγ, h → l_i l_j, and h → γZ to show the correlation between the branching ratios of b → sγ and h → γZ, and we found excluded regions (e.g., figs. 3-4). Figs. 3 and 4 show a correlation between parameters of the model, considering different constraints coming from t- and b-quark and h decays [10]. The process represented in fig. 2 was proposed by ref. [2] to explore the flavor-changing modes.

Results
Our analysis shows that it could be possible to have FC if we consider each quark type separately, since the parameter space is wider for the u-quark type. We find that, for the lowest t_β values, the χ^d_ij − t_β parameter space is more suppressed than the χ^u_ij − t_β one (figs. 3-4). In figs. 5-6, the dark regions represent the highly allowed regions. We generated the parameter set randomly, as follows: −200 ≤ χ^{u,d}_ij ≤ 200, 0 ≤ t_β ≤ 100, 350 GeV ≤ m_{H,H±} ≤ 1000 GeV, while also considering the experimental bounds for the t, b, and h branching ratios at tree and one-loop level.
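A schematic of the random scan described above is sketched below; the constraint check is a placeholder, since the actual tree-level and one-loop THDM-III branching-ratio calculations are not reproduced here, and a single mass variable stands in for both m_H and m_{H±}.

```python
import numpy as np

rng = np.random.default_rng(42)

def passes_constraints(chi_u, chi_d, t_beta, m_H):
    """Placeholder for the experimental bounds on the t, b and h branching ratios
    (t -> cV, b -> s gamma, h -> l_i l_j, h -> gamma Z, ...). The real check would
    evaluate the THDM-III predictions at tree and one-loop level."""
    return True  # accept everything in this sketch

accepted = []
for _ in range(100_000):
    chi_u = rng.uniform(-200.0, 200.0)
    chi_d = rng.uniform(-200.0, 200.0)
    t_beta = rng.uniform(0.0, 100.0)
    m_H = rng.uniform(350.0, 1000.0)  # GeV; also used for m_{H+-} in this sketch
    if passes_constraints(chi_u, chi_d, t_beta, m_H):
        accepted.append((chi_u, chi_d, t_beta, m_H))

points = np.array(accepted)
# points[:, 0] vs points[:, 1] gives the chi^u_ij - chi^d_ij scatter plot
# (cf. figs. 3-4), with the density of accepted points marking the allowed regions.
```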
If we consider the W decays to ν_l l, then we obtain Br(h → llqq) of order 10^−4; therefore, our results are interesting when compared to the experimental results Br(h → llqq) ∼ 10^−2 − 10^−3, as shown in ref. [11] for m_h = 125 GeV. We found Br(h → t*c) ∼ 10^−3 for 1 ≲ t_β ≲ 20 (fig. 7), which is a very interesting channel to explore at the LHC or the next generation of colliders. Fig. 7 shows an interesting channel, Br(h → t*c), to probe new physics, and we expect the next experimental results on Br(h → γZ) to test the THDM-III as the simplest SM extension. Figure 6. Scatter plot for the branching ratios of the processes b → sγ versus h → γZ.

Conclusions
We have explored the h → t*c processes and checked the strong suppression of FC mediated by the Higgs boson. Our exploration of the χ^u_ij − χ^d_ij parameter space showed the allowed regions for different t_β values (see figs. 3-4). Besides, we explored different Higgs decay modes (see figs. 5-6). Our results showed Br(h → µτ) ≲ 10^−5 and Br(h → γZ) ∼ 10^−6 as long as Br(b → sγ) ≲ 10^−4, as shown in figs. 5-6; these are engaging values to explore at the LHC. Besides, we predicted Br(h → t*c) ∼ 10^−3 for 1 ≲ t_β ≲ 20, which is a feasible channel to explore our model at the LHC.
1,253
2017-10-24T00:00:00.000
[ "Physics" ]
Quasar Microlensing Statistics and Flux-ratio Anomalies in Lens Models Precise lens modeling is a critical step in time delay studies of multiply imaged quasars, which are key for measuring some important cosmological parameters (especially H 0). However, lens models (in particular those semi-automatically generated) often show discrepancies with the observed flux ratios between the different quasar images. These flux-ratio anomalies are usually explained through differential effects between images (mainly microlensing) that alter the intrinsic magnification ratios predicted by the models. To check this hypothesis, we collect direct measurements of microlensing to obtain the histogram of microlensing magnifications. We compare this histogram with recently published model flux-ratio anomalies and conclude that they cannot be statistically explained by microlensing. The average value of the model anomalies (0.74 mag) significantly exceeds the mean impact of microlensing (0.33 mag). Moreover, the histogram of model anomalies presents a significant tail with high anomalies (∣Δm∣ ≥ 0.7 mag), which is completely unexpected from the statistics of microlensing observations. Microlensing simulations neither predict the high mean nor the fat tail of the histogram of model anomalies. We perform several statistical tests which exclude that microlensing can explain the observed flux-ratio anomalies (although Kolmogorov–Smirnov, which is less sensitive to the tail of the distributions, is not always conclusive). Thus, microlensing cannot statistically explain the bulk of flux-ratio anomalies, and models may explore different alternatives to try to reduce them. In particular, we propose to complement photometric observations with accurate flux ratios of the broad emission lines obtained from integral field spectroscopy to check and, ideally, constrain lens models. Systems of multiple images of distant quasars formed by the gravitational field of intervening galaxies are one of the most useful "laboratories" in astrophysics and cosmology, allowing to study the structure of the quasar sources, the properties of matter in the lens galaxies, and the cosmological parameters, among other applications.A necessary step in these studies is the modeling of the lens, which needs to be very precise and robust, particularly for cosmographic applications like the prediction of gravitational time delays between the images, which may be used to solve the current tension in the determination of the Hubble constant, H 0 , from different methods (see, e.g., Di Valentino et al. 2023 and references therein).On the other hand, the number of observed systems which will need to be modeled is expected to increase considerably in the near future, which will prevent a detailed individual modeling, calling for (semi-)automated procedures (see, e.g.Shajib et al. 2019, Schmidt et al. 2023 andreferences therein). Common observable photometric/astrometric quantities of the lensed systems are the positions of the lens and images, and the fluxes of the images.Models must, therefore, take into account the structure of the lens (often including secondary lenses), the structure of the source, and a careful modeling of the point spread function (Koopmans et al. 2003, Suyu et al. 2010, Birrer et al. 2022).To avoid an unmanageable large number of unknowns, the lens mass distribution is usually parametrized (e.g. 
as a power law or Navarro-Frenk-White profile).Spectroscopic information is less frequently used in spite that it may be crucial to break important degeneracies present in lens modeling. Photometry based lens models are usually much more constrained by the astrometric observables (because they are accurate, with typical uncertainties of a few milliarcseconds) than by fluxes.Moreover, in the case of quasar images, broad-or narrow-band fluxes can be, in principle, affected by several sources of uncertainty like intrinsic variability of the source combined with time-delays, micro and millilensing, extinction, etc. (see Pooley et al. 2007, Yonehara et al. 2008, Motta et al. 2012 and references therein) which make them practically of no use in constraining the models.In fact, calculated models very often present strong differences between their predicted flux ratios and the observed ones (Witt et al. 1995, Mao & Schneider 1998, Chiba 2002, Metcalf & Madeau 2001, Dalal & Kochanek 2002, Schechter & Wambsganss 2002, Keeton 2002, Bradac et al. 2002, Metcalf & Zhao 2002, Moustakas & Metcalf 2003, Metcalf & Amara 2012, Xu et al. 2009, 2015, Gilman et al. 2017).As a statistically representative case, Shajib et al. (2019) find strong flux-ratio anomalies, which they attribute mainly to microlensing, in a sample of 13 quadruple imaged quasars studied to devise a general framework to model multiply imaged quasars (with the aim of processing the large number of systems to be discovered in deep wide-field surveys like the Wide-Field Infrared Survey Telescope, LSST or Euclid).If microlensing is the cause of the anomalies, continuum flux-ratios in the visible are basically useless to model lens systems, but if the impact of microlensing cannot explain the flux-ratio departures from the predictions, it would make sense to explore possible ways to improve the models (like the use of spectroscopic data).Microlensing magnification of the quasar images can be directly (independently from lens modeling) measured by using spectroscopic information of the quasar images.Microlensing is size sensitive (the larger the size, the smaller the impact of microlensing) and in the spectrum of each image we have information from different regions in the quasar: the continuum comes from the tiny accretion disk, which can be strongly affected by microlensing, while the broad emission lines come from the relatively large broad line region, which is rather insensitive to this effect (see, e.g, Wisotzki et al. 1993, Mediavilla et al. 2009, 2011 and references therein).Consequently, we can use the flux ratios corresponding to the emission lines as zero microlensing baseline to measure the impact of microlensing in the continuum flux ratios at a given epoch (single epoch microlensing measurements). Alternatively, microlensing can also be studied from photometric monitoring of the images of a lensed quasar.Subtracting the (time delay corrected) light curves of two images, we can obtain microlensing light curves.Owing to the presence of extinction and to the relatively reduced extension of the monitoring period we can not fully quantify the total amplitude of microlensing at a given epoch, but the amplitudes of the peaks of microlensing events should provide a conservative upper bound to single epoch microlensing. 
The first objective of this paper is, therefore, to obtain the histogram of observed flux-ratio anomalies induced by microlensing. In order to do so, we calculate the experimental differential microlensing magnifications for a sample of 44 measurements in 34 image pairs of 23 lens systems with available spectroscopic information (collected by Esteban-Gutiérrez et al. 2022 from Rojas et al. 2020, 2014, Motta et al. 2017, 2012, Jiménez-Vicente et al. 2015, and Mediavilla et al. 2009). We then compare this experimental histogram with the flux-ratio anomalies inferred from model predictions in two samples of quadruple lens systems (Shajib et al. 2019, Schmidt et al. 2023), to illustrate how the hypothesis that the flux-ratio anomalies are caused by microlensing can be tested. The present work does not intend to make any general statement about lens modeling. Instead, we just aim at providing some tools to detect potential problems in some models and, eventually, to improve them in some cases.

The paper is organized as follows. In §2 we present the histogram of observed microlensing magnifications (based on spectroscopic data), and compare it with the statistics of microlensing peak amplitudes (derived from microlensing light curves) and with the predictions of microlensing simulations. In §3 we collect histograms of microlensing model anomalies from the samples in Shajib et al. (2019) and Schmidt et al. (2023) and compare them with the histograms of observed microlensing magnifications and of microlensing peak amplitudes. In §4 we discuss possible observational strategies to derive useful constraints based on the flux ratios, either to cross-check the models or to improve them. In §5 we summarize the main conclusions. Finally, we devote an Appendix to exploring the relationship of flux-ratio anomalies with the degeneracy of lens models with respect to the radial mass distribution.

Estimates from emission lines
To directly measure the impact of microlensing on the images of lensed quasars, we can take advantage of the sensitivity of microlensing to the size of the source. Microlensing by a distribution of stars induces strong spatial changes ("microlensing roughness") in the otherwise uniform (smooth) magnification at the source plane. If the size of the source is large enough, the inhomogeneities of the magnification are spatially averaged and washed out. The spatial scale of the magnification roughness is related to the Einstein radius of the microlenses, which for a typical mass of 0.3 M⊙ and typical values for the redshifts of the lens (z_l = 0.5) and the source (z_s = 2) amounts to approximately 10 light-days. Consequently, the impact of microlensing can be potentially high for the quasar continuum source (a few light-days in size) but negligible for the Broad Line Region (BLR) (with sizes above a hundred light-days) (e.g., Jiménez-Vicente et al. 2022, Guerras et al. 2013, Fian et al. 2018). Thus, we can use the broad lines present in quasar spectra (in particular, the core of the lines) to determine the zero-microlensing baseline. For a pair of images of a lens system, we can define the relative microlensing magnification between them as the continuum ratio relative to the zero point defined by the emission line ratio. Expressed in magnitudes we can write (see, e.g., Mediavilla et al. 2009)

∆m_ij = (m_i − m_j)_continuum − (m_i − m_j)_lines.  (1)

Using the sample of 44 microlensing measurements collected by Esteban-Gutiérrez et al. (2022), obtained according to Eq. 1, we derive the histogram of (unsigned) microlensing magnifications (shown in Figure 1). The average rest wavelength for which these measured differential microlensing magnifications are estimated is λ ∼ 1700 Å (cf. Jiménez-Vicente et al. 2012). The histogram has a mean of ⟨|∆m_ij|⟩ = 0.33 ± 0.22. This is, in fact, an overestimate of the expected impact of microlensing, as all four single measurements in Eq. 1 are also affected by experimental uncertainties, which broaden the intrinsic histogram of microlensing magnifications. We have estimated directly the mean error, ⟨σ_∆m⟩, from the different data sources used: 0.13 ± 0.09 (Mediavilla et al. 2009), 0.11 ± 0.04 (Motta et al. 2011), 0.23 ± 0.02 (Rojas et al. 2014), 0.15 ± 0.13 (Motta et al. 2017), and 0.11 ± 0.07 (Rojas et al. 2020). The weighted average of the errors is 0.13. Notice that the experimental ∆m_ij also includes the differences in flux arising from intrinsic quasar variability combined with the time delay between images, which are expected to be small, especially for quads. On the contrary, as long as the continuum and the emission lines are observed at close wavelengths, we can assume that the ∆m_ij calculated using Eq. 1 are virtually free from extinction.

In the histogram of microlensing magnifications (Figure 1), doubles and quads are mixed. If we separate both groups, we obtain ⟨|∆m_ij|⟩_quads = 0.27 ± 0.22 and ⟨|∆m_ij|⟩_doubles = 0.40 ± 0.19. The results indicate that doubles exhibit slightly larger microlensing than quads (likely because doubles have larger time delays, which combined with intrinsic variability may increase flux-ratio anomalies; on the other hand, in doubles one of the images is also often close to the lens galaxy and, consequently, more prone to microlensing). Then, if we restrict the sample to quads, microlensing anomalies would be even slightly smaller. Some of the microlensing measurements in our sample correspond to different epochs of the same image pair. To avoid the impact of possible covariance between repeated measurements, we have replaced, for each image pair with more than one available measurement, all the measurements by their mean, finding negligible differences in ⟨|∆m_ij|⟩ for either quads or doubles.

Comparison with peak amplitudes of microlensing light curves
It is interesting to compare the values of microlensing magnifications obtained using the emission lines as baseline with the peak amplitudes of microlensing light curves. In Figure 1, we also present the histogram of microlensing peak amplitudes taken from Mediavilla et al. (2016). These amplitudes use the "flat" part of the light curve before or after the microlensing event as baseline. This is an idealization because, due to the relatively high optical depth, quasar light curves cannot generally be described as isolated microlensing events/peaks over a flat baseline. In fact, it is common that microlensing light curves do not present a well-defined flat region. Moreover, the peak or part of the baseline can fall in one of the (seasonal or incidental) gaps of the light curves. For this reason, the peak amplitude defined with respect to the left or right side of the peak can be different in some cases. We have always selected the largest one.
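For clarity, the following sketch evaluates Eq. 1 for a few hypothetical image pairs and summarizes the unsigned magnifications; the magnitudes used are illustrative, not measurements from the sample.

```python
import numpy as np

def delta_m(m_i_cont, m_j_cont, m_i_line, m_j_line):
    """Single-epoch microlensing magnification of image i relative to image j (Eq. 1):
    continuum magnitude difference referenced to the emission-line baseline."""
    return (m_i_cont - m_j_cont) - (m_i_line - m_j_line)

# Hypothetical continuum and line-core magnitudes for two image pairs.
pairs = [
    dict(m_i_cont=18.10, m_j_cont=18.90, m_i_line=18.30, m_j_line=18.75),
    dict(m_i_cont=19.45, m_j_cont=19.20, m_i_line=19.40, m_j_line=19.35),
]
dm = np.array([delta_m(**p) for p in pairs])
print(np.abs(dm), np.mean(np.abs(dm)), np.std(np.abs(dm)))
# For the real 44-measurement sample, the mean of |Delta m| is 0.33 +/- 0.22 mag.
```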
There is a great similarity between the histograms of peak amplitudes and of microlensing magnifications from emission lines, although an offset towards larger values of the histogram of microlensing peak amplitudes would be expected.The coincidence of the means of both histograms can result, in part, from the above mentioned overestimate in the mean value of microlensing inferred from the emission lines due to measurement uncertainties.Notice also that intrinsic variability is contributing to the microlensing magnifications from emission lines while it is not affecting to microlensing light curves which are obtained subtracting time delay corrected light curves of two images.Moreover, an underestimate in peak amplitudes can be produced because the true microlensing zero-point may fall below the "flat" regions of the microlensing light-curves taken as baseline3 , or because the maximum is in a seasonal gap. Finally, note that for source sizes comparable to the Einstein radius of the microlenses, lensed quasar images can be frequently engaged in microlensing events with gentle slopes and broad peaks of relatively small amplitude.In fact, taking into account the total monitoring time (310 years) of the light curves of the ensemble from Mediavilla et al. (2016), the total number of microlensing events detected in the ensemble (20) and the mean Einstein radius crossing time scale4 of 9.4 years, a 61% of the images will be engaged in a microlensing event at any time (slightly above the 50% estimate by Mosquera & Kochanek, 2011).Thus, many of the measurements from emission lines likely correspond to images undergoing a microlensing event with amplitudes not very different from that of the peak.Although, even with all these considerations in mind, the coincidence between the means obtained either from the line-emission or from the light-curves may remain questionable, the absence of a significant high magnification tail in both histograms is a very robust common result. Theoretical microlensing estimates using reverberation mapping sizes for the quasar source It is also possible to make a theoretical estimate of the expected impact of microlensing from simulations based on microlensing magnification maps.The key parameter in the simulations is the size of the continuum quasar source, which can be estimated around r s = 5 light − days from reverberation mapping studies (see, e.g., Edelson et al. 2015, Fausnaugh et al. 2016, Jiang et al. 2017, Esteban Gutiérrez et al. 2022 and references therein).Taking this value for r s , Esteban-Gutiérrez et al. ( 2022) calculate the probability distributions of microlensing magnifications corresponding to a population of stars, for all the objects in the sample used in the present work.As it can be observed (see the blue lines in their Figure A1), the impact of microlensing is concentrated around zero, with typically, σ(∆m) ≤ 0.4, and with a negligible tail above |∆m| > 1, in agreement with the emission-line based measurements. COMPARISON WITH MODEL FLUX RATIO ANOMALIES In order to compare the observed microlensing flux ratios with model predictions, we consider here the work of Shajib et al. 2019 (and its extension by Schmidt et al. 
2023, see below).These authors explicitly introduce the question of flux-ratio anomalies related to microlensing and provide a homogeneous data set which has been modeled in a very systematic way.On the other hand, this work has the interesting perspective of exploring semi-automated modeling to face the future massive data availability.In Figure 1 we include the histogram of frequencies of (unsigned) fluxratio anomalies obtained from Shajib et al. (2019) corresponding to a sample of 13 quads.We use this sample just as a test bench to illustrate how to check the impact of microlensing knowing that no general conclusion about lens modeling can be inferred from a particular set of models that the authors themselves consider susceptible of refinement in several ways.We hope that the present work can be one of them. In each quad we take image A as reference to compute the magnitude differences.We have used the data in the F475X filter from Shajib et al. (2019), which have the closest wavelength correspondence with the average rest wavelength of the microlensing measurements described in Section 25 .The mean of the histogram, ⟨|∆m models shajib |⟩ = 0.74, greatly exceeds the mean of the microlensing measurements from emission lines (⟨|∆m lines |⟩ = 0.33).On the other hand, comparison of the tails of the histograms, also reveals strong differences.The high magnification tail is very populated in the case of the model flux anomalies (36% of pairs with |∆m models shajib | ≥ 0.74 magnitudes) while there is only one case (2.2%) in the histogram of microlensing from emission lines, and two (10%) in the sample of microlensing peaks.Finally, we have computed several statistical tests that reject the hypothesis that this sample of flux ratio anomalies and the sample of direct microlensing measurements estimated from the emission lines have the same parent population (see Table 1).The same negative conclusion is reached using the sample of microlensing estimates from the light curves peaks (Table 1). Very recently, the sample by Shajib et al. (2019) has been enlarged with 16 additional systems by Schmidt et al. (2023), who compute new models for all the quads.In Figure 1 we include the histogram of anomalies corresponding to Schmidt et al. (2023), which presents an offset mean ⟨|∆m models schmidt |⟩ = 0.64 and a populated high magnification tail (30% of pairs with |∆m models schmidt | ≥ 0.74 magnitudes).Although the results confirm those obtained from Shajib et al. (2019) sample, the differences are slightly smaller.The performed statistical tests (see Table 1) reject the hypothesis that model flux ratio anomalies and observed microlensing anomalies come from the same underlying distribution.Only the Kolmogorov-Smirnov test (known to be less sensitive than the others, particularly to differences in the tails of the distributions) is inconclusive for the case of the comparison of the observed sample based on emission lines with the results by Schmidt et al. (2023). It is interesting to notice that the average redshifts of the lens galaxies (present sample: 0.53 ± 0. Therefore, from the statistical comparison, we can conclude that the flux ratio anomalies inferred from the lens models that we have taken as example, can not be attributed to microlensing, and that, consequently, there is room to improve the flux-ratio predictions of the models and reduce the bulk of the anomalies. 
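The two-sample comparisons summarized in Table 1 can be reproduced with standard routines; the sketch below uses placeholder samples (so the printed p-values are not those of the paper), and the three tests shown are only representative choices, since the exact battery of tests used is the one listed in Table 1.

```python
import numpy as np
from scipy import stats

# Hypothetical |Delta m| samples (magnitudes); the real ones are the 44
# emission-line microlensing measurements and the model flux-ratio anomalies
# of the quad samples.
microlensing = np.abs(np.random.default_rng(1).normal(0.0, 0.35, size=44))
model_anomalies = np.abs(np.random.default_rng(2).normal(0.0, 0.8, size=39))

ks = stats.ks_2samp(microlensing, model_anomalies)
mw = stats.mannwhitneyu(microlensing, model_anomalies)
ad = stats.anderson_ksamp([microlensing, model_anomalies])

print("Kolmogorov-Smirnov p-value:", ks.pvalue)
print("Mann-Whitney p-value:", mw.pvalue)
print("Anderson-Darling significance level:", ad.significance_level)
# Small p-values reject a common parent population; note that KS is the least
# sensitive of these to differences in the tails of the distributions.
```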
Statistical cross-check
According to the previous analysis, the flux-ratio predictions of the considered lens models do not pass the statistical cross-check based on quasar image observations. To estimate the true amplitude of the deviations of the model predictions, we can remove the expected average effect of microlensing, after which we are left with a mean model-predicted flux-ratio anomaly of 0.66 magnitudes for the Shajib et al. (2019) models (0.55 for the Schmidt et al. 2023 models), which can neither be attributed to microlensing nor to intrinsic source variability. A mean deviation of 0.66 magnitudes is so large that other causes frequently invoked to explain the flux-ratio anomalies can be confidently ruled out according to previous estimates from the literature (see Motta et al. 2012, Pooley et al. 2007). To confirm this directly from the data, we repeat the histograms removing 12 image pairs where extinction can play a significant role, finding that the high magnification tail remains. On the other hand, to check the possible impact of time delays on the flux-ratio anomalies, we have compared the histograms considering all the image pairs in the samples with histograms excluding pairs with time delays greater than 10 or 30 days, respectively, finding no significant differences among them. In fact, following the steps described in Yonehara (2008) and assuming a time delay of 30 days and an absolute demagnified magnitude of the sources in the -21 to -23 magnitude range, we find an uncertainty of just 0.26 magnitudes in the worst case.

Millilensing (by dark matter subhaloes, for instance) may also, in principle, contribute to flux-ratio anomalies. However, Pooley et al. (2007, 2012) find that X-ray anomalies are much larger than the optical ones. This result, confirmed by Jimenez-Vicente et al. (2015), indicates that the effect causing the anomalies is sensitive to the differences in size between the X-ray and optical sources, while both should behave as point-like sources under the lensing action of large-mass millilenses (Pooley et al. 2007, 2012). Based on a similar reasoning, Pooley et al. (2007) also exclude that changes in the smooth lens model component can simultaneously explain the anomalies in the X-rays and in the optical.

Then, the large anomalies are, indeed, an indication that there is room to improve the flux predictions of lens models. In particular, an examination of the specific procedure followed to fit the model for each system should be performed to analyze the origin of the discrepancies in the fluxes and their relationship with possible uncertainties in the time delay estimates for cosmographic studies. It is possible that with a modest increase in model sophistication the predictions of the flux ratios improve drastically (Ertl et al. 2023), although the impact of these changes on the time delays should, in any case, be examined. Although lens modeling of specific systems is outside the scope of this work, in Appendix A we explore flux-ratio anomalies under some simplifying assumptions which, while they may not reproduce the complexity of the real problem, may still provide some interesting insight.
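One simple way of removing the average microlensing contribution that is consistent with the numbers quoted above is a subtraction in quadrature; whether this is exactly the procedure referred to in the text is an assumption of this sketch.

```python
import numpy as np

def residual_anomaly(mean_model_anomaly, mean_microlensing):
    """Remove the average microlensing contribution in quadrature (assumed procedure)."""
    return np.sqrt(mean_model_anomaly**2 - mean_microlensing**2)

print(residual_anomaly(0.74, 0.33))  # ~0.66 mag, Shajib et al. (2019) sample
print(residual_anomaly(0.64, 0.33))  # ~0.55 mag, Schmidt et al. (2023) sample
```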
Individual model cross-check and modeling using (integral field) spectroscopic flux-ratios Perhaps the most interesting reflection to improve lens modeling is that accurate experimental determinations of intrinsic flux ratios (free from the effects of microlensing and extinction) obtained from the broad emission lines of quasars can be used to cross-check individual models8 .Moreover, for quads, the effects of variability combined with time delays between images can be reasonably controlled (in particular when a determination of the time delays is available).In principle, narrow emission lines, mid-infrared or radio emission may also be used to determine the intrinsic flux ratios free from microlensing, but as far as the emitting regions involved are much larger than the continuum source, their images might present different flux ratios depending on the shape, centroid, extension and location of the source respect to the macro caustic.Notice, also, that only the use of emission lines of wavelengths close to the continuum as baseline automatically cancels the effects of extinction (see Eq.1).Accurate flux-ratios free from these systematic effects can be used to select reliable systems for cosmographic studies, rejecting those systems with large anomalies. The emission lines of lensed quasars are relatively bright, and Integral Field Spectroscopy (IFS) combined with adaptive optics in large telescopes can be a very reasonable experimental possibility to simultaneously observe the emission in the continuum and in the lines of a large enough (∼ 50) sample of lensed systems as to estimate H 0 with a few percent precision9 .For instance, a 20 magnitude object (I filter) can be observed with HARMONI@ELT (Thatte et al. 2016(Thatte et al. , 2020)), with S/N ∼ 100 with a total time exposure ≲ 10 min, with an spatial resolution of ∼ 10 mas and a spectral resolution of 0.208 nm (adjacent wavelength slices could be co-added to further increase the S/N ratio).Thanks to the broad spectral range covered with HARMONI, several emission lines and continuum bands can be observed at once, and it is possible to perform a simultaneous fitting of the lens system images in all of them, significantly increasing the reliability and robustness of the analysis. Going a step further, can the flux ratios inferred from the emission lines be used not only to crosscheck individual models but also to effectively constrain them?Commonly adopted uncertainties for continuum based flux-ratios between images (of even 20%) make their use irrelevant as compared with that of the astrometry and, likely for these reason, the use of flux-ratios has been, in general, considered accessory.However, with IFS based flux-ratios with a few percents of relative uncertainties, the role of flux-ratios may be interesting to constrain the lens models10 . In the simple study performed by us to explore the impact of the radial dependence of the gravitational potential (see Appendix A), we see that, in most cases, flux ratios are much less sensitive than time delays to changes in the potential, although the impact in some image-pairs of some specific systems may be large enough as to help breaking the degeneracy.In any event, we have used a very simple model which, among other issues, does not take into account any complexity of the angular part.A more thorough exploration of lens modeling is needed to ascertain the real usefulness of precise flux ratios to constrain the models and improve the robustness of theoretical time delay estimates. 
SUMMARY AND CONCLUSIONS We use a sample of 44 measurements from 34 image pairs of 23 lensed systems with spectroscopic observations to obtain the histogram of microlensing magnifications, using the emission lines to define the non-microlensed baseline.This histogram can be used to perform a statistical cross-check of lens models comparing with the deviations of the predicted flux-ratios with respect to the observed ones. To illustrate this possibility we obtain the histogram of model flux-ratio anomalies (predicted minus observed flux-ratios) from Shajib et al. (2019) and Schmidt et al. (2023).The main conclusions are the following: 1 -The mean value of the model anomalies (⟨|∆m models |⟩ = 0.74) exceeds significantly the mean impact of microlensing (⟨|∆m lines |⟩ = 0.33).The histogram of model anomalies shows a significant tail (|∆m models | ≥ 0.7 magnitudes) not present in the histogram of directly measured microlensing magnifications.The histogram of peak amplitudes of microlensing events obtained from microlensing light curves, neither presents this extended tail.These results strongly disfavors the hypothesis that the model flux-ratio anomalies arise mainly from microlensing. 2 -Consequently, the remaining flux-ratio anomalies (after removing microlensing and intrinsic variability combined with time delay effects) of ⟨|∆m|⟩ = 0.6 magnitudes may be reduced by further refinements of the models, which are well outside of the scope of the present work.Using, nevertheless, an exploratory simple model, we find that the degeneracy of astrometric model fitting with the radial distribution of mass in the lens can account only for a relatively small part of the observed flux-ratio anomalies (departures from ellipticity of lens galaxies can play a more significant role). 3 -In principle, models can be cross-checked, not only statistically but also individually, using flux ratios from emission (mid infrared, radio, broad and narrow emission lines, for instance) coming from regions large enough as to be insensitive to microlensing.However, if the region is too large (including the dusty torus, the NLR or the radio jet, for instance), additional modeling of the source and of its (extended) images is needed.In this sense, the use of the relatively compact BLR can be less complex.We propose to use spectroscopic data, specifically based on integral field spectroscopy, to measure with current (SINFONI, MUSE) or future (HARMONI) instrumentation, accurate broad emission line fluxes, to obtain flux-ratios free from microlensing and extinction to check the models just to the experimental uncertainties of flux photometry. 4 -In most cases, the uncertainties associated to the possible impact of microlensing in the observed continuum flux ratios, have made them irrelevant in lens model fitting.The use of very accurate broad emission line flux-ratios to establish effective constrains in the models should be explored. 
Finally, we recommend the consideration of the flux-ratio anomalies as a quality check for the fitted models, and we advice to discard those systems with unexpectedly large anomalies.Although a large flux-ratio anomaly in an individual system is certainly not warranty of a wrong model, it may be a good warning signal which is worth taking into account.Given that ongoing and future surveys will produce large numbers of lensed systems suitable of being used for cosmographic studies, discarding a fraction of suspicious systems shall not damage the statistical quality of those studies.This research was supported by grants PID2020-118687GBC33 and PID2020-118687GB-C31, financed by MCIN/AEI/10.13039/501100011033.J.J.V. is also financed by projects FQM-108, P20 00334, and A-FQM-510-UGR20/FEDER, financed by Junta de Andalucía.V.M. acknowledges support from ANID Fondecyt Regular #1231418 and Centro de Astrofísica de Valparaíso. APPENDIX A. WHAT CAN BE LEARNT FROM MODEL FLUX ANOMALIES? The degeneracy of models based in astrometry with respect to the law describing the radial mass distribution in the lens galaxy is an often invoked difficulty of lens models to provide accurate estimates of the true time delays (see Kochanek, 2020, and references therein).We can explore here whether the flux-ratio anomalies can also be related to this degeneracy11 , estimating and comparing its impact in both quantities.With this exploratory aim, we can consider the singular isothermal ellipsoid potential (SIE) generalized to take into account a power law dependence with size, , where q 2 is the axial ratio.Expanding to first order this potential and adding an external shear, we can write, where the second term is analogous to the SIE quadrupole (see, e.g., Kochanek 2002).Using this potential, we fit the images and lens positions of the nine quadruple lens systems that Shajib et al. ( 2019) modeled with a single lens mass profile.We consider logarithmic slopes of the power-law in the range β = 0.5 to 1.5.Eight of the nine systems are very well fitted by our simple model for all the values of β, (χ 2 (β) ≤ 1), confirming the degeneracy of models based on astrometric data with respect to plausible radial dependences12 .Then, we compute the variation of flux ratios and time delays between images in the considered range of β.In Figure 2 we illustrate, for one of the lens systems (SDSS J0248+1913), the fractional deviation of both magnitudes (time delays vs. flux ratios) with respect to a fiducial model that we arbitrarily select for β = 1, corresponding to SIS+γ ϵ +γ ext . As it is shown in this Figure, the maximum deviation of the flux ratios ranges from 0% to ±25% (depending on the pair of images), while a larger maximum fractional deviation of ±50% is obtained for the time delays for all the three image pairs.Similar results (maximum fractional variations of the flux ratios between 0 and ±30% with a mean value of about 10% while the time-delays exhibit a much larger typical variation of about 50%) are derived considering all the image pairs of the lens systems in the sample when β is changed. 
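The comparison just described only requires the model-predicted flux ratios and time delays on a grid of β; a minimal sketch of the bookkeeping, with placeholder trends standing in for the actual lens-model refits, is given below.

```python
import numpy as np

# Placeholder grids: model-predicted flux ratios and time delays for one image
# pair as a function of the power-law slope beta (the real values come from
# refitting the SIE-like potential with external shear for each beta).
beta = np.linspace(0.5, 1.5, 11)
flux_ratio = 1.0 + 0.15 * (beta - 1.0)   # hypothetical trend
time_delay = 20.0 * beta                 # hypothetical trend, days

i_fid = np.argmin(np.abs(beta - 1.0))    # fiducial model: SIS + gamma_eps + gamma_ext
frac_dev_flux = (flux_ratio - flux_ratio[i_fid]) / flux_ratio[i_fid]
frac_dev_delay = (time_delay - time_delay[i_fid]) / time_delay[i_fid]

print("max |flux-ratio deviation|:", np.max(np.abs(frac_dev_flux)))
print("max |time-delay deviation|:", np.max(np.abs(frac_dev_delay)))
# In the paper the flux ratios typically change by ~10% (up to ~30%) over this
# beta range, while the time delays change by about 50%.
```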
In principle, these results show that the degeneracy in the radial mass distribution of the lens may be a common source of uncertainties for flux ratios and time-delays.However, the range of variability of flux ratios with β would only account for a small part of the measured model anomalies (⟨∆m⟩ = 0.6 magnitudes is equivalent to a fractional deviation of 60%).This indicates that other ingredients of the model (aside from the radial dependence of the lens potential) might be affecting the flux ratios13 .In fact, on top of the degeneracies on the radial dependence of the mass distribution, the angular dependence may also induce biasses in the models (e.g.Kochanek, 2021, Van de Vyvere et al. 2022, Gomer et al. 2022).Several recent works have indeed shown that assuming elliptical models can bias the estimate of H 0 up to 10% (see e.g.Gomer & Williams 2021;Cao et al. 2022) and flux-ratio anomalies have proven to be indicative of non-elliptical components in the mass distribution in some systems (e.g.Hsueh et al. 2016Hsueh et al. , 2017)).The anomalies virtually disappeared when the non-ellipticity is included in the models.Then, more complex models (and ancillary data) are needed to account for the observed anomalies in the flux-ratios and to explore possible correlations with uncertainties on the time-delays. Figure 1 .Figure 2 . Figure 1.Flux-ratio histograms corresponding to: single epoch microlensing obtained using the broad emission lines as reference (thick, black line), peaks in microlensing light curves (thin, blue line), models from Shajib et al. (2019) (red shaded histogram), and models from Schmidt et al. (2023) (green shaded histogram).Vertical lines show the mean values.The grey shaded region corresponds to one standard deviation around the mean for single epoch microlensing and microlensing peaks. Table 1 . p-values for several statistical tests Sample of flux-ratio anomalies from Shajib et al. (2019). 2 Sample of flux-ratio anomalies from Schmidt et al. (2023). 3Sample of microlensing estimates based on light-curves peaks. 4Sample of single-epoch microlensing measurements based on emission lines.See Figure 1.
7,574
2024-03-21T00:00:00.000
[ "Physics" ]
Confusion2Vec 2.0: Enriching ambiguous spoken language representations with subwords Word vector representations enable machines to encode human language for spoken language understanding and processing. Confusion2vec, motivated from human speech production and perception, is a word vector representation which encodes ambiguities present in human spoken language in addition to semantics and syntactic information. Confusion2vec provides a robust spoken language representation by considering inherent human language ambiguities. In this paper, we propose a novel word vector space estimation by unsupervised learning on lattices output by an automatic speech recognition (ASR) system. We encode each word in Confusion2vec vector space by its constituent subword character n-grams. We show that the subword encoding helps better represent the acoustic perceptual ambiguities in human spoken language via information modeled on lattice-structured ASR output. The usefulness of the proposed Confusion2vec representation is evaluated using analogy and word similarity tasks designed for assessing semantic, syntactic and acoustic word relations. We also show the benefits of subword modeling for acoustic ambiguity representation on the task of spoken language intent detection. The results significantly outperform existing word vector representations when evaluated on erroneous ASR outputs, providing improvements up-to 13.12% relative to previous state-of-the-art in intent detection on ATIS benchmark dataset. We demonstrate that Confusion2vec subword modeling eliminates the need for retraining/adapting the natural language understanding models on ASR transcripts. Abstract: The abstract is well-written and the contribution of the paper is clear. However, include a statistical evidence of the significantly better performance that is claimed eg: an xx% increase) Thank you for the suggestion. We have added the statistics under abstract. The following modification has been made (9 th line): "The results significantly outperform existing word vector representations when evaluated on erroneous ASR outputs, providing improvements up-to 13.12% relative to previous state-of-the-art in intent detection on ATIS benchmark dataset." Introduction: 1. "Although, there have been few attempts in leveraging information present in word lattices and word confusion networks for several tasks" -this sentence undermines the amount of work that has happened with word lattices and confusion networks. Even the references you have mentioned contain numerous citations. Rephrase the sentence clearly stating that the representations using lattices and confusions networks have been successful in multiple tasks, however, they have some limitations. As per your suggestion we have now rephrased the sentence as follows: "Prior attempts at leveraging information present in word lattices and word confusion networks have been successful for multiple tasks [12][13][14][15][16][17]. However, they have some limitations, as these prior works estimate the embedding in a supervised manner specifically trained with task specific labels. Consequently, the main downside is that the word representation estimated by such techniques are task-dependent and are restricted to a particular domain and dataset." 2. The motivations need to be referenced. 
For example, sentences like: • the acoustically ambiguous words tend to have more similar bag-of-character n-grams • subwords help model under-represented words more efficiently • subwords enable representations for out-of-vocabulary words Thank you for the comments. We have now added appropriate citations. Also, the sentence "the acoustically ambiguous words tend to have more similar bag-of-character n-grams" is a consequence of how the subwords are generated. 3. Somewhere in the introduction the difference of this study from the initial Confusion2Vec model has to be clearly mentioned. We thank you for the comment. We have now made the distinction clear under Introduction, paragraph 6. The modification is as follows: "In this paper, we extend the previously proposed Confusion2Vec representation framework by incorporating subwords to represent each word for modeling both the acoustic ambiguity information and the contextual information." 4. "the main downside with these works is that the word representation estimated by such techniques are task-dependent and are restricted to a particular domain and dataset." -has this been experimentally verified? If yes, state the reference. If no, this sentence will have to be rephrased. If we have a large text database of a language (which often exists in atleast the wellresourced languages like US English) and a relatively smaller domain-specific text databases, the representations should still be good for the domain-specific task. As for the speech database, dealing with "unseen" words in ASRs is a problem that is more general than specific to this paper's theme. Thank you. We would like to point out and clarify that the cited prior works estimate the vector representations using task and domain dependent speech datasets. These prior works make use of task-specific supervised training with ASR lattices as inputs. For example, spoken language intent detection or slot-filling. The data limitations are resultant of two factors: (i) task specific labeled speech data, and (ii) lattices generated through ASR of the speech datasets. We realize that the above may not have been conveyed clearly. We have added the following text under Introduction, paragraph 5 to enhance the clarity: "Prior attempts at leveraging information present in word lattices and word confusion networks have been successful for multiple tasks [12][13][14][15][16][17]. However, they have some limitations, as these prior works estimate the embedding in a supervised manner specifically trained with task specific labels. Consequently, the main downside is that the word representation estimated by such techniques are task-dependent and are restricted to a particular domain and dataset." Confusion2Vec: • The section heading should be a bit more descriptive -maybe Confusion2Vec representation framework? Thank you for the suggestion. We have renamed the section heading to "Confusion2Vec Representation Framework". Confusion2Vec 2.0 subword model • "We believe we have a compelling case for the use of subwords for representing the acoustic similarities (ambiguities) between the words in the language since more similarly sounding words often have highly overlapping subword representations." -reference for this statement? More clearly, why do you think it's a good representation? Similarly sounding words have highly overlapping subword representations -this is a consequence of how the subwords are generated, i.e., character n-gram encoding. 
These subwords can approximate/capture syllables in the language. Since the similarly sounding words tend to have similar set of syllables, this leads to higher similarity in encoding of subwords. A feature capturing such information helps in modeling the ambiguity information. We have rephrased and added better description as follows: • "use of subwords should help in efficient encoding of under-represented words in the language."reason for this or reference? Thank you. We have added the appropriate reference to the above statement. • "In the proposed model, each word w is represented as a sum of its constituent n-gram character subwords." -Replace with -"In the proposed model, for example, … " Thank you. This has been replaced. • "The n-grams are generated for n=3 up to n=6." -Why? Why is n=3 and n=6 the maximum and minimum limits for English? Is this based on language analysis, if yes, provide references. Yes, indeed the n-gram character range is language dependent. The character n-gram range was chosen based on empirical evidence obtained from a prior work. We have now added the reference and modified the text as follows: "The choice of length of character n-grams is language dependent and empirically chosen for English \cite{bojanowski2017enriching}." Training Loss and Objective • Equation (4)'s description is ambiguous. Is the equation that of the binary logistic loss? It is the objective function for subword model's negative sampling. Please mention that clearly, and add an LHS to this function. We regret the lack of clarity. We have rephrased the description and included the LHS for the function. The following changes have been made: "The negative sampling loss function to be optimized for subword model can be expressed as:" • Are there any other differences between the new implementation and the old Confusion2vec? Other than that, you are using subwords here? Again, any other changes should be clearly mentioned. Yes, practically, the major difference is use of subwords. The implications of use of subwords are multifold and has been discussed throughout the paper. Evaluations: • "useful, meaningful information embedded in the word vector representation" -what is the difference between useful and meaningful in this context? Can it be useful but not meaningful, or can it be meaningful but not useful? Thank you for the question. The word "useful" is used in the context of the embedding providing performance benefits with respect to the task (from a machine's perspective). Whereas the word "meaningful" is from a human's perspective, more specifically in terms of visualizations. • For all the databases you have used, clearly mention the language of the database, and size of the database. This is essential for someone trying this out in another language. Thank you, we would like to clarify all the databases, evaluations are in English language. To make this clear, we have added the following text under Evaluation section (1 st paragraph): "Note, all the evaluations, analysis and databases used in this work are in the English language." The database description under section "Analogy & Similarity Tasks", subsection "Database" is selfexplanatory -"Fisher English Training corpus". We have also added the information under section "Spoken Language Intent Detection", subsection "Database". 
The following modifications have been made: "The dataset consists of humans making flight-related inquiries in the English language with an automated answering machine with audio recorded and its transcripts manually annotated." We have mentioned the size of the database in Section "Analogy & Similarity Tasks" under "Database" and "Experimental Setup". • W2V -first time usage needs to full form. Thank you for pointing it out. We have now used the full form. • You mention the Word Similarity task -did you use human annotators for this? Or did you just use the results from [20] -in either case that has to be mentioned clearly, including number of people who annotated. The word similarity task uses the WordSim-353 database. This database consists of 353 word pairs which are human annotated on the perceived word similarity by the annotators. We make use of these human annotated scores and calculate the correlation against the cosine similarity obtained using the various embedding spaces. We have included better description for better clarity under section "Evaluations", subsection "Analogy and Similarity Tasks" (see here). • A description of the results is needed. Did your evaluation show that Confusion2vec 2.0 is better or comparable to existing representations? Thank you for your comment. The description and discussion of the results are presented in the subsequent section "Analogy & Similarity Tasks" under subsection "Results". The current section "Evaluations" is meant to discuss and describe the evaluation techniques adopted in our work. • What do the bold numbers in the Appendix Table 5 mean? Thank you. We have fixed any inconsistencies in bold numbers. We have now specified what the bold numbers mean under the caption of Table 6, 7, 8 and 9. "The bold numeric correspond to the results outperforming the Confusion2Vec 1.0 in each evaluation task". Analogy & Similarity Tasks • Automatic speech recognition -how important is the performance of the ASR to the Confusion2Vec training? Will an ASR with better performance (better WER) be better for the Confusion2Vec training? That is a good question. The effect of ASR performance on the quality of embeddings has not been investigated as part of this study. However, it is an interesting aspect to be investigated in the future. Generally, we believe, the ASR should have a reasonable performance. For example, we don't want an ASR that makes too many errors which would result in too many conflicting words ending up in the confusion network, meanwhile, we also don't prefer an ASR that is too accurate, since that would result in very few word confusion in the lattices. Moreover, we believe that there are many aspects to this question, for example, the beam width used during decoding could have an effect in addition to the word error rate, furthermore, different noise/channel conditions, like reverberation could pose different dimension to this investigation. Overall, the answer to the posed question is not trivial and needs extensive investigation over multiple aspects effecting ASR performance. Hence, we leave this to future work. 
We have added a brief discussion regarding this to the future work under the section Conclusion (4th paragraph, last 2 lines): "We also plan to understand the factors that affect the quality of the proposed embeddings by conducting extensive analysis of the effects of ASR performance (WER), decoding beam size, characteristics of underlying speech signal environments including type of noise, amount of noise, channel effects, transferability over different ASR systems etc. The performance implications of these factors to the end-task are also of interest." • "Also, a minimum frequency threshold of is set and the rarely occurring words are pruned from the vocabulary." -what is the motivation for this other than reducing the training time/resources? Will Confusion2Vec representation be able to deal with "unseen" words? Thank you for your comment. Setting a minimum frequency threshold to prune rarely seen words is a standard practice for training most word vector representations. The reasoning behind this is that sparsely available words result in inaccurate representation due to poor estimation of the underlying distribution. Too few occurrences of a word results in erratic vector updates (insufficient statistics for reliable estimates). Pruning such words have been proven to result in more robust estimation and accelerates learning [1][2]. We would further like to clarify that our word vector representation has the ability to deal with unseen words. In case of unseen words, the vector sum of its constituent subwords is computed and used as the word vector representation of the unseen word. We have demonstrated this by performing specifically designed analysis by visualizing the embedding for the out-of-vocabulary word "prinz". Please see the discussion presented under section "Analogy & Similarity Tasks", subsection "Embedding Visualization". • Under the results section -are the analogy and similarity tasks performed on the 353 pairs? Please refer to the relevant section. We regret the lack of clarity regarding the analogy and similarity tasks. We have added better description of these evaluation databases and re-structured the content under section "Evaluations", subsection "Analogy and Similarity Tasks" to enhance readability. We have also listed the number of analogy questions present in each analogy-based tasks and the number of word pairs for the similarity tasks. The modified descriptions are listed below for your convenience: "Analogy and Similarity Tasks: For evaluating the inherent semantic and syntactic knowledge of the word embeddings, we employ two tasks: (i) the semantic-syntactic analogy task, and (ii) the word similarity task. For assessing the word acoustic ambiguity (similarity) information, we conduct the Acoustic analogy task, Semantic&syntactic-acoustic analogy task and Acoustic similarity tasks, all proposed in \cite{shivakumar2019confusion2vec}. Acoustic Analogy Task: The Acoustic analogy task comprises word pair analogies compiled using homophones which answer questions of the form: W1 sounds similar to W2 as W3 sounds similar to W4. The task comprises 2,678 analogy questions and is designed to assess the ambiguity information embedded in the word vector space \cite{shivakumar2019confusion2vec}. Semantic&Syntactic-Analogy Task: The semantic&syntactic-acoustic analogy task is designed to assess semantic, syntactic and acoustic ambiguity information simultaneously. 
The analogies are formed by replacing certain words by their homophone alternatives in the original semantic and syntactic analogy task \cite{shivakumar2019confusion2vec}. The task comprises 3860 analogy questions. Examples of the analogies can be found in \cite{shivakumar2019confusion2vec}. Acoustic Word Similarity Task: The acoustic word similarity task is analogous to the word similarity task, i.e., it contains 943 word pairs which are rated on their acoustic similarity based on the normalized phone edit distances. A value of 1.0 refers to two words sounding identical and 0.0 refers to the word pairs being acoustically dissimilar. The task involves computing the rankcorrelation (Spearman correlation) between the normalized phone edit distances and the cosine similarity of the corresponding word vector pairs." • Why do you think Confusion2Vec 2.0 performance is lower compared to Confusion2Vec and FastText for S&S analogy task? Thank you for the question. We believe that two factors result in slightly lower performance on S&S analogy task in case of both Confusion2Vec and Confusion2Vec-2.0 compared to fastText: 1. Modeling: The additional acoustic ambiguity information that is being modeled in case of Confusion2vec can be considered nearly orthogonal to the semantics/syntax of the language. This makes any performance improvements among the ambiguity dimension result in slight degradation on the other dimension (semantics and syntax) inevitable. We believe the challenge is to obtain better trade-offs with respect to the end-tasks. 2. Evaluation: The analogy tasks are scored only if the most-similar word is the correct answer. Although such an approach seems fair in case of testing the contextual relations (Semantics and Syntax) in a language, the scheme is not optimal when testing for inter-relations across two disconnected dimensions (acoustic ambiguity and Semantics/syntax). Even though, we have tried to address the evaluation up to an extent by introducing top-2 evaluations for analogy tasks in case of Confusion2vec, there is a possibility that the embedding space prioritizes certain information dimension in special cases. We have added the following discussion under the section "Analogy & Similarity Tasks", subsection "Results" (3 rd paragraph, last but 2nd sentence). "One explanation for this is that the different analogy tasks are fairly, mutually exclusive, i.e., getting right on one task compromises performance on the other. The top-2 evaluations for Confusion2Vec provides a partial solution to this. Nevertheless, there can be instances where the embedding can favor information on either acoustic ambiguity or contextual information dimension. Thus, there exists tradeoff between the different proposed analogy based evaluation tasks. The goal is to optimize this tradeoff as best as possible. One way to judge this trade-off is to look at the average accuracy across the analogy tasks." • "Investigating the results for the similarity tasks, we find a significant correlation of …" -how was this correlation calculated? Did you have annotators perform the task for you? Or used the results from past annotations? Apologies for the lack of clarity. We have added more detailed description of the evaluation tasks (see here) including the similarity tasks under section "Evaluations", subsection "Analogy and Similarity Tasks". Model Concatenation • "The subword models slightly under-perform in the acoustic analogy task …" This is a very interesting result and contradictory to what we expect. 
Why do you think this is the case? It feels that in these concatenations, the impact of fastText is more dominant than that of Confusion2Vec. Thank you. As discussed before, this is again a consequence of the trade-off between modeling acoustic ambiguity and contextual information associated within a language. The emphasis is to optimize this trade-off in favor of end-task performance. Please note, in the case of concatenations of two vector spaces, we are optimizing a totally different criterion as opposed to the model without concatenation. We empirically find that concatenated models favor semantic and syntactic relations and also enhance the semantic&syntactic-acoustic dynamics. We have added the following text under subsection "Model Concatenation" (3rd paragraph, last line): "Overall, these changes in dynamics between the acoustic and semantic/syntactic subspaces observed in the case of concatenated models can be attributed to the fact that we are optimizing a different criterion than the non-concatenated versions." • This is a general comment for all the training you have mentioned in this paper. To allow reproducibility of your results, and to allow other researchers to judge whether the resources they have are sufficient to carry out your experiments, please provide details of your computational resources and the training time needed. This may be a separate section, or even included in the Appendices. Thank you for the suggestion. We have added the following information under Section "Analogy & Similarity Tasks", subsection "Experimental Setup", under "Confusion2Vec 2.0". Embedding Visualization • Give details of the packages used for the visualisation. Thank you for the suggestion. The following details have been added in the 2nd line: "The visualizations are generated using scikit-learn and matplotlib python packages." • The example about "prinz" is interesting -but is this a one-off example? Are there other occurrences of words that are clustered together due to their acoustic similarity? Also, was prinz part of the training set? The example word pairs used in the visualizations are picked randomly but, in a way, to represent semantic, syntactic and acoustic relations. Please note, the visualizations are provided to help readers relate to the complex interactions of the acoustic and contextual subspaces. It is practically infeasible to check every such combination visually. The analogy-based evaluations as well as the word similarity evaluations are designed to check for such relations in a more practically feasible way. Thus, it is likely we will find many more such examples. More examples of acoustically similar words clustered together can be found in our previous publication on Confusion2Vec 1.0 (see Figure 12). Moreover, the acoustic analogy task and acoustic similarity task results also support the evidence. The word "prinz" is out-of-vocabulary, meaning that it is not a part of the training set. The subword encoding makes it possible to derive vector representations for such unseen words by computing the vector sum of their constituent character n-grams. We have mentioned this in the last line of the subsection "Embedding Visualization". • It would be interesting to see a similar visualisation of Confusion2Vec 1.0 and the concatenated model too so that a comparison can be drawn with Confusion2Vec 2.0. I had a look at the Confusion2Vec 1.0 paper, but, as the same word list is not used, a direct comparison is not possible.
We would like to emphasize that the visualizations provide the overall gist of the word spaces and should not be used to judge the performance differences between the vector spaces. For visualization purposes, we perform extreme dimension reduction to enable plotting vectors -which results in a lot of information loss compared to the original embeddings. For performance evaluations, the various analogy and similarity based tasks serve as indicators. For your reference, we have included the plot with the concatenated model below: The visualizations corresponding to vector space plot of concatenated and non-concatenated versions of Confusion2Vec 2.0 are alike. Since both the version of the models are based on the same concept of joint modeling of ambiguity and context, we expect the plots to be similar. The main difference between the concatenated and non-concatenated versions are performance based, i.e., the concatenated version achieves a better balance of the two information dimensions. We skip the plot since it doesn't add to the paper. • The visualization is interesting and gives a clear picture (literally) of what the models are doing. We can see that the Confusion2Vec 2.0 is clearly modelling human perception. But it feels like it is modelling human perception of individual words in isolation without the context. That would describe why it has such a close feature space for "prints" and "prince". But then, is that good? Do we not want our NLP applications to be able to differentiate these two words rather than consider them as similar? I think this also explains the high correlation you have got in the acoustic similarity tasks -basically where humans are finding individual words acoustically similar, Confusion2Vec 2.0 is also finding the same, and not otherwise. This needs to be addressed in your discussion: ➢ Why is the Confusion2Vec in its embeddings training not capturing the context information? Or rather what can we do to make it capture context information AND acoustic similarity? Maybe the concatenated model is the solution for this. We can know this only by having a look at the visualisation. Thank you for your comments. The results from the semantic and syntactic analogy tasks, from Table 1, are evidence for the fact that Confusion2Vec is capturing context information. Please note that these analogy tasks are quite strict in assessments, i.e., any random embedding not capturing context information would give near 0% in semantic & syntactic analogy tasks. We agree that the performance is slightly less than embeddings trained solely on contextual information (word2vec and fastText) mainly because of reasons discussed earlier. Also, please note that, under Table 1, it is only fair to compare the "S&S" Analogy Task results of Confusion2Vec with the "In-domain" versions of fastText and Google W2V since the Confusion2Vec is trained only using "In-domain" data. This comparison further shows that the loss in contextual information is minimal. Moreover, there is evidence that Confusion2Vec captures context information even in the visualized plots. Please refer to our previous work -"Shivakumar, P. G., & Georgiou, P. (2019). Confusion2Vec: towards enriching vector space word representations with representational ambiguities. PeerJ Computer Science, 5, e195" for plots portraying exclusive semantic, syntactic word relations (Figures 7,8,9 and 10). We show that the Confusion2vec preserves the context information and augments acoustic ambiguity information efficiently. 
This is also the case with Confusion2Vec 2.0. ➢ In what application would you want acoustically similar words to have a similar feature space? I understand it is good for cases like a noisy ASR output or mispronounced words. We believe any application involving speech signal (spoken language) should benefit with inherent acoustic ambiguity information embedded in word vector representations. For example, ASR, Spoken Language Understanding, speech translation, text-to-speech systems etc.. We agree that purely NLP based applications (with no ambiguity) may not benefit. However, given the evidence (see Table 3, comparing results under "Reference" column) that Confusion2Vec doesn't degrade performance in purely NLP applications as well (since it effectively preserves and captures similar context information as other popular alternatives such as fastText, Word2vec + additionally augments information that can potential provide benefits in different scenarios), there is no reason to discount the Confusion2Vec in most NLP applications. Moreover, the ambiguity need not be limited to acoustics only. Inherent ambiguities are present in various other scenarios dependent on the nature of underlying signals. For example, pictorial ambiguities associated in applications such as Optical character recognition or Image/Video Scene summarization. There is also possibility of multiple ambiguity dimensions associated with certain applications such as Speech Translation where in addition to acoustic ambiguity, there can be ambiguity associated with sourcetarget language morphology, segmentation and paraphrases. More applications are discussed in detail in our previous work, please see section "Potential Applications" in ""Shivakumar, P. G., & Georgiou, P. (2019). Confusion2Vec: towards enriching vector space word representations with representational ambiguities. PeerJ Computer Science, 5, e195" The following text has been added under the conclusion section (3 rd paragraph) , discussing potential future applications: "The proposed Confusion2Vec word embedding can benefit any application involving speech signal (spoken language) in which acoustic ambiguity is inherent, for example in scenarios involving ASR, error correction systems, spoken language understanding, speech translation, text-to-speech systems etc. Moreover, the ambiguity need not be limited to acoustics only. Inherent ambiguities are present in various other settings dependent on the nature of the underlying signals such as for example, pictorial ambiguities associated with applications such as Optical character recognition or Image/Video Scene summarization. There is also the possibility of multiple ambiguity dimensions associated with certain applications such as Speech Translation where in addition to acoustic ambiguity, there can be ambiguity associated with source and target language morphology, segmentation and linguistic expressions such as paraphrasing. More applications are discussed in detail in \cite{shivakumar2019confusion2vec}." ➢ Finally, what impact did the sub-word model bring here that a word-based model could not? We have found that the sub-word modeling overall enhances the modeling capabilities of acoustic ambiguities. We obtain higher performance in both of the evaluation tasks. The analogy tasks and similarity tasks. More crucially, we observe significant improvements over word-based model in application to real world spoken language intent detection. 
The subword model also comes with certain additional perks, such as being able to represent out-of-vocabulary words. ➢ it has such a close feature space for "prints" and "prince". But then, is that good? Do we not want our NLP applications to be able to differentiate these two words rather than consider them as similar? Thank you for the comments. The experimental results presented in our previous works as well as the current paper indicate that Confusion2Vec augments typical context-based word vector representations with additional useful information, such as any ambiguities that may be present in human spoken language or any other signal modalities. In other words, Confusion2Vec is providing "additional" information that, acoustically, the word "prints" sounds similar to the word "prince", while retaining the contextual information. Confusion2Vec comprises two principal subspaces, one comprising contextual information (similar to fastText/word2vec) and another comprising acoustic signature information. Hence, depending on the scope of end-task applications, the back-end classification models can choose to use any combination of the subspaces. For example, a purely NLP application may use just the contextual subspace and ignore the acoustic ambiguities, whereas any spoken language application may take into account crucial acoustic signatures of the words in addition to the contextual information. Spoken Language Intent Detection • In the Database section -what does "samples" mean? Sentences? That's right. A sample in the context of the ATIS dataset corresponds to one sentence with an associated intent label. • "Among the different versions of the proposed subword based Confusion2vec, we find that the concatenated versions are slightly better." -It does not look like they are "slightly" better, it looks like they are clearly better. Again, I think the visualisation of the concatenated models in the visualisation section is essential. Thank you for the suggestion. We have now modified the text to indicate the concatenated version is clearly better. The modification is as follows: "Among the different versions of the proposed subword based Confusion2vec, we find that the concatenated versions are better." • Please provide some examples of the Intent detection task -sentences, along with human-annotated intent and ASR identified intent. Thank you for the suggestion. We have added a table listing a few examples through the process of intent detection. The relevant discussion has also been added under Section "Spoken Language Intent Detection", subsection "Results" under "Training on Clean Transcripts", last paragraph. For your reference, the Table and the discussion are given below: "Further, analyzing the results, Table 4 lists a few examples within the domain of intent detection comparing the baseline fastText embedding and the proposed concatenated version of inter-confusion model. In the first example, the ASR incorrectly recognizes ``seating'' as ``feeding'' which leads to an error in intent classification, i.e., intent is detected as ``Meal'' instead of ``Flight Capacity''. However, Confusion2Vec is able to recognize the ambiguity through better vector representation of acoustic confusions between the two unvoiced fricatives /f/ and /s/ and the consonants /d/ and /t/, phenomena that are well documented \cite{kong2014classification,phatak2007consonant}, and eventually lead to better classification. The second example is a classic instance of homophones (fare and fair) with similar implications.
In the third example, both the embeddings fail to recover from the error. Finally, the fourth example is a manifestation of a more complex error spanning words/phrases. The proposed Confusion2Vec is able to reconcile the acoustic ambiguity information across multiple words and successfully recognize the correct underlying intent." • "This confirms our initial hypothesis that the subword encoding is better able to represent the acoustic ambiguities in the human language." -are we sure that this experiment is proving that? The statement is ambiguous because it feels like the model is able to differentiate the ambiguous wordsrather from the visualisation we see that it is clustering the ambiguous words together. Hence, this claim has to be made unambiguous. Also, the results are good for this particular task, or in tasks were its okay to have similar representation for ambiguous words. What about applications where a differentiation is needed? Thank you for the comments. Please note in the visualization, in case of Confusion2vec 2.0 ( Figure 2b), the model is not just blatantly clustering acoustically ambiguous words together. Instead, it is clustering the acoustically ambiguous words together while also attaching the semantic context to the acoustic alternatives. For example, the vector "boy-prince" is similar to "boy-prints" (cosine similarity). Also, the vector "boy-prince" is similar to vector "boy-prinz". In application to the particular task, the following is a fair explanation: For recovering errors made by the ASR, the backend intent classification model needs to know which set of words are acoustically ambiguous and in turn realize the most probable correct word given the context. For example, consider the true sentence "List all the flights flying today". Let's assume that the ASR makes an error as follows "List all the lights flying today". A typical word embedding modeling only context information can provide alternatives to the wrongly recognized word "light" which are semantically/syntactically close such as "shine", "fire", "sun" "illuminate". A word embedding modeling only the acoustic similarity can provide the erroneously recognized word "lights" with several acoustically ambiguous alternatives such as "slights", "plights" and "flights". However, Confusion2Vec can provide the correct alternative word, i.e., "flights", which is not only acoustically similar but fits in the context. Note, providing acoustic alternatives such as "slights" or "plights" and ignoring context information could instead confuse and deteriorate the performance. Our representation does not blatantly cause more confusions in vector representations, but instead provide additional useful information. This is supported by the fact that the Confusion2Vec provides decent, comparable performance to popular word embeddings in tasks comprising clean transcripts and no errors (see Table 2). Benefits are evident when there are errors from ASR (see Table 3). • "These results prove that the subword-Confusion2vec models can eliminate the need for re-training natural language understanding and processing algorithms on ASR transcripts for robust performance." -again too generalized -this is an intent classification task and the experiment only proves the efficacy of the model for this task or similar ones. It should not be generalized. Thank you for the suggestion. We have rephrased the sentence to make it more specific to the task presented in the paper. 
The modified text is as follows: "These results demonstrate that the subword-Confusion2vec models can eliminate the need for retraining the intent classification model on ASR transcripts for robust performance." Conclusion • A discussion section needs to be added that discusses the impact of the findings of this paper. A few points that can be discussed are:
o The impact of having context information for the Confusion2Vec embeddings.
Thank you. We would like to clarify that we have demonstrated throughout our paper that Confusion2Vec comprises context information.
o Some applications of Confusion2Vec 2.0 -like what is the use of clustering similar sounding words together for an NLP application -especially without context information.
Thank you for the suggestion. We have now added possible applications of Confusion2Vec in NLP/SLU domains and implications of ambiguity modeling in other digital signal processing domains. Thank you for the suggestion. We haven't conducted any specifically designed experiments in this study regarding resource requirements. The consensus for training word embeddings in the NLP community is: "the more data, the better the embedding". This should also apply in our case. However, we would like to point out that subword encoding allows for relatively better, more robust embeddings for a given amount of data (especially in low-data scenarios). Moreover, the unsupervised modeling and domain-independent representation of Confusion2Vec allow training on easily available, large amounts of speech data for applications in any other domain. We have added the following statements under Conclusion (2nd paragraph) to highlight the strengths of the proposed embeddings: Reviewer 2: The article proposed Confusion2Vec 2.0 to handle the ambiguities found in natural language using subword modeling units. The article presents the performance over various evaluation tasks including word analogy and word similarity tasks, which deal with acoustic, syntactic, and semantic ambiguities. The empirical evaluations presented in the article are thorough and show significant improvements over the existing methods. Overall, the research article is mostly clear when it comes to related literature, methodology, and result analysis. The language is simple enough to read and understand. However, there are a few flaws and questions throughout this article that the authors should consider and clarify; they are mentioned below: 1. Many state-of-the-art end-to-end ASR models exist today; why has the traditional HMM-DNN based pipeline been used? Thank you for the question. Our choice of ASR was to match the setup of our previously published works to facilitate direct comparisons. This enables us to assess the impact of subword encoding. For your reference, the following are the previous studies: • However, we would like to clarify that this study can be replicated with more recent end-to-end ASR systems. Moreover, we want to emphasize that employing a state-of-the-art ASR need not necessarily improve the quality of the Confusion2Vec embedding. While a poorly performing ASR with a high WER is not preferable (it leads to too many acoustically unrelated confusions), we also don't need a near-perfect ASR, since it might lead to too few acoustic confusions for training Confusion2Vec. We realize that it is an interesting question to know what WER bands are ideal for training Confusion2Vec.
Also, we believe that there are many aspects to this question; for example, the beam width used during decoding could have an effect in addition to the word error rate. Furthermore, different noise/channel conditions, like reverberation, could pose a different dimension to this investigation. Overall, further investigation is needed over multiple aspects affecting ASR performance. Hence, we leave this to future work. We have added a brief discussion regarding this to the future work under the section Conclusion (4th paragraph, last 2 lines): "We also plan to understand the factors that affect the quality of the proposed embeddings by conducting further analysis of the effects of ASR WER, decoding beam size, characteristics of underlying speech signal environments including type of noise, amount of noise, channel effects, transferability over different ASR systems etc. The performance implications of these factors to the end-task are also of interest." 2. Have you considered other modeling configurations other than inter and intra-confusion? Thank you for the question. In our previous paper (Shivakumar, P. G., & Georgiou, P. (2019). Confusion2Vec: towards enriching vector space word representations with representational ambiguities. PeerJ Computer Science, 5, e195), we proposed four different configurations: (i) top-confusion training, (ii) intra-confusion training, (iii) inter-confusion training, and (iv) hybrid intra-inter confusion training. Based on the findings regarding the effectiveness and quality of the Confusion2Vec embeddings, we narrowed our choice to mainly the inter-confusion and intra-confusion configurations in the current paper. 3. The Thank you for the suggestion. We have added the metric information to the captions under Table 1 and 2. For your reference, the following text has been added: "The results of the analogy tasks represent percentage accuracy; and the results of the similarity tasks represent Spearman correlation." 4. There are no red lines and ellipses in Figure 2. I believe it should be orange. Thank you for pointing this out. We have corrected and replaced "red" with "orange" under the caption of Figure 2. 5. There are many grammatical errors in the article. The examples can be found in lines #279 and #313, where "are" should be "is". Further, in line #312 "is" should be "are". Thank you for pointing out the grammatical errors. We have fixed them. We have also gone through the entire paper to fix any additional errors to the best of our ability. Thank you. We have fixed these and any other occurrences in the paper. Thank you. We have gone through the entire reference section and fixed any such occurrences. Thank you for pointing this out. We have fixed it. 9. The English article should be used wherever possible. We added "the" at several locations where it was missing. Thank you for pointing this out.
nipalsMCIA: Flexible Multi-Block Dimensionality Reduction in R via Non-linear Iterative Partial Least Squares
Motivation: With the increased reliance on multi-omics data for bulk and single cell analyses, the availability of robust approaches to perform unsupervised analysis for clustering, visualization, and feature selection is imperative. Joint dimensionality reduction methods can be applied to multi-omics datasets to derive a global sample embedding analogous to single-omic techniques such as Principal Components Analysis (PCA). Multiple co-inertia analysis (MCIA) is a method for joint dimensionality reduction that maximizes the covariance between block- and global-level embeddings. Current implementations for MCIA are not optimized for large datasets such as those arising from single cell studies, and lack capabilities with respect to embedding new data. Results: We introduce nipalsMCIA, an MCIA implementation that solves the objective function using an extension to Non-linear Iterative Partial Least Squares (NIPALS), and shows significant speed-up over earlier implementations that rely on eigendecompositions for single cell multi-omics data. It also removes the dependence on an eigendecomposition for calculating the variance explained, and allows users to perform out-of-sample embedding for new data. nipalsMCIA provides users with a variety of pre-processing and parameter options, as well as ease of functionality for downstream analysis of single-omic and global-embedding factors. Availability: nipalsMCIA is available as a Bioconductor package at https://bioconductor.org/packages/release/bioc/html/nipalsMCIA.html, and includes detailed documentation and application vignettes. Supplementary Materials are available online.
Introduction
Multiple co-inertia analysis (MCIA) is a member of the family of joint dimensionality reduction (jDR) methods that extend unsupervised dimension reduction techniques such as Principal Components Analysis (PCA) and Non-negative Matrix Factorization (NMF) to datasets with multiple data blocks (alternatively called views) [1,2]. Such methods, also known as multi-block or multi-view analysis algorithms, are becoming increasingly important in the field of bioinformatics, where data is often collected simultaneously using multiple omics technologies such as transcriptomics, proteomics, epigenomics, metabolomics, etc. [3]. Here, we present a new implementation in R/Bioconductor of MCIA, nipalsMCIA, that uses an extension of Non-linear Iterative Partial Least Squares (NIPALS) with a proof of monotonic convergence to solve the MCIA optimization problem [4]. This implementation shows significant speed-up over existing Singular Value Decomposition (SVD)-based approaches for MCIA [5,6] on large datasets. Furthermore, nipalsMCIA offers users several options for pre-processing and deflation to customize algorithm performance, methodology to perform out-of-sample global embedding, and analysis and visualization capabilities for efficient results interpretation. We show application of nipalsMCIA to both bulk and single cell multi-omics data. The overall workflow that includes the optimization steps and analyses for nipalsMCIA is outlined in Figure 1.
Notation and preliminaries
Scalars, vectors, and matrices are represented in lowercase script ($x$), lowercase script with a vector symbol ($\vec{x}$), and bold uppercase script ($\mathbf{X}$), respectively. The $j$-th column vector of a matrix $\mathbf{X}$ is denoted $\vec{x}^{(j)}$.
Since we are evaluating several datasets (termed blocks) simultaneously, the sample-by-feature data matrix for the $i$-th block is labeled as $\mathbf{X}_i$. We denote the column-wise concatenation of data blocks as the 'global' data matrix $\mathbf{X} = [\mathbf{X}_1 | \ldots | \mathbf{X}_k]$.
Loadings and Scores
MCIA extends the concept from PCA of deriving principal components (which we term scores) and loadings to the multi-block setting. The loadings are a set of optimal axes in feature space, while the scores are the projection coefficients of the samples onto these axes. Unlike PCA, MCIA generates two types of scores and loadings, one set for all the data (global scores/loadings), and the other for the individual omics (block scores/loadings). The number of scores/loadings generated is equal to the dimension of the MCIA embedding of the data, which we will denote as $r$. Originally, the optimization criteria for MCIA were presented using the concept of statistical triplets [7,5]. The criteria can equivalently be represented as a parameterized member of the Regularized Generalized Canonical Correlation Analysis (RGCCA) family of multi-variate dimension reduction methods [2,8], which is consistent with the optimization criterion that is solved by an extension of the NIPALS algorithm [4]. We review these criteria below. Scores and loadings are computed by nipalsMCIA to satisfy the objective function

$$\max_{\vec{a}_1^{(j)}, \ldots, \vec{a}_k^{(j)}} \; \sum_{i=1}^{k} \operatorname{cov}^2\!\left( \mathbf{X}_i \vec{a}_i^{(j)},\; \vec{g}^{(j)} \right), \qquad \vec{g}^{(j)} = \sum_{i=1}^{k} \beta_i^{(j)} \, \mathbf{X}_i \vec{a}_i^{(j)} \qquad (2)$$

and orthogonality constraints

$$\langle \vec{a}_i^{(j)}, \vec{a}_i^{(l)} \rangle = \delta_{jl},$$

where $\vec{\beta}^{(j)} = (\beta_1^{(j)}, \ldots, \beta_k^{(j)})$ is a vector of block contributions to the $j$-th order global score, with constraint $\|\vec{\beta}^{(j)}\|_2 = 1$ for all orders $j = 1, \ldots, r$ as in [4], and $\delta_{jl}$ is the Kronecker delta function. Equation (2) is solved separately for each order $j$ up to the dimension of the embedding, $r$. The block scores $\{\vec{f}_i^{(1)}, \vec{f}_i^{(2)}, \ldots, \vec{f}_i^{(r)}\}$, with $\vec{f}_i^{(j)} = \mathbf{X}_i \vec{a}_i^{(j)}$, represent an $r$-dimensional embedding of the samples in the orthonormal set of block loadings vectors for block $i$. This contrasts with Consensus PCA (CPCA), which solves for the same objective function as MCIA, but with an orthogonality constraint on the global scores instead of the block loadings [9]. In nipalsMCIA, users can choose to use either method.
NIPALS strategy for computing MCIA
Several methods exist for computing MCIA, including direct computation from the principal components of the covariance matrix (see [2]). The implementation in nipalsMCIA uses an extension of the Non-linear Iterative Partial Least Squares method (NIPALS) [4]. NIPALS was first introduced as an iterative (power) method to estimate principal components [10,11], and later extended to the multi-block setting [12]. A modification of the multi-block algorithm was proven to have monotonic convergence [4]. Since the NIPALS procedure is iterative, it does not require a full eigendecomposition. Moreover, it easily enables a choice of deflation methods. In nipalsMCIA, the stable multi-block extension to NIPALS [4] is implemented with deflation options for both MCIA and CPCA. Additionally, the variance explained by each component is also calculated without reference to an eigendecomposition calculation.
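To make the iterative procedure concrete, below is a rough, illustrative Python sketch of one order of a multi-block NIPALS-style update (a simplified reading of the scheme in [4], not the package's actual R code); the variable names, initialization, and convergence test are assumptions for illustration only.

```python
import numpy as np

def nipals_mcia_order(blocks, tol=1e-9, max_iter=500):
    """One order of a multi-block NIPALS-style iteration (illustrative sketch).

    blocks : list of preprocessed (samples x features_i) arrays X_i.
    Returns block loadings a_i, block scores f_i, block weights beta, global score g.
    """
    n = blocks[0].shape[0]
    g = np.random.default_rng(0).standard_normal(n)   # arbitrary initial global score
    g /= np.linalg.norm(g)
    for _ in range(max_iter):
        g_old = g.copy()
        loadings, scores = [], []
        for X in blocks:
            a = X.T @ g                    # block loading: project global score onto block
            a /= np.linalg.norm(a)         # unit-norm block loading
            loadings.append(a)
            scores.append(X @ a)           # block score
        beta = np.array([f @ g for f in scores])   # block contributions to the global score
        beta /= np.linalg.norm(beta)                # enforce ||beta||_2 = 1
        g = sum(b * f for b, f in zip(beta, scores))
        g /= np.linalg.norm(g)             # keep the global score at unit norm
        if np.linalg.norm(g - g_old) < tol:
            break
    return loadings, scores, beta, g
```

Higher orders would then be obtained by deflating each block with the current block loadings (MCIA) or the global score (CPCA) and repeating the iteration.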
Usage and functionality
Since MCIA is designed to handle multiple omics data blocks, preprocessing options are available at both within- and whole-block levels. The latter is recommended to account for potential disparities in block size.
Analysis & Interpretation
The nipals_multiblock function is used to run MCIA in nipalsMCIA. The function outputs an object of the NipalsResult class, which includes the global scores and loadings, block scores and loadings, the global score eigenvalues, and the block score contributions vector for all orders up to the maximum specified via the num_PCs argument. The global scores represent the projection of the multi-block data in the reduced space, and can be plotted with or without corresponding block scores (Figure 1C, ii). The contribution of each block to the global score can be easily visualized (Figure 1C, iii), along with high-scoring features (Figure 1C, iv). Vignettes providing full analysis pipelines using nipalsMCIA for bulk and single cell data are available with the package. The example bulk data is a subset of the National Cancer Institute 60 tumor-cell line screen (NCI60 data) [13,8]. It includes RNA-Seq, miRNA, and protein data from 21 cell lines that correspond to three cancer subtypes (brain, leukemia, and melanoma). The single cell data is sourced from 10x Genomics and includes both gene expression and cell surface antibody data [14]. The single cell analysis vignette includes instructions on how to obtain, process, and prepare the dataset for nipalsMCIA, along with a demonstration of the capability of nipalsMCIA for effectively clustering known cell types in a computationally efficient manner.
Out-of-sample embedding
The loadings vectors generated by MCIA on a dataset represent linear combinations of the original features of each block. Therefore, after computing MCIA on a training dataset, one can use the associated loadings vectors to predict global embeddings for a test dataset of new observations of the same features. nipalsMCIA provides the predict_gs function for this task. This can be valuable for testing the quality of the embedding, as well as for embedding new data without rerunning the decomposition. We provide a vignette in the package showing how this can be done using the NCI60 data set, using 70% of the data to train the model and then deriving global scores for the remaining 30%.
Computation time comparison for MCIA algorithms
We used three datasets to compare the performance of nipalsMCIA with two other implementations of MCIA: MOGSA [6] and Omicade [5]. The three datasets are composed of the NCI60 data, the 10x single cell data filtered for the top 2000 most variable genes, and the same single cell data without filtering. Data pre-processing was standardized across all algorithms and a decomposition for 10 factors was performed across all datasets and implementations. All experiments were performed in R 4.3.0 on a MacBook with a 3.2 GHz processor and 16 GB RAM. The dimensions of the datasets and performance are shown in Table 1. We observe that while MOGSA has slightly faster performance than nipalsMCIA and Omicade on the smaller NCI60 dataset, nipalsMCIA is an order of magnitude faster for both the filtered and full single cell datasets, even when using the 'fast SVD' option in MOGSA. The speedup offered by nipalsMCIA thus opens up capabilities for practical deployment of nipalsMCIA on a larger variety of datasets, including high-dimensional single cell data.
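Returning to the out-of-sample embedding described above, the following is a minimal, illustrative Python sketch of the underlying projection step (not the predict_gs implementation itself; the per-block weighting convention, preprocessing, and variable names are assumptions for illustration).

```python
import numpy as np

def project_new_samples(new_blocks, block_loadings, block_weights, preprocess):
    """Project new observations onto a previously learned MCIA embedding (sketch).

    new_blocks     : list of (new_samples x features_i) arrays with the same
                     features as the training blocks.
    block_loadings : list of (features_i x r) block loading matrices from training.
    block_weights  : per-block contribution weights (assumed convention; the
                     exact weighting used by predict_gs may differ).
    preprocess     : function applying the training-time centering/scaling to a block.
    """
    # Each block is projected onto its own loadings; the global score of a new
    # sample is then a weighted combination of its block projections.
    projections = [w * (preprocess(X) @ A)
                   for X, A, w in zip(new_blocks, block_loadings, block_weights)]
    return np.sum(projections, axis=0)   # (new_samples x r) global scores
```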
Discussion
The accessibility of next-generation sequencing and other high-throughput biological assays is resulting in an increase in multi-block (or multi-modal) datasets [15,16,17,18]. Analysis of these data is facilitated by the application of joint dimensionality reduction methods such as MCIA. nipalsMCIA is a comprehensive R package that implements MCIA in a highly efficient manner using the NIPALS algorithm. The package features various pre-processing and analysis options, is much faster for large input datasets compared with existing packages, supports the projection of out-of-sample scores, and offers visualization options for scores and top-magnitude loadings at each order.
Figure 1. Workflow overview for nipalsMCIA performed on the three-block NCI60 data from the main text. a) A breakdown of the NIPALS algorithm for performing MCIA. Data blocks are normalized before scores and loadings are computed to satisfy the objective function. Higher-order results are then computed after the data has been deflated with the current scores or loadings. b) Scree plot for the proportion of variance explained by each order of global score/loading. c) Scheme for interpreting the global loadings and scores. (i) Global scores are calculated from the global data matrix and global loadings. (ii) Global scores represent low-dimensional embeddings of the data used to cluster samples via hierarchical clustering. Colors represent the three different cancer types associated with each sample. (iii) Block contributions vectors plotted to visualize the weight of each block on each order of global score. (iv) The first global loadings vector is plotted to identify the top features for the first global score.
Table 1. Computation time (in seconds) comparison for different MCIA implementations and datasets.
Titanium dioxide nanotubes applied to conventional glass ionomer cement influence the expression of immunoinflammatory markers: An in vitro study
Objectives To assess the impact of different concentrations of TiO2-nt incorporated into a glass ionomer cement on the proliferation, mitochondrial metabolism, morphology, and pro- and anti-inflammatory cytokine production of cultured fibroblasts (NIH/3T3), whether or not stimulated by lipopolysaccharides (LPS, 2 μg/mL, 24 h). Methods TiO2-nt was added to KM (Ketac Molar EasyMix™, 3 %, 5 %, 7 % in weight); unblended KM was used as the control. The analyses included: Cell proliferation assay (n = 6; 24/48/72 h); Mitochondrial metabolism assay (n = 6; 24/48/72 h); Confocal laser microscopy (n = 3; 24/48/72 h); Determination of biomarkers (IL-1β/IL-6/IL-10/VEGF/TNF) by using both multiplex technology (n = 6; 12/18 h) and the quantitative real-time PCR assay (q-PCR) (n = 3; 24/72/120 h). The data underwent analysis using both the Shapiro-Wilk and Levene tests, and by generalized linear models (α = 0.05). Results The data demonstrated that cell proliferation increased over time, regardless of the presence of TiO2-nt or LPS, and displayed a significant increase at 72 h; mitochondrial metabolism increased (p < 0.05), irrespective of exposure to LPS (p = 0.937); no cell morphology changes were observed; TiO2-nt reverted the impact of KM on the secreted levels of the evaluated proteins and the gene expressions in the presence of LPS (p < 0.0001). Conclusions TiO2-nt did not adversely affect the biological behavior of fibroblastic cells cultured on GIC discs.
Introduction
Glass ionomer cements (GICs) are categorized as acid-base cement materials, consisting basically of a fluoride-aluminum-silicate powder and a polyacrylic acid liquid [1]. Their favorable chemical, physical and biological properties have made them widely used in the dental clinic for the cementation of indirect restorations and orthodontic brackets, as a liner or sealant, and for minimally invasive restorative treatments [1,2]; nevertheless, survival rates are only about 50 % of cases over 3 years [3]. This considerable flaw is related to their high sensitivity to moisture during the initial 24 h of the setting process, and the consequent hydrolytic degradation, which may compromise their mechanical properties and color stability [2,4]. Furthermore, dental restoration procedures must take biological principles into account; hence, one should assess the biological impact of a new dental material, and in vitro models offer established assays to address these questions. As a restorative material, GIC might eventually be in direct contact with the gingival tissues, which are populated by fibroblasts that modulate tissue repair and the inflammatory response by producing bioactive factors, including interleukin (IL)-1β, -10, and -6, and tumor necrosis factor alpha (TNFα) [19][20][21]. These biomarkers feature interconnected functions and are involved in several biological processes [22][23][24][25].
Sampling
Preliminary experiments defined sample sizes (n) for the study, while the P value (5 %) and statistical power (80 %) were set using the G*Power software 3.1.9.7. One calibrated and experienced examiner performed the analysis under IRB approval (protocol #2022-0899).
Incorporation of TiO2-nt into GIC powder
Nanotubes (≈20 nm in size, 10 nm in diameter) were created [44], after which their structural and morphological characteristics were characterized [15]. A highly precise scale was used to define the mixtures (Fig.
1).Subsequently, nanotubes were manually incorporated into the KM powder and thoroughly mixed with the assistance of a vortex device (Biomixer, Tafl, CA, USA) for 2 min, following a method described previously [11,[13][14][15]17]. Cell culture Murine fibroblasts (NIH/3T3) from the 13th and 14th passages were used in the experiments and cultured as described elsewhere [11].Twenty-four hours post-plating, the culture medium was exchanged to 5 % FBS, along with antibiotics and data collected at this time point with or without LPS, using a challenge assay. Cell proliferation assay (trypan blue; n = 6/group) GIC samples with or without TiO 2 -nt (3 %, 5 % and 7 %) were individually placed in 48-well plates (Corning Costar, # CLS3549).Cells were plated in triplicate (1 × 10 3 cells/well) and cultured for 24 h.The growth medium was switched to 5 % FBS and antibiotics for data collection.Total number of cells (viable and non-viable) was determined with a hemocytometer and trypan blue (24/48/72 h).Concisely, the cells were trypsinized, centrifuged, and resuspended in phosphate buffer saline (PBS).The viable and non-viable cell numbers were obtained after an incubation period of 3-5 min in the presence of trypan blue [45]. Mitochondrial activity rate (MTT; n = 6/group) After the materials were cured, the samples were positioned individually into 48-well culture plates and the cells plated (1.5 × 10 4 cells/well) as described before [11].Data was collected (24/48/72 h) as recommended by the manufacturer and described elsewhere [46]. Multiplex secretome analysis (n = 3/group) NIH/3T3 were seeded (1 × 10 3 cells/well) and cultured on GIC discs with or without nanotubes for 24 h.Culture media without supplementation was collected after 12 and 18 h, after which it was stored at − 80 • C for later investigation of biomarker expression using the MAGPIX system (Milliplex, Denton, TX, USA).Levels of IL-1β/6/10, VEGF, TNF were determined using a previously described protocol [46]. Statistics Data normal distribution was defined by appropriate tests (Shapiro-Wilk & Levene's) (p ≤ 0.05).Assessments of cell proliferation, mitochondrial metabolism (MTT), and biomarkers level were done by generalized linear models to ascertain the impact of time, LPS and TiO 2 -nt on fibroblast cells (α = 0.05).Qualitative analyses were performed for morphological data (confocal microscopy).Statistics assessments used the R software. 
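For readers who want to set up this kind of factorial analysis, the following is a minimal, hypothetical Python sketch of a generalized linear model with time, LPS, and TiO2-nt concentration as factors (the study itself used R; the column names, placeholder values, and data layout here are assumptions for illustration only).

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per well, with a placeholder outcome
# (e.g., viable cell count) and the three experimental factors from the design.
rng = np.random.default_rng(0)
rows = []
for t, lps, tio2 in itertools.product([24, 48, 72], ["no", "yes"], ["0%", "3%", "5%", "7%"]):
    rows.append({"time_h": t, "lps": lps, "tio2": tio2,
                 "cells": 100 + t + (20 if lps == "yes" else 0) + rng.normal(0, 5)})
df = pd.DataFrame(rows)

# Gaussian GLM with the three factors as main effects; with replicate wells,
# interaction terms (time x LPS x TiO2-nt) could be added as in the study.
model = smf.glm("cells ~ C(time_h) + C(lps) + C(tio2)", data=df).fit()
print(model.summary())
```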
Cell proliferation assay (trypan blue)
Data analysis showed that time significantly affected cell numbers, regardless of the presence of GIC or LPS (p < 0.05), indicating that the highest cell numbers were observed at 72 h. Furthermore, at 24 h, our findings show that LPS led to a significant rise in cell numbers in the GIC (only) and negative control (no disc) groups (p < 0.05). On the other hand, differences were not significant for the groups containing TiO2-nt and LPS (p > 0.05). Additionally, the data revealed that LPS treatment did not significantly change cell numbers across the groups at 48 or 72 h, except for the negative control (no disc) at 48 h, in which a higher number of cells was also observed in the presence of LPS (p < 0.05). Further data analysis indicated that the groups containing TiO2-nt regularly exhibited no discernible differences when treated with or without LPS, implying that the presence of TiO2-nt did not influence the proliferative rates of NIH/3T3 cells cultured on GIC discs with or without nanotechnology. Table 2 summarizes the findings of the proliferative rates across the experimental groups.
Mitochondrial metabolism assay (MTT)
Table 3 shows the data analysis for cellular metabolism. There was a significant interaction between GIC with TiO2-nt and time (p = 0.0327). Study data showed that LPS did not impact the metabolic activity of NIH/3T3 cells, irrespective of TiO2-nt (p = 0.2912). The results of the TiO2-nt concentration comparisons showed that metabolic activity depended only on time, and the highest metabolic activity was observed at 72 h (p < 0.001). Overall, our findings demonstrated that TiO2-nt did not alter the MTT of NIH/3T3 cells, compared with the negative control (only cells) and the GIC groups.
Cellular adhesion/morphology by confocal microscopy analysis
NIH/3T3 cells were able to adhere to the substrate in all the experimental groups, regardless of the presence of either GIC or TiO2-nt, and irrespective of treatment with LPS. Cell adhesion was more evident at 48 and 72 h. In addition, it was noted that there were no evident distinctions regarding cell morphology across the experimental groups. Fig. 2 (A-O) and Fig. 3 (A-O) illustrate representative images from the confocal microscopy analyses.
Secretome analysis
The cytokine multiplex assay showed that the selected inflammatory markers were expressed and secreted by NIH/3T3 cells under the conditions of the study (Fig. 4A-E).
Data analysis showed that IL-6 levels were significantly increased over time and by LPS treatment at 12 and 18 h (p < 0.05). There was a triple interaction among the TiO2-nt, LPS, and time factors (p = 0.0015). In addition, in the presence of LPS, TiO2-nt significantly reduced IL-6 levels at 3 % and 5 % at 12 h, and at 5 % and 7 % at 18 h (p = 0.0002). In the absence of LPS, TiO2-nt increased the secreted levels of IL-6 at 5 % and 7 % at 12 h, and at 3 %, 5 % and 7 % at 18 h, versus GIC alone (p < 0.0001). A triple interaction among the TiO2-nt, LPS, and time factors was also detected for IL-10 (p = 0.0214). The secreted levels of this cytokine were unaltered by LPS at 12 h; however, IL-10 levels were increased by LPS at 18 h (p < 0.001). In the presence of LPS, TiO2-nt significantly increased IL-10 levels at 3 % and 7 % at 12 h, and at 3 % at 18 h (p < 0.001), compared with GIC alone. In contrast, TiO2-nt significantly decreased IL-10 levels at 7 % at 18 h. In the absence of LPS, a noteworthy difference was identified for the 3 % and 5 % TiO2-nt groups at 12 and 18 h, respectively, compared with GIC alone (p < 0.001). As expected, IL-1β was significantly increased by LPS treatment and by time (p < 0.001). Compared with GIC alone, the addition of 3 % and 7 % TiO2-nt significantly increased IL-1β levels at 12 h, whereas the addition of 3 %, 5 %, and 7 % TiO2-nt significantly decreased IL-1β levels at 18 h (p < 0.001). TNF-α levels were significantly reduced by TiO2-nt in the presence of LPS at all the evaluated concentrations at 12 h, whereas no significant effect was observed at 18 h, except for 5 % TiO2-nt (p < 0.0001). Treatment with LPS led to a marked elevation in the expression of VEGF by fibroblasts (p = 0.0107), whereas time did not influence VEGF levels. Gene expression analysis Fig. 5 (A-E) shows the cytokine expression of NIH/3T3 cells by the qPCR assay. Data analysis indicated a triple interaction among the TiO2-nt, LPS, and time factors for all the cytokines evaluated (p < 0.05). Differences were not significant among the groups for the mRNA levels of IL-6 in the presence of LPS, apart from the KM + 5 % TiO2-nt group (p < 0.0001). Regarding IL-10, there was no significant difference among the groups with and without LPS (p = 0.7028), except for the KM + 7 % TiO2-nt group, which showed significantly lower gene expression of IL-10 in the presence of LPS at 72 and 120 h (p = 0.0377). In contrast, for IL-1β, there was a significant increase in gene expression in the presence of LPS for all the experimental groups at the different time points (p < 0.0374). Fig. 5D shows that GIC without nanotubes had lower TNF-α gene expression than the other groups at 24 h. The same occurred with the KM containing 5 % and 7 % TiO2-nt at 120 h, demonstrating that the nanotubes did not alter the TNF-α gene expression pattern over time. As for VEGF, GIC alone promoted significantly higher gene expression levels by fibroblasts in the presence of LPS at 72 and 120 h, whereas GIC plus 5 % or 7 % TiO2-nt showed significantly lower levels (p < 0.001) (Fig. 5E).
Discussion Fibroblast response to TiO2-nt in the presence of LPS: Data analysis showed that TiO2-nt resulted in a greater proliferative and metabolic activity rate over time. In addition, LPS significantly affected the proliferation and metabolic activity of NIH/3T3 cells cultured on GIC discs, whereas the TiO2-nt-treated groups displayed a less evident effect of LPS. Consistent with the findings of the current study, it has been previously reported that TiO2-nt does not affect the biocompatibility of GIC in vitro [18,48,64]. In addition, KM + 5 % TiO2-nt was associated with the highest metabolic cell rate over time, facilitating the production of both collagenous and non-collagenous extracellular matrix [11]. In contrast, TiO2 nanoparticles have been reported to exhibit cytotoxic properties [48,49]. The interaction mechanisms between nanostructures and living systems are still not fully understood. It has been suggested that nanostructures penetrate cells through active or passive mechanisms and contribute to accelerating the host response to a foreign body. Characteristics that may affect how nanoparticles modulate the host response include material type, size, shape, surface and charge characteristics, coating, dispersion, agglomeration and concentration of the nanostructures [37]. Inflammatory markers are regulated by the presence of TiO2-nt: It is recognized that the safety and efficiency of nanomaterials are directly related to their ability to modulate the inflammatory response. The current investigation found that GIC containing TiO2-nt modulated the expression of immune-inflammatory markers by NIH/3T3 cells in the presence of bacterial LPS (Fig. 4 A-E; Fig. 5A-E). In general, immune-inflammatory markers play key roles in biological systems, including regulation of white blood cell activity, modulation of T-lymphocyte function and apoptosis, production of chemokines against pathogens, modulation of the host response to trauma, recruitment of cells and promotion of angiogenesis [19,[50][51][52]. Therefore, determining how TiO2-nt incorporated into GIC affects the expression of inflammatory markers by NIH/3T3 cells will provide valuable information to expand our understanding of the biological implications of this association. In general, our findings show that GIC alone affected the transcript levels of selected inflammatory markers, whereas TiO2-nt reverted this response. As expected, LPS significantly increased the levels of the evaluated cytokines at 12 and 18 h, with TiO2-nt reversing the effect of LPS on cells cultured on GIC discs for all markers at 12 h, except TNF-α. A similar effect was observed in the absence of LPS for the following markers: IL-6 (18 h), IL-10 (18 h), TNF-α (12 h), and IL-1β (18 h) (Fig. 4 A-E; Fig. 5A-E).
As a member of the IL-1 family [53], IL-1β has been demonstrated to be a key player in the pathogenesis of several conditions, including osteoarthritis [54]. The biological effect of IL-1β is based on its interaction with membrane receptors [55,56]. TNF-α has also been reported to be a key inflammatory marker, binding to TNF-R1 and TNF-R2, which are expressed by almost every nucleated cell. IL-6 has been defined as a pro-inflammatory mediator involved in a number of biological processes, such as bone metabolism, liver pathologies, tumors and intraocular neovascular disorders [25]. In contrast to IL-1β, TNF-α and IL-6, IL-10 has been described to have anti-inflammatory properties, playing a central role in limiting the immune-inflammatory response [57]. VEGF is a critical factor regulating physiological angiogenesis in numerous complex conditions and may serve as a target for the prevention of angiogenesis and visual loss in age-related macular degeneration [58]. Previous reports suggested the potential of TiO2-nt to alter the expression of biological markers involved in the regulation of inflammatory processes by activating T- and B-lymphocytes in a dose-dependent way after 24 h [27][28][29][30],[58]. Importantly, the authors found that the cell activation leading to increased expression of biological markers lasted for up to 14 days. Furthermore, intratracheal administration of TiO2 nanoparticles led to increased levels of MIPs (macrophage inflammatory proteins) and MCPs (monocyte chemoattractant proteins) [59]. Interestingly, TiO2 nanoparticles have also been shown to modulate the biological response of dendritic cells, leading to increased levels of ROS (reactive oxygen species), TNF-α, IL-1β, and IL-6 [38,60,61]. Therefore, the findings of the current study align with other research indicating the potential of TiO2 nanoparticles to regulate biological processes.
Concluding remarks: In vitro models are known to provide some advantages in the development of new materials. For instance, in vitro testing offers a framework for testing and validating the mechanism of action of many different "ingredients" or products directly at the cellular and molecular levels, may serve as a reliable alternative to the use of animal models, and may speed up the understanding of the molecular mechanisms involved. Nevertheless, one must consider that the limitations of in vitro methods include the fact that they do not fully represent the heterogeneity of biological tissues and, therefore, do not fully mimic their physiological and/or pathological responses. With that in mind, the findings of the present study support the hypothesis that the presence of TiO2-nt in the composition of GIC has the potential to regulate biological responses and represents an attractive approach to improve the clinical outcomes of GICs. Conclusion The incorporation of TiO2-nt into the GIC matrix was able to induce and reverse the inflammatory profile of NIH/3T3 cells by modulating pro- and anti-inflammatory cytokines. This effect was observed not only in the absence of bacterial LPS but also when the system was challenged with LPS. Furthermore, it did not negatively impact the biological behavior of the fibroblastic cells. Clinical significance statement TiO2-nt incorporated into the GIC matrix was able to induce and reverse the inflammatory profile of NIH/3T3 cells by modulating pro- and anti-inflammatory cytokines, not only in the absence of LPS but also when the system was challenged with LPS. Fig. 5. Box plot of the RT-qPCR validation of pro- and anti-inflammatory gene expression by fibroblast cells (NIH/3T3) according to TiO2-nt concentration, LPS-challenging condition, and time (24, 72, and 120 h). * Significant difference at 24 h under the same TiO2-nt and LPS conditions (p ≤ 0.05). # Significant difference at 72 h under the same TiO2-nt and LPS conditions (p ≤ 0.05). Table 2. Mean (cell number × 10⁴) and standard deviation (SD) values for NIH/3T3 fibroblast proliferative rates on experimental discs with and without LPS at 24, 48 and 72 h (n = 6/group).
4,260.2
2024-05-01T00:00:00.000
[ "Medicine", "Materials Science" ]
Four terpene synthases contribute to the generation of chemotypes in tea tree (Melaleuca alternifolia) Background Terpene rich leaves are a characteristic of Myrtaceae. There is significant qualitative variation in the terpene profile of plants within a single species, which is observable as “chemotypes”. Understanding the molecular basis of chemotypic variation will help explain how such variation is maintained in natural populations as well as allowing focussed breeding for those terpenes sought by industry. The leaves of the medicinal tea tree, Melaleuca alternifolia, are used to produce terpinen-4-ol rich tea tree oil, but there are six naturally occurring chemotypes; three cardinal chemotypes (dominated by terpinen-4-ol, terpinolene and 1,8-cineole, respectively) and three intermediates. It has been predicted that three distinct terpene synthases could be responsible for the maintenance of chemotypic variation in this species. Results We isolated and characterised the most abundant terpene synthases (TPSs) from the three cardinal chemotypes of M. alternifolia. Functional characterisation of these enzymes shows that they produce the dominant compounds in the foliar terpene profile of all six chemotypes. Using RNA-Seq, we investigated the expression of these and 24 additional putative terpene synthases in young leaves of all six chemotypes of M. alternifolia. Conclusions Despite contributing to the variation patterns observed, variation in gene expression of the three TPS genes is not enough to explain all variation for the maintenance of chemotypes. Other candidate terpene synthases as well as other levels of regulation must also be involved. The results of this study provide novel insights into the complexity of terpene biosynthesis in natural populations of a non-model organism. Electronic supplementary material The online version of this article (10.1186/s12870-017-1107-2) contains supplementary material, which is available to authorized users. Background Intra-specific variation in plant phenotypes can have profound ecological consequences [1][2][3]. In particular, variation in plant specialised metabolites influences herbivores as selective agents on the survival of some individuals over others [4,5], and even dictates the success of biological control programmes for weeds [6,7]. Understanding how intra-specific variation in plant chemical profiles arises at the molecular level would help explain how it is maintained in natural populations [8,9]. Quantitative variation in specialised metabolites is the norm and suggests that there are multiple selective agents operating on these traits [10]. In contrast, it is less clear how discontinuous or "chemotypic" variation is maintained in longlived plants such as forest trees and it remains difficult to demonstrate exactly what selective agents are influential over the many years that the tree may grow [11]. Characterising the genes responsible and the factors that control their expression remains the first step to resolving this question. Medicinal tea tree (Melaleuca alternifolia (Maiden & Betche) Cheel: Family Myrtaceae) is an excellent system to examine chemotypic variation. Tea tree is a long-lived woody plant that occurs in six distinct, foliar terpene chemotypes: three cardinal chemotypes dominated by terpinolene, 1,8-cineole and terpinen-4-ol respectively, and three intermediates between these [12,13]. The chemotypes can occur in pure natural stands but some sites can contain mixtures of up to five chemotypes. 
Only one of these chemotypes yields a medicinally valuable essential oil dominated by the monoterpene terpinen-4-ol and an industry is focussed on the cultivation of this chemotype [14]. Tea tree oil is widely used in products for personal care as well as having household, agricultural and veterinary applications. It shows significant antifungal and antibacterial activity in vivo [15] and has promising effects on skin tumours [16]. Enhancing the foliar concentration of medicinally active terpinen-4-ol and reducing the concentrations of 1,8-cineole and d-limonene is a major aim of breeding programmes [14] and thus knowing the genes that underlie these traits would be invaluable for enhancing breeding using molecular markers. A putative monoterpene synthase was isolated previously from M. alternifolia [17], but further analysis showed that the product of this enzyme is isoprene [18,19]. Studies of the foliar chemistry of M. alternifolia led to the hypothesis that only three distinct terpene synthases are responsible for the biosynthesis of over 80% of the leaf terpenes [13] with control of their contributions of each to the final oil profile likely dependent on genomic, transcriptomic or proteomic differences. Studies in other plants have shown that transcriptional level control of terpene synthases is most common. For example, Crocoll et al. found that transcript abundance of terpene synthases was correlated with variations in terpenes in oregano [20] and Irmisch et al. found that transcriptional differences in five sesquiterpene synthases explained the pattern of accumulation of terpenes in different parts of chamomile [21]. In this study, we aim (1) to isolate and functionally characterise terpene synthases that produce terpinen-4ol, terpinolene, and 1,8-cineole, respectively; and (2) to determine the expression of these genes in naturally occurring individuals from each of six chemotypes. Methods This study was carried out in two parts. Firstly, we amplified and characterised the genes responsible for the production of the terpenes that dominate the cardinal chemotypes of M. alternifolia. In the second part of this study we investigated the expression of terpene synthases in natural populations containing up to five chemotypes per population and compared the gene expression to terpene variation. All plant material was collected from private properties with the express permission of the land owners. Part 1: Amplification and characterisation of terpene synthases Plant material Young leaf (ca. 5 g fresh weight) was collected from five mature trees at nine sites across the natural geographic range of M. alternifolia [13]. We chose trees that were at least 100 m apart to avoid collecting from related trees [22] and the location of each tree was recorded. Samples were snap frozen in liquid nitrogen and stored at −80°C to ensure that we had samples suitable for RNA extraction from trees belonging to each of the known chemotypes. Extraction of nucleic acid We extracted total RNA from young leaf ground in liquid nitrogen using the RNeasy Plant Micro kit (Qiagen, Australia). We complemented the lysis buffer with polyvinyl pyrrolidine and sodium isoascorbate (Suzuki et al. 2003). This combination enhanced RNA extraction in all plants except those of Chemotype 2, where the addition of sodium isoascorbate inhibited RNA extraction. Thus, we repeated those extractions without this adjuvant (data not shown). 
Isolation, identification and characterisation of terpene synthases We used 3′ 'rapid amplification of complimentary ends' or RACE reactions to obtain partial transcripts containing the terpene synthase DDxxD motif using the degenerate 'DDXYDfx' primer and T 35 VN previously used to successfully isolate terpene synthases from 21 species of Myrtaceae [19]. We ligated the amplification products into pGEMT Easy or pCR2.1 TOPO cloning vectors, and sequenced the inserts from the M13 priming sites using BigDye v. 3.1 on an ABI 3130 capillary sequencer. Sequence information from the most abundant transcripts in each chemotype was used to design primers for upstream amplification. We used the SMART 5'RACE kit to amplify the 5′ ends of the identified genes, and obtained sequence information using the reaction conditions described for 3'RACE. Following the assembly of the 3′ and 5′ contigs, we designed primers to obtain full-length cDNA clones. We used Primer3 [23] to design primers, and used these to amplify clones encoding pseudo-mature proteins for characterisation. Chiral GC-MS analysis of the products of MaTPS-SaH was performed on the same instrument using a Rt™-βDEXsm-column (Restek, Bad Homburg, Germany) and a temperature program from 50°C (2-min hold) at 2°C min −1 to 220°C (1-min hold). Enantiomers were identified according to their elution order as described by Larkov et al. [26]. For the determination of the cofactor K m values of MaTPS-Tln, the enzyme was incubated with 5 μM 3 Hlabeled GPP and magnesium within a range of 0.5-50 mM or manganese in a range of 0.01-1 mM. Enzyme kinetics All assays were overlaid with 1 ml pentane and incubated at 30°C for 10 or 15 min, depending on the linear phase. The assays were stopped by shaking at 1400 rpm for 2 min to partition terpene volatiles in the solvent phase. 500 μl pentane were mixed with 2 ml of scintillation cocktail (RotiSzint2200, Roth, Karlsruhe, Germany) and activity was measured in a scintillation counter (LS 6500, Beckman Coulter Inc., Krefeld, Germany). All assays were performed in triplicate. The amount of substrate needed to achieve half of the maximum reaction velocity, or K m values, were determined using the Lineweaver-Burke method. Melaleuca alternifolia Plant material We collected young leaf from trees labelled in Part 1, in November 2015. Since these were wild populations growing in natural conditions, some of the trees could not be found again (e.g. tree death or label overgrowth) and therefore some additional trees were sampled. The terpene profile of each of the 92 samples collected was determined and chemotypes were assigned (according to Keszei et al. [13]). We collected three sub-samples from each tree whilst in the field: 1. Approximately 3 g of young leaf was collected for RNA extraction. This sample was put into a labelled paper envelope and immediately snap frozen in liquid nitrogen. Upon return to the lab, it was stored at −80°C until extraction of RNA. 2. Approximately 0.5 g of young leaf was collected for terpene analysis. This sample was put directly into about 5 ml of ethanol (including 0.25 g·l −1 tetradecane as an internal standard) of predetermined weight. The vials were weighed again at the end of the day, to record the exact weight of the leaf. 3. An additional 0.5 g of young leaf was collected to determine the fresh weight to dry weight ratio. This sample was put in a labelled paper envelope and stored above ice until the end of the day, when it was weighed and stored at room temperature. 
Upon returning to the lab, these samples were oven dried at 40°C to constant mass and the dry weight was recorded. Terpene analysis Foliar terpenes were analysed as described in Padovan et al. [3]. Briefly, terpenes were separated using gas chromatography on an Agilent 6890 GC using an Alltech AT-35 (35% phenyl, 65% dimethylpolyoxylane) column (Alltech, DE, USA). The column was 60 m long and He was used as the carrier gas. One μl of the ethanol extract was injected at 250°C at a 1:25 split ratio. The total elution time was 25 min. The components of the solvent extract were detected using an Agilent 5973 Mass Spectrometer. Peaks were identified by comparisons of mass spectra to reference spectra in the National Institute of Standards and Technology library (Agilent Technologies, IL, USA) and major peaks were verified by reference to authentic standards [13]. We identified 18 samples corresponding to three from each of the six chemotypes, to use with gene expression analysis, by comparison with the original samples in Keszei et al. [13]. RNA extraction and transcriptome sequencing RNA extraction and transcriptome sequencing were carried out as described by Padovan et al. [27]. Briefly, the samples were ground to a fine powder in a mortar and pestle under liquid nitrogen. Total RNA was extracted using the Spectrum Total RNA Kit as per the manufacturer's instructions (Sigma Aldrich, MO, USA). We then used the Illumina TruSeq RNA library preparation kit as per manufacturer's instructions (Illumina Inc., CA, USA). The libraries were validated on a Bioanalyzer 2100 (Agilent Techonolgies, CA, USA), pooled and sequenced on two lanes of the Illumina HiSeq 2000 platform at the Biomolecular Resource Facility at the Australian National University, using a 150 bp pairedend run (all sequences were uploaded to the SRA database under the Bioproject ID: PRJNA388506). Data analysis After sequencing, raw reads were separated by barcode and filtered by quality using the HiSeq 2000 software. We then checked the raw reads for quality and adapter contamination using fastqc [28]. FLEXBAR [29] was used to remove low quality bases and remaining sequencing adaptors using the following parameters; Removal of Illumina sequencing adapters with a minimum overlap of 6, threshold of 2, trimming at any end and relaxed adapter option; minimum quality of 30, maximum number of uncalled bases of 1, and minimum remaining read length of 40. For each of the three cardinal chemotypes, the individual with the highest amount of raw data was selected for de-novo assembly using the Trinity software [30] with default settings. Next, we created a single consensus transcriptome by clustering the transcripts of each of the three samples using CD-HIT-EST with a threshold of 0.94 [31]. At this threshold, the most similar terpene synthase genes were maintained as separate contigs. We then searched the consensus transcriptome for expressed terpene synthase genes and discovered 27. Each sample was then mapped against the consensus transcriptome using BWA-mem [32] with standard parameters, producing BAM alignments that were sorted and indexed with SAMtools [33]. For each sample, the number of reads mapping to each contig were counted using Qualimap v 2.1.2 comp-counts [34] with the proportional method but with otherwise standard parameters. 
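For illustration, the per-contig read counting step (performed in the study with Qualimap comp-counts) could equivalently be sketched in Python with pysam on the sorted, indexed BAM files produced above. The file name is a hypothetical placeholder and, unlike Qualimap's proportional method, this simple count does not redistribute multi-mapping reads.

```python
# Sketch of per-contig read counting from a sorted, indexed BAM file.
# The study used Qualimap comp-counts; this pysam version is only an
# illustrative alternative, not the authors' pipeline.
import pysam

bam_path = "sample01.sorted.bam"  # hypothetical file name
counts = {}
with pysam.AlignmentFile(bam_path, "rb") as bam:
    # get_index_statistics() reports mapped/unmapped read counts per reference contig
    for stat in bam.get_index_statistics():
        counts[stat.contig] = stat.mapped

total_mapped = sum(counts.values())
for contig, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{contig}\t{n}\t{100 * n / total_mapped:.3f}% of mapped reads")
```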
We compared two approaches for counting reads mapped to the characterised genes since the terpene synthases are a very large gene family [35] and two of the characterised genes have 98% nucleic acid identity (Additional file 1: Table S1). Approach 1 used the read counts generated from Qualimap comp counts to calculate 'fragments per kilobase of transcript per million mapped reads' or FPKM values. Approach 2 corrected the read count based on three (sabinene hydrate synthase) or four (cineole and terpinolene synthases) amino acids that reliably differentiate these three terpene synthase gene sequences before calculating FPKM values. These amino acids are important in determining the product profile of the enzymes [36]. The second approach yielded much lower values, but the relative expression of the three genes was the same in both approaches, so we decided to proceed with the more traditional first approach. We used sparse partial least squares analysis (sPLS) to explore the associations between expression of the terpene synthases (N = 27) and the terpene composition of the leaf, using the R package mixOmics [37]. The gene expression matrix (log transformed FPKM values) was used to explain variation in the terpene matrix using the sPLS regression mode. We analysed the association between genes and terpenes with a correlation plot of the selected variables, using the first two components of the sPLS. In this plot, variables from each matrix are placed on a circular correlation plot. Those variables that are most strongly associated are plotted in the same direction, and the greater the distance from the origin the stronger the correlation. We also prepared heatmaps to show correlations between terpene and gene dataset using the similarity matrices based on the selected variables by the sparse method and the loading vectors for the first three components of the PLS. Relationship between the terpene synthases of M. alternifolia We manually aligned the three characterised terpene synthase sequences, the 24 putative terpene synthases found in the transcriptome data generated here and the 113 terpene synthases found in the Eucalyptus grandis genome [38,39] in BioEdit [40]. The alignments were improved on the Clustal Omega server, using default settings [41][42][43] before phylogenetic trees were generated using the PhyML server, with 1000 bootstraps and using the JTT + I + F + G substitution model [44,45] Kinetic analysis of MaTPS-SaH, MaTPS-Cin, and MaTPS-Tln revealed a three-fold difference in the calculated K m values for GPP (11-31 μM). The sabinene hydrate synthase MaTPS-SaH showed the highest affinity for GPP, followed by terpinolene synthase MaTPS-Tln, and cineole synthase MaTPS-CinA has the lowest affinity for this substrate (Table 1). We also measured and compared the affinity of MaTPS-Tln for Mg 2+ and Mn 2+ ions as co-factors in the presence of GPP. MaTPS-Tln showed 90-fold greater affinity for manganese ions compared to magnesium ions ( Table 1). The expression of the characterised genes in the six chemotypes in M. alternifolia (part 2) The terpene profile of each sample was determined (data not shown) and three trees from each chemotype were selected for further study. Sequencing and mapping stats We sequenced 424,141,194 reads at a read length of 150 bp (total 63.621 Gbp). After adaptor and low quality bases removal, the average read length was 139 bp. Individual samples varied from 11.8-36.8 m reads (average 23.6 m, median 21.7 m reads). 
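Approach 1 above amounts to the usual FPKM normalisation; a minimal sketch of the calculation, assuming a dictionary of per-contig read counts and contig lengths (the numbers below are made-up placeholders, not study values), is:

```python
# FPKM (fragments per kilobase of transcript per million mapped reads), as in
# Approach 1 above. Counts, lengths and the total are made-up placeholders.
def fpkm(counts, lengths_bp, total_mapped_reads):
    scale = total_mapped_reads / 1e6  # "per million mapped reads"
    return {tx: counts[tx] / (lengths_bp[tx] / 1e3) / scale for tx in counts}

counts = {"MaTPS-CinA": 5200, "MaTPS-Tln": 3100, "MaTPS-SaH": 4800}   # example values
lengths = {"MaTPS-CinA": 1800, "MaTPS-Tln": 1750, "MaTPS-SaH": 1820}  # bp, example values
print(fpkm(counts, lengths, total_mapped_reads=21_700_000))
```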
The sum of the length of the 27 identified terpene synthase genes was 40,451 bp (average 1498 bp), indicating that some transcripts were not full length as the expected terpene synthase transcript is ca. 1800 bp long. On average, 81,897 reads mapped to the terpene synthase reference per individual (min 17,162; max 135,346, median 70,121) corresponding to 0.34% of the total reads. There was no effect of chemotype on the amount or proportion of reads that mapped to the terpene synthase transcripts. The largest difference was between chemotypes 2 and 3 which had 0.28 and 0.42% of their reads mapped against terpene synthase transcripts, respectively (student's t-test P = 0.16). The expression level (FPKM) of each terpene synthase can be found in Additional file 2: Table S2. We identified 27 putative terpene synthase sequences in the 18 foliar transcriptome libraries of six chemotypes of M. alternifolia. Each of the sequences has the conserved motifs common to all plant mono-and sesquiterpene synthases. Through sequence homology we found the characterised MaTPS-Tln, MaTPS-CinA, and MaTPS-SaH. Phylogenetic analysis with the terpene synthases of E. grandis [38] allowed us to group putative monoterpene synthases (TPS-b and TPS-g) separately from putative sesquiterpene synthases (TPS-a) (Fig. 3). Statistical analysis of relationships between terpene profiles and expression patterns (sPLS) In the circular correlation plot, variables that are most strongly associated are plotted in the same direction, and the greater the distance from the origin the stronger the correlation. In the plot generated using the full terpene matrix, the three terpenes that dominate the cardinal chemotypes were far apart in the circle (Fig. 4a). Terpinolene, α-phellandrene, β-citral, and α-thujene were grouped, as were terpinen-4-ol, sabinene, γterpinene, cis-and trans-sabinene hydrate, α-pinene, and α-terpinene; and 1,8-cineole and D-limonene. MaTPS21 and MaTPS-CinA showed the highest correlation to 1,8cineole (Fig. 4b). The genes most closely associated with the terpinolene group are MaTPS-Tln and MaTPS20. The genes most closely associated with the terpinen-4-ol The relationships between the terpene synthases of M. alternifolia We identified 27 unique putative terpene synthase sequences in the RNA-Seq analysis. Three of these match the three of the characterised terpene synthases in this study. There is no sequence in the RNA-Seq that matches MaTPS-CinB. Of the remaining 24 putative terpene synthase sequences, 19 could be aligned to the sequences of the Eucalyptus grandis terpene synthase gene family [38] to determine which terpene synthase group they belong to (Fig. 3). The remaining five sequences had too many sequence ambiguities to align and so were excluded. The phylogeny generated here is very similar to the one reported by Külheim et al. [38]. We found representatives from each of the subfamilies of class III terpene synthases: TPS-a (angiosperm sesquiterpene synthases; N = 9), TPS-b (angiosperm monoterpene synthases; N = 11) and TPS-g (angiosperm acyclic monoterpene synthases; N = 2) expressed in the leaves of M. alternifolia. We did not find representatives of class I or II terpene synthases [38,49,50]. Discussion The overall aim of this study was to test the hypothesis that a sabinene hydrate synthase, a terpinolene synthase and a 1,8-cineole synthase are responsible for the production of six chemotypes in Melaleuca alternifolia, as proposed by Keszei et al. [13]. 
To do this, we first identified and characterised terpene synthases that each produce sabinene hydrate, terpinolene, and 1,8-cineole. Then we investigated the gene expression of each of these genes in leaves representing all chemotypes. We found that the sabinene hydrate synthase has the highest affinity for GPP (the precursor to monoterpenes) and the 1,8-cineole synthase (MaTPS-CinA) has the lowest affinity for GPP. The K m values for all enzymes were in the range reported for other known terpene synthases [67,[69][70][71][72][73][74][75][76][77]. There are few studies that have compared the activity of terpene synthases with different cofactors, however it seems that magnesium and manganese are the most commonly used TPS co-factors in the plant kingdom [61,[69][70][71][72][73][74][75][76][77]. These studies suggest that monoterpene synthases have higher activity with manganese as a co-factor and sesqui-and diterpene synthases are more active with magnesium as a co-factor. Relationship between sabinene hydrate and 1,8-cineole synthases in M. alternifolia Comparison of the three major monoterpene synthases suggested that terpinolene and sabinene hydrate synthases are more similar in their catalysis products (with eight products in common) than either is with 1,8-cineole synthase. We suggest that sabinene hydrate synthases evolved from 1,8-cineole synthases in Myrtaceae. MaTPS-CinA, MaTPS-CinB, and MaTPS-SaH share 94-96% amino acid identity, yet the product profile of the MaTPS-SaH is very different to that of MaTPS-CinA and MaTPS-CinB. This suggests that the sequence similarity is due to shared ancestry rather than functional convergence. Additionally, we can amplify many different genes that encode 1,8-cineole synthases in Myrtaceae suggesting that there are multiple copies of 1,8-cineole synthases. In contrast, we have only ever amplified this single sabinene hydrate synthase despite examining multiple species of Eucalyptus and Melaleuca that have terpinen-4-ol as the dominant compound in the oil. If there are multiple sequences that share 94-96% amino acid identity and most of them produce 1,8-cineole, then we expect that the sabinene hydrate synthase arose by neofunctionalization of a 1,8-cineole synthase. The products of the individual enzymes, each representing the most abundant monoterpene synthase transcript in the three cardinal chemotypes (Chemotypes 1, 2 and 5), match the biosynthetic groups proposed by Keszei et al. [13]. This lends support to our original hypothesis that these genes are sufficient to explain chemotypic variation in M. alternifolia. The three characterised genes are not sufficient to explain chemotypic variation in M. alternifolia We used RNA-Seq to investigate the expression of terpene biosynthetic genes in the young leaves of six chemotypes of M. alternifolia from natural populations. We found that the most strongly associated terpenes fall within the biosynthetic groups proposed by Keszei et al. [13], which also matches the product profile of the characterised enzymes. Therefore, the chemical data suggests that main differences between the terpene profiles of different chemotypes could be explained by three terpene synthases. However, we found that all the characterised genes are expressed at similar levels in the leaves of each chemotype (average values from Additional file 2: Table S2 by chemotype). Whilst the expression of the characterised terpene synthases is not sufficient to explain the maintenance of six chemotypes in M. 
alternifolia, the enzyme with the higher affinity for the shared substrate should produce more product. In other words, all else being equal, the terpene synthase with the lowest K m value will produce the most terpene product if equal amounts of enzyme are present and all enzymes share the same substrate supply, since the reactions catalysed are irreversible [78,79]. 1,8-Cineole is the dominant monoterpene found in chemotype 5 leaves, however the characterised 1,8-cineole synthase is not the most highly expressed terpene synthase in the transcriptomes of chemotype 5 individuals. Since the characterised monoterpene synthases are competing for the substrate FPP, we expect the MaTPS-CinA to have the lowest K m and MaTPS-SaH to have the highest K m , to explain the difference between gene expression and phenotype. We found the opposite, so the substrate affinity of an enzyme doesn't account for the disparity between gene expression and phenotype. Either other aspects of enzyme kinetics (k cat , V max ) account for the patterns in terpene profile, or, more likely, other terpene synthase enzymes are involved. Other terpene synthases may play a role in the maintenance of chemotypic variation in natural populations of M. alternifolia The putative terpene synthase sequences revealed in the RNA-Seq experiment share many similar features to other characterised terpene synthases [49,50] and they align well with the E. grandis terpene synthase gene family [38], confirming their status as putative terpene synthase sequences (Fig. 3). Since the three genes, MaTPS-SaH, MaTPS-Tln, and MaTPS-Cin, were not sufficient to explain the chemotypic variation, we expanded our search to other terpene synthases that could contribute to the foliar terpene profile. We used sparse partial least squares (sPLS) analysis to investigate the relationship between expression of the 27 terpene synthases and the foliar terpene profile (Fig. 4). MaTPS19 and MaTPS23 are very similar to each other, and fall in the ocimene/isoprene group (TPS-b2, [38]). They are both synonymous with the characterised isoprene synthase from M. alternifolia [17], with all three sequences sharing >97% amino acid identity (data not shown). MaTPS4 is likely to have a function similar to the two 1,8-cineole synthases (Fig. 3), which is supported by the sPLS analysis (Fig. 4), comparing the expression of each putative terpene synthase and the amount of each compound in the leaves. The foliar concentration of the focus terpenes, terpinolene, 1,8-cineole, and terpinen-4-ol, correlates with the expression of MaTPS-Tln, MaTPS-CinA, and MaTPS-SaH, respectively, as well as with the additional terpene products of each characterised gene. However, there is also a strong correlation between the focal terpenes and the expression of uncharacterised terpene synthases (Fig. 4). These same enzymes also have strong correlations with other terpenes. The putative monoterpene synthase, MaTPS20, is predicted to encode another terpinolene synthase with very similar product profile to MaTPS-Tln. The putative monoterpene synthase MaTPS25 is likely to encode an enzyme with a very similar product profile to that of MaTPS-SaH. The putative monoterpene synthase, MaTPS21 is predicted to be a 1,8-cineole synthase with a similar product profile to MaTPS-CinA. Of particular note is MaTPS9 a putative monoterpene synthase whose expression co-varies with 1,8-cineole, α-terpineol, (E)and (Z)-sabinene hydrate but not with terpinen-4-ol in the foliar ethanol extracts. 
If these compounds dominate the product profile of MaTPS9, then this could be one of the first examples of an enzyme producing both 1,8-cineole and sabinene hydrate. Other possible, but less likely, explanations for the terpene profile not matching the expression of terpene synthases are: 1. post-transcriptional regulation, where the expression level of the gene does not match the activity of the encoded protein, as shown in tissue cultures of Norway spruce (Picea abies) [80]; 2. the compounds produced by some of these enzymes may not be stored in the leaf, but are released into the headspace of the plant (e.g. Bustos-Segura et al. [11]); 3. the products from the expressed and characterised TPS enzymes could undergo further modifications, such as oxidation by cytochrome P450 enzymes [81][82][83][84][85], methylation by O-methyl-transferases [86,87] or conjugation to other metabolites to make new metabolites, as is the case for formylated phloroglucinol compounds found in Eucalyptus species [88,89]. Each of these explanations requires further investigation. Fig. 4 Results of the sPLS analysis between the concentration of the terpenes and the gene expression in M. alternifolia leaf samples. a Correlations between the first two principal components and each terpene proportion (orange text) or the gene expression (blue text) for variables selected with the sPLS analysis (see methods). Variables located in the same direction from the centre of the circle show a direct association. The further a variable is from the centre of the circle the stronger the correlation. b The correlation heatmap with a hierarchical cluster between the terpene matrix (x-axis) and the terpene synthase gene expression matrix (y-axis) using the first three principal components from the sPLS analysis. Blue cells indicate a negative correlation and red cells indicate a positive correlation, with the intensity of the colour representing the strength of the correlation. At first glance, terpene chemotypes appear to offer relatively simple systems to investigate the molecular basis of ecologically important plant chemistry. However, the route to these chemical variations can be complex, involving the expression of multiple genes within a framework of gene duplications and possible introgression from closely related species. Studies of chemotypic variation in non-model organisms, such as Melaleuca alternifolia and Thymus vulgaris, offer a view of biodiversity that is easily missed and highlight the complexity of interactions in natural systems. Conclusions We set out to test the hypothesis that three terpene synthases, a 1,8-cineole synthase, a terpinolene synthase and a sabinene hydrate synthase, are sufficient for the development and maintenance of six foliar terpene chemotypes in Melaleuca alternifolia. First, we discovered four novel genes in the leaves of Melaleuca alternifolia that produce sabinene hydrate, 1,8-cineole and terpinolene. Then we used RNA-Seq to investigate the expression of these genes in the leaves of the six chemotypes. This analysis suggests that 'chemotype' is a more complex trait in M. alternifolia and that the products of multiple terpene synthases, most of which remain uncharacterised, are the most likely explanation of the chemotypic patterns observed. Additional files Additional file 1: Table S1. The amino acid similarity matrix (A), the amino acid identity matrix (B) and the cDNA sequence identity matrix (C) comparing the four full length sequences. (XLSX 8 kb)
6,680.8
2017-10-04T00:00:00.000
[ "Biology" ]
Chaos control with STM of minor component analysis learning algorithm One of the most important techniques of feature extraction, i.e., the minor component analysis (MCA), has been widely employed in the field of data analysis. In order to meet the demands of real time computing and curtail the computational complexity, one instrument is often applied, namely, the MCA neural networks, whose learning algorithm, under some conditions, however, can produce complex dynamic behaviors, such as periodical oscillation, bifurcation, and chaos. This article introduces the chaotic dynamics theory to fully and correctly comprehend the numerical instability and chaos of iterative solutions in the MCA. Especially, as an illustration, the Douglas' MCA chaos control is discussed in details, where a stability transformation method (STM) of chaos feedback control is used in the MCA convergence control. As the time series diagrams, Jacobian matrix and Lyapunov exponent of discrete dynamic system indicate, the desired fixed points of iterative map of Douglas' MCA can be captured and the chaotic behavior of the algorithm can be controlled in the original chaotic interval. Introduction Minor component is the small eigenvalue of the correlation matrix corresponding to the input dataset, and the MCA is an important technique for data analysis. It can extract the key features of data and its neural network can be used to extract minor components without calculating the correlation matrix advance, which makes it an ideal method to decrease the computational complexity and thus to be broadly applied in real time applications of data analysis and signal processing [1], such as moving target indication [2], curve and surface fitting [3], total least squares (TLS) [4], clutter cancellation [5], frequency estimation [6], digital beamforming [7], etc. Recently, some MCA learning algorithm are proposed to update the net weights, such as Douglas's algorithm, where abundant chaos phenomena are detected [8]. MCA learning algorithms usually are described by stochastic discrete time (SDT) systems, but it is very difficult to investigate the convergence of the SDT models directly [9]. Consequently, deterministic continuous time (DCT) system associated with the SDT model is analyzed [10]. Furthermore, because of computational round-off limitations and tracking requirements, the condition corresponding to stochastic approximation theorem can not be satisfied in application easily, so that the convergence of original algorithm can be interpreted by examining a deterministic discrete time (DDT) system. Actually, the convergence issue of MCA algorithm has been explored according to the corresponding DDT system [1,[11][12][13]. On the other side, in essence, the iterative algorithm of nonlinear system x k+1 = f(x k ) is a discrete dynamic system. From the chaotic dynamics theory, a dynamic system can produce the instability phenomena of divergence, periodic oscillation, bifurcation, and chaos, if the eigenvalues of the Jacobian matrix of dynamical system satisfy certain condition [14,15]. In essence, a nonlinear iterative map is generated by the MCA neural network algorithm, which within different parameter intervals can exhibit different behaviors, where, under some conditions typical chaos phenomena are displayed [8]. Recently there has been an increased interest in the analysis of the relevant issues [8,16,17]. 
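For reference, the standard forms of the discrete dynamical system and of the one-dimensional Lyapunov exponent described verbally above can be written as follows (these are the usual textbook definitions matching the descriptions given here):

```latex
% Standard form of an n-dimensional discrete dynamical system with parameter vector p
x_{k+1} = f(x_k, p), \qquad x_k \in \mathbb{R}^n,\; k = 0, 1, 2, \ldots
% Lyapunov exponent of a 1-D iterative map y_{n+1} = f(y_n): the average of the
% natural logarithm of the absolute derivative evaluated along the trajectory
\mathrm{LE} = \lim_{N \to \infty} \frac{1}{N} \sum_{n=0}^{N-1} \ln \left| f'(y_n) \right|
```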
The chaos theory is applied to fully understand the convergent failure of periodical oscillation and chaos and chaos of iterative solution [18][19][20]. As one of MCA algorithm, Douglas's MCA algorithm can lay out the most properties of MCA algorithms. Therefore, we will obtain general MCA analysis result and extend the properties based on the study of this algorithm. The article discusses in different aspects the causes of some chaos phenomena in Douglas' MCA algorithm. Then on the basis of chaos control principle, the stability transformation method (STM) [21] is applied to control the Douglas' MCA chaos and thus stable convergence solution can be achieved. Specifically, the unstable fixed points embedded in the periodic and chaos orbit of the MCA dynamical system are stabilized by STM, the results of numerical simulation have been demonstrated. The control results are demonstrated with the Lyapunov exponent, time series, and bifurcation diagrams of Douglas' MCA algorithm. The contributions of this article are shown as follows: (1) The chaotic behaviors of Douglas's MCA are controlled by a kind of chaos control method in the original chaotic interval, i.e., STM, moreover, some intrinsic reasons of symmetry phenomena are revealed; (2) via studying Douglas's MCA, we can obtain more effective numerical results and general achievement, which can provide some insights to chaos phenomena existing in most of MCA algorithms. The article is organized as follows. Basic chaos theory and STM are introduced in Section 2. In Section 3, the chaotic dynamic behaviors of Douglas's MCA algorithm are described, and the essential reasons of chaos phenomena are analyzed. The numerical analysis and illustration of chaos control of Douglas's MCA with STM are presented in Section 4. Finally, conclusions are drawn in Section 5. Basic theory of chaos Chaotic behaviors are observed widely in the physical world and natural systems, which attracted abundant attention from different fields after mid-20th century [17,19]. Chaos theory is a scientific theory describing erratic behaviors in certain nonlinear dynamical systems and provide new theoretical and conceptual methods to comprehend the chaos phenomenon. Typically, the n-dimensional discrete dynamic system is expressed by the formula below, where x is a n × 1 dimensional state vector and p is a control parameter vector of the dynamic system. Lyapunov exponent is a numerical method to judge the non-convergence phenomena. The Lyapunov exponent of a dynamical system is a quantity that characterizes the rate of separation of infinitesimally close trajectories. It is just the average of the natural logarithm of the absolute value of the derivatives of the map function evaluated at the trajectory points. For 1D iterative system of function y n+1 = f (y n ), the Lyapunov exponent is described as: If LE < 0, the system is conservative and convergence, elements of the phase space will stay the same along a trajectory, and the trajectory is stable corresponding to the periodic motion or a fixed point. If LE > 0, the system is dissipative and divergent, the trajectory is unstable, and the nearby trajectories depart in exponential way, and form the chaotic attractor. Therefore, Lyapunov exponent LE can be used as an index to identify the dynamic behavior and the chaotic degree of strange attractor. Moreover, If LE = 0, then the trajectory is in the stable border and bifurcation state. 
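In the standard Schmelcher–Diakonos formulation, which appears to be the form of Equation (3) referenced above, the STM-controlled iteration and its D = I special case can be written as:

```latex
% Stability transformation method (STM) applied to x_{k+1} = f(x_k)
x_{k+1} = x_k + \lambda \, D \left( f(x_k) - x_k \right), \qquad 0 < \lambda < 1,
% where D is an n x n involutory matrix (D^2 = I). Fixed points of f are left
% unchanged, while the Jacobian eigenvalues of the iteration are modified.
% With D = I this reduces to a relaxed fixed-point iteration:
x_{k+1} = (1 - \lambda)\, x_k + \lambda \, f(x_k)
```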
The Lyapunov exponent changing from negative to positive means the transition of periodic motion to chaos [19]. Furthermore, another important numerical method to identify the chaotic phenomena of non-linear dynamic system is Jacobian matrix. Jacobian matrix is the matrix of all first-order partial derivatives of a vector-valued and can represent the best linear approximation to a differentiable function near a given point. It is generally be utilized to judge the non-convergence phenomena. Further, When the spectral radius of the Jacobian matrix of the dynamical system (1) is smaller than 1, i.e., r(J) < 1, the convergence of dynamical system can be obtained and the fixed point is attracted. If the spectral radius of Jacobian matrix of dynamical system (1) is larger than 1, i.e., r(J) > 1, the fixed point will lose its attracting property in the specific parameter interval and the dynamical system produces instability. After a few iterations, the iterative solutions could present the non-convergence phenomena, such as periodic oscillation, bifurcation, and even chaos. STM of chaos feedback control As mentioned in Section 2.1, when Jacobian matrix r(J) > 1, the dynamic system (1) will generate numerical instability of periodic oscillation, bifurcation, and chaos. Therefore, in order to obtain fixed points of dynamic system (1), the chaos control methods should be incorporated. The chaos feedback control method can capture the specified fixed points embedded in the chaotic attractor of nonlinear dynamical system through implementing the target guidance and position [15, 21,22]. At the same time, it can stabilize the unstable fixed points involved in the periodic orbit of dynamical system, and control the oscillation and bifurcation of the system [20]. Actually, Schmelcher and Diakonos [22] have proposed an appropriate linear transformation method to modify the Jacobian matrix eigenvalue of dynamic systems and stabilize the fluctuating fixed points of the original system. The method is named STM [21], which does not alter the values and locations of the unstable fixed points. This is expressed as follows: in the above, 0 < λ < 1, D is the n × n dimensional involutory matrix. The selection of involutory matrix D in (3) depends on the system's property. To enhance the efficiency of stabilizing the periodic orbit, it is unnecessary to take all the 2 n n! involutory matrices, but it is desirable to select the minimum number of these matrices which is called the minimum set of involutory matrices. Pingel et al. proved that for low dimensional chaotic dynamic system [21], D is to be chosen from the five following matrices according to the properties of the saddle point and spiral point of the unstable fixed points, and when the λ is set a small enough value, the unstable fixed points can be stabilized. Furthermore, λ is selected according to the eigenvalues of the dynamical system's Jaco-bian matrix. The larger the maximum of the absolute eigenvalues of Jacobian matrix is, the smaller the factor λ should be taken to obtain the stabilization, and consequently the more iterative number is required to reach the convergent solution [23]. Specially, when D = I, Equation (3) is given by the original dynamic system can be controlled when λ (0,1), when the attractor's stability can be remodeled by the STM and the unstable fixed points are stabilized into the periodic or chaotic orbits. However, if λ = 1, the original dynamic system emerges periodic oscillation and chaos can not be controlled. 
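The symmetry observation above follows from one line of algebra: for any odd map, the trajectory started at −w(0) mirrors the trajectory started at w(0).

```latex
% If f is odd, f(-w) = -f(w), and w(k+1) = f(w(k)) is a trajectory, then for
% v(k) := -w(k):
v(k+1) = -w(k+1) = -f\big(w(k)\big) = f\big(-w(k)\big) = f\big(v(k)\big),
% so {-w(k)} is also a trajectory of the same map. Attractors therefore occur
% in +/- pairs, and when the two merge the resulting attractor is symmetric
% about the origin, as seen in the bifurcation diagrams described above.
```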
Chaotic dynamics analysis of Douglas's MCA algorithm Lv and Zhang [8] analyzed the stability of Douglas' MCA learning algorithm and revealed the chaotic behaviors of the algorithm at some intervals. Douglas' MCA algorithm in 1D case is shown in: where, w is a scalar function, and k ≥ 0, all h > 0. A compact set S ⊂ R is called an invariant set of Function (5), if for any w(0) S, the trajectory of Function (5) starting from w(0) will remain in S for all k ≥ 0. Strictly, if 0 <h ≤ 2.32588, then S is an invariant set of Function (5). Especially, if 1 ≤ h ≤ 2.32588, the Douglas's MCA dynamical system displays the chaotic phenomena illustrated in Figures 1, 2, and 3. Particularly, some interesting phenomena are shown in Figures 2 and 3, where chaos symmetry and coexisting are exhibited in the bifurcation diagrams. The key reason of the attractive phenomena is that Equation (5) is an odd function. If we define w(k) = x, Equation (5) can be rewritten as: If we define w(k) = x, Equation (5) is transferred to the formulation as follows: Now it is clear that Equation (5) is an odd function. As noted in symmetry in chaos [20], odd function mapping has a period-doubling cascade, one corresponding to a positive number and the other a negative as the initial point, and the two chaotic attractors spawned by the period-doubling cascades will merge to form one symmetry attractor. The typical phenomena in the dynamics of symmetric mapping are identified and illustrated by the mathematical model of Equation (5). Specifically, it is observed that, trajectories of attractors from the positive value as their initial condition are shown in Figure 2 and the ones from the negative in Figure 3. Moreover, on h, the chaotic attractors are symmetric if their origins are. Dynamics analysis of controlled Douglas's MCA algorithm As is mentioned in Section 2, Jacobian matrix is a powerful approach to judge the non-convergence phenomena of dynamical system [24]. A dynamic system is unstable under the condition that each eigenvalue absolute of the Jacobian matrix is larger than 1. Lv and Zhang [8] has found that a lot of chaotic behaviors are represented in the interval λ [1, 2.32588]. Accordingly, we use STM to modify the eigenvalue of Jacobian matrix of Equation (5) Hence, the new dynamic equation based on STM method is described as Equation (6): when D = I, the controlled MCA Equation (6) is presented as following: Proof. we define a point w* R n is called an equilibrium of (7), if and only if Clearly, the set of all equilibrium points of (7) is 0,1, -1. For each equilibrium, the eigenvalues of Jacobian matrix at this point is computed. Let the Jacobian matrix of (7) is shown as following: There are three cases: As for equilibrium w* = 0 Therefore, 0 is unstable point. As for equilibrium w* = 1 As for equilibrium w* = -1 dG dw(k) The proof is completed. Consequently, in the new Jacobian matrix (8) of Equation (7), each of eigenvalue is less than 1 if 0 < λ < 1 η . In summary, we can control chaotic behavior in the original system if 0 < λ < 1 η , and the absolute of eigenvalue of formula (7) is less than 1 when 0 < λ < 1 η . This means that the dynamic system can converge, and the unstable system is transferred to a stable system by using STM. Furthermore, according to the Lyapunov exponent method [19], we can justify and confirm the results by using STM with the illustration of Lyapunov exponent. 
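A minimal numerical sketch of the STM idea with D = I is given below, using the logistic map as a generic stand-in for a chaotic 1-D iteration; it is not the Douglas MCA update itself, only a demonstration of how the relaxation w(k+1) = (1 − λ)w(k) + λf(w(k)) stabilizes an otherwise unstable fixed point.

```python
# Minimal illustration of STM chaos control on a 1-D map. The logistic map is
# used here as a generic stand-in for a chaotic iteration; it is NOT the
# Douglas MCA update (Equation (5)), only a demonstration of the D = I case:
#   w(k+1) = (1 - lam) * w(k) + lam * f(w(k)).
import numpy as np

def f(w, r=3.9):
    # logistic map, chaotic for r = 3.9; unstable fixed point at w* = 1 - 1/r
    return r * w * (1.0 - w)

def iterate(w0, lam, n=200):
    w, traj = w0, []
    for _ in range(n):
        w = (1.0 - lam) * w + lam * f(w)   # lam = 1 recovers the original map
        traj.append(w)
    return np.array(traj)

uncontrolled = iterate(0.3, lam=1.0)   # wanders chaotically
controlled = iterate(0.3, lam=0.1)     # relaxes toward the fixed point

w_star = 1.0 - 1.0 / 3.9
print("fixed point          :", round(w_star, 6))
print("uncontrolled, last 3 :", np.round(uncontrolled[-3:], 6))
print("controlled, last 3   :", np.round(controlled[-3:], 6))
```

With λ = 0.1 the controlled iterate settles onto the fixed point that the uncontrolled map keeps jumping around, mirroring the behaviour reported for the controlled Douglas MCA system in Figures 6-11.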
As mentioned in Section 2.1, when Lyapunov exponent LE < 0, the system trajectory is stable corresponding to the periodic motion or a fixed point;when LE > 0, it denotes that the system has dynamic behaviors and presents the chaotic phenomena of strange attractor. The Lyapunov exponent's transition from negative to positive indicates the change of periodic motion to chaos. Figures 4 and 5 present the scenarios in which Lyapunov exponent of original MCA algorithm and the Lyapunov exponent of the controlled Douglas's MCA dynamic system by STM separately. In Figure 4, in some intervals of h, the Lyapunov exponent LE is less than 0, while in some intervals, LE is larger than 0 in which the chaotic solutions of MCA algorithm occur. In Chaos control of Douglas's MCA for STM In this section, case studies of using the STM are illustrated and the time series results of Douglas' MCA from different starting points are shown in Figures 6,7,8,9,10,and 11. For each iterative map w, simulated results of an original system are given to be compared with those using STM. It is evident that the chaotic behaviors of the original dynamic system have been controlled by the STM, the unstable fixed points have been transferred to stable points, and the convergence results have been reached in the original chaotic interval. Figure 7 exhibits that when λ = 0.1, the periodic oscillation of controlled Douglas's MCA algorithm by STM is controlled and a convergence solution is achieved. Figure 8 shows when w = 1.0783, h = 1.93, the original Douglas's MCA system appears chaotic solutions. Figure 9 presents that when λ = 0.1, the chaotic behavior of Douglas's MCA algorithm is controlled. Figure 11 describes for λ = 0.1, the chaotic behavior of the system is controlled. In addition, the bifurcation diagrams of Douglas's MCA algorithm corresponding to different starting points w(0) = 0.6 and w(0) = -0.6 are shown in Figures 12 and 13, respectively. Further, applying the STM to the original MCA system, the control results of MCA algorithms with respect to Figures 12 and 13 are exhibited in Figures 14 and 15. It is found that STM can obtain the stable convergence solutions of Douglas's MCA algorithm, and control the numerical instability of periodic oscillation, bifurcation and chaos. Besides, it is worth mentioning that, Figures 12, 13, 14, and 15 also has odd function properties which present symmetric attractors. Conclusion This article focuses on the chaotic dynamics analysis, and especially chaos control of Douglas's minor component analysis algorithm. Periodic oscillation, bifurcation, and chaotic behaviors are discussed on the basis of the chaos theory, and the Lyapunov exponent and the Jacobian matrix reflecting the dynamic property of non-linear system are analyzed. Furthermore, the chaotic phenomena of Douglas' MCA algorithm under some conditions can be controlled and transformed into a stable system with STM of chaos feedback control, and the convergence solutions can be achieved in the original chaotic intervals. Generally, exploring the chaotic dynamic behavior of Douglas's MCA is a good path to understand the essential reasons for the nonconvergence in MCA method, and it is helpful to extend the effective application of the MCA and related methods. Moreover, there are lots of non-linear dynamics and chaotic phenomena in real world, a correct and general solution is not easy to achieve. 
However, the analysis in this article shows that the STM is a feasible means of controlling the chaotic behavior of Douglas's MCA in the original chaotic interval and is a novel method to tackle MCA non-convergence issues. Numerical results demonstrate that the STM is a versatile, effective, and simple method to control the instabilities and chaos of the MCA algorithm. Future studies in this area can explore the dynamics of other MCA algorithms more widely and deeply.
4,011
2012-03-15T00:00:00.000
[ "Computer Science", "Engineering" ]
Depth-Camera-Aided Inertial Navigation Utilizing Directional Constraints This paper presents a practical yet effective solution for integrating an RGB-D camera and an inertial sensor to handle the depth dropouts that frequently happen in outdoor environments, due to the short detection range and sunlight interference. In depth drop conditions, only the partial 5-degrees-of-freedom pose information (attitude and position with an unknown scale) is available from the RGB-D sensor. To enable continuous fusion with the inertial solutions, the scale ambiguous position is cast into a directional constraint of the vehicle motion, which is, in essence, an epipolar constraint in multi-view geometry. Unlike other visual navigation approaches, this can effectively reduce the drift in the inertial solutions without delay or under small parallax motion. If a depth image is available, a window-based feature map is maintained to compute the RGB-D odometry, which is then fused with inertial outputs in an extended Kalman filter framework. Flight results from the indoor and outdoor environments, as well as public datasets, demonstrate the improved navigation performance of the proposed approach. Introduction Autonomous small-scale aerial vehicles such as drones have drawn significant attention from academia and industry due to their accessibility, low cost, and easy operation, with many potential applications. A continuous and robust navigation solution is crucial for these vehicles to perform automatic control and guidance. To operate in cluttered environments or in proximity to environments where the Global Navigation Satellite System (GNSS) signals can be partially or fully blocked, various perception sensors (e.g., laser scanners or cameras) are incorporated for odometry or simultaneous localization and mapping (SLAM) solutions. Due to the lightweight and rich information, the camera-based system has been actively researched for small-scale aerial vehicles. In particular, affordable, consumer-grade RGB-D cameras (providing color and depth, such as Microsoft Kinect and RealSense) have enabled considerable advancement for 3D reconstruction and SLAM odometry navigation [1][2][3][4]. Although quite successful, most current applications have been limited to indoor scenarios due to the limited sensing range and depth dropout problems. The presence of strong infrared interference from the sunlight significantly reduces the maximum depth range (less than 4m in typical outdoor conditions). In addition, aerial vehicles typically require enough clearance from the environment to avoid any collision and operate safely. Consequently, the RGB-D sensor would act virtually as a monocular camera, causing a depth dropout problem, limiting the usability of RGB-D sensors in outdoor flying conditions. Figure 1 shows typical RGB-D images collected from an aerial vehicle, showing partial or no depth images. It also shows a reconstructed 3D map and trajectory obtained from this work. In addition, aerial vehicles typically experience a high rotational rate and/or acceleration during maneuvers. For example, a high angular motion of the vehicle but with small parallax can make the triangulation process slow and difficult. A high dynamic sensor, such as an inertial measurement unit (IMU), is required to track the motion and features. In the IMU-aided visual navigation system, the challenge occurs when the RGB-D sensor degenerates to the monocular mode. 
The scale-ambiguous (non-metric) visual translation needs to be fused with the (metric) inertial output. Although the scale can be estimated from the inertial navigation system, the unaided low-quality inertial sensor cannot converge until the features are robustly initialized. This work addresses the depth dropout problem by proposing a novel Inertial-RGB-D (Kinect) fusion method that effectively integrates the inertial odometry outputs and RGB-D or monocular images. The contributions of this work are as follows: • The use of the directional constraint of the non-metric visual translation to aid the inertial solutions. It is based on our preliminary work [5], providing more thorough results using a public dataset as well as outdoor experiments. • Our Inertial-Kinect odometry system integrates the full 6 degrees of freedom (DOF) (rotation and translation) and partial 5DOF (rotation and scale-ambiguous translation) information from the Kinect to estimate the pose of an aerial vehicle. Most existing works have been directed at indoor applications in which the full 6DOF Kinect poses are available. • We demonstrate real-time, front-end odometry while the back-end pose-graph SLAM supports low-priority multi-threaded processing for the keyframe optimization. The real-time odometry outputs are subsequently used for hovering flight control in a cluttered outdoor environment. This directional constraint essentially comprises the epipolar constraints of features between a pair of images that can aid an inertial system [6,7], and recently more computationally efficient multistate-constraint filters [8,9]. Although we rely on the same epipolar principle (actually any visual ego-motion method relies on this constraint), our method is different in that we cast the epipolar constraint as the directional constraint of the vehicle motion, which is not limited to a planar scene or estimating the epipolar points. The key benefit is the undelayed aiding of the IMU solution even under low parallax motion. In addition, our method does not require the popular inverse depth parameterization, which requires augmented state dimensions, thus more computational complexity. If monocular configuration is used all the time, for example, due to the extended period of depth-dropout, the performance will be similar to the standard visual odometry method, causing scale drift over time along the direction of the motion. The tangential direction error can be limited from the directional fusion. Figure 2 illustrates the architecture of the navigation system, which consists of a real-time, front-end odometry part and an off-board processed back-end SLAM part. An extended Kalman filter is designed using a loosely coupled integration. When 3D images are available from the Kinect sensor, a window-based, fixed-size map filter estimates the features' positions to compute the full pose of the vehicle. The window-based map filters do not maintain the cross-correlations between the features and vehicle. Thus they are suboptimal but are computationally efficient and suitable for real-time estimation. When only 2D images are delivered, the translation information with scale ambiguity is converted as a directional motion constraint to aid the inertial outputs. The back-end SLAM is processed off-board and maintains keyframe images to detect loop closures and correction. The estimated pose of the vehicle is fed back to a flight controller, which subsequently generates control signals to the onboard microcontroller. Figure 2. 
A loosely-coupled Inertial-Kinect odometry system architecture. RGB-D images are processed in a local Kinect odometry module that utilizes a window-based map for real-time processing. 2D RGB images are used for directional motion constraints and rotation rate and fused with inertial odometry within an extended Kalman filter. There is an off-board back-end SLAM that utilizes a keyframe-based graph SLAM to handle loop detection and update. The paper is outlined as follows: Section 2 provides the literature review related to the RGB-D-based navigation and mapping. Section 3 provides the methods of inertial odometry, visual pose measurements with and without directional constraints, and the integration filter. Section 4 presents the experimental results and discussions from the indoor and outdoor environments, followed by conclusions. Related Work There exists a vast amount of literature on visual navigation and SLAM, and thus this review will focus on the RGB-D-related work and its integration with inertial sensors. The work by Huang et al. [10] uses full RGB-D information for 3D SLAM on aerial vehicles. It uses full color and depth information from a Kinect sensor to detect features from the gray-scale image and use their corresponding depths for the motion estimation. Keyframe-based feature matching is performed to estimate the final camera pose of the aerial vehicle in an indoor environment. The final smoothing is performed by graph-based optimization to build a globally consistent map. The use of depth-only information is proposed by Izadi et al. [11] for a hand-held scenario, utilizing the iterative closest point (ICP) method for structured indoor environments. Another work [12] focuses on the realtime performance in which an ICP and a constant size feature map are maintained for real-time implementation. Scherer et al. [13] also use depth information in the context of the mono-SLAM framework. Another work by [14] integrates the 3D visual odometry with the ICP-based SLAM approach. The above mentioned RGB-D techniques heavily rely on the full depth information. The work of [15] addresses the depth dropout issue by solving the offline SLAM optimization problem for indoor conditions. Their work combines monocular and RGB-D measurements into a local map formation in an offline setting. The scale of the monocular camera is recovered in an offline scenario. Considering the work in the visual-inertial domain, there exist two paradigms: tightlycoupled and loosely-coupled architecture. In the tightly-coupled paradigm, the work of [16][17][18][19][20] addresses the fusion of visual and inertial information using optimization or EKF-based SLAM. Ref. [16] applied the bundle adjustment technique for the visual-inertial odometry with an efficient loop-closure method. Ref. [17] applied a similar optimization method while eliminating any moving objects, such as pedestrians, improving the robustness of the visual odometry. Ref. [18] used the filtering approach exploiting the planar geometry of the ground plane. Although quite successful, these methods are computationally expensive as well as dependent on specific visual processing pipelines. Considering the rapid development of vision processing algorithms, the integration algorithms need to be revised accordingly. Any bundle adjustment (e.g., VINS mono, DUI-VIO) or depth estimation methods (inverse depth parameterization) can cause drift in the IMU solution during the process. 
Other papers mentioned above exploit specific geometry, such as a planar ground, or eliminate moving objects, which is different from our focus. An alternative architecture is a loosely coupled method in which the visual and inertial information is treated as separate entities, and visual constraints are used to update and aid the inertial sensor [21]. The gyro information is also used to help the RGB-D pose estimator as in [22,23], in which gyroscopes are used to estimate the rotation of the cameras or as a prior to the ICP algorithm. Ref. [4] addresses the degeneracy problem of the IMU-Kinect sensor utilizing the indoor plane features from the camera. These Kinect-based approaches either work indoors or require a structured environment. Refs. [24,25] use an indirect Kalman filter that is based on the errors in the estimated measurement instead of the direct measurements from the camera and IMU systems. The work estimates the scale of the monocular camera motion estimate in the filter with an assumption of a smoothly changing scale of the scene. Learning techniques can provide a good alternative to fill the gaps in the depth image. There have been several supervised/semi-supervised depth mapping methods, mostly in road environments, and it would be interesting to see their performance in outdoor/forest environments, which are unstructured and irregular. Our work follows the loosely-coupled approach with a direct-filter implementation (the advantages of the loosely-coupled system are constant-time processing and a modular implementation). Using the concept of visual directional constraints, we avoid the explicit estimation of the scale when integrating the monocular camera and IMU. The proposed framework consists of two modules, a front-end EKF-based odometry system and a back-end module based on pose-graph optimization for global consistency. The map is not maintained in the EKF, hence the loosely coupled architecture. The benefit is that the system becomes more modular, and other vision algorithms can be effectively incorporated. Inertial Odometry The inertial odometry model consists of the kinematic equations of an inertial navigation system driven by the IMU measurements, which are the specific force (or the sum of the dynamic acceleration and gravity) and the angular rate. The position (P^n), velocity (V^n), and Euler angles (Ψ^n) of the vehicle are defined with respect to a local-tangent, locally fixed navigation frame and evolve as Ṗ^n = V^n, V̇^n = R^n_b (f^b - b^b_a) - 2 ω^n_ie × V^n + g^n(P^n), and Ψ̇^n = E^n_b (ω^b - b^b_g), where: • ω^n_ie is the Earth rotation rate in the navigation frame; • g^n(P^n) is the acceleration due to gravity; • f^b is the accelerometer measurement in the body frame; • ω^b is the gyroscope measurement in the body frame; • b^b_a is the accelerometer bias in the body frame; • b^b_g is the gyroscope bias in the body frame; • R^n_b is the direction cosine matrix transforming a vector from the body to the navigation frame; • E^n_b is the matrix transforming a body rate to an Euler angle rate. Although the Euler angles have a singularity problem when the pitch angle approaches 90°, this rarely happens in most drone operational scenarios. Thus, due to their simplicity compared to other representations such as the quaternion, the Euler angles are adopted in this work. 6DOF Pose Measurement The 6DOF pose measurement is the rigid-body transformation (R, P) of the camera from its original pose and is obtained in two steps. First, an initial pose is computed using the closed-form solution from the point clouds as in [26]. It is then used to run a weighted ICP (iterative closest point) for fine refinement.
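Returning briefly to the inertial odometry model above, the following Python sketch performs one Euler-integration step of the strapdown propagation. It assumes a roll-pitch-yaw Euler parameterization and a NED-style navigation frame, and it omits the Earth-rotation (Coriolis) term for brevity; the function names and conventions are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def euler_to_dcm(psi):
    """Body-to-navigation direction cosine matrix from roll, pitch, yaw (rad), ZYX convention."""
    r, p, y = psi
    cr, sr, cp, sp, cy, sy = np.cos(r), np.sin(r), np.cos(p), np.sin(p), np.cos(y), np.sin(y)
    return np.array([
        [cp * cy, sr * sp * cy - cr * sy, cr * sp * cy + sr * sy],
        [cp * sy, sr * sp * sy + cr * cy, cr * sp * sy - sr * cy],
        [-sp,     sr * cp,                cr * cp]])

def euler_rate_matrix(psi):
    """Matrix mapping body angular rate to Euler angle rates (singular at pitch = 90 deg)."""
    r, p, _ = psi
    cr, sr, cp, tp = np.cos(r), np.sin(r), np.cos(p), np.tan(p)
    return np.array([
        [1.0, sr * tp,  cr * tp],
        [0.0, cr,      -sr],
        [0.0, sr / cp,  cr / cp]])

def ins_propagate(P, V, psi, f_b, w_b, b_a, b_g, dt, g_n=np.array([0.0, 0.0, 9.81])):
    """One Euler-integration step of the inertial odometry.
    Assumes a NED-style navigation frame (z down); Coriolis term omitted for brevity."""
    R_nb = euler_to_dcm(psi)
    V_dot = R_nb @ (f_b - b_a) + g_n          # bias-corrected specific force plus gravity
    psi_dot = euler_rate_matrix(psi) @ (w_b - b_g)
    return P + V * dt, V + V_dot * dt, psi + psi_dot * dt
```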
The spatial location of a feature in pixel coordinates with raw depth gives (u, v, d) ∈ R^3, which can be converted into a 3D Euclidean feature position (x, y, z) ∈ R^3 relative to the camera. The mapping function g : (u, v, d) → (x, y, z) depends on the camera focal length f, the center of the image (u_0, v_0), and the baseline length L between the infrared emitter and the receiver in the Kinect sensor. The related covariance matrix W of the transformed Euclidean 3D position can be computed using the Jacobian of the mapping function, assuming independent noise in the pixel and depth measurements. The 3D features are declared as a map (M) defined in the local navigational frame. All subsequent feature measurement data (D) are matched with the existing map features using SURF descriptors. The comparison score is based on the sum of absolute differences, and if it is within a specified threshold, the pair is declared a match. As this matching can still lead to wrong matches, RANSAC is used to remove the outliers during the optimization, in which i indexes the inlier feature set A, c_i is the correspondence, and W is the weighting matrix from (3). A ring buffer is maintained for the locally tracked features, while the global keyframe map is retained in the pose-graph module as discussed in Section 3.5. Features within a predefined Euclidean vicinity are declared as update points, whereas others are declared as new points. The existing points are updated using a weighted averaging method. If the limit of the ring buffer is reached, then the oldest features are deleted. 5DOF Measurement Using Directional Constraints The 2D image processing pipeline is similar to the 3D case except that the local feature map is not utilized. The rotation (R) and translation (λP) are estimated using the standard 5-point visual odometry algorithm together with RANSAC. Using the sampling time, the motion between two consecutive images is converted to a rotational rate and a translational velocity (up to scale), (ω, λV). In order to integrate these motion estimates with the inertial sensor (which operates in metric space), the translational velocity is further converted into a unit directional constraint in the body frame, U^b. This constraint can also be related to the inertial odometry: the unit velocity in the body frame can be obtained by rotating the unit velocity in the navigation frame into the body frame. If the vehicle motion is constrained to the ground, this is similar to the non-holonomic motion constraint. For example, the tangential components of the velocity (V^b_y = 0, V^b_z = 0) become zero in the body frame, assuming no side skidding. In a general 3D case, such as for a flying vehicle, this constraint does not hold. The concept of the directional constraints naturally extends this non-holonomic motion constraint to the visual velocity, in which the lateral image velocities of the visual motion are treated as zero. The key benefit of this concept is the undelayed aiding of IMU outputs without requiring 3D information of the features or a map. However, the longitudinal image velocity is unobservable and thus requires additional depth information, which is delivered from the pose-graph SLAM module. Integration Filter with Directional Constraints An extended Kalman filter is designed to integrate the inertial and Kinect measurements in a loosely-coupled integration architecture.
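Before the filter itself is detailed below, the following Python sketch illustrates the two measurement modes just described: back-projecting a feature when depth is available and, when it is not, turning the scale-ambiguous visual translation into a unit directional measurement that corrects the velocity estimate. The generic pinhole back-projection, the reduced 3×3 velocity-only covariance, and the function names are simplifying assumptions for illustration, not the exact formulas or filter equations of the paper (in particular, the Kinect's raw disparity-to-depth conversion is assumed to have been applied already).

```python
import numpy as np

def backproject(u, v, z, fx, fy, u0, v0):
    """Pinhole back-projection of pixel (u, v) with metric depth z to camera coordinates."""
    return np.array([(u - u0) * z / fx, (v - v0) * z / fy, z])

def direction_constraint(t_visual):
    """Turn a scale-ambiguous visual translation into a unit directional measurement."""
    n = np.linalg.norm(t_visual)
    return t_visual / n if n > 1e-9 else None

def directional_update(V_n, R_bn, u_meas, P, R_noise):
    """Illustrative EKF-style correction of the navigation-frame velocity V_n using the
    unit direction of motion in the body frame, h(V) = R_bn V / ||R_bn V||.
    P is only the 3x3 velocity covariance block (the full filter state is omitted)."""
    V_b = R_bn @ V_n
    norm = np.linalg.norm(V_b)
    if norm < 1e-6 or u_meas is None:
        return V_n, P                       # constraint uninformative near zero velocity
    u_pred = V_b / norm
    # Jacobian of the normalized direction with respect to V_n (chain rule).
    H = (np.eye(3) - np.outer(u_pred, u_pred)) @ R_bn / norm
    y = u_meas - u_pred                     # innovation
    S = H @ P @ H.T + R_noise               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # gain
    return V_n + K @ y, (np.eye(3) - K @ H) @ P
```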
After discretization, the state equation (1) and the observation equation with directional constraints (5) become x(k+1) = f(x(k), u(k)) + w(k) and z(k) = h(x(k)) + v(k), where x(k), u(k), and z(k) are the state vector, control input, and measurement vector at time step k, respectively, and w(k) and v(k) are the process and observation noise, which have zero means and strength matrices Q and R. Given the models, the estimate of the state x̂(k|k) and covariance P(k|k) can be recursively computed within the filter. First, the predicted state and covariance become x̂(k|k-1) = f(x̂(k-1|k-1), u(k)) and P(k|k-1) = ∇f P(k-1|k-1) ∇f^T + Q, where ∇ represents the gradient (Jacobian) operator. The switching criteria between RGB-D and RGB measurements are based upon the availability of depth features and their spatial distribution. If the features are uniformly distributed over the image and depth features are available, then the RGB-D measurements are used to update the EKF. Otherwise, the monocular directional constraints are used for the filter update. The uncertainty of the measurements is scaled directly with the number of inliers in order to gauge the quality of the motion estimates. In order to cater to the measurement delay in the vision processing pipeline, we maintain a timestamp of each predicted state (from the EKF) in the ring buffer. Whenever the Kinect measurements (RGB-D or visual constraints) are available, the past EKF state is retrieved and updated accordingly, and the corrected state is then propagated to the current state. When a measurement is available, the innovation and its covariance are calculated as ν(k) = z(k) - h(x̂(k|k-1)) and S(k) = ∇h P(k|k-1) ∇h^T + R. Then the state estimate and its covariance are updated as x̂(k|k) = x̂(k|k-1) + K(k) ν(k) and P(k|k) = P(k|k-1) - K(k) S(k) K(k)^T, with the Kalman gain matrix K(k) = P(k|k-1) ∇h^T S(k)^-1. Pose-Graph Optimization As a back-end module, a keyframe-based pose-graph SLAM is applied to constrain the inertial-Kinect odometry further. Keyframes are selected from the Kinect measurements using a threshold on the accumulated motion estimates. Their corresponding pose/state from the EKF is passed to the pose-graph optimizer (only for selected keyframes). A new edge constraint is added to the pose-graph when a loop is detected using SURF descriptor matching between the keyframes and the current image frame. Subsequently, the graph is optimized, and on convergence, the filter state (for the respective timestamp of the keyframe) is updated in the ring buffer. The corrected state is then propagated to the current EKF state to minimize the effect of drift. Observability of the System The extended Kalman filter designed in the previous section integrates the 3D or 2D visual measurements depending on the availability of the depth information. If the directional constraints are incorporated as in Equation (5), it is clear that the velocity vector becomes only partially observable due to the unknown velocity scale λ. In addition, the velocity estimated from the IMU requires integration of the acceleration and thus does not increase the observability of the velocity state. If we use an instantaneous coordinate system of the motion (m) and express the velocity along the axial (∥) and normal (⊥) directions, the velocity vector becomes V^m = V^m_∥ + V^m_⊥ = V^m_∥, as the tangential velocity components are zero. The axial velocity component can be made observable from the 3D measurements with depth information, which effectively computes the scale of the translation and thus the velocity. Please note that the unknown velocity scale can be estimated within the EKF, as in the popular inverse-depth parametrization approaches.
However, the predicted velocity from the IMU is also unobservable due to the integration process, and thus the estimated scale suffers from drifting, causing the so-called scale drift problem. It can only be properly estimated from the 3D measurements, as in our work, or from the loop closures in SLAM. Depth Calibration The Kinect sensor used is reasonably well-calibrated from the factory settings. However, the raw range output is expressed as inverse disparity, not actual depth, thus requiring further calibration. We adopted the methods from [27], in which a checker-board is used for intrinsic/extrinsic parameter estimation using bundle adjustment-based refinement. We estimate the depth provided from the Kinect sensor for a region of interest (where the object is present) and average it. After the calibration, the depth with respect to the ground truth shows less than 1% error for up to a 3-m range, showing consistent depth results. After the depth calibration, the RGB camera is calibrated using a standard camera method. Finally, the calibration between the vision and inertial sensor is performed using the method proposed by [28], where the rotational misalignment is estimated by using the direction of gravity (from the accelerometers) and the camera's vertical orientation. Indoor Experiment A hexacopter platform is developed, which is equipped with a low-cost IMU with a 38 Hz data rate and a Kinect RGB-D sensor at 22 Hz, as shown in Figure 4. To evaluate the performance in an indoor environment, a Vicon motion capture system is utilized. A dual-core Atom embedded computer mounted on the platform collects and processes the data, running under the Robot Operating System (ROS). All data are timestamped for synchronization, and ring buffers are also used to handle the time difference between the acquisition and processing time. The hexacopter autopilot system is modified to accommodate the position control commands from the Atom processor. A cascaded PID position controller running at 50 Hz generates the waypoints and hovering commands using the Inertial-Kinect odometry outputs. To verify the method, 900 Kinect frames and 1501 IMU data packets were collected from an indoor environment. To simulate depth dropouts, some of the depth data were discarded in order to verify the proposed approach. The estimated pose from the proposed method was compared against the ground truth data from Vicon, as shown in Figure 5. The trajectory shows the take-off and lateral movements of the hexacopter platform, and the dropouts are shown in a rectangular box. The errors were computed using the ground truth in terms of root-mean-square error (RMSE). Table 1 summarizes the performance, showing that the RMSE is less than 0.2 m and 0.5°, and there is an improved performance closely resembling the ground truth. Figure 6 also confirms the consistency of the system, showing a visibly consistent 3D map after the pose-graph SLAM optimization. Figure 6. Indoor results: before (left) and after (right) pose-graph optimization where the room wall was textured with forest-like images. Public Indoor Dataset We also tested the proposed method on the publicly available datasets (fr1/desk, fr2/desk and fr1/room) from the University of Freiburg [30] to compare the performance of the Inertial-Kinect solutions. Each dataset comes with an accurate ground truth captured by external motion capture systems (Vicon).
Table 2 summarizes the results on the relative pose error (RPE) for more datasets (fr1/xyz and fr2/xyz), confirming accurate estimates compared to the ground-truth data. Table 3 compares the proposed method with state-of-the-art SLAM methods in terms of the absolute trajectory error (ATE): robust edge-based VO (REVO) key frame (KF) [31], REVO frame-to-frame (FF) [31], FOVIS (an ROS module for visual odometry) [10], and dense visual odometry [14]. The comparison confirms that our proposed method performs better than or with competitive accuracy compared to those methods. Outdoor Experiment Currently, to our knowledge, there is no public 3D dataset from a forest-like environment. Therefore, outdoor flight tests were performed in a cluttered tree environment. The average flight height was 10 m above the ground, and the maximum ground speed was approximately 5-7 m/s. The environment was challenging due to the absence of GPS position sensing under the tree canopy. An area of 10 m × 12 m was explored in manual pilot mode, collecting 1701 RGB-D Kinect frames, of which 240 frames lacked depth information due to depth dropout. Figure 7 shows the 3D trajectory of the aerial vehicle, which was estimated in real time on the onboard computer. Figure 8 also shows the pose-graph optimization results processed on an off-board laptop, together with an input image and the 3D depth map used in the Kinect odometry. It can be seen that the pose-graph optimization makes the global keyframe maps visibly more consistent. As there is no absolute GPS information available for the comparison, the normalized innovation sequences were used to check the filter consistency, showing that most of the sequence falls within the 95.5% confidence interval. The results confirm that the proposed Inertial-Kinect algorithm is capable of estimating the vehicle states in a challenging outdoor environment. The Kinect processing time is also summarized in Table 4, showing less than 100 ms of processing time and thus the real-time capability of the method. However, a high-speed camera with fast optical-flow algorithms can also be utilized to improve the navigational accuracy, thanks to the loosely coupled integration of the vision processing module. The real-time management of the estimator is also crucial for the control and guidance of the vehicle. Currently, the 10 Hz pose output rate is adequate for the high-level control of the vehicle, thanks to the fast internal angular stabilization within the drone. Conclusions An Inertial-Kinect integration framework was presented, which fuses IMU odometry and Kinect odometry in a loosely coupled EKF integration architecture. The Kinect odometry system computes the full 6DOF or partial 5DOF poses depending on depth availability. An efficient, fixed-size local feature map is maintained to calculate the full Kinect odometry. When depth dropouts occur, the visual translation is used as a directional motion constraint. The lateral image velocity components become zero, which enables a seamless aiding of IMU errors without delay. The back-end SLAM module performs the pose-graph optimization, detecting the loop closures and further correcting the IMU errors. Indoor and outdoor flight results demonstrate the robustness of the proposed approach in challenging outdoor environments. Future work will involve combining the Inertial-Kinect odometry outputs with path-planning algorithms for exploring outdoor settings.
Author Contributions: Conceptualization, methodology, validation, and writing-original draft preparation were contributed by U.Q.; conceptualization, writing-review and editing and supervision were contributed by J.K. All authors have read and agreed to the published version of the manuscript. Funding: This research received no external funding.
6,057.8
2021-09-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Giving tranexamic acid to reduce surgical bleeding in sub-Saharan Africa: an economic evaluation Background The identification of safe and effective alternatives to blood transfusion is a public health priority. In sub-Saharan Africa, blood shortage is a cause of mortality and morbidity. Blood transfusion can also transmit viral infections. Giving tranexamic acid (TXA) to bleeding surgical patients has been shown to reduce both the number of blood transfusions and the volume of blood transfused. The objective of this study is to investigate whether routinely administering TXA to bleeding elective surgical patients is cost effective, both by averting deaths occurring from the shortage of blood and by preventing infections from blood transfusions. Methods A decision tree was constructed to evaluate the cost-effectiveness of providing TXA compared with no TXA in patients with surgical bleeding in four African countries with different human immunodeficiency virus (HIV) prevalence and blood donation rates (Kenya, South Africa, Tanzania and Botswana). The principal outcome measures were cost per life saved and cost per infection (HIV, Hepatitis B, Hepatitis C) averted in 2007 International dollars ($). The probability of receiving a blood transfusion with and without TXA and the risk of blood borne viral infection were estimated. The impact of uncertainty in model parameters was explored using one-way deterministic sensitivity analyses. Probabilistic sensitivity analysis was performed using Monte Carlo simulation. Results The incremental cost per life saved is $87 for Kenya and $93 for Tanzania. In Botswana and South Africa, TXA administration is not life saving but is highly cost saving since fewer units of blood are transfused. Further, in Botswana the administration of TXA averts one case of HIV and four cases of Hepatitis B (HBV) per 1,000 surgical patients. In South Africa, one case of HBV is averted per 1,000 surgical patients. Probabilistic sensitivity analyses confirmed the robustness of the model. Conclusion An economic argument can be made for giving TXA to bleeding elective surgical patients. In countries where there is a blood shortage, TXA would be a cost effective way to reduce mortality. In countries where there is no blood shortage, TXA would reduce healthcare costs and avert blood borne infections. Background The risks and costs associated with blood transfusions have increased interest in the identification of safer and cheaper alternatives. Blood-sparing interventions are particularly important in sub-Saharan African countries due to the high prevalence of blood borne viral infections and blood shortages. In the African region, an average of 5 units of blood per 1,000 population are donated each year compared to between 30 and 60 units in high income countries [1,2]. It has been estimated that about 150,000 women die each year during pregnancy or soon after delivery because of a shortage of blood for transfusion [3]. Even when blood is available, it can transmit potentially fatal viral infections. It has been estimated that in the African region, 99% of blood is screened for HIV, 95% for HBV and 96% for HCV [2]. The administration of the antifibrinolytic agent tranexamic acid (TXA) could be a cost effective way to reduce the need for blood transfusion.
A recent systematic review of randomised controlled trials showed that the administration of TXA to elective surgical patients reduces the number of transfusions by one third and the volume of blood required per transfusion by one unit [4]. Ongoing studies are also being conducted to investigate the effectiveness of administering TXA in cases of trauma and women with post partum haemorrhage [5]. In countries with blood shortages, the administration of TXA could increase the supply of blood for those who need it. On the other hand, where blood is readily available the administration of TXA could, by reducing the need for transfusion, decrease the risk of life threatening blood borne infections and reduce costs since fewer units of blood would need to be given ( Figure 1). This suggests that the use of TXA may have benefits in sub-Saharan Africa, where resource constraints argue for cost-effective alternatives to using blood products. Nevertheless, a comprehensive literature review failed to find any studies investigating the cost-effectiveness of giving antifibrinolytic agents among elective surgical patients in developing countries. This study uses a decision analytic model to evaluate the cost-effectiveness of using TXA to reduce the need for blood transfusion, thus potentially reducing mortality from blood shortages and preventing blood borne viral infections in four African countries. The settings Four African countries were selected to represent a range of blood donation rates and HIV seroprevalence among blood donors (Table 1). South Africa has the highest blood donation rate at 17 units of blood donated per 1,000 inhabitants per year, whereas in Tanzania fewer than 3 units of blood are donated per 1,000 inhabitants per year (Table 1) [6]. Kenya and South Africa have a low HIV prevalence among the donor population (1.2%, and <0.1% respectively) whereas the prevalence is higher in Tanzania and Botswana (2.8% and 2.1%) [6]. Model A decision-analytic model was developed in DataTM PRO (TreeAge software Inc., MA, USA) as shown in Figure 2. Two costs are considered in the economic analysis: the cost of blood transfusion and the cost of TXA. The analysis did not include indirect costs such as wages and productivity losses due to illness and death. The model consists of two different strategies: routinely giving TXA to surgical patients and not giving TXA [7]. The structure of the two strategies is identical, but the associated probabilities and payoffs differ. The decision model starts with the choice between administering or not administering TXA to a hypothetical cohort of 1,000 bleeding surgical patients. Whichever strategy is chosen the patient can reach the transfusion trigger and require a blood transfusion or can be healthy without requiring a blood transfusion (transfusion trigger not reached). If the patient is transfused he/she can remain healthy, can be infected (HIV, HBV, or HCV) or can die. If a patient did not receive a clinically indicated blood transfusion because it was not available then he/ she has a higher probability of dying. For simplicity, it is assumed that a patient cannot be infected by more than one viral infection. The outcomes considered in the decision tree are: deaths of patients who could not receive blood transfusions due to blood shortage and the number of HIV, HCV and HBV infections. Data on probabilities and costs are required in order to populate the decision model. 
Probabilities were estimated from published studies and from simple mathematical models reported in the following text. All the parameters of the model, data sources and values used are presented in Tables 1, 2 and 3 and are discussed in greater detail in the 'Cost' and 'Probability' sections. Because the costs and consequences included occur within a year of treatment, no discounting is required. One-way deterministic sensitivity analysis was performed in order to estimate the impact of parameter variation on the incremental cost-effectiveness ratio. In addition, a probabilistic sensitivity analysis was also undertaken in which all utilisation and outcome variables were varied. Probabilities The probability of a surgical patient reaching the transfusion trigger and thus requiring a blood transfusion without receiving TXA (mean probability: 0.66, range: 0-1) was obtained from Davies et al. [8]. According to a recent systematic review conducted by Henry et al. [4], the relative risk of requiring a blood transfusion following TXA for a surgical patient is 0.61 (95% CI: 0.54-0.69). Thus, the probability of requiring a blood transfusion after TXA administration (0.40) was estimated by multiplying the baseline risk by the relative risk. Due to the low rate of voluntary blood donations, less than 52% of the blood requirement was available to be transfused in the WHO African Region [2]. Since there are no published data on the likelihood of adult surgical patients receiving a blood transfusion or on the demand and supply of units of blood during surgery in the four countries considered in the study, the probability of being transfused in a setting where TXA is not routinely used was based on the WHO's recommendations [9,10]. According to WHO [9], a blood supply of 10-20 whole blood units per 1,000 population will satisfy baseline clinical demand [6,9,10]. Thus, it is assumed that if the volume of blood donated (v) exceeds 0.01 times the population (p), there is no shortage of blood and all surgical patients requiring a transfusion will receive one; otherwise a shortage is assumed and only a proportion of patients will be transfused, and this proportion defines the probability of receiving a blood transfusion without TXA. With routine administration of TXA, in a situation of blood shortage, the probability of receiving a blood transfusion for patients who reach the transfusion trigger increases, because some patients receiving TXA no longer require a transfusion and those who do require fewer units, so more units are available for the proportion of the population that needs transfusion. Thus, given a shortage of blood, the probability of receiving a transfusion under routine TXA use is the no-TXA probability scaled by the factor (m/n) × (1/R), where m represents the mean number of units transfused in the absence of TXA and n is the mean number of units transfused given routine administration of TXA. According to Davies et al. [8], the mean number of units transfused without TXA in elective surgery patients (all types of surgery) having an allogeneic blood transfusion, m, is 3.13 (2.52-3.73). Thus, the mean number of units required by a patient who received TXA was estimated by subtracting from m the estimated reduction in the volume of blood transfused (-1.12, 95% CI: -1.59 to -0.64) [11]. If patients require on average 3/4 as much blood following TXA, then 4/3 times as many can be transfused using a fixed total number of units. R is the relative risk of a surgical patient requiring a transfusion given routine administration of TXA (0.61, 95% CI: 0.54-0.69) [4].
Thus, if half as many require transfusion following TXA, the proportion that can be transfused doubles. The probabilities of being transfused with and without TXA are reported by country in Table 2. In Botswana and South Africa, where the donation rate is relatively high, the administration of TXA does not increase the likelihood of being transfused. However, in countries such as Tanzania and Kenya, which have very low donation rates, the probability more than doubled (Tables 1 and 2) [6]. No studies were found that reported the probability of death among elective surgical patients in Africa. According to the meta-analysis of international studies conducted by Davies et al. [8], the probability of death for elective surgical patients in high income countries (HICs) is 0.03 (95% CI 0.00-0.21). In order to account for the higher underlying mortality rate of the SSA region, the HICs surgery mortality rate was adjusted upwards to reflect the region's higher underlying mortality; the estimated probability of dying in SSA is 0.06 [8,12]. In a one-way sensitivity analysis, this value was assumed to range between 0.04 and 0.11, with the lower figure representing the underlying probability of dying in sub-Saharan Africa, and the upper figure calculated using the above-mentioned adjustment and Davies' upper estimate [8]. Several studies conducted in SSA have shown that between 16% and 71% of deaths from maternal haemorrhage are due to lack of blood [13]. Nevertheless, no studies were found on death rates of surgical patients in Africa who did not receive clinically indicated blood transfusions. The only data available in the literature come from a cohort study of adults admitted for surgery in the US who refused blood transfusions for religious reasons [14]. This study estimated that the odds of death for a patient with a postoperative Hb of ≤8 g/dl increased 2.5 times (95% CI: 1.9-3.2) for each gram decrease in haemoglobin [14]. In SSA, due to the high risk of infections and the lack of blood, patients are usually transfused when their haemoglobin level (Hb) is 5 g/dl or below. Using the relationship estimated by Carson et al. [14], a surgical patient in SSA who does not receive a necessary blood transfusion has an estimated probability of dying of 0.45 (95% CI: 0.0 to 0.91). The per-unit risk of HIV, HBV and HCV infections in the four target countries was obtained using a risk model developed by Jayaraman et al. [15], in which the probability of being HIV, HBV or HCV infected per single unit of blood in each country was estimated by combining three components. The risk of an infected unit entering the blood supply is estimated using the prevalence of infection in donors, screening coverage and test sensitivity. The infectivity risk is the probability of developing HIV, HBV or HCV after receipt of a contaminated unit of blood. The risk of a susceptible person receiving a blood transfusion depends on the prevalence of infection in the recipient population. Assuming that each unit of blood transfused comes from a different donor, administering TXA would reduce both the units of blood required per transfusion and the probability of acquiring HIV, HBV or HCV infection through a blood transfusion, P_t = 1 - (1 - p)^u, where p is the probability of being HIV, HBV or HCV infected per unit and u is the number of units transfused with or without TXA [4,16]. The average number of units transfused (3.13 units), when TXA is not given, was taken from the study conducted by Davies et al.
[8] while the number of units transfused, when TXA is given (2.01), was calculated using Henry et al. [4] results on the effectiveness of TXA in reducing volume of blood transfusion. The HIV prevalence in donors was obtained from the PEPFAR latest release [6]. Data on the HBV and HCV prevalence among blood donors was obtained from a systematic literature review. In Tanzania and Kenya, where more than one estimate of HCV and HBV prevalence was available from the literature, an average of the available values was used to populate the model (Table 3). No data regarding the HCV prevalence in both South Africa and Botswana were retrieved. Costs All costs are reported in 2007 International dollars ($). Two cost items are considered in the analysis: the cost of blood transfusions and the cost of giving TXA. The cost of providing a unit of blood in Africa from a health and social service perspective depends on the type of system established to provide blood transfusions [17]. Where centralised transfusion services have been established the final cost of producing one unit of blood is higher than for a hospital based service because of the higher cost of recruiting, screening and distributing blood to individual hospitals throughout the country [17]. It has been estimated that costs associated with donor recruitment account for half of the budget of centralised transfusion services [17]. In a hospital based service, on the other hand, the cost of recruiting donors is shifted to the families of the patient that donate the blood or purchase it on the black market [17]. The cost of one unit of blood in Kenya, which relies on a hospital-based service, was assumed to be $15.60 (in 2007 prices) [18,19] because of a lack of country-specific data. A cost of $57.10 was assumed for Tanzania, South Africa and Botswana as these three countries have successfully introduced centralised blood transfusion services with 100% of voluntary donations [1,17,18]. Administering TXA to elective surgical patients is an inexpensive and easy intervention [20]. The time required to administer TXA and observe the patient is short (maximum 15 minutes) and no additional training is required to administer the drug (IV administration is a routine procedure for qualified nurses). Also the supplies required to administer TXA (e.g. 10 mL syringe, 100 ml bag of saline, large gauge needle) are likely to be available and affordable even in limited resource setting. Thus, the assumed costs in the present analysis were $2 and $3 for TXA administration and supplies respectively. The main cost item of the intervention is the drug cost per ampoule of TXA. The cost of TXA might vary by country and by producer [20][21][22]. Also the dosage needed to prevent fibrinolysis is not well established [4,23]. Horrow et al. [24] observed that a dose of 10 mg/kg of TXA followed by 1 mg/kg/hour is effective in decreasing bleeding among surgical patients and that larger doses did not provide any additional haemostatic benefit. As a result, a fixed dose of 2 gram intravenously infused was assumed. In previous trials, it has been observed that this dosage would be efficacious for both larger patients (>100 kg) but also safe in smaller patients (<50 kg) without adverse events [25]. Overhead costs associated with storage, distribution, and inventories were assumed to be zero. TXA is thermally stable and does not require specific storage conditions. Thus, storage and distribution costs per intervention are negligible [5]. 
The global cost of TXA (Cyklokapron® Pfizer) was obtained from the British National Formulary and converted to 2007 $ using the purchasing power parity exchange rate [26,27]. The overall cost of administering a dose of TXA is estimated to be $13 (made up of a drug cost of $8, staff time of $2 and a supply cost of $3). Handling of uncertainty Univariate deterministic analyses were performed to investigate the impact of selected model parameters on the cost effectiveness of TXA in each of the four countries. As the probability of requiring a blood transfusion without TXA can vary according to the type of surgery, the age of the patient and the adoption of a restrictive transfusion trigger, a broad range of values between 0 and 1 was explored [8]. The relative risk of requiring a blood transfusion with TXA versus no TXA was varied between 0.54 and 0.69 following Henry et al. [11]. The probability of death for a surgical patient requiring and not receiving a blood transfusion was estimated to range between 0.75 (assuming that the patient has a Hb value as low as 3 g/dl) and 0.15 (for patients who are moderately anaemic, Hb = 7 g/dl) [14]. One-way sensitivity analysis was also performed to explore how cost effectiveness changed according to the probability of death for a surgical patient (0.04-0.11) and the number of units transfused to a surgical patient without TXA (2.52-3.73) [8]. In order to account for the potential differences in TXA prices across countries, we use a range of values obtained from the published literature to explore the cost-effectiveness of TXA for different TXA prices in a one-way deterministic analysis [28]. The lowest price considered, $3.13 (2007 prices), came from a study conducted in Spain that estimated the effectiveness of TXA administration during total knee arthroplasty. The highest value, $44 (2007 prices), was retrieved from an American study on the use of antifibrinolytic agents in surgery for congenital heart disease [22,29]. The cost of a unit of blood was assumed to range between $15.60 (the cost of a unit of blood in a hospital-based blood system in Africa) and $262 (the cost of a unit of red cells in high income countries) [19,30]. A further sensitivity analysis assuming no blood shortage in any of the four countries was performed to investigate the effectiveness of TXA in preventing blood borne infection. A probabilistic approach was adopted in order to assess the impact of the uncertainty more accurately. The beta, gamma and lognormal distributions were chosen for probability, cost and relative risk parameters respectively, following the suggestions of Briggs et al. [31]. Monte Carlo simulations were conducted to generate 1,000 samples from the parameter probability distributions [31]. The incremental cost (ΔC), incremental effectiveness (ΔE) and incremental net benefit of TXA versus no TXA were calculated for each of the Monte Carlo simulations, with the incremental net benefit defined as l × ΔE - ΔC, where l is the willingness to pay for a unit change in the outcome (e.g. lives saved) [31]. Cost-Effectiveness Acceptability Curves (CEACs) show the probability that the intervention is cost effective, given the range of monetary values the policy maker is willing to pay for a particular unit change in the outcome [31]. CEACs plot the proportion of simulations for which the incremental net benefit of giving TXA versus no TXA is greater than zero (the intervention is cost effective) for a willingness to pay range of $0 to $1,000.
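To make the probability relationships described in the Methods concrete, the sketch below restates them in Python. Because the model's exact equations are not reproduced in the text above, the scaling of the transfusion probability by (m/n) × (1/R) under a blood shortage and the per-transfusion infection risk 1 - (1 - p)^u follow the verbal description only; the per-unit infection risk used in the example is a made-up illustrative number, not a value from the paper.

```python
# Illustrative restatement of the probability relationships described in the Methods.
P_TRIGGER_NO_TXA = 0.66   # probability of reaching the transfusion trigger without TXA [8]
RR_TXA = 0.61             # relative risk of requiring a transfusion with TXA [4]
M_UNITS_NO_TXA = 3.13     # mean units transfused without TXA [8]
N_UNITS_TXA = 2.01        # mean units transfused with TXA (3.13 - 1.12) [8,11]

def p_transfused_with_txa(p_transfused_no_txa):
    """Under a blood shortage, TXA frees blood: fewer patients need transfusion (factor 1/R)
    and each needs fewer units (factor m/n), so the probability of actually receiving a
    transfusion scales up accordingly (capped at 1)."""
    scale = (M_UNITS_NO_TXA / N_UNITS_TXA) * (1.0 / RR_TXA)
    return min(1.0, p_transfused_no_txa * scale)

def p_infected_per_transfusion(p_per_unit, units):
    """Risk of acquiring an infection from a transfusion of `units` units,
    assuming each unit comes from a different donor."""
    return 1.0 - (1.0 - p_per_unit) ** units

# Example with a hypothetical per-unit risk of 0.002 (not a value from the paper).
risk_no_txa = p_infected_per_transfusion(0.002, M_UNITS_NO_TXA)
risk_txa = p_infected_per_transfusion(0.002, N_UNITS_TXA)
print(f"Per-transfusion infection risk: {risk_no_txa:.4f} (no TXA) vs {risk_txa:.4f} (TXA)")
print(f"P(transfused | shortage, no-TXA probability 0.15): {p_transfused_with_txa(0.15):.3f}")
```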
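The probabilistic sensitivity analysis itself can be sketched in a few lines: draw parameters from beta, gamma and lognormal distributions, compute the incremental cost and effect for each draw, and report the proportion of draws with a positive incremental net benefit across a willingness-to-pay grid (the CEAC). The outcome model below is deliberately reduced to lives saved under a blood shortage, and all distribution parameters are illustrative placeholders rather than the values used in the published model.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SIM, COHORT = 1000, 1000                      # Monte Carlo draws; patients per cohort
M_UNITS, N_UNITS = 3.13, 2.01                   # mean units transfused without / with TXA

def draw_parameters():
    """One random draw: beta for probabilities, gamma for costs, lognormal for the relative
    risk, as suggested by Briggs et al. [31]. Shape parameters are illustrative only."""
    return {
        "p_trigger": rng.beta(66, 34),                  # prob. of reaching the transfusion trigger
        "rr_txa": rng.lognormal(np.log(0.61), 0.06),    # relative risk of transfusion with TXA
        "p_die_untransfused": rng.beta(45, 55),         # prob. of death if blood is unavailable
        "p_receive_no_txa": rng.beta(30, 70),           # prob. of receiving blood (shortage setting)
        "cost_txa": rng.gamma(13.0, 1.0),               # cost of TXA per patient (~$13)
    }

def simulate(par):
    """Incremental cost and lives saved per cohort (shortage scenario, lives-saved outcome only)."""
    need_no = par["p_trigger"]
    need_txa = par["p_trigger"] * par["rr_txa"]
    receive_txa = min(1.0, par["p_receive_no_txa"] * (M_UNITS / N_UNITS) / par["rr_txa"])
    deaths_no = COHORT * need_no * (1 - par["p_receive_no_txa"]) * par["p_die_untransfused"]
    deaths_txa = COHORT * need_txa * (1 - receive_txa) * par["p_die_untransfused"]
    d_cost = COHORT * par["cost_txa"]        # under a shortage the blood budget is unchanged
    return d_cost, deaths_no - deaths_txa    # (incremental cost, lives saved)

draws = [simulate(draw_parameters()) for _ in range(N_SIM)]
wtp_grid = np.arange(0, 1001, 100)               # willingness to pay per life saved, $0-$1,000
ceac = [np.mean([l * dE - dC > 0 for dC, dE in draws]) for l in wtp_grid]
for l, prob in zip(wtp_grid, ceac):
    print(f"WTP ${l:4d} per life saved -> P(TXA cost effective) = {prob:.2f}")
```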
Base case analysis The effectiveness of the intervention varies across countries and depends on the probability of receiving blood with and without the routine use of TXA (Table 2). The overall number of lives saved with TXA versus no TXA is given by the difference in the number of deaths with and without TXA per 1,000 patients. The number of deaths with and without TXA is the sum of three elements: deaths of those patients who need a blood transfusion and do not receive it, deaths of patients who need a blood transfusion and receive one, and deaths of patients who did not need a transfusion (deaths from surgery-related conditions). In Botswana and South Africa, where every patient who needs blood is transfused, the administration of TXA is not lifesaving. In Kenya, where the probability of receiving a blood transfusion despite receiving TXA is 33%, the administration of TXA saves 150 lives per 1,000 patients compared to the do-nothing scenario (Table 4). In Tanzania, TXA is also life saving, but the effect is slightly lower, with 140 lives saved. According to the present model, TXA does not prevent blood-borne viral infections in countries where there is a blood shortage, since any blood saved from giving TXA is reallocated to other patients. However, giving TXA in countries where blood is not tested for all viral infections can avert new cases of blood-borne infections. In Botswana, the administration of TXA can avert one HIV case and four HBV infections per 1,000 patients, since TXA reduces both the total number of blood transfusions administered in the country and the probability of being infected per blood transfusion. TXA did not prevent HIV in South Africa because the probability of acquiring HIV is very low, less than 1%, even if patients do not receive TXA. However, based on our model, the use of TXA can prevent one case of HBV per 1,000 patients in this setting. The incremental cost of administering TXA in countries where there is a blood shortage is $13,000 for 1,000 patients (Table 4). Thus, the estimated incremental cost per life saved is $87 and $93 for Kenya and Tanzania respectively. However, in South Africa and Botswana, where adequate availability of blood ensures access to transfusion for every surgical patient, use of TXA could save $59,000 per 1,000 patients since a lower number of transfusions will be performed (Table 4). Sensitivity analysis assuming no shortage of blood Assuming no shortage of blood in Tanzania, the country with the highest percentage of HIV seropositive blood donations, two HIV infections, five HBV infections and thirty-four HCV infections are averted per 1,000 surgical patients who received TXA compared with the no TXA scenario. In Kenya, if blood was available for all patients, the administration of TXA would result in the prevention of four HBV infections and eighteen HCV infections per 1,000 surgical patients. One-way sensitivity analyses Many parameters used in the model were based on assumptions, or were calculated through equations, using the limited data available from the literature. Since these parameters can vary both between and within countries, we performed extensive one-way sensitivity analyses to examine how the cost-effectiveness of TXA is affected by changes in the input values (Table 5). As shown in Table 5, the results for Botswana and South Africa are identical. In both countries, all patients needing a blood transfusion receive one.
In all four countries, the probability of requiring a blood transfusion without TXA has the greatest impact on the cost effectiveness of TXA. If no-one requires a transfusion, giving TXA increases costs by $13,000 (the cost of administering TXA to 1,000 patients). For the case in which everyone requires a transfusion, TXA is cost saving in Botswana and South Africa (-$97,000 per 1,000 patients) and life saving in Tanzania and Kenya (incremental cost per additional life saved $63 and $58). Changes in the probability of death for an anaemic patient who does not receive a blood transfusion also affect model outcomes. The higher the probability of death for those not receiving a transfusion, the lower the incremental cost per life saved of administering TXA on a routine basis to surgical patients. Assuming that patients not receiving transfusions are moderately anaemic (Hb of 7 g/dl), the incremental cost per life saved of administering TXA is $380 and $416 for Kenya and Tanzania respectively. However, if all the patients requiring transfusion are severely anaemic (Hb of 3-4 g/dl), this value decreases to approximately $50 per life saved [14]. The incremental cost per life saved is also sensitive to the cost of TXA. Assuming a cost of $3.13, the incremental cost per life saved is $54 in Kenya and $59 in Tanzania. With a TXA cost of $44, the cost per life saved increases to $327 and $350. In both Botswana and South Africa, TXA is always cost saving regardless of the price of the intervention. In those countries where there is a shortage of blood, variations in the cost of the blood do not affect the cost effectiveness of TXA, since the same amount of blood will be transfused overall. The cost-effectiveness of administering TXA is also relatively sensitive to changes in the probability of death for surgical patients in SSA. Holding other variables constant, the incremental cost per life saved increases as the probability of death among surgical patients increases, ranging from $83 and $88 per life saved, assuming a probability of death of 0.04, to $100 and $108, assuming a probability of death of 0.11. Both the relative risk reduction of requiring a blood transfusion with TXA and the number of units transfused without TXA affect the model results. However, independent of these two parameter changes, TXA remains very cost effective in Tanzania and Kenya and cost saving in Botswana and South Africa. The cost-effectiveness acceptability curves from the probabilistic sensitivity analysis plot the probability that routine TXA use is cost effective against the willingness to pay per life saved on the horizontal axis. The probability that the intervention is cost saving is indicated by the point where the CEAC cuts the vertical axis, since a zero value for l implies that the policy maker places no value on lives saved. Thus, in Botswana and South Africa, routine use of TXA is expected to be cost-saving for all of the potential combinations of parameters considered in this analysis. In Kenya and Tanzania, by contrast, the routine use of TXA has a zero probability of being cost effective when the WTP per life saved is zero; this value rises to 0.7 for a WTP per life saved higher than $100 and reaches a plateau above 0.9 for a WTP higher than $400. Discussion The routine administration of TXA in bleeding elective surgical patients could be life saving in countries such as Kenya and Tanzania where there is a shortage of blood, because more blood will be available for those who need it.
In countries where blood is readily available, such as South Africa and Botswana, the use of TXA is likely to be cost saving because the savings from reducing the number of blood transfusions needed exceed the cost of administering TXA routinely to bleeding surgical patients. In addition, where there is no blood shortage, the administration of TXA decreases the risk of transfusion-transmitted viral infections because fewer units of blood will be transfused. Therefore, independent of the cost-effectiveness threshold for adopting a health care intervention, administering TXA is a dominant strategy in countries where blood transfusions are readily available. There is reliable evidence from randomised controlled trials that the administration of TXA to bleeding elective surgical patients reduces the need for blood transfusions and reduces the amount of blood transfused [4]. Kenya, Botswana, Tanzania and South Africa were selected in order to evaluate how the cost effectiveness of administering TXA to bleeding surgical patients varies according to country-specific circumstances, specifically different blood donation rates and HIV seroprevalence [6]. Even in countries where there is a shortage of blood, TXA is a highly cost-effective intervention. According to the Commission on Macroeconomics and Health [32], in the context of developing countries a very cost effective intervention would avert one disability-adjusted life year (DALY) for less than the average per capita income for a given country or region. The estimated cost per life saved is $93 and $83 in Tanzania and Kenya respectively. Thus, assuming that a surgical patient whose life was saved due to TXA administration survives for even one year (in perfect health), the cost per DALY averted resulting from using TXA would be well below the average per capita income of Tanzania ($400) and Kenya ($680) [33]. However, there are important limitations to the data used in the model, and these need to be taken into account when interpreting the findings. Several model parameters were not available in the literature and were estimated indirectly through equations that made several strong assumptions. For example, it was assumed that the blood savings arising from use of TXA would be re-distributed among other surgical patients and not used to treat other patient groups requiring blood transfusions. Also, the model did not account for intra-country variation in healthcare infrastructure. In rural areas, for instance, the probability of death for a surgical patient and the probability of being HIV, HBV or HCV infected may be higher, as both qualified personnel and reagents for blood screening are less likely to be available. Another potential limitation is that the risk of being transfused and the risk reduction driven by TXA were taken from studies conducted in developed countries [4,8]. In particular, since the rate of preoperative anaemia among surgical patients in SSA is higher than in HICs, it is likely that the present study is underestimating the risk of being transfused and so the potential benefit of administering TXA. According to the meta-analysis conducted by Henry et al. [4], there is no reliable evidence that TXA is associated with an increased risk of adverse events such as myocardial infarction (RR 0.96, 95% CI 0.48 to 1.90), stroke (RR 1.25, 95% CI 0.47 to 3.31) or thrombotic events (RR 0.77, 95% CI 0.37 to 1.61) in HICs. However, it is unclear if this may also be the case in elective surgical patients in SSA.
Cost estimates could also have been a source of error in our model. The analysis did not account for the potential cost savings for surgical patients arising from TXA administration. For example, in those African countries, such as Mozambique, where the cost of blood transfusion services is recovered from the beneficiaries of the transfusion, TXA administration would reduce the financial burden for the patients [34]. The epidemiological transition in Africa is moving the demand for surgery towards conditions similar to those observed in developed countries. Especially in urban areas, where changes in life expectancy and health behaviours occur at a faster pace, the higher incidence of non-communicable diseases will contribute to an increased demand for elective surgical procedures. Ischemic heart disease, which is the most common cause of cardiac surgery, now ranks 8th among the causes of death in SSA and is already the leading cause of death among the elderly (>60 years) [35]. For example, it was estimated that ischemic heart disease alone accounted for 13,660 deaths in Kenya and 27,013 in South Africa in 2002 [35]. Another cause of elective surgery (e.g. hip replacement), rheumatoid arthritis, once considered a rarity in SSA, has now become a common disease in many countries [36]. This study evaluates the cost-effectiveness of administering TXA among elective surgical patients in general, without distinguishing between different types of surgery (e.g. cardiac surgery, orthopaedic surgery). This is justified since, according to the meta-analysis conducted by Henry et al. [11], TXA shows similar effectiveness in reducing both the risk and the volume of blood transfused across all types of elective surgery. In order to account for the difference in the risk of receiving a blood transfusion (with and without TXA) between types of surgery and between countries, extensive one-way sensitivity analyses have been performed. As this was a simulation study, the data to populate the model came from different sources and settings, which could have affected the parameter estimates. It is possible that TXA may also reduce mortality (RR 0.60, 95% CI: 0.32-1.12) and the risk of re-operation for bleeding (RR 0.67, 95% CI: 0.41-1.09) [11]. Although neither of these outcomes was statistically significant, it would be important to consider them in future studies evaluating TXA cost effectiveness. Finally, as no data were found for the HCV prevalence among the donor population in South Africa and Botswana, it was not possible to estimate whether in these two countries administration of TXA would lead to a reduction in the number of HCV infections transmitted through blood transfusions. According to Ozgediz and Riviello [37], although surgical conditions account for 11% of the global burden of disease, with 25 million disability-adjusted life years in Africa, surgical procedures are "neglected diseases" in LMICs and in particular in sub-Saharan Africa [37,38]. This study has shown that the routine administration of TXA could be a very cost effective intervention for reducing both the cost and the risks associated with surgical procedures requiring blood transfusions in sub-Saharan Africa [37,38]. It has been demonstrated that TXA could be potentially lifesaving in those African countries where there is a blood shortage. Moreover, it can also reduce cost and prevent some blood-borne infections where blood is readily available.
7,798
2010-02-17T00:00:00.000
[ "Economics", "Medicine" ]
One-pot synthesis of monolithic silica-cellulose aerogel applying a sustainable sodium silicate precursor Abstract Cellulose aerogel is an advanced thermal insulating biomaterial. However, the application of cellulose aerogel in thermal insulation still faces critical problems, for instance, its relatively low strength and its large pore size, which precludes the Knudsen effect. In this study, a silica aerogel made from olivine silica rather than traditional tetraethoxysilane or water glass is employed to synthesize a silica-cellulose composite aerogel using a facile one-pot synthesis method. The silica aerogel nanoparticles are formed inside the cellulose nanofibrils by a sol-gel method followed by freeze-drying. The developed silica-cellulose composite aerogel has a markedly lower thermal conductivity and is significantly stronger than plain cellulose aerogel. The microstructure of the silica-cellulose aerogel was characterized by SEM, TGA, FTIR and N2 physisorption tests. The developed silica-cellulose aerogel had a bulk density of 0.055 ~ 0.06 g/cm3, a compressive strength of 95.4 kPa, a surface area of 900 m2/g and a thermal conductivity of 0.023 W/(m·K). The thermal stability of the composite aerogel was also improved, as shown by a higher cellulose decomposition temperature. Furthermore, the composite aerogel was modified with trimethylchlorosilane, making it hydrophobic and reaching a water contact angle of ~ 140°, which enhances its volumetric and thermophysical stability when applied in a humid environment. In conclusion, the resulting green silica-cellulose aerogel is a promising candidate for utilization as a high-performance insulation material. Introduction Aerogel was first invented in 1931 by extracting the solvent from a silica gel without collapsing the silica gel structure [1]. Aerogel shows unique properties compared to other lightweight materials, such as polycarbonate, carbon fiber reinforced plastic or aluminum [2]. Thanks to their high porosity (φ > 95%) and low thermal conductivity, aerogels are excellent materials for thermal insulation, catalytic supports and chemical absorbers [3][4][5][6][7][8]. Nowadays, with the increasing demand for green chemistry, aerogels made from nanocellulose have gained much attention due to their wide availability and renewability. Cellulose nanofibers are lightweight, mechanically strong nano/microfibers produced from plant-based materials [9]. Normally, they are applied in the textile industry and in bio/polymer composite fields as well. Cellulose mainly consists of repeating glucose units attached to each other. Compared to other polymer fibers from petrochemical resources, naturally occurring cellulose fibers are acknowledged as sustainable and green alternatives with high aspect ratio and specific surface area [10]. However, although cellulose aerogel has a very high porosity (beyond 97%) and good formability, its thermal insulation performance is still not comparable to that of conventional silica aerogel. This is due to the much bigger pore size between cellulose fibers (around tens of micrometers), so the Knudsen effect cannot play a major role in limiting thermal conduction. Moreover, cellulose aerogel has a quite low strength compared to other aerogels, for instance polymer aerogels like PU (polyurethane) and PI (polyimide) aerogels, which limits its use in real-world applications. Therefore, a suitable method to decrease the thermal conductivity and increase the strength simultaneously is still in demand. 
Silica aerogel, on the other hand, is a conventional aerogel mainly used for thermal insulation, for instance, building energy saving, subsea pipeline heat conservation and interior insulation coatings [11][12][13][14]. However, silica aerogel is more fragile than other aerogel materials, such as cellulose aerogel. Therefore, most commercial silica aerogels are in the form of small granules or powder, making it difficult to apply them in practical conditions like thermal insulation [15]. Hence, it is important to smartly utilize silica aerogel to improve its engineering properties while without compromising porosity and thermal insulation properties significantly. Currently available silica aerogel is mainly produced from organic silica sources or commercial water glass. For instance, tetrathoxysilane (TEOS) and methyltrimethoxysilane (MTMS), are relatively expensive and contain high embedded energy [5]. Meanwhile, commercial water glass is conventionally manufactured by reacting sodium carbonate (Na 2 CO 3 ) with quartz sand in the molten state at 1300~1600°C [16]. Therefore, exploring a costeffective and environmentally friendly method to produce silica aerogel is of great interest [17,18], especially considering the sustainability development and environmental impact [19]. For the silica precursor, the silica produced by dissolving the mineral olivine in waste acid has lower energy requirements than conventional methods which include a spray pyrolysis (1200-1600°C) process. In our previous research [20][21][22][23], it was shown that silica produced from olivine at 50-90°C had a purity higher than 99% and a specific surface area between 100 to 400 m 2 /g, which is much higher than normal silica [24], while the cost and CO 2 emission are much lower. Thus, the obtained nano-silica can react rapidly with sodium hydroxide (NaOH) to produce low modulus (SiO 2 /Na 2 O) sodium silicate at ambient pressure and low temperatures, thanks to its high surface area and reactivity. Thus, applying olivine-derived sodium silicate as a precursor instead of organic silica source (TEOS or TMOS) or commercial water glass can help to significantly reduce the energy consumption to produce aerogel. In the past few years, several studies were focusing on the silica-cellulose composite aerogel (SCA). For example, Demilecamps et al., [25] explored the possibility of impregnating silica into the cellulose aerogel scaffolds via molecular diffusion and forced flow, with a final supercritical drying. The resulting composite aerogel showed a higher Young's modulus and lower thermal conductivity compared to the original cellulose aerogel. Zhao et al., [26] investigated the multiscale assembly of superinsulating silica aerogels within silylation nano-cellulosic scaffolds. It was demonstrated that the novel composite aerogel had low thermal conductivity and improved mechanical strength. However, most of these studies prepared silica-cellulose aerogel by forming cellulose aerogel first and with organic silica source. To be specific, the cellulose scaffold needs to be prepared first and later impregnated with silica components from sols derived from organic precursor. Therefore, it is interesting to explore methods to prepare silicacellulose aerogel from the sol-gel process of green sodium silicate and impregnate cellulose fibers in the silicate sol. Table 1 lists several typical synthesis methods mentioned in recent literatures using water glass. 
As can be seen, most of the studies investigate the cellulose hydrogel immersed in commercial water glass with a high modulus (3.3) and then used acid to form silica nanoparticles, followed by supercritical drying. However, it was found that with the cellulose nanofibers in silica hydrogel, the hydrogel can withstand the safer and more cost-effective freeze drying to obtain the aerogel. Hence, the conventional supercritical drying could be avoided. In this study, the cellulose nano-fibrils are introduced in the inorganic and cost-effective silicate sol-gel process. The hydroxyl groups of the polymerized silicate sols during condensation and gelation can react with the -OH groups on chains of the cellulose fibers, leading the two materials chemically attached with each other and form composite hydrogel. The final silica hydrogel was reinforced with the cellulose fibers. Since the purpose of using Table 1 Production methods and properties of silica-cellulose aerogel using commercial water glass as precursor. Literatures Synthesis methods Drying method Liu et al., [9] Cellulose hydrogel film dipped in water glass followed by ethanol and sulfuric acid catalyst. Supercritical drying Demilecamps et al. [27] Cellulose-8%NaOH-1%ZnO suspension was added with sodium silicate solution to form cellulose gel. Acid was used to form silica particles in the composite aerogel. Supercritical drying Sai et al., [28] Bacterial cellulose hydrogels immersed in sodium silicate solution to gel and followed by acid catalyst. the composite aerogel was to explore the possibility to apply for thermal insulation, the volume stability and cost-effective of the developed aerogel are significantly important. Therefore, hydrophobization of the silica-cellulose composites aerogel is necessary to avoid water penetration into the hydrophilic aerogel, to increase the volume stability and service life of the composite aerogel. Because the wetting-drying processes caused by moisture in the environment can damage the pore structure of the composite aerogel, leading to the collapse of the structure. Hence, TMCS was applied for hydrophobization by chemical vapor deposition. The schematic diagram of the mechanism is shown in Fig. 1. Overall, a facile synthesis of silica-cellulose aerogel (SCA) is presented by incorporating renewable cellulose nano-fibrils into the low-cost silicate sol-gel process and freeze-drying the composite gel. Olivine silica is used to prepare the green sodium silicate precursor. The procedure is promising to prepare sustainable SCA with ultra-low density, low thermal conductivity and relatively higher mechanical properties than plain cellulose aerogel. Starting materials Olivine silica used for aerogel preparation was provided by Eurosupport. The specific surface area, pore volume, pore size, particle size and silanol content of olivine silica are shown in Table 2. The amorphous state of olivine silica is visible by X-ray diffraction pattern as shown in Fig. S1 (a). The olivine silica has a surface area of around 274 m 2 /g, indicating a fast reaction rate with sodium hydroxide. Moreover, the pore volume and pore size are both high, reaching 0.72 cm 3 /g and 10 nm, respectively. The silanol content of olivine silica reaches 8~20 OH/nm 2 , which is far beyond the commercial fumed silica and pyrogenic silica, which have a silanol content of 3~4 OH/nm 2 [29]. 
Olivine-derived sodium silicate with a modulus of 1.5 with 8% silica content was prepared by reacting the olivine silica with sodium hydroxide (NaOH) solution at 80°C for 2 h. The recipe for preparation of sodium silicate is presented in Table 3. The practical modulus was determined by using X-ray florescence. As observed in Fig. S1 (b), the mass percentage of dissolved silica in sodium hydroxide was around 99.73%, indicating a nearly completely dissolution of olivine-silica. The undissolved silica particles have limited influence on the quality of prepared sodium silicate due to the very small fraction in sodium silicate (0.27%). The pH of the prepared sodium silicate solution was 12.98, which is slightly lower than that of the commercial water glass (13.69). In order to determine the types of silicate species in olivine sodium silicate, 29 Si NMR test was carried out to measure the silicate state and the results are shown in Fig. 2. The sharp peak at À72 ppm represents the existence of Q 0 monomers, while Q 1 dimers and Q 2 trimers at c.a. À80 and À82 ppm can also be observed. Meanwhile, a moderate number of Q 2 /Q 3 groups can be observed at around À86 to À90 ppm. No Q 4 sites can be observed, with the locations lower than À100 ppm, which means all the silica in Q 4 form is dissolved in solution. For the olivine sodium silicate, a significant peak at À72 ppm indicates most of the silicate structure is monomers silicate. While the minor peaks at the chemical shifts of around À80 ppm show a less extent of Q 1 and Q 2 sites for silicate. Trace number of Q 2 /Q 3 sites can be observed at À87 to À90 ppm, indicating few percentage of highly polymerized silicate. However, compared to the NMR analysis of commercial water glass with a modulus of 3.3, there is significant difference, indicating a much higher polymerized silicate species. This is because commercial water glass production includes a silica sand at a much higher temperature of 1300 to 1600 degree of calcination with sodium carbonate (Na 2 CO 3 ), so more silicates are supposed to polymerize in solid solutions and thus more Q 4 silicate species are expected. However, more energy is also supposed to be involved in this process which is not sustainable and green. Therefore, the difference in structure of silicate species in sodium silicate solution may influence the properties and microstructure of the resulting aerogel. Water suspensions of two kinds of cellulose nanofibrils (CNF) were provided by Sappi, the Netherlands. CNF1 has a Fine S of 94% and Fine P of 4.6% while CNF2 had a Fine S of 48% and Fine P of 23.5%. The cellulose was derived from wood pulp that has been sourced from sustainably managed forests. The CNFs were prepared by the mechanical super-milling method with the a-cellulose source in the form of a white gel. The original concentration of CNF1 and CNF2 were 2.7 wt% and 3.1 wt%, respectively, which was determined by heating the raw CNF suspension at 105°C until constant mass and then calculate the concentration of solid content in the suspension. The pH of the CNFs was between 6.5 and 7.5. A good dispersion of cellulose fibers is critical to utilize its full benefits. The two CNFs were mixed for 30 min at 2000 rpm with a high shear mixer (Model L5M, high shear laboratory mixer, Silverson Machines Ltd.) to improve their dispersion until showing efficient thickening effect with a cream-like appearance. The SEM and TEM images of these two CNFs are presented in Fig. 3. 
The diameter of the two kinds of nanofibers are similar, however, the length of these two fibers were different, ranging from a few micrometers to tens of microns. The surface charge of CNF1 and CNF2 measured by zeta potential was À52.5 mV and À40.5 mV, respectively. The properties of the used CNF1 and CNF2 are shown in Table S1. Preparation of silica-cellulose composite aerogel The 8% as-prepared olivine sodium silicate was passed through ion exchange resin to obtain silicic acid, with a final pH of 2.0~2.5. Then, 25 mL of silica sol was mixed with CNFs in a beaker for 60 min at room temperature. Later, the pH of the silica-cellulose composite suspension was increased to 5.0~5.5 by adding 1 M ammonium hydroxide to accelerate the gelation process. Afterwards, the suspension was placed into a mold to cast the silicacellulose hydrogel. For all the hydrogels, the gelation times were around 20 min. The composite hydrogel image is shwon in Fig. S2, showing trasnparent and homogeneous gel. Lastly, the mold was sealed air tight with a plastic film. After 1 day aging at room temperature, the silica-cellulose composite hydrogel was freeze-dried. Specifically, the hydrogel was immersed in liquid N 2 at a temperature of À 196°C. The frozen sample was dried in a freeze dryer (Alpha 2-4 LD plus from Martin Christ, Salmenkipp) under the following conditions: ice condenser = À 57°C; vacuum 0.1 mbar; and time = 48 h. For the hydrophobic treatment of SCA, the as-prepared composite aerogel was treated by thermal chemical vapor deposition with trimethylchlorosilane (TMCS). Magnesium chloride saturated solution was poured into a vacuum desiccator for regulating relative humidity at range from 35% to 65% for 24 h. SCA was placed in a 200 mL beaker, while a 3 mL of TMCS was inserted in another 10 mL beaker. The smaller beaker containing TMCS was placed inside the 200 mL beaker. This double beaker setup was placed in the desicator and was designed to prevent direct contact of the aerogel with TMCS. The 200 mL beaker was sealed with a cap and placed in an vaccum oven at 160°for 1 h. Unreacted silanes were removed by keeping the aerogel in vacuum drying oven until the pressure reached 0.03 mbar or less. The prepared hydrophobic SCA was ready for characterization. The schematic diagram of the preparation of silica-cellulose composite aerogel is presented in Fig. 4. Preparation of pure cellulose areogel Pure cellulose aerogel was prepared according to the previous researches as reference [30,31]. The CNF1 and CNF2 suspensions were first diluted with distilled water to a concentration of 0.55% and 0.60%, respectively. The diluted suspensions were continuously stirred at 480 rpm at 20°C for 30 min using a magnetic stirrer. Then, the diluted suspension was moved into a cylindrical plastic mold with a diameter of 10 mm and a height of 20 mm. Afterwards, the assembly was frozen dried with liquid nitrogen and then moved to a freeze dryer to extract the solvent for 2 days. The recipe of all the six samples is shown in Table 4. Table 2 Properties of olivine silica. Characterization of the composite aerogel The skeletal density of the prepared silica-cellulose aerogels was determined with a Helium pycnometer (AccuPyc II 1340 Micromeritics). The bulk density of the as-prepared SCAs was determined by using the bulk volume and mass of the prepared samples. 
Based on the two densities, the porosity of the SCAs was determined according to φ = (1 − ρb/ρs) × 100%, where φ is the porosity of the tested aerogel, ρs the skeletal density of the tested aerogel, and ρb the bulk density of the tested aerogel. A water suspension of CNF was prepared for TEM analysis. The suspension was diluted to 1% of the original CNF concentration. A 200 mesh Cu grid covered with a continuous carbon film was used to support the CNF sample. An FEI Tecnai 20 Sphera instrument with a LaB6 filament was operated at an accelerating voltage of 200 kV to observe the microstructure of the CNF. The mechanical properties of the SCAs with a cylindrical shape (10 mm diameter × 20 mm height) were tested in an MTS Criterion machine equipped with a load cell of 200 N at a speed of 1 mm/min, up to a strain of ε = 80% of the original height. The thermal conductivity of the SCAs was determined with a transient plane source instrument (Hot Disk). Water contact angle (CA) measurements using the sessile drop technique were used to determine the hydrophobicity of the SCAs (Dataphysics Contact Angle System, TBU 90E). The volume of the Milli-Q water droplet used for the contact angle test was 3.000 µL. The final results correspond to the average measured CA of five droplets on the surface of the SCAs. The margin of error was defined as the 95% confidence interval of the five measurements. The microstructure of the SCAs was observed with scanning electron microscopy (SEM), using a JEOL JSM-5600 instrument at an accelerating voltage of 15 kV. The thermal stability of the SCAs was determined by thermogravimetric analysis using a NETZSCH STA449-F1 instrument with a heating rate of 5°C/min under air atmosphere. Chemical bonds in the SCAs were detected using a Varian 3100 Fourier-transform infrared (FTIR) spectrometer with wavenumbers ranging from 4000 to 400 cm−1 at a resolution of 2 cm−1. The specific surface area and pore size distribution were measured by nitrogen physisorption, which was carried out with a Micromeritics Tristar 3000 series instrument employing nitrogen at 77 K. The samples were pretreated under nitrogen gas flow with a heating rate of 10°C/min and held at 80°C for 4 h to remove moisture. Solid-state MAS NMR spectra were recorded using a Bruker Avance 400WB spectrometer. The 29Si NMR spectra were collected at 79.5 MHz on a 7 mm probe, with a pulse width of 6.5 µs, a spinning speed of 15.9 kHz and a relaxation delay of 10 s. The microstructure of the SCAs observed by SEM is shown in Fig. 5. The SCA shows a random distribution of silica and cellulose fibers, due to the heterogeneous nature of the cellulose fibers and because silica aerogel was also attached to these randomly distributed fibers. As observed in Fig. 5, the silica aerogel has a relatively strong affinity with the cellulose fibers. For SS-CNF1, the nanofibers were cross-linked within the silica aerogel structures; the silica surface was very smooth and showed a more homogeneous structure. This silica microstructure was rather different from those of conventional silica-cellulose aerogels using commercial water glass as shown in Table 1, which comprise monodisperse spherical silica particles inside the cellulose matrix (Liu et al. [9]). Therefore, the developed SCA in this study could have a higher surface area and enhanced homogeneity. As can be seen from the NMR analysis of the silica precursors in section 2.1, the commercial water glass contains more highly polymerized silicate (Q3/Q4) than olivine sodium silicate, which may be the reason for the difference in silica morphology. 
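Given the measured skeletal and bulk densities, the porosity relation defined in the characterization section above reduces to a one-line calculation. A minimal sketch follows; the bulk density is in the range reported for the SCAs, while the skeletal density value is an assumed placeholder rather than a figure taken from the paper's tables.

```python
def porosity(bulk_density: float, skeletal_density: float) -> float:
    """Porosity (%) from bulk and skeletal density: phi = (1 - rho_b / rho_s) * 100."""
    return (1.0 - bulk_density / skeletal_density) * 100.0

# Example values in g/cm^3 (skeletal density is an assumed placeholder).
rho_b = 0.055
rho_s = 1.9
print(f"porosity = {porosity(rho_b, rho_s):.1f} %")   # ~97.1 %
```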
Most of the micrometer-sized pores (20~50 lm) in original cellulose aerogel [31] (Fig. S3) are filled with silica aerogel, making it a more compacted composite than plain cellulose aerogel. However, there still exists a few pores with the size of 10~20 lm between cellulose fibers. For SS-CNF2, the cellulose fibers are slightly wider and the less homogeneous than SS-CNF1 due to the higher fraction of coarser cellulose nanofibrils in the raw material. After surface modification of SCA by TMCS reagent, the shape of the cellulose fibers remained the same while the surface of silica aerogel particles became rougher and clustery, which is in accordance with the BET results shown later, showing lower surface area of SS-CNF1-M. This change in morphology was expected since the Si-O-H group was substituted by the Si-O-Si(CH 3 ) 3 group as presented in Fig. 1. The plain cellulose aerogel contains the macropores between cellulose fibers with the size of around 20~50 lm, which was rather large compared to that of silica aerogel (10~20 nm) and SCA. Therefore, this loose structure makes plain cellulose aerogel less thermal insulating due to the air molecules can still move freely in the micron-sized pores and promote gaseous heat transfer via convection. Therefore, the silica-cellulose composite aerogel can overcome this drawback by incorporating silica aerogel in the pores to decrease gaseous heat transfer and even improve the thermal stability of cellulose fibers. Specific surface area and pore structure The physisorption isotherm and pore size distribution of SCAs from nitrogen physisorption test are presented in Fig. 6. The specific surface area, total pore volume and average pore size of the developed SCAs are shown in Table 5. Pure cellulose aerogel has a SSA BET of around 100~200 m 2 /g, while silica aerogel has a SSA BET of 600~700 m 2 /g (See Fig. S4). Table 4 indicates that the specific surface area of all the composite aerogels increased significantly compared to cellulose aerogel, suggesting the existence of nanostructured silica aerogel filling in the surface and pores of the cellulose matrix. Table 5 shows the SSA of SS-CNF1 and SS-CNF2 are much larger than that of pure cellulose aerogel and are similar to pure silica aerogel, reaching 958 m 2 /g and 614 m 2 /g, respectively. This result probably relates to the silica 3D network that has a higher surface area and thus leads to the increased surface area in the composite aerogels. The SSA of SS-CNF1 is also much higher than other researchers' work, which obtain a SSA of silica-cellulose aerogel reaching only 340 m 2 /g and 150 m 2 /g, respectively [9,32]. This phenomenon can be explained by the difference in the silicate structure as shown in the NMR test for the silicate precursor. It may indicate that low modulus silicate could form smaller silica particles inside the cellulose nanofibrils and increase the surface area. The physisorption isotherms of SS-CNF1 and SS-CNF2 present a typical Type IV isotherm, with a relatively small hysteresis, which is due to the narrow pore size distribution, with uniformly distributed pores below 4 nm. (See Fig. 6 (b)). At p/p 0 = 0.1 of the nitrogen isotherm exist a slight leap, indicating a moderate amount of micro-porosity, which may contribute to the large surface area as well. Therefore, this indicates silica aerogel covers the surface of cellulose fibers, thus changing the randomly distributed nanopores of cellulose to uniformly distributed nanopores of silica aerogel. 
The isotherm and pore structure of SCA changed significantly after TMCS modification, as shown in Fig. 6. The isotherm changed to a non-typical Type IV isotherm, indicating hydrophobic modification of aerogel result in a change in the pore size distribution. The relatively large hysteresis was caused by the broad pore size distribution with most pores ranging from 5 nm to 20 nm and concentrated in 8 nm. The larger pore sizes could be attributed to -CH 3 groups attached on the surface of silica, resulting in swelling of pores due to the repulsive force between the -CH 3 bonding. FTIR spectra The FTIR spectra of reference NCA and SCA before and after TMCS modification are shown in Fig. 7. The reference NCA spectra present typical bands of cellulose fibers, for instance, the hump at 3340 cm À1 and 1632 cm À1 suggesting O-H stretching and bending, and the sharp peak at 1053 cm À1 indicating C-O-C skeletal vibrations. Meanwhile, the characteristic peaks of C-H bending, C = C and C = O stretching are visible at 1376, 1310, and 1253 cm À1 , respectively. The SCA before hydrophobic treatment has typical Si-O-Si peaks, corresponding to 1059 cm À1 (asymmetrical stretching vibration), 795 cm À1 (asymmetrical stretching vibration) and 455 cm À1 (rocking vibration). Also, the Si-O-H bond can be observed at 967 cm À1 , showing the large amounts of hydroxyl groups on the silica surface that are available to react with the TMCS reagents for hydrophobic treatment. Minor amounts of H-O-H groups can be observed at 1632 cm À1 and 3340 cm À1 corresponding to bending vibration and stretching vibration of physically bound water. It shows the water was mostly removed through freeze drying. However, due to the hydrophilic nature of unmodified SCA, moisture in the air can be easily absorbed on the surface of aerogel, thus hydrophobic modification was necessary to resist water penetration that can damage the structure of SCA. Due to the modification with TMCS, the SCA becomes hydrophobic. TMCS has a Si-Cl bond that can react with the silanol groups of the unmodified SCA. For SS-CNF1-M and SS-CNF2-M, the characteristic peaks of the Si-C bonds are visible at 896 cm À1 and 1273 cm À1 . Also, the bending vibration of the -CH 3 group appears at 760 cm À1 . Furthermore, the characteristic peak of the Si-OH bond at 967 cm À1 disappears after surface modification, which means the silanol group was substituted by the Si-O-Si(CH 3 ) 3 group (896 cm À1 , 760 cm À1 ). The intensities of Si-C and -CH 3 bands of the two SCA are slightly different, even though the volume of TMCS used for surface modification remains the same. Hence, as discussed before, the BET result show that SS-NFC has a significantly higher surface area than SS-MFC. Therefore, the TMCS usage for NFC should be higher than that of MFC to reach the same level of trimethyl silyation, leading to the reduced intensity of Si-C and -CH 3 bands for SS-NFC. TG/DTG analyses The thermal gravimetry (TG) and differential thermal gravimetry (DTG) curves of SCA with different cellulose fibers and surface modification are illustrated in Fig. 8. The thermal decomposition of the reference NCA aerogel both consists of two phases. Firstly, the physical bound water to the surface of cellulose fibers evaporates before 105°C. It can be observed the physical bond water was 4 wt% for NCA. The second phase of decomposition lasts from 250°C to 375°C, which was attributed to the burning of cellulose fiber. The carbon chain normally decomposes at around 300~350°C. 
At this stage, most of the mass of the cellulose aerogel was lost. After heating from 375°C to 1000°C, the residual mass of the cellulose aerogel is carbon black, which is only 5 wt% for NCA. Therefore, pure cellulose aerogel can be vulnerable to higher temperatures and then loses the structural stability, consequently leading to a total collapse. However, the SS-CNF1 and SS-CNF2 aerogels showed different decomposition phases and residual mass at 1000°C, as shown in Fig. 8 (a). The amount of physical bound water for SS-CNF1 and CNF2 are higher than CNF1 and CNF2 samples, reaching 11.6% and 12.5%, respectively, indicating a more hydrophilic property. The mass loss of SS-CNF1 and SS-CNF2 between 250~400°C was 9.36% and 15.57%, respectively. Also, it can be noticed that the peak position in the DTG curve for this temperature range was different for SS-CNF1 and SS-CNF2. For SS-CNF1, the peak situated at 345.5°C, while for SS-CNF2 the peak shifted to 333.0°C. The residual mass after 1000°C of both samples showed a relatively high value due to the existence of silica, reaching 72.35% and 62.89%, respectively, indicating that the thermal stability of silica aerogel was much higher than that of cellulose aerogel. As observed from Table 6, the decomposition temperature of the DTG peak was increased due to the incorporation of silica, rising from 309°C to 345.5°C for CNF1 and to 333°C for CNF2, respectively. Therefore, silica dosage can slightly improve the thermal stability of the composite aerogel. After modification by TMCS, the physically bond water was significantly reduced for both SS-CNF1-M and SS-CNF2-M, showing the successful hydrophobic treatment. The decomposition temperature of cellulose fibers also increased to 331°C and 329°C for SS-CNF1-M and SS-CNF2-M, as compared to reference cellulose fibers. However, surface modification cannot further increase the thermal stability of the composite aerogel compared to hydrophilic SCA. The peak DT was still situated at around 329~331°C. Density and porosity of silica-cellulose aerogel The skeletal density q k , bulk density q b and porosity / of cellulose aerogel and silica-cellulose composites aerogel are shown in Table 7. The bulk density of the SCAs varied from 0.052 to 0.061 g/cm 3 , which is in between the plain silica aerogel (~0.1 g/cm 3 ) and cellulose aerogel (~0.012 g/cm 3 ). This is because the silica aerogel was attached on the surface of cellulose fibers and thus increases the pure cellulose density, while the scaffold of cellulose aerogel provides an ultralight matrix for silica aerogel. Thus, the density of SCA falls in the middle of the density of cellulose aerogel and silica aerogel, which was in line with the SEM and FTIR analysis. Pure cellulose aerogel obtains a much lower density than SCA, because silica aerogel (~0.1 g/cm 3 ) fills in the interparticle pores of cellulose nanofibrils (~0.02 g/cm 3 ). Also, the pore size of silica aerogel and cellulose aerogel was very different: silica aerogel has nanometer-sized meso-pores while for cellulose aerogel the pore size was tens of micrometers. Therefore, it is reasonable for SCA to have a higher density than that of cellulose aerogel, while it has a lower bulk density than silica aerogel. It is also observed that the four types of SCAs have slightly different densities and porosities (Table 6). SS-CNF2 has the lowest density (0.052 g/cm 3 ) among all the SCAs, showing the highest porosity of 97.1%. However, the density of SS-CNF1 was similar to SS-CNF2 ones. 
Furthermore, the density increases after TMCS modification, indicating the replacement of -OH group by -Si (CH 3 ) 3 group can increase the density of SCA since it has a larger molecular mass. Although the density of SCA increases, the porosity only slightly decreases, indicating a minor influence of surface modification on the porosity of SCA. It also can be observed that NCA1 has a slightly higher bulk density than NCA2. NCA1 contains more finer nanoscale fibers, so more fibers are in close contact with each other. The NCA2 has much longer fibers and thus the micrometer pores are much larger than for NCA1, resulting in a looser structure. Overall, the SCA composite aerogel has a lower bulk density than plain silica aerogel, while higher than that of reference cellulose aerogel. Mechanical properties The uniaxial compression results of the reference cellulose and SCAs are presented in Fig. 9. The mechanical parameters are summarized in Table 8. The stress-strain curves of the tested groups show three stages: a linear trend at very low strains (<5%), an increased slope at higher strains, and a final densification because of the collapse of the fibers pore walls. The tests were all performed until the sample was about to break at around 80% strain. For the reference CNF aerogel, the curve was typical for aerogel prepared from cellulose fibers at a very low concentration of 0.60% and 0.55% in aqueous solution [30]. The main deformation was due to the bending of the fibers and collapsing of the pores, while the compressive strength was provided by the physical cross-linking fibers and hydrogen bonds [25]. When the strain reaches higher values, the micrometer-sized pores were compressed and broken, leading to the densification of the pores resulting in load bearing of the samples. As can be seen from Table 8, the Young's modulus and compressive strength of the reference NCA was very low, reaching only 29.8 kPa. This is attributed to the ultralow density and high porosity and weak cellulose strength of the cellulose aerogels. The plain silica aerogel shows a very low stress value at low strains, according to silica aerogels prepared by other researchers, which is due to the brittleness of silica aerogel and lack of flexibility that lead to the limitation to reach higher strain. [33]. Although differences exist among the accurate stresses of different silica aerogels, the nature of brittleness of the silicon-oxygen bond is widely acknowledged. However, cellulose-silica composites aerogels presented a relatively clear improvement in Young's modulus and stress-strain curves, compared to both reference silica aerogel and cellulose aerogel. The fracture stress of SS-CNF1 and SS-CNF2 reached 62.8 and 42.8 KPa, respectively. The improvement in compressive strength was firstly due to the increase in density and decrease in total porosity. As more silica aerogel was impregnated in the pores of cellulose fibers, the density was increased thus also the strength. Another important reason was the covalent bond between cellulose changed to silicon-oxygen bond as seen in FTIR analysis. In fact, the Si-O bond is a very strong bond (452 kJ/mol), however not ductile due to the silicon-oxygen tetrahedron. Therefore, it is interesting to compensate for this shortcoming by combining cellulose fibers which can improve the ductility of the composite materials. At higher strain when the interpore of cellulose fibers is condensed, the impregnated silica can also support the pores from collapsing. 
Therefore, the SCAs have improved mechanical properties. After surface modification, the density of SCA further increases and the strength of SS-CNF1-M increases significantly, reaching 95.4 kPa. The synergy of -CH 3 groups with cellulose silica aerogel matrix provides stiffer and more ductile aerogel [34]. The repulsive forces between -CH 3 groups further increase the compressive strength of SCAs. Therefore, surface modification of SCAs can improve the mechanical properties of SCAs significantly. Thermal conductivity The thermal conductivity of SCAs and reference cellulose and silica aerogels are shown in Table 9. The reference silica aerogels possess thermal conductivities of 0.016-0.018 W/(mÁK) at room temperature and pressure, which are known to be superinsulating materials. The reference NCA1 and NCA2 have much higher thermal conductivity of 0.036 W/(mÁK) and 0.038 W/ (mÁK), respectively. Although cellulose aerogel has very low bulk density (~0.01 g/cm 3 ), they contain numerous micron-sized open pores inside the aerogel, which cannot immobilize the air inside. The silica aerogel, however, has nanometer-sized open pores of around 5~20 nm. These tiny mesopores can immobilize the air movement inside the nanopores. The mean free path of air is 68 nm at ambient pressure and room temperature. Due to the Knudsen effect, the movement of air molecules was restricted and thus the thermal conductivity was significantly decreased and even lower than air [12]. For SS-CNF1 and SS-CNF2, the silica aerogel with lower thermal conductivity filled the micron-sized pores of the cellulose aerogel and covered the surface of the cellulose fibers as well. Thus, the thermal conductivity was reduced to 0.023~0.026 W/(mÁK). These results also support the conclusion that the silica component was successfully incorporated into the matrix of cellulose fibers. However, the thermal conductivities of SCA were higher than that of plain silica aerogel. This is due to the remaining macrospores (cannot restrict air movement) inside the composite aerogel which are not fully occupied by silica aerogel and due to the increased density of SCA increases the phonon conduction through the skeleton network of silica and cellulose fibers. There is also a slight difference among these four SCAs samples in terms of thermal insulation properties. SS-CNF1 has the thermal conductivity reaching 0.026 W/(mÁK). While for SS-CNF2, this value decreases to 0.023 W/(mÁK), most probably because of the difference in bulk density, as shown in Table 8. It is noticed that surface modification increases the thermal conductivity for both SS-CNF1 and SS-CNF2. The reason can be that more -CH 3 groups are attached to the silica aerogel which can increase the density of silica components in these samples. Also, the pore size of silica was bigger and randomly distributed after modification as discussed in BET analysis. Above all, the thermal conductivity of SCAs can reach a very low value which is desirable in thermal insulation fields. Since the composite aerogel can obtain low thermal conductivity and high thermal stability at the same time, it has the advantage over traditional insulation materials, for instance, styrene foam (0.4 W/(mÁK)) and asbestos (0.08 W/(mÁK)). Hydrophobicity The water contact angle of SCA with surface modification by the TMCS/heptane reagent solution is shown in Fig. 10. 
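The Knudsen argument made in the thermal conductivity discussion above can be made quantitative with the standard expression for gaseous conduction in a confined pore, λg = λ0/(1 + 2βKn). This expression, the coefficient β ≈ 2 and the free-air conductivity used below are common aerogel-literature assumptions, not values taken from this paper; only the 68 nm mean free path is quoted in the text.

```python
# Knudsen suppression of gaseous conduction in pores, as a rough illustration of the
# argument above. The formula and beta value are standard assumptions, not from this paper.

LAMBDA_AIR = 0.026        # W/(m K), free air at room conditions (assumed)
MEAN_FREE_PATH = 68e-9    # m, mean free path of air quoted in the text
BETA = 2.0                # assumed gas/solid energy-transfer coefficient

def gas_conductivity(pore_size_m: float) -> float:
    kn = MEAN_FREE_PATH / pore_size_m            # Knudsen number
    return LAMBDA_AIR / (1.0 + 2.0 * BETA * kn)

for pore in (20e-9, 10e-6, 50e-6):               # silica mesopore vs. cellulose macropores
    print(f"pore {pore*1e9:8.0f} nm -> gas conduction ~ {gas_conductivity(pore)*1000:.1f} mW/(m K)")
```

With ~20 nm silica mesopores the gaseous contribution collapses to a few mW/(m·K), whereas in the tens-of-micrometre cellulose macropores it stays close to that of free air, consistent with the measured gap between the plain cellulose aerogel and the SCAs.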
For the original SS-CNF1 and SS-CNF2, the Mill-Q water was immediately absorbed into the matrix due to the hydrophilic nature of the Si-OH bond and also due to the numerous micron-sized pores of hydrophilic cellulose fibers (-CH 2 -OH bonds in cellulose), which makes the measurement of the water contact angle impossible. Contrariwise, as observed from Fig. 10, the water contact angle was very high for both SS-CNF1-M and SS-CNF2-M, reaching an average water contact angle of 137.0°and 140.4°, respectively, indicating their high hydrophobicity. Hydrophobicity was classified by a water contact angle above 90°. In addition, Fig. 10 (c) and (d) show that the water droplets stand on the surface of cylindrical and cubic composite aerogel without penetration. The surface modification method is in accordance with other researchers using silane containing materials [35][36][37]. The high level of hydrophobicity can improve the durability of SCAs applied in the indoor environment because the moisture in the air can constantly penetrate the matrix of hydrophilic SCA leading to the wetdrying shrinkage of silica aerogel or even corruption of the cellulose fibers. The deterioration of the pore structure of SCA can result in significant increase of thermal conductivity, leading to the loss of thermal insulating performance. Therefore, surface silylation treatment can further prolong the service life of SCA. Also, thanks to its high thermal insulation and thermal stability, it could be an ideal candidate for interlayer thermal insulation material. Sustainability A significant motivation and potential advantage of using waste olivine silica to synthesize aerogel is the reduction of the carbon emission associated with the silica aerogel synthesis. In most cases, commercial water glass is used as the precursor. However, commercial water glass that is prepared using hydrothermal treatment has a CO 2 emission of 1.514 tCO 2 /t [38], and the detailed energy use is presented in Table 10. The traditional hydrothermal method uses silica sand and sodium hydroxide as the raw material at temperatures from 150°C to 300°C at elevated pressures (1.8-2.0 MPa) to dissolve the low reactive silica sand. Therefore, electricity (1.065 tCO 2 /t) is the major energy source to produce traditional water glass. In addition, the extraction of raw silica sand also requires energy, for instance, sand dredging, washing and drying. The CO 2 emission of silica from olivine is only 0.461 tCO 2 /t according to a life cycle analysis performed by VTT (EU F7th project, ProMine internal report). It can be calculated from Table 3 that only 0.073 t olivine silica is needed to produce 1 ton of olivine sodium silicate. Silica from olivine dissolves in sodium hydroxide solution at 80°C and atmospheric pressure in just 2 h, which indicates a significantly lower electricity requirement in the production process for low modulus water glass. In order to calculate the CO 2 emission of precursor for producing 1 ton of silica aerogel, the detailed calculation is shown in Table 11. According to Table 3, the prepared olivine sodium silicate has a silica concentration of 8%, while for commercial water glass, it normally contains a silica concentration of 28~30%. Therefore, if 1-ton aerogel was produced, it is calculated that 12.5 t olivine sodium silicate or 3.57 t commercial water glass is needed, respectively. 
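The precursor comparison that follows can be reproduced, to a first approximation, by summing component masses weighted by their emission factors. The sketch below uses the emission factors quoted in the next paragraph; the per-tonne component masses assumed for the olivine silicate recipe are rough estimates, not the paper's exact Table 3/Table 11 values, so the totals are indicative only and will not match the reported figures exactly.

```python
# Rough reconstruction of the precursor CO2 comparison.

EMISSION_FACTOR = {          # t CO2 per t of material, as quoted in the text
    "NaOH": 1.915,
    "olivine_silica": 0.46,
    "water": 0.03,
    "water_glass": 1.514,
}

# Component masses (t) per tonne of each precursor (assumed, illustrative).
OLIVINE_SILICATE_RECIPE = {"olivine_silica": 0.073, "NaOH": 0.07, "water": 0.86}
WATER_GLASS_RECIPE = {"water_glass": 1.0}

def footprint(recipe: dict, tonnes_needed: float) -> float:
    """CO2 footprint (t) of the precursor demand for one tonne of aerogel."""
    per_tonne = sum(mass * EMISSION_FACTOR[name] for name, mass in recipe.items())
    return per_tonne * tonnes_needed

# Precursor demand per tonne of aerogel, from the text: 12.5 t olivine silicate vs 3.57 t water glass.
print(f"olivine route : {footprint(OLIVINE_SILICATE_RECIPE, 12.5):.2f} t CO2 / t aerogel")
print(f"water glass   : {footprint(WATER_GLASS_RECIPE, 3.57):.2f} t CO2 / t aerogel")
```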
The CO2 footprint factors used in this calculation are: sodium hydroxide pellets (1.915 tCO2/t), olivine silica (0.46 tCO2/t), water (0.03 tCO2/t) and commercial water glass (1.514 tCO2/t). Therefore, the final CO2 footprint of the silicate precursor is calculated as the sum of the footprints of each component used in its preparation. Since aerogel production can vary in the solvent exchange and drying method, only the precursor is regarded as a variable and the rest of the preparation is assumed to be the same. As can be seen, the embedded CO2 footprint is significantly lower for the olivine silica precursor (2.481 tCO2/t) than for commercial water glass (5.517 tCO2/t). Moreover, it must be emphasized that the carbon emission of the silica from olivine is calculated without taking the extra heat from the exothermal reaction into account. If this heat can be used, the carbon footprint could be lowered even further. Therefore, if only the commercial water glass used for aerogel production were replaced by olivine sodium silicate and the rest of the synthesis remained the same, the produced aerogel would have a significantly reduced CO2 footprint and would thus be more environmentally friendly. Conclusions Cellulose aerogel can function as an ultralightweight material for thermal insulation. However, the limitations of cellulose aerogel, including relatively high thermal conductivity and weak mechanical properties, have retarded its use in real-world applications. This paper presents a method to prepare green sodium silicate from olivine silica, a low-cost alternative silica source, to impregnate silica aerogel within a cellulose matrix. The silica-cellulose aerogel (SCA) shows improved compressive strength (95.4 kPa), high surface area (958 m2/g) and low thermal conductivity (ca. 23 mW/(m·K)) compared to plain cellulose aerogel. Moreover, it has an ultralow density (0.055 g/cm3) and high porosity (98%). Based on these results, the following conclusions can be drawn: The specific surface area of the SCA reaches ca. 958 m2/g for SS-CNF1 and 614 m2/g for SS-CNF2, compared to pure cellulose aerogel with an SSA of 200~300 m2/g, indicating that the sol-gel process of olivine-silica-derived low-modulus silicate can result in a higher surface area. The compressive strength of SS-CNF1 and hydrophobized SS-CNF1-M increased from 29.8 kPa to 62.8 kPa and 95.4 kPa, respectively, showing that the silica-cellulose aerogel has better mechanical properties than plain cellulose aerogel. The thermal conductivity of the composite silica-cellulose aerogel was significantly lower than that of pure cellulose aerogel due to the incorporation of fine silica aerogel particles. Surface modification by TMCS trimethyl silylation can make the SCA composites hydrophobic, with a water contact angle reaching 137.2~140.4°, which will potentially improve the durability and thermal insulating performance of SCAs in relatively high-humidity environments. The monolithic silica-cellulose aerogel can be synthesized via a low-modulus (1.5) silicate sol-gel process. The embedded CO2 emission of this new aerogel is significantly reduced, reflected by the markedly lower footprint of olivine sodium silicate compared to commercial water glass, namely 2.481 tCO2/t and 5.517 tCO2/t, respectively. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
9,982
2021-07-01T00:00:00.000
[ "Materials Science" ]
Classification of upper limb center-out reaching tasks by means of EEG-based continuous decoding techniques One of the current challenges in brain-machine interfacing is to characterize and decode upper limb kinematics from brain signals, e.g. to control a prosthetic device. Recent research work states that it is possible to do so based on low frequency EEG components. However, the validity of these results is still a matter of discussion. In this paper, we assess the feasibility of decoding upper limb kinematics from EEG signals in center-out reaching tasks during passive and active movements. The decoding of arm movement was performed using a multidimensional linear regression. Passive movements were analyzed using the same methodology to study the influence of proprioceptive sensory feedback in the decoding. Finally, we evaluated the possible advantages of classifying reaching targets, instead of continuous trajectories. The results showed that arm movement decoding was significantly above chance levels. The results also indicated that EEG slow cortical potentials carry significant information to decode active center-out movements. The classification of reached targets allowed obtaining the same conclusions with a very high accuracy. Additionally, the low decoding performance obtained from passive movements suggests that discriminant modulations of low-frequency neural activity are mainly related to the execution of movement while proprioceptive feedback is not sufficient to decode upper limb kinematics. This paper contributes to the assessment of feasibility of using linear regression methods to decode upper limb kinematics from EEG signals. From our findings, it can be concluded that low frequency bands concentrate most of the information extracted from upper limb kinematics decoding and that decoding performance of active movements is above chance levels and mainly related to the activation of cortical motor areas. We also show that the classification of reached targets from decoding approaches may be a more suitable real-time methodology than a direct decoding of hand position. Background The possibility of bypassing neuromuscular control or, in other words, activating an alternative pathway for the brain to act upon the environment, has triggered a fascinating field of research. Brain-Machine Interfaces (BMIs) are devices aimed at translating subjects' brain activity into commands [1,2]. They enable people with motor disabilities to interact with their environment in a completely new way [3]. They have been used alone or in combination with other systems, such as Functional Electrical Stimulation (FES), prosthetic arms or hand orthoses, to restore grasping functionalities in subjects with Spinal Cord Injury (SCI), where the loss of motor function is permanent [4]. Moreover, BMIs have become a promising tool in rehabilitation procedures where patients have movement limitations or difficulties to control their limb function [5][6][7]. Particularly, motor impairment after stroke is one of the main causes of permanent disability. This section of the population usually suffers from upper limb movement limitations and the recovery of the arm movement is often variable and incomplete [8]. This recovery is crucial in order to perform activities of the daily life, so the use of BMIs during the rehabilitation may be a key factor of improvement [3]. Currently, one of the main challenges of BMIs is to characterize and decode upper limb kinematics from brain signals. 
Up to now, decoding approaches were mainly centered on intracortical recordings, usually performed in non-human primates, where arrays of microelectrodes are implanted directly in the motor cortex. In some studies, the motor cortical activity of monkeys was used to perform reaching and grasping activities with a robot arm [9], or to perform three dimensional movements that included force grasping for self-feeding using a mechanical device [10]. Invasive approaches have been successfully used in people with motor disabilities to perform reaching and grasping tasks [11,12]. Less invasive procedures such as electrocorticography (ECoG) have also been used to decode two-dimensional arm trajectories [13] and different types of grasping [14]. Despite their potential, invasive approaches require surgery, which limits their use. In this respect, non-invasive methods can compensate the drawbacks of intracortical recordings. Some studies have used magnetoencephalographic (MEG) signals to predict hand movements to perform 2D trajectories [15]. MEG signals have also been used in combination with electroencephalographic (EEG) signals to discriminate between different center-out movements [16]. However, the low signal-to-noise ratio of EEG signals makes it difficult to decode hand movement trajectory. Recent works suggest that it is possible to decode hand or arm kinematics (position and velocity) from slow cortical potentials (SCPs), i.e., EEG signals oscillations below 2 Hz [17][18][19][20]. To that end, multidimensional linear regression models are applied to the data. However, it has been pointed out that this methodology has the risk of overestimating the decoding performance due to the mathematical properties of linear regression between signals in the same frequency range (in this case, slow arm movements and slow cortical potentials) [21]. Furthermore, this later study states that decoding accuracies achieved with SCPs are not above chance level. A previous work also proposed the use of multidimensional linear regression as the decoding method to control a cursor [22]. It reports that it is possible to accomplish a two-dimensional control of this cursor with performance levels comparable to those of invasive BMI systems. In their study, the decoding models had to be recalibrated to include a scaling factor due to the fact that the correlation metric is invariant to scale. Again, the way of how these results are assessed is still a matter of discussion [22][23][24], so it is necessary to gather further evidence of the real possibilities of decoding arm trajectories from EEG SCPs. In this regard, some studies have suggested the introduction of electromyographic information (EMG) into this decoding procedure [25] or even the use of muscle synergies activation coefficients extracted from this EMG information [26]. In this paper, we compare several results obtained by applying linear regression techniques to decode upper limb kinematics from EEG signals using a center-out reaching approach. We analyze arm movements using the same decoding approach proposed in previous studies [17]. The results show that arm movement decoding was significantly above chance levels. Moreover, we have analyzed passive arm movements using the same protocol to study if the neural information for decoding was related to the execution of movement, instead of being linked to proprioceptive feedback. 
The final decoding performance obtained from our study suggests that, although neural correlates can be decoded when performing upper limb movements, the decoding accuracy may not be high enough to perform a real-time control of a cursor in a 2D environment and the method is also subject to the scaling limitations. As a consequence, we also evaluated the classification of the reached targets which yielded a very high classification accuracy. From these findings, it appears that the a classification of reached targets from decoding approaches may be a more suitable real-time methodology for rehabilitation purposes (where movements are often repetitive) than a direct decoding of hand position. Experimental tests The experimental tests are based on a center-out protocol in which subjects sat in front of a computer screen where a cursor moves from a central position to several targets equally distributed around it (see Fig. 1, top). EEG signals • Active center-out movement: subjects control the cursor movement using a planar manipulandum (see Fig. 1, top). The goal is to reach the target that is randomly highlighted on the screen. The subject must reach it and then return to the central position. Targets are distributed around this central position in a circumference with a radius of 10 cm. Each time a target is reached or the cursor enters the central position, a waiting period of 400 ms is introduced. Each subject executed 10 runs in which 40 targets were randomly highlighted (around 3 minutes per run). All reaching positions were equally highlighted (each of them 5 times per run). 5 able-bodied subjects (B1-B5)(26.4 ± 3.1 year-old) performed the tests. 16 electrodes were recorded distributed over the central and parietal cortex, where a higher activity related to arm movements is expected. The equipment used was the gUSBamp (g.Tec, GmbH, Austria) with a sampling frequency of 1200 Hz. The reference was placed on the right earlobe and ground was placed on the AFz position. • Passive center-out movement: subjects are asked to passively grasp the planar manipulandum while the researcher operates it. The experimental tests are the same as with the active center-out movement. Subjects carried out 5 runs in which 40 targets were randomly highlighted (around 3 minutes per run). All reaching positions were equally highlighted (each of them 5 times per run). 5 able-bodied subjects (C1-C5)(25.2 ± 2.6 year-old) performed the tests. Only one subject performed the experiments for both active (B1) and passive (C1) movements. EEG human recordings used in this study have been approved by the ethics committee of the Miguel Hernández University of Elche, Spain. Written consent according to the Helsinki declaration was obtained from each subject. Preprocessing First, cursor kinematics were resampled to match EEG signals. EEG signals were visually inspected to reject blinks, and frontal channels were discarded to diminish ocular artifacts. For this reason, the same 16 electrodes were considered for the analysis of all conditions: FC5, FC1, FC2, FC6, C3, Cz, C4, CP5, CP1, CP2, CP6, P3, Pz, P4, PO3 and PO4. According to previous literature, neural correlates of movement kinematics are mainly found in SCPs above 0.1 Hz [27]. As a consequence, EEG signals were band-pass filtered with a zero-phase 4th-order Butterworth filter between 0.1-2 Hz. 
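A minimal sketch of the zero-phase band-pass filtering step just described, assuming SciPy and EEG stored as an (n_samples × n_channels) array; second-order sections are used here purely for numerical stability at such low cutoff frequencies, and the synthetic data are placeholders.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass_zero_phase(eeg: np.ndarray, fs: float, low: float = 0.1, high: float = 2.0,
                        order: int = 4) -> np.ndarray:
    """Zero-phase 4th-order Butterworth band-pass (second-order sections for stability)."""
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, eeg, axis=0)

# Example with synthetic data at the acquisition rate reported in the text.
fs = 1200.0
eeg = np.random.randn(int(10 * fs), 16)          # 10 s, 16 channels (placeholder data)
scp = bandpass_zero_phase(eeg, fs, 0.1, 2.0)     # slow cortical potential band
```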
For comparison purposes, the signals were also filtered between 8-12 Hz, 14-30 Hz and 0.1-40 Hz to estimate the amount of information present in each frequency band, similar to the study performed by Antelis et al. [21]. Cursor kinematics (position and speed) were also low-pass filtered with a zero-phase 4th-order Butterworth filter below 2 Hz. Finally, for each run, the EEG data from each electrode i were standardized by subtracting, for each time sample t, the mean of the signal and dividing the result by its standard deviation, as shown in (1):

$$V_i^{std}[t] = \frac{V_i[t] - \overline{V_i}}{SD_{V_i}} \qquad (1)$$

Decoding
A multidimensional linear regression, shown in (2), was applied to decode kinematics from EEG signals:

$$x[t] = a + \sum_{n=1}^{N}\sum_{k=0}^{L-1} b_{n,k}\, S_n[t-k] \qquad (2)$$

where x[t] is the kinematic state (position and velocity) at time t, S_n is the signal from channel n, L corresponds to the number of lags and N to the number of channels. The decoding parameters, a and b, were estimated using a cross-fold validation for both the active movement condition (ten folds) and the passive movement condition (five folds). The values chosen for the parameters L and N are L = 10 (around 80 ms of signal) and N = 16 (central and occipital electrodes uniformly distributed). To simplify the process, the matrix form of (2), referred to as (3), has been used; in this form X is the kinematic state [Px, Py, Vx, Vy], B is the transformation matrix, S is the features array, A is the scale matrix and NF is the number of features used, which depends on the time lag L and the number of channels N (NF = L × N + 1).

Movement profiles
We report the speed profiles (mm/s) for each subject and movement condition (active and passive movements). To that end, the average speed for each point in the trajectory (from the central position to the corresponding target) has been computed for each reaching movement, normalized in length and averaged over all trials for each subject and condition. Speed was considered negative when the direction of movement was negative with respect to the considered axis. For instance, when the subject was approaching a target on his/her left along the horizontal axis, the speed was computed as a negative value, and when he/she was moving in the opposite direction, the speed was computed as a positive value.

Continuous decoding
For the continuous decoding, the matrices B and A in (3) were obtained using a cross-fold validation (10 folds). For each fold, the training data were used to compute the decoding matrices, which were then applied to the test data to obtain the decoded kinematics. We computed the Pearson correlation coefficient between the real and decoded kinematics for each testing fold and report the performance in terms of the average correlation. The results have been compared for different frequency ranges (0.1-2 Hz, 8-12 Hz, 14-30 Hz and 0.1-40 Hz). Additionally, shuffled and random data have been used as input to assess whether the decoding accuracy was above chance level. Shuffled data were obtained by randomly mixing the target labels of the real data and the associated kinematics while keeping the temporal structure of the EEG signals, in a way equivalent to [21,28,29]. Random data were generated as standard uniform noise with the same size as the real input data. Both shuffled and random data were filtered and standardized in the same way as the actual experimental data. The decoding coefficients for random and shuffled data were computed 1000 times to average out chance effects due to the stochastic nature of the process.
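To make the pipeline above concrete, the following sketch reimplements its main steps in Python: the zero-phase band-pass filter, the per-run standardization of (1), the lagged feature matrix behind (2)-(3) and a least-squares fit evaluated with per-fold Pearson correlations. It is an illustrative reconstruction rather than the authors' code; the array layout, function names and fold handling are assumptions, and details such as the resampling of the kinematics are omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import pearsonr

FS = 1200      # sampling frequency in Hz, as reported for the gUSBamp
N_LAGS = 10    # L in the paper
# NOTE: variable names and the (time x channels) data layout are assumptions.

def preprocess(eeg, low=0.1, high=2.0, fs=FS):
    """Zero-phase 4th-order Butterworth band-pass plus per-run standardization (eq. 1)."""
    b, a = butter(4, [low, high], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, eeg, axis=0)           # zero-phase filtering along time
    return (filtered - filtered.mean(axis=0)) / filtered.std(axis=0)

def lagged_features(eeg, n_lags=N_LAGS):
    """Feature matrix S: one column per (channel, lag) plus an intercept column."""
    t, n_ch = eeg.shape
    cols = [np.ones(t - n_lags + 1)]                  # intercept term (the 'a' in eq. 2)
    for ch in range(n_ch):
        for lag in range(n_lags):
            cols.append(eeg[n_lags - 1 - lag : t - lag, ch])
    return np.column_stack(cols)                      # shape: (samples, N*L + 1)

def fit_decoder(S, X):
    """Least-squares estimate of the transformation from features to kinematics."""
    B, *_ = np.linalg.lstsq(S, X, rcond=None)
    return B

def evaluate_fold(train_eeg, train_kin, test_eeg, test_kin):
    """Train on one fold and report Pearson r for each kinematic component (Px, Py, Vx, Vy)."""
    S_tr = lagged_features(preprocess(train_eeg))
    S_te = lagged_features(preprocess(test_eeg))
    B = fit_decoder(S_tr, train_kin[N_LAGS - 1:])     # align targets with lagged features
    pred = S_te @ B
    return [pearsonr(pred[:, k], test_kin[N_LAGS - 1:, k])[0]
            for k in range(test_kin.shape[1])]
```

Shuffled and random surrogates, built as described above, can be pushed through the same two functions to obtain the chance-level distributions.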
Classification of reached targets
We evaluate the possibility of classifying reaching movements towards a particular target by analyzing EEG signals in the low-frequency range (0.1-2 Hz). Only SCPs have been taken into account, as the continuous decoding shows non-significant results in the other bands (see section Results - Continuous decoding). To that end, EEG signals and kinematics were manually segmented into blocks for each center-out movement and labeled with the corresponding target. First, the trajectory of the cursor was decoded for each movement block (from the vectors of decoded X and Y positions) and, then, a straight line was fitted to the decoded trajectory and compared to the angular position of each target to infer the movement direction. This classification was performed using a cross-fold validation for 5 different target configurations (see Fig. 1, bottom). The movement workspace was divided into sectors depending on the configuration of targets. For example, for two targets, the workspace was divided into two sectors and the estimated trajectory orientation was assigned to the nearest target. As before, shuffled and random data were used to estimate chance levels. We also assessed the performance through the estimation of classification confusion matrices and information transfer rates. Firstly, confusion matrices have been computed for each configuration and subject to show the extent of misclassification. Secondly, Information Transfer Rates (ITRs) have been computed for the average classification rates obtained by each subject for the different target configurations according to the following equation (for further information see [30]):

$$ITR = \log_2 N + P \log_2 P + (1 - P)\,\log_2\!\left(\frac{1 - P}{N - 1}\right)$$

where N is the number of classified targets and P is the accuracy of the classification, with the result expressed in bits per trial. ITR values have been plotted over the ITR curves obtained for 2, 4 and 8 classified targets to better show the performance of each subject.
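A minimal sketch of the target-classification and ITR computations described above is given below. The paper does not specify how the straight line was fitted to the decoded trajectory, so the sketch uses a total-least-squares fit (principal axis of the decoded points) with the sign resolved by the net displacement; the function names, the target-angle layout and the handling of degenerate accuracies are illustrative assumptions, not details taken from the original study.

```python
import numpy as np

def classify_direction(x_dec, y_dec, target_angles):
    """Assign one decoded center-out block to the nearest target direction.

    x_dec, y_dec  : decoded X and Y positions for a single movement block
    target_angles : angular positions (radians) of the targets in the
                    configuration being evaluated (assumed evenly spaced)
    """
    pts = np.column_stack([x_dec, y_dec])
    pts = pts - pts.mean(axis=0)
    # Total-least-squares line fit: the first principal axis of the decoded
    # trajectory gives its orientation (one possible reading of "fitting a line").
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    direction = vt[0]
    # Resolve the 180-degree ambiguity of the fitted line with the net displacement.
    net = np.array([x_dec[-1] - x_dec[0], y_dec[-1] - y_dec[0]])
    if np.dot(direction, net) < 0:
        direction = -direction
    angle = np.arctan2(direction[1], direction[0])
    diffs = np.angle(np.exp(1j * (np.asarray(target_angles) - angle)))
    return int(np.argmin(np.abs(diffs)))      # index of the predicted target

def itr_bits_per_trial(n_targets, accuracy):
    """Wolpaw information transfer rate in bits per trial.

    Accuracies of exactly 0 or 1 are handled separately to avoid log(0);
    an accuracy of 0 is mapped to 0 bits for simplicity.
    """
    n, p = n_targets, accuracy
    if p >= 1.0:
        return float(np.log2(n))
    if p <= 0.0:
        return 0.0
    return float(np.log2(n) + p * np.log2(p)
                 + (1 - p) * np.log2((1 - p) / (n - 1)))

# Example: a two-target configuration classified at 90 % accuracy
print(itr_bits_per_trial(2, 0.90))   # ~0.53 bits/trial
```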
Results
Figure 2a reports the average speed of the reaching movements for the active and passive conditions. It shows comparable velocities for both conditions (on average, 46.49 ± 9.73 mm/s for the active movements and 42.93 ± 6.44 mm/s for the passive movements). For the passive experiments, the same researcher performed the movements for all subjects, which may explain the reduced variability with respect to the active condition, where subjects performed the movements by themselves. Figure 2b-e shows the average time courses of the X and Y hand speed for an exemplary subject (B1, active movements, and C1, passive movements) and direction (bottom-right target), showing the expected initial acceleration and final deceleration for both conditions.

Continuous decoding
The Pearson correlation coefficient has been obtained after computing a cross-fold validation between all runs for each subject. Figure 3 shows the Pearson correlation coefficients obtained while performing center-out movements when decoding signals in the frequency band 0.1-2 Hz. The results show high decoding correlations (Fig. 3). In particular, subjects B3 and B5 obtain the best decoding accuracy, with some components reaching a value of 0.5. Figure 4 shows an example of 30 s of kinematic reconstruction (2D position and velocity) for one of the subjects performing active movements. In this particular example, decoding coefficients above 0.5 correspond to an accurate reconstruction of the performed trajectories (X Position and Y Position). When the decoding correlation decreases (X Velocity, Y Velocity), the reconstructed signal preserves its general tendency but loses accuracy.

Fig. 4 Example of decoded kinematics. Continuous decoding of kinematics using the linear regression decoding method (Subject 3, active center-out movement). The grey dotted line represents the real performed movement. The continuous black line represents the decoded kinematics (a X Position, b Y Position, c X Velocity, d Y Velocity). The correlation coefficient (CC) obtained from the correlation of both signals is also shown.

Previous studies have claimed that upper limb kinematics are better reconstructed from low-frequency EEG signals [17,19,21]. We tested this hypothesis by analyzing the decoding performance in four frequency bands. In agreement with these studies, our analysis showed that the decoding correlations of the higher frequency bands were close to zero and that the low-frequency band (0.1-2 Hz) yielded the best decoding accuracies (Fig. 5). Decoding performance using SCPs was slightly, but not significantly, above the results obtained with a broader frequency band (0.1-40 Hz) that includes the irrelevant higher frequencies.

To estimate the significance of our findings, the decoding approach was tested with random and shuffled data and compared with the results for active movement (Fig. 6). Active movement was decoded significantly above chance level for all kinematic components (p < 0.001, Wilcoxon rank-sum test) (Fig. 6). Also, the decoding performance of the random and shuffled conditions was not significant (p > 0.05, Wilcoxon signed-rank test). These findings differ from a previous study [21], where the correlations and normalized errors of the real models were not statistically different from those of the shuffled and random models, but they are similar to what is obtained in several works related to the topic [20,28,29]. This discrepancy could be due to the nature of the experimental data or to the way the EEG data were processed. However, the results obtained in most of the previous works suggest that decoding performance is significant when linear decoders are applied to slow cortical potentials.

Figure caption (boxplots of decoding performance): The boxplot represents the Pearson correlation coefficient obtained after computing a cross-fold validation between all runs (n = 10) for each subject and then averaged between subjects (n = 5). On each box, the central mark is the median, the edges of the box are the 25th and 75th percentiles, the whiskers extend to the most extreme datapoints not considered outliers, and the outliers are plotted individually. Position (Px and Py) and velocity (Vx and Vy) are shown for different experimental data: center-out movements (a), shuffled data (b) and random data (c).

Classification of reached targets
Figure 7 shows the success rate of targets correctly classified after computing a cross-fold validation between all runs recorded for center-out movements. For each subject, the graph shows the five different target configurations proposed (Fig. 1, bottom). The results yield a high performance for all the configurations (on average: 29.0% ± 11.8% for configuration A, 51.3% ± 19.2% for configuration B, 52.3% ± 20.5% for configuration C, 79.6% ± 15.9% for configuration D and 75.6% ± 17.0% for configuration E). As expected, the classification performance of each subject is consistent with the results of the continuous case. Unsurprisingly, subjects B3 and B5, who obtained the best decoding accuracies in the continuous approach, also had the highest success rates. The success rate obtained in the classification of two targets (configurations D and E) is particularly remarkable (subject B3, 93.0% ± 6.7% and subject B5, 89.0% ± 11.0% for configuration D; subject B3, 88.0% ± 11.3% and subject B5, 87.0% ± 9.4% for configuration E).

Theoretically, the chance level for configuration A (8 targets) should be 12.5%, for configurations B and C (4 targets) 25%, and for configurations D and E (2 targets) 50%. However, as the number of trials is small, these levels may not be representative. As a consequence, the classification of targets was also computed for shuffled data and random data, in the same way as for the continuous decoding, and compared with the active movement results (Fig. 8). The results show that the decoding of active movements was significantly above chance level for all configurations (p < 0.001, Wilcoxon rank-sum test). Confusion matrices show that misclassifications mainly involve the targets closest to the true target (Fig. 9), suggesting that the classification method is quite robust. This is particularly visible in subjects B3 and B5, who obtained the best decoding accuracies. Consistently, when analyzing information transfer rates (ITRs), subjects B3 and B5 obtain the highest ITRs (Fig. 10). Rates are remarkably high (over 0.5 bits/trial) for configurations B to E. For the remaining subjects and, in general, for configuration A (8 targets), the ITR is usually lower.

Decoding passive movement
The results obtained from the decoding of active center-out movement were significantly above chance level. One possible explanation is that the decoding is driven by the influence of proprioceptive sensory feedback while reaching each of the targets, instead of reflecting neural correlates of motor intention. To study the influence of afferent feedback on the decoding, we performed a second experiment using passive movements. This new data set was then analyzed in the same way as the previous data (decoding of the low-frequency components, 0.1-2 Hz). Figure 11a shows the Pearson correlation coefficient obtained while performing passive center-out movements (continuous approach) and Fig. 11b shows the success rate of targets correctly classified (classification approach). In both cases, performance was not above chance level (p > 0.05, Wilcoxon rank-sum test), supporting the hypothesis that EEG slow cortical potentials carry significant information related to the execution of active center-out movements and that proprioceptive feedback is not enough to decode upper limb kinematics. The significance of neural activity during active center-out movements is illustrated in Fig. 12, which shows that the decoding accuracy was always significantly higher than for passive movements for all the kinematic components (X Position, Y Position, X Velocity and Y Velocity) (p < 0.001, Wilcoxon rank-sum test, Fig. 12a) and that the success rate was significantly above the levels of passive movements for all configurations (p < 0.001, Wilcoxon rank-sum test, Fig. 12b).

Discussion
This paper contributes to the assessment of the use of linear regression methods to decode upper limb kinematics from EEG signals. Previous work states that it is possible to decode hand or arm kinematics (position and velocity) from slow cortical potentials, i.e., EEG signals below 2 Hz [17][18][19][20].
However, these results may have been misinterpreted because of the inherent properties of linear regression methods, particularly when comparing EEG signals in the same frequency range as the decoded kinematics [21]. To confirm or reject this conclusion, we have applied a similar methodology to experimental data recorded during the performance of active and passive center-out movements in a two-dimensional space.

As previously reported [17,21,22], the low-frequency band (0.1-2 Hz) concentrates most of the information extracted in upper limb kinematics decoding. According to [21], as slow cortical potentials and the decoded kinematics are sinusoid-like, the correlation between signals of this kind with equal amplitudes and small time shifts is higher at these low frequencies. This can lead to an overestimation of the decoding performance that is not related to discriminant modulations of neural activity. Our results and the experimental protocols we have explored do shed light on the nature of the SCPs if interpreted rigorously. On the one hand, compared to the active movements, passive movements differ in that the CNS does not need to compute the detailed trajectory of the arm. However, neural correlates of proprioceptive sensory feedback are still present. Nevertheless, our results show that passive movements cannot be decoded from SCPs, suggesting that there is little influence of proprioceptive feedback on the decoding. The velocity profiles of the movements performed in both conditions are similar, suggesting that this factor should not influence the final decoding performance. The shuffled and random conditions show residual correlations which do not yield appropriate trajectory reconstructions and could, again, be a consequence of the correlation metric. However, with this small sample size, caution must be applied and further evaluation should be performed using larger datasets.

Fig. 6 Continuous decoding significance. Decoding performance of center-out trajectories comparing different experimental data: active center-out movement, shuffled data and random data. The Pearson correlation coefficient (mean ± STD) is obtained after computing a cross-fold validation between all runs (n = 10) and then averaged between subjects (n = 5). The graph shows results for position (Px and Py) and velocity (Vx and Vy) and reflects differences of active center-out movement versus random and shuffled data. The stars represent significant differences with respect to the random and shuffled conditions.

Fig. 7 Classification performance. Classification performance of center-out trajectories for active center-out movements. The barplot represents the success rate (mean ± STD) of targets correctly classified, obtained after computing a cross-fold validation between all runs (n = 10). For each subject (1-5) the graph shows results for all the different target configurations (as shown in Fig. 1).

Fig. 8 Classification significance. Classification of center-out trajectories comparing different experimental data: center-out movement, shuffled data and random data. The success rate of targets correctly classified (mean ± STD) is obtained after computing a cross-fold validation between all runs (n = 10) and then averaged between subjects (n = 5). Each graph shows results for all the different target configurations A-E (see Fig. 1) and reflects differences of center-out movement versus random and shuffled data.
The decoding accuracies are lower than those reported in a recent work [22], where the authors state that it is possible to accomplish real-time two-dimensional control of a cursor with performance levels comparable to those of invasive BMI systems. In this case, decoding performance is also subject to scaling limitations. For these reasons, we have proposed a simplification of the method by computing a classification of reached targets (a discretization of the continuous decoding). This kind of approach has also been explored in several works [16,31,32]. In our case, the results have shown high success rates for different target configurations and are clearly consistent with the previously obtained decoding performance for continuous movements. These results are quite encouraging and suggest that an online application of this methodology may provide an accurate identification of upper limb movement intention. By reducing the dimensionality of the classification output, this classification approach presents promising advantages for future neurorehabilitation procedures, where EEG slow cortical potentials could be exploited to classify arm movement directions [33] and even detect movement onset [34]. This again corroborates the trajectory-encoding features of the SCPs for the active condition. Regarding rehabilitation assistance, a classification of reached targets may be more suitable, as rehabilitation therapy is often based on repetitive movements [35].

In future studies it would be interesting to also assess the role of high-frequency modulations, for instance by correlating the envelopes of those higher frequency bands with the cursor signal. Recent works by Farina and colleagues have suggested that force generation is mainly due to low-frequency neuromuscular inputs, as the neural drive acts as a linear filter that removes any component over 10 Hz [36]. Contrary to this, most of the studies that deal with corticomuscular coherence show that EEG signals are generally coupled with EMG at higher rates (beta and gamma bands), which, apparently, has no functional meaning [37,38]. One interesting point would be to assess whether high-frequency oscillations are modulated at a slower rate and thus carry information about functional motor cortical inputs, which could explain those findings in corticomuscular coherence. This behavior of high-frequency cortical components could also explain functional modulations of the alpha (8-12 Hz) and beta (16-30 Hz) bands, widely used in classical BCI-based protocols such as motor imagery. Another interesting point would be the evaluation of which bands provide more information before the movement (planning) and during the movement (execution).

Fig. 11 Passive movements decoding. Continuous decoding of center-out trajectories (a) and classification of reached targets (b) for passive center-out movement. Panel (a) represents the Pearson correlation coefficient (mean ± STD) obtained after computing a cross-fold validation between all runs (n = 5). On each box, the central mark is the median, the edges of the box are the 25th and 75th percentiles, the whiskers extend to the most extreme datapoints not considered outliers, and the outliers are plotted individually. For each subject (1-5) the graph shows results for position (Px and Py) and velocity (Vx and Vy). Panel (b) represents the success rate of targets correctly classified (mean ± STD) obtained after computing a cross-fold validation between all runs (n = 5). For each subject (1-5) the graph shows results for the five different target configurations (see Fig. 1).

Fig. 12 Passive vs active decoding. Continuous decoding of center-out trajectories (a) and classification of reached targets (b) comparing active center-out movement and passive center-out movement. For the continuous decoding, the Pearson correlation coefficient (mean ± STD) is obtained after computing a cross-fold validation between all runs (n = 10 for active and n = 5 for passive) and then averaged between subjects (n = 5). The results for position (Px and Py) and velocity (Vx and Vy) are displayed. For the classification of reached targets, the success rate of targets correctly classified (mean ± STD) is obtained after computing a cross-fold validation between all runs (n = 10 for active and n = 5 for passive) and then averaged between subjects (n = 5). The results for the five different target configurations (see Fig. 1) are displayed.

Conclusion
The main goal of this study was to shed light on the controversy surrounding current decoding procedures. For this reason, we have replicated the core methodologies of previous studies [17,21,22], i.e., multidimensional linear regression applied to center-out reaching tasks. We have found significant decoding performance when applying these linear decoders to slow cortical potentials (0.1-2 Hz). However, decoding performance is subject to scaling limitations, and there is also variability in the decoded trajectories. For this reason, we have proposed a more reliable way of characterizing the subject's motor execution from the continuously decoded trajectories (classification of reached targets), aiming at future application in a rehabilitation context. Additional control experiments (passive reaching tasks) were carried out to show that proprioceptive feedback has little influence on the decoding, suggesting that discriminant modulations of low-frequency neural activity are mainly related to the execution of movement.
6,977.6
2017-02-01T00:00:00.000
[ "Computer Science" ]
The preserve of the rural elderly, or a language for modern life? Authenticity, anonymity and indexical ambiguity in Martinican Creole . This paper investigates the effects of (ongoing) standardization on linguistic attitudes and representations in the French Caribbean island of Martinique, where traditionally stigmatized Martinican Creole (MC) boasts a quasi-official orthography and some representation in formal domains. We use socio-biographical, perceptual and attitudinal data from a questionnaire-based study to investigate the relation between respondents’ (i) exposure to ‘activist’ MC – as a proxy for standardization; (ii) attitudes to MC on the status dimension; (iii) purism and (iv) breaking away from traditional MC indexicalities. Two findings are particularly noteworthy. First, exposure to activist MC fails to predict purist attitudes towards MC, which are similarly high regardless of respondents’ degree of exposure. Secondly, we find a mismatch between highly positive status attitudes and the persistence of traditional low-status MC indexicalities. We argue that, while some traditional indexicalities may wane as the standardization process progresses, others are essential to MC’s enduring representation as an authentic language and, therefore, less likely to recede. existence of positive status-related attitudes and traditional low-status indexicalities (Section 3).Such indexical ambiguity is accounted for by pointing to the enduring framing of MC as a language of 'authenticity' and traditional culture, even within the activist milieu.Section 4 takes a more speculative turn: by analyzing reported indexicalities against individuals' specific statusrelated attitudes and degree of exposure to activist MC, we make predictions as to which traditional indexicalities may wane over time, as standardization progresses.Finally, Section 5 provides a brief conclusion for the paper, foregrounding its contributions and limitations. 1.1.LANGUAGE STANDARDIZATION IN MARTINIQUE: FROM TRADITIONAL DIGLOSSIA TO MC'S GROWING REPRESENTATION.For the greater part of its history, Martinican Creole has entertained a diglossic relation with French, its lexifier (Bernabé 1983).With MC relegated to low-prestige environments, French had the monopole of formal domains such as schooling, the administration, and the media (Prudent 1980).The prestige differential between the two languages was also reflected in their respective indexicalities, with MC being associated with the underprivileged classes, and French with the elites. In 1946, Martinique was elevated from the status of colony to that of French overseas department.Besides its political implications, this event had momentous consequences for Martinique's sociolinguistic setup.The combination of cultural assimilation and compulsory French-language education resulted in French becoming an L1 for most Martinicans (March 1996;Beck 2017).As a result, French has since gained ground in informal domains which had previously been the preserve of MC. 
In addition to provoking a major departure from the canonical model of diglossia (Ferguson 1959), the shift to French had a profound and paradoxical impact on the prestige differential between the two languages. Having expanded beyond the elites and its traditional domains of usage, French can no longer be associated exclusively with formality and prestige. Moreover, the rapid advance of French was perceived as a threat by Creole activists, who reacted by promoting MC in spheres from which it had hitherto been banned, viz. the media, the arts, and school. Their endeavors also included corpus planning initiatives. Chief among those are the standardization efforts led by the GEREC-F research group (Groupe d'Études et de Recherches en Espaces Créolophones et Francophones), which included the development of a phonemic orthography and grammatical descriptions of MC (Schnepel 2009; Ardoino 2023). Previously stigmatized variants gained wider acceptance even among the middle class (Prudent 1980), not least because of their perceived authenticity and distance from French.

These various changes call for a reexamination of Martinican bilingualism. Is MC still viewed as a minoritized variety devoid of overt prestige, or does it display some of the indexical attributes of standard languages (e.g. an association with formality and modern life)? If the latter is true, can we really posit, in keeping with traditional understandings of the Martinican continuum (Bernabé 1983), that more Creole-sounding forms/varieties would be perceived as less prestigious than forms/varieties closer to French?

1.2. LANGUAGE STANDARDIZATION OUTSIDE MARTINIQUE: INDEXICAL CHANGE, PURISM AND INSIGHTS FROM THE LITERATURE. The need to take stock of Martinican bilingualism is made even more glaring by crosslinguistic studies showing that standardization can have far-reaching consequences for minoritized languages and their speakers. These consequences are of two main kinds, more positive attitudes (Vari 2021) and increased purism, and can be attributed to the ideological shift brought about by the standardization process, which the linguistic anthropological literature has captured through the notions of 'authenticity' and 'anonymity'. While international standard languages are anonymous in their seeming geographical/ethnic 'neutrality', minoritized languages are perceived as belonging to authentic speakers for whom they act as identity markers (Woolard 2016). When minoritized languages undergo standardization, however, this binary breaks down, in ways that are yet to be fully understood. Although standardization has been linked to increased purism and the erasure of traditional indexicalities (Eckert 1983), several studies have cast doubt on the strength of such a link (Jaffe 2003; Sallabank 2010; Urla et al. 2016), showing that purism can predate standardization and that standardization need not undermine the appreciation for traditional varieties.

In the Martinican context, the effects of (ongoing) standardization are even less clear. So far, the literature on purism has either foregrounded (Térosier, François-Haugrin & Duzerol 2022) or qualified (Ardoino 2023) the danger of MC standardization for Martinican speakers' linguistic security. Moreover, such studies rely only on public discourse and do not explore individuals' attitudes/perceptions and their causes, nor the wider indexical changes that might feed purist discourse.
By directly investigating Martinicans' attitudes towards Creole and the persistence/waning of its traditional indexicalities, this paper gauges both the effectiveness of Creole activist discourse in promoting a more prestigious image of MC, and the potential positive/negative impact of the standardization endeavor on Martinican speakers.Moreover, it also provides empirical evidence to inform crosslinguistic discussions about standardization, which are often based on purely qualitative data about language ideologies (for an exception, see e.g.Vari & Tamburelli 2020). 1.3.RESEARCH QUESTIONS.This paper asks whether Martinicans represent MC as a low-prestige, 'authentic' variety, or as an emerging standard language subjected to standard language ideology or, finally, as somewhere in between.This overarching question can be broken down into the following research questions (RQ): • RQ1.Is Creole(ness) still associated with informality and informal domains? This question is about individual attitudes and shall be addressed by investigating both attitudes to MC on the status dimension and actual perceptions of MC speech. • RQ2.Is there a MC purism and, if so, is it comparable to the degree of purism towards French? • RQ3.Does MC enjoy 'authenticity' or 'anonymity' indexicalities?This question is about broader linguistic representations, which tend to be shared at the community level but may show some degree of variation across individualsespecially at times of indexical change.To address this question, one should investigate the extent to which Creole is (still) associated with people and places indexical of solidarity/authenticity (e.g.rural environments and the street), as opposed to status/anonymity (e.g.urban environments and university).1 Methodology and data collection. To address these research questions, we used an online Qualtrics questionnaire that elicited the following attitudinal, perceptual and socio-biographical data. 2.1.ATTITUDES TO MC ON THE STATUS DIMENSION.Respondents were presented with two attitudinal statements (Creole should be an official language in Martinique, alongside French and Creole cannot ever become the language of trade and science) and asked to express their agreement on 5-point Likert scales ranging from 'fully disagree' to 'fully agree'. PERCEPTIONS OF FORMALITY FOR DIFFERENT VARIETIES OF MC (MORE VS LESS FRENCHIFIED). 
Respondents were presented with eight pairs of semantically identical oral stimuli that contrasted more Frenchified (traditionally less stigmatized) and less Frenchified (traditionally more stigmatized) varieties of MC.For each pair, participants were asked to choose the version they found more appropriate.Half or the pairs were presented in (fictitious) informal contexts such as gatherings with friends, and the other half in (fictitious) formal settings like conferences and newscasts.This perceptual task also included French stimuli pairs (contrasting more Creolized and less Creolized French) to be used as a term of comparison for perceptions of MC.The goal of this task is to find out (i) whether formal settings are associated with an increased preference for less Frenchified or more Frenchified MC, compared to the informal contexts (used as the 'baseline') and (ii) how formality 'norms' for MC compare to those applying to a fully standardized language like French.2Findings from this task provide further insights into MC's status.While a higher preference for Frenchified Creole would indicate that Creoleness still indexes low status, the opposite would point to its increased prestige and the emergence of purist norms wherebyin MC just like in Frenchformality is tantamount to the absence of language mixing. PURISM (FOR MC AND FRENCH ). Respondents were presented with two attitudinal statements for each language (Martinican Creole/French is too Frenchified/Creolized and When speaking Creole/French, one should avoid using any expression that is clearly French/Creole) and asked to express their agreement on 5-point Likert scales ranging from 'fully disagree' to 'fully agree'. SOCIAL INDEXICALITIES. We tapped into respondents' stereotypical social representations of Creole by eliciting their association of 'good Creole' with more vs less traditional speakers and places (the elderly, men, the countryside, the street vs the youth, women, urban environments, university). These contrasts were presented through bipolar scales, as in the example below: 'Creole is spoken…' (1) a lot better by men than women (2) slightly better by men than women (3) equally well by men and women (4) slightly better by women than men (5) a lot better by women than men While preferences close to the traditional pole would attest to the persistence of traditional indexicalities, ratings closer to the mid-point (i.e.no preference) or the less traditional pole would signify a change of indexicalities, towards more modern and/or high-status representations of MC. 2.5.EXPOSURE TO 'ACTIVIST' STANDARDIZED MC.Finally, we collected data about respondents' exposure to the unofficial 'standard' variety of MC associated with schools and the activist milieu.This measure is needed to estimate both Martinicans' overall familiarity with standardized MC andby comparing the attitudes of speakers reporting higher and lower degree of exposure the effect of standardization on the language attitudes/representations listed above. Exposure to activist MC was estimated through a combination of questions.First, respondents were asked to express their (dis)agreement with regards to the following statement, using a 4-point Likert scale: I (have) regularly take(n) part in activities for the teaching/learning of Creole.Then, they were prompted to choose their 'favorite' orthography for four MC words, amongst a series of four options more/less aligned with the (quasi-)official MC orthography. 
3 Answers were then rated depending on their closeness to such orthography.We subsequently obtained an individual measure of exposure to activist MC, by calculating the mean value between (i) reported participation in MC-language activities and (ii) the mean knowledge of MC orthography, across the four words. Findings. We administered our questionnaire to 123 Martinicans ranging from 15 to 80 years of age (M = 47.7,SD = 15.96).Most respondents were women (75%) and had received some form of post-secondary education (79%).This high level of educational attainment far exceeds official statistics for Martinicans holding university qualifications, which stand at approximately 23% (Insee 2023).The disproportionate representation of women and highly educated participants is, in fact, a common drawback of online questionnaires (Smith 2008;Bethlehem 2010).Although neither gender nor education was found to significantly influence respondents' attitudes to and perceptions of MC, 4 caution is warranted when generalizing the findings below to demographics not represented in this study. ASSOCIATION BETWEEN CREOLENESS AND STATUS (RQ1 ). Across our respondents, MC appears to enjoy relatively high status.As shown in the graph below (Figure 1), most of them believe that (i) Creole should be an official language in Martinique, 5 and that (ii) it can in fact become the language of trade and science. 6 3 We deliberately phrased this question in subjective terms ('favorite', instead of 'correct' orthography) to discourage respondents from looking up the words.We assumed that respondents who had been exposed to activist norms would still report the 'official' correct orthography, even if they were asked about their personal preferences. 4There are three partial exceptions.First, men display slightly less positive status-related attitudes than women and slightly higher purism.Secondly, education is correlated with associating 'good Creole' with rural vs urban environments.In all cases, however, the effect is small and slightly above significance.The effect of these and other sociobiographical factors on individual attitudes and perceptions will be explored in a future publication. 5The questionnaire was administered in February-March 2023.In May 2023, the Territorial Collectivity of Martinique voted to make MC a co-official language alongside French.We can thus conjecture that opinions in favor of officialization could be even more prevalent now, although the decision to co-officialize MC has not been recognized by the French State. 6Since the second statement features a negative polarity item, higher disagreement amounts to more positive attitudes. Figure 1.Attitudes towards Creole on the status dimension.Reactions to the statements 'Creole should be an official language in Martinique, alongside French' (left) and 'Creole cannot ever become the language of trade and science' (right). However, these are only self-reported attitudes about the desired/accepted position of MC in Martinican society.Being elicited explicitly and in the absence of actual Creole speech, they are both prone to desirability bias and rather abstract. 
Complementary evidence of MC's increased status comes from the perceptual task described in section 2, which compares preferences for less/more mixed language (i) in formal and informal contexts and (ii) for Creole and French stimuli.The results show that presenting stimuli in formal settings leads to higher preferences for the less mixed versions.For French stimuli, this tendency is very pronounced, with higher formality associated with a 31% higher preference for less Creolized variants (82.11%, vs 62.80% for the informal settings).This is a fairly unsurprising finding, given the entrenchment of purist discourse surrounding the French language (Coppel 2007; for some nuanced and empirically based accounts of purism in France, see Oakes 2001, Boughton 2005 and Walsh 2016).What is more surprising, however, is that the same significant effect is also found for MC, albeit to a lesser degree.Although preferences for less mixed versions are overall lower for Creole than for French, for MC too formal settings are associated with a boost in preference/acceptance for the less mixed variants, i.e. less Frenchified MC (52.24%, vs 44.11% for the informal settings).7These trends are summarized in Table 1 One could hardly explain the relative preference for less Frenchified MC in formal settings, without positing that formerly stigmatized MC has developed some form of overt prestige.In this respect, the perceptual task confirms the positive attitudes explicitly elicited by the attitudinal items in Figure 1. 3.2.HIGHER PURISM FOR CREOLE THAN FOR FRENCH (RQ2).As seen in Section 1.2, the standardization process is thought to entail the emergence or exacerbation of linguistic purism, with variation and language mixing framed as threats to the newly developed 'standard' (Eckert 1983;Woolard 2016).The fact that our respondents have reported an increased preference for unmixed ('purer') MC in formal settings could be an indication that MC is not immune from such purism.Indeed, 75% of our respondents (strongly) agree that 'Creole is too Frenchified' and 65% of them (strongly) agree that 'when speaking Creole, one should avoid using expressions that are clearly French' (see Figure 2 below).These rates of reported purism are astonishingly high, for a variety traditionally referred to as patois, gibberish or bad French (Prudent 1980).Even more surprisingly, they are significantly higher than the corresponding rates for French (see Figure 3 below), where there is no consensus on whether Creole expressions should be avoided when speaking French, and just 24% of respondents (strongly) agree that 'French is too Creolized'.In the absence of qualitative data to complement the above picture, we cannot provide a conclusive interpretation for these findings.Instead, we propose a tentative account, to be tested in future research.We believe that MC might elicit more purist reactions than French because, being less standardized and less mastered/spoken (Beck 2017),8 it is also perceived as more threatened by language contact (Ardoino 2023;cf. 
Bernabé 1983). This perceived threat is likely sharpened by fears of language loss stoked by activist discourse, and by the role of identity marker that MC plays in Martinican society (Pulvar 2004, 2005). The fact that, in our study, reported purism is only weakly (or not at all) correlated with status-related attitudes and exposure to activist Creole supports an interpretation of purism as a response to fears of language and identity loss more than as a direct outcome of standardization.9 If confirmed, this account would suggest that what stokes individual purism, in Martinique and potentially elsewhere, is not so much a standardization process that has reached completion but, perhaps, an incipient one that is just advanced enough to sharpen fears of language/identity loss, but not enough to quell them.

PERSISTENCE OF TRADITIONAL INDEXICALITIES. So far, MC has displayed social attributes (relatively high status and purist norms) that set it apart from typical minoritized languages. This raises the question of whether its social indexicalities have changed, too. Is MC still associated, in folk beliefs, with its traditional speakers (men and the elderly) and low-status social places (the countryside and the street)?

In terms of speakers, we find a mixed picture comprising 'new' and traditional indexicalities. On the one hand, little distinction is made between genders, with 67% of participants reporting that men and women speak equally good Creole, vs only 27% who report a preference for men's Creole. On the other hand, 'good Creole' is still strongly associated with the elderly (73%), and only 2 out of the 123 respondents attribute equally good Creole to all age groups.

In terms of places, 85% of respondents (strongly) believe that Creole is better spoken in the countryside than in towns (Figure 4 below), and 67% (strongly) believe that it is better spoken in the street than at university (Figure 5 below). How can one account for these traditional, and seemingly low-status, indexicalities, given the findings described above? The preference for the MC of rural environments could potentially result from the word countryside evoking images of a pristine, idealized past. This, however, cannot explain the association of 'good Creole' with the 'street', which suggests that MC is still viewed as the language of informality.10 Together, these results point to the persistence of traditional (low-prestige) indexicalities in the face of standardization and growing prestige, an indexical ambiguity that will be the focus of the next section.

Discussion. Together, the findings presented in Section 3 raise important questions regarding MC's ongoing standardization and its implications for speakers' attitudes. On the one hand, MC has acquired a status that exceeds that of a minoritized language and appears to be subject to purist norms, both explicitly (i.e. the reporting of purist attitudes) and implicitly (i.e. the increased preference for less mixed Creole in the perceptual task's formal settings). On the other hand, however, this increased status coexists with the upholding of traditional low-status indexicalities.
How can we account for these conflicting representations? Is this indexical ambiguity the result of change in progress (i.e. incomplete standardization), or a more stable attribute of MC that could outlive the standardization process? While we do not have a definite answer to these questions, by examining how MC indexicalities are shaped by (i) status-related attitudes and (ii) exposure to activist Creole, we can tentatively distinguish between more and less likely indexical changes to come.

GOOD CREOLE IS IN THE STREET: INCOMPLETE STANDARDIZATION? When we analyze MC indexicalities on the rural-urban axis against status-related attitudes and exposure to activist Creole, we can see that respondents who report a milder preference for street Creole (27%), no preference at all (22%) or, more rarely, a preference for urban Creole (11%) also tend to report higher status-related attitudes and exposure to activist Creole. This is illustrated in Table 2 below, which shows, for each possible answer on the rating scale (from 'Creole is much better spoken in the street' at the top, to 'Creole is much better spoken at university' at the bottom), the proportion of respondents who chose that answer, their mean attitudes to Creole on the status dimension (out of 4) and their mean exposure to activist Creole (out of 4).

10 Admittedly, the overwhelming bias for the 'street' can also result from respondents' unfamiliarity with the Creole spoken and/or taught at university, or with the university campus environment altogether. This, however, does not invalidate the interpretation offered in the text. Since our question taps into stereotypical representations rather than individual knowledge, more entrenched associations of MC with formality and teaching could have led to associating 'good Creole' with the university environment even in the absence of personal exposure to university MC.

As Table 2 shows, the transition from more traditional/lower-status indexicalities (the street) to less traditional/higher-status indexicalities (university) does not correspond to perfectly linear increases in status-related attitudes and exposure to activist Creole. For status attitudes, there is a gradual increase over the rating scale, but the strongest increase, and so, potentially, the strongest effect on the response variable, coincides with the reporting of 'no preference' vs a 'mild preference for street Creole'. For exposure to activist Creole, by contrast, the strongest increase (and, thus, the strongest effect on the response variable) corresponds to the choice of 'mild preference for street Creole' vs 'strong preference for street Creole'.11 It is not clear why these variables may affect respondents' choices at some levels of the rating scale more than at others, or why they affect different levels from each other.12 What is clear, however, is that the association of 'good Creole' with the street is overall weaker for respondents with higher status-related attitudes and exposure to activist Creole. We can thus hypothesize that this traditional low-status indexicality may wane over time, as MC continues to be standardized, taught and propelled into formal domains. This stands in contrast with the association of 'good Creole' with the countryside, which is further explored below.

GOOD CREOLE IS IN THE COUNTRYSIDE: A MORE STABLE INDEXICALITY?
Although similar at first glance, the association of 'good Creole' with the countryside reveals meaningful differences from the case above. To start with, the preference for the traditional pole is even stronger, with almost no respondent reporting a preference, even a mild one, for urban Creole. Secondly, differences in the degree of preference for rural Creole cannot be predicted by either status-related attitudes or exposure to activist Creole, as shown in Table 3 below. While exposure to activist Creole shows a correlation with the choice of 'mild preference for rural Creole' over 'strong preference for rural Creole', the effect is smaller than for the street/university indexicality and fails to reach significance. Why is the association of 'good Creole' with rural environments so prevalent amongst our respondents, and why is it (seemingly) not eroded by MC's growing status and increasing degree of standardization?13 We argue that this could be due to the idealization of the 'countryside' in contemporary representations of MC, both inside and outside the activist milieu. Rural environments are widely regarded as a bastion of 'authentic', less Frenchified MC, hence a repository of potential lexicon with which to expand the emerging standardized MC (Bernabé 1983; Ardoino 2023). The privileged position that rural MC occupies in activist language/discourse might explain why neither higher status-related attitudes nor exposure to activist MC undermine its ideological leverage over urban varieties. As long as MC standardization is framed as the pursuit of (lost) 'authenticity', and authenticity is synonymous with rural life and the 'old times', the association of 'good Creole' with the countryside may remain a cornerstone of the MC standardization project. Therefore, 'rural' indexicalities may be less likely to wane over time than the association of 'good Creole' with the street, which, despite apparent similarities, is better explained as a historical hangover from the times of strict diglossia and MC minoritization.

This ambiguous coexistence of modern and traditional indexicalities takes us back to our discussion of purism in Section 3.2. Although the literature often associates purism with language standardization, in this study neither exposure to activist Creole nor status-related attitudes show correlations with reported purism towards Creole. Where does the high Creole purism found in the study come from, if it is not a fallout of the standardization project? We conjecture that respondents' purism might result from a general longing for 'authentic' MC that can originate outside the activist milieu, although it is potentially fed by it. This would tally with studies showing that purism can emerge even in the absence of fully organized standardization projects (Schieffelin, Woolard & Kroskrity 1998; Aikhenvald 2001; Sallabank 2017).

Conclusions. This paper has drawn on different types of data to take stock of social representations of MC at a time of sociolinguistic upheaval. What emerges is a picture of indexical ambiguity, with MC simultaneously displaying indexicalities of a 'standard' language (prestige, modernity) and of a minoritized 'authentic' variety (low status, associations with old speakers and traditional places).
By looking at attitudes to MC on the status dimension and perceptions of formality for different MC varieties, we have shown that MC has acquired a status that exceeds what is typical of minoritized languages: most respondents are in favor of the officialization of MC and believe that MC can become a language for high-status domains and functions.

At the same time, this increased social status coexists with traditional low-status indexicalities, such as the association of 'good Creole' with the street (vs university) and the countryside (vs the town). By investigating how these two indexicalities are shaped by attitudinal and socio-biographical factors, we have shown that these associations in fact likely follow different trajectories and require different explanations. While the association of 'good Creole' with the street appears less stable and may be wiped out by increased standardization, the idealization of countryside Creole is more integral to the standardization project and, thus, less likely to recede.

This paper has foregrounded the indexical complexity that can surround minoritized languages on the path towards standardization. Beyond its analysis of sociolinguistic dynamics in Martinique, it makes both theoretical and methodological contributions. From a theoretical viewpoint, this research draws attention to the presence of purist attitudes/ideology in under-standardized languages (a purism of 'authenticity') and to the persistence of traditional indexicalities in the face of waning diglossia. Both findings chime with studies showing that standardization does not always imply a departure from the 'authenticity' values associated with the minority language (Urla et al. 2016; cf. Ardoino 2023).

Methodologically, this paper illustrates a multifaceted approach to the study of minority language standardization, which combines the elicitation of linguistic perceptions, attitudes and representations to better uncover (and quantify) the multitude of stances lying behind language planning and publicly available discourse. This approach is not foolproof, though, as its quantitative focus can lead to glossing over important differences in speakers' attitudes and linguistic experience. The findings presented here should, therefore, be confirmed not only on a more representative sample of the Martinican population but also by combining the analysis of quantitative patterns with that of richer and more nuanced qualitative data.

Figure 2. Purism towards Creole. Reactions to the statements 'Creole is too Frenchified' (left) and 'When speaking Creole, one should avoid expressions that are clearly French' (right).
Table 1. Ratio of preference for the less mixed variants for French and Creole stimuli, in relation to the formality condition.
Table 2. Preference for university vs street Creole. Table showing, for each level of the response variable, the proportion of respondents who selected that level, their mean attitudes to MC on the status dimension and their mean exposure to activist MC.
Table 3. Preference for urban vs rural Creole. Table showing, for each level of the response variable, the proportion of respondents who selected that level, their mean attitudes to MC on the status dimension and their mean exposure to activist MC.
6,337.8
2024-05-15T00:00:00.000
[ "Linguistics", "Sociology" ]
iPSC-derived neuronal models of PANK2-associated neurodegeneration reveal mitochondrial dysfunction contributing to early disease
Mutations in PANK2 lead to neurodegeneration with brain iron accumulation. PANK2 has a role in the biosynthesis of coenzyme A (CoA) from dietary vitamin B5, but the neuropathological mechanism and the reasons for iron accumulation remain unknown. In this study, atypical patient-derived fibroblasts were reprogrammed into induced pluripotent stem cells (iPSCs) and subsequently differentiated into cortical neuronal cells for studying disease mechanisms in human neurons. We observed no changes in PANK2 expression between control and patient cells, but a reduction in protein levels was apparent in patient cells. CoA homeostasis and cellular iron handling were normal, but mitochondrial function was affected, displaying activated NADH-related and inhibited FADH-related respiration, resulting in increased mitochondrial membrane potential. This led to increased reactive oxygen species generation and lipid peroxidation in patient-derived neurons. These data suggest that mitochondrial deficiency is an early feature of the disease process and can be explained by altered NADH/FADH substrate supply to oxidative phosphorylation. Intriguingly, iron chelation appeared to exacerbate the mitochondrial phenotype in both control and patient neuronal cells. This raises caution for the use of iron chelation therapy in general when iron accumulation is absent.

Introduction
Neurodegeneration with brain iron accumulation (NBIA) disorders are a set of clinically analogous neurological diseases characterised by neuropathology of the basal ganglia coinciding with iron deposition [1]. Patients display pyramidal and extrapyramidal movement disruption as well as cognitive decline. Pathological examination highlights either axonal swellings with ubiquitinated aggregates, tau tangles or Lewy bodies, depending on the NBIA subtype. Mutations in 12 genes have been shown to cause NBIA, and each protein has a seemingly disparate cellular function [2]. These functions include iron metabolism, mitochondrial metabolism, lipid homeostasis and autophagy. The most common NBIA subtype is pantothenate kinase-associated neurodegeneration (PKAN), caused by recessive mutations in the PANK2 gene [3]. This accounts for 35-50% of all NBIA cases [4,5]. Pantothenate kinase (PANK) catalyses the first step of coenzyme A (CoA) biosynthesis from dietary vitamin B5. CoA has critical roles in multiple mitochondrial metabolic pathways, including the TCA cycle, β-oxidation and fatty acid synthesis. There are four human PANK isoforms; PANK1 and PANK3 are cytosolic, whereas PANK2 is localised to the mitochondria [6]. There is still some contention over the localisation of mouse Pank2 between the mitochondrial membranes [7] and the cytosol [6,8]. PANK4 is an isoform presumed to lack catalytic activity [9]. CoA is present in the mitochondrial matrix at 1000-fold higher levels than in the cytosol [10], and PANK2 is the major active PANK isoform in the human brain. Although rodent brain tissue is less enriched for Pank2 than human brain, its central role is demonstrated by the fact that Pank2 knockout mice have 60% reduced total PANK activity in neural tissue [11]. These data support a primary role for PANK2 and CoA in neuronal mitochondria. However, mitochondrial CoA is yet to be measured in patient-derived or mouse model brain tissue.
The mechanism by which PANK2 mutations lead to neurodegeneration is not known, but several animal models have been generated to facilitate investigation of disease mechanisms. Drosophila have one Pank orthologue and its deletion partially recapitulates some of the movement phenotypes and reduced lifespan observed in PKAN [12,13]. Addition of human mitochondrial PANK2 is able to rescue this phenotype [13]. Interestingly, while Drosophila cytosolic Pank isoforms are not able to rescue knockout fly phenotypes, addition of the human cytosolic isoforms provides a partial rescue. Pank2 knockout mice also show a similar phenotype, but only when metabolically stressed with a ketogenic diet [14]. These animal models fail to display iron accumulation. Patient fibroblasts have been shown to display defective iron handling, increased reactive oxygen species (ROS) damage and mitochondrial physiological deficits [15]-findings that were replicated in human neurons for the first time after direct reprogramming from patient fibroblasts [16] and subsequently reinforced in iPSC-derived neurons [17]. PKAN patients display iron accumulation in the globus pallidus and have cellular pathology, namely axonal swellings and gliosis, affecting the cortex as well as the neurons of the globus pallidus [18,19]. Although iron is an essential element for cell survival, it is unclear whether iron accumulation is causative or consequential to neurodegeneration. Many cellular enzymes make use of heme iron for normal folding and function as well as iron-sulphur clusters for enzymatic function; notable examples are the complexes of the electron transport chain of oxidative phosphorylation. Deregulated iron can be potentially harmful to the cell as, depending on its oxidative state, it can lead to free radical formation via the Fenton reaction. Therefore, tight cellular mechanisms for import, storage and export of iron from the cell exist [20]. Neuronal iron is predominantly imported via endocytosis of the Transferrin Receptor (TfR) and either stored intracellularly through complexes such as Ferritin, consisting of both heavy (FTH) and light (FTL) chains, or utilized immediately by the cell for normal function. For its use in aerobic respiration, iron is imported into the mitochondria via mitoferrin transporters (MFRN1/2) and mitochondrial specific ferritin (MTFT) stores mitochondrial iron until it is required or when in excess. Iron is exported from neurons via Ferroportin (FPN), which requires cell surface stabilization through binding to β-amyloid precursor protein [21,22]. Iron within a healthy cell is predominantly contained within mitochondria and lysosomes [23] and, upon entering the mitochondria, it is thought to accumulate, as no mitochondrial iron exporter has been described thus far. This has led to the suggestion that mitophagy may be a mechanism for liberating iron from mitochondrial stores [24]. The present study sets out to investigate the consequences of PANK2 mutations on iPSC-derived cortical neuronal cells in culture. Fibroblasts from three atypical PKAN patients were reprogrammed to iPSCs and, along with three control pluripotent stem cell (PSC) lines, differentiated into cortical neuronal cells using a highly efficient differentiation paradigm. Mitochondrial dysfunction was observed, namely altered NADH and FADH supply to oxidative phosphorylation as well as increased reactive oxygen species (ROS) production and oxidative damage. Changes to iron and CoA metabolism were not witnessed.
Additionally, it was shown that iron chelation led to increased oxidative damage in patient and control neuronal cells. These findings enable analysis of early pathological events in PKAN without the context of aging and complex late-stage disease. Materials and methods Cell culture and reprogramming of fibroblasts All culture reagents were purchased from Thermo Fisher unless otherwise stated. Patient biopsies were taken using a skin punch under informed consent (ethical approval from the NHNN and IoN joint research ethics committee, study number 10/H0721/87). Fibroblasts were cultured as previously described [25]. Briefly, 5 mm biopsies were taken with a skin punch and then allowed to expand in DMEM supplemented with 10% FBS. Fibroblasts were passaged using 0.05% trypsin/EDTA. Fibroblasts were reprogrammed using the episomal plasmids as described by Okita et al. [26]. The episomal plasmids were obtained from Addgene (plasmids #27077, #27078 and #27080) and fibroblasts were nucleofected using the Lonza P2 nucleofection kit (Amaxa). Nucleofected cells were changed to iPSC culture media after 7 days and colonies were manually picked after they appeared, around 30 days post nucleofection. iPSCs and hESCs were maintained in Essential 8 media on Geltrex coated plates. Cells were routinely passaged using 0.5 mM EDTA. Karyotype counts and G-banding analysis were performed by Cell Guidance Systems (Cambridge, UK). At least two clonal iPSC lines from each patient were taken forward for experimentation and compared to two control iPSC lines and one hESC line, termed Control 1, Control 2 and hESC Control, respectively. The hESC line Shef6 was obtained from the UK Stem Cell Bank, Control iPSC line 1 was generated from a neurologically normal individual in the lab of Dr Tilo Kunath, and Control iPSC line 2 was obtained from the Coriell repository. Stem cells were differentiated to cortical neuronal cells using the protocol described by Shi et al. [27]. Briefly, cells were subjected to 10 days of dual SMAD inhibition using 1 μM dorsomorphin (Tocris) and 10 μM SB431542 (Tocris), followed by extended neurogenesis in N2B27 media containing retinoids. The final time point for all experiments was taken as 100 days post neural induction. Immunocytochemistry Cells were fixed in 4% PFA, washed three times in PBS with 0.3% Triton X-100 (PBST) to permeabilize the cells and blocked in 3% BSA. Primary antibodies (Table 1) were incubated overnight at 4˚C in blocking solution. Cells were then washed three times in PBST and secondary antibodies (Alexa Fluor, Thermo Fisher) were added for 1 hour at room temperature in the dark in blocking solution. Finally, cells were washed once with PBST containing 1 μM DAPI and then twice in PBST before being fixed and mounted. Images were taken on a Zeiss LSM microscope or a Zen confocal microscope. Counting data were taken from 5 images per replicate; areas were randomly selected in the DAPI channel and automated counting was performed using the ITCN nuclear counting plugin for ImageJ, using the same threshold setting throughout. qPCR RNA was isolated from samples using Trizol reagent and purification was performed following the manufacturer's instructions (Thermo Fisher). Reverse transcription was performed on 2 μg of RNA using Superscript reverse transcriptase III and random hexamer primers. Power Sybr Green mastermix (Thermo) was used for the qPCR reaction on the Agilent MX3000P qPCR system with annealing temperatures of 60˚C for all primers used (Table 2).
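As a purely illustrative aside (not part of the original methods), relative expression from such qPCR runs is typically computed by a ΔΔCt-style calculation against housekeeping genes; as noted in the next paragraph, three housekeeping genes are used here. The sketch below assumes that standard approach, normalising to the arithmetic mean of the housekeeping Ct values (equivalent to the geometric mean of their expression levels); the function name and the Ct values are hypothetical.

```python
import numpy as np

def relative_expression(ct_target, ct_housekeeping, ct_target_ref, ct_housekeeping_ref):
    """Relative quantification of a target gene by the ddCt method,
    normalising to several housekeeping genes (e.g. GAPDH, Cyclophilin, beta-actin).

    ct_housekeeping / ct_housekeeping_ref are iterables of Ct values; the *_ref
    arguments are the Ct values in the reference sample (e.g. a control line)."""
    # Arithmetic mean of housekeeping Cts corresponds to the geometric mean of expression
    dct_sample = ct_target - np.mean(ct_housekeeping)
    dct_ref = ct_target_ref - np.mean(ct_housekeeping_ref)
    ddct = dct_sample - dct_ref
    return 2.0 ** (-ddct)          # fold change relative to the reference sample

# Hypothetical Ct values, for illustration only
print(relative_expression(24.1, [18.2, 19.5, 17.9], 26.8, [18.4, 19.3, 18.1]))
```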
All results are relative to three housekeeping genes: GAPDH, Cyclophilin and β-actin. Western blot analysis Samples were treated with 50 μM FAC for 18 hours in normal cell culture media. Samples were lysed in RIPA buffer containing 10 mM Tris pH 8, 140 mM NaCl, 1 mM EDTA, 0.5 mM EGTA, 1% Triton X-100, 0.1% sodium deoxycholate, 0.1% SDS plus protease and phosphatase inhibitors (Roche), for one hour on an orbital shaker at 4˚C, followed by centrifugation at 10,000 g for 15 minutes at 4˚C. Protein concentrations were measured using the BSA assay (Biorad) and samples were separated on a NuPage 10% SDS polyacrylamide gel (Novex) before being transferred onto a nitrocellulose membrane for western blotting. Membranes were blocked in 3% milk in PBS containing 0.1% Tween 20. Primary antibodies were added to the membranes in blocking solution overnight at 4˚C. Blots were then washed three times before secondary antibodies were added in blocking solution for one hour. After final washes, images were captured and densitometry analysis was performed on the Li-Cor Odyssey imaging system (Li-Cor). HPLC CoA species were extracted from cells with ice-cold perchloric acid (PCA, 3.5%). After centrifugation at 21,000 g for 5 min at 4˚C, the supernatant, containing CoASH and short-chain CoA esters, was collected and 1 M triethanolamine (TEA) was added to a final concentration of 100 mM. The pH was adjusted to pH 6 with 5 M K2CO3 and the potassium perchlorate pellet was removed by centrifugation at 21,000 g for 3 minutes at 4˚C. For the quantification of the total level of acid-soluble CoA esters (combined level of unesterified CoA and short-chain CoA), 5 M KOH and 100 mM tris(2-carboxyethyl)phosphine were added to neutralized PCA extracts to final concentrations of 0.5 M and 10 mM, respectively. KOH hydrolyses all PCA-soluble esters into unesterified CoA, which was then measured by HPLC. After incubation at 25˚C for 5 minutes, the pH was adjusted to pH 6 with 5% PCA. CoASH and short-chain CoA esters were measured by HPLC as previously described [28], except that EDTA was omitted from the injection mixture. For the quantification of total long-chain acyl CoAs, the PCA pellets were solubilised in 89 mM TEA, 0.44 mM KOH and 11.1 mM DTT by gentle sonication to hydrolyse long-chain CoA esters to unesterified CoA. After incubation for 5 min at 25˚C, proteins were precipitated by PCA and pelleted by centrifugation at 21,000 g for 10 min at 4˚C. The supernatant was collected, the pH was adjusted to 6-7 with 0.5 M K2CO3 and the sample was centrifuged again at 21,000 g for 3 min at 4˚C. The supernatant was collected and CoA was measured by the CoA recycling assay [29] adapted to a plate reader format. Mass spectrometry The UPLC-MS/MS instrument consisted of a Waters ACQUITY UPLC system coupled to a Xevo TQ-S triple quadrupole mass spectrometer with an electrospray ionization source. The mass spectrometer was operated in negative ion mode and data were acquired using MassLynx V4.1 software. Chromatographic separations were achieved using a Waters CORTECS C18 column (1.6 μm, 2.1 x 50 mm), with a CORTECS C18 VanGuard pre-column (1.6 μm), which was maintained at 40˚C. Binary gradient profiles were developed using water with 0.01% formic acid (A) and methanol (B) (HPLC grade, Merck) at a flow rate of 700 μL/min. Separations were conducted under the following chromatographic conditions: 100% solvent A for 1 min, decreased to 10% over 1 min, maintained for 1 min at 10% before being increased to 100% over 0.1 min.
Column equilibration time was 0.9 min, with a total run time of 4 min. The injection volume was 10 μL. Mass spectrometric conditions were as follows: capillary voltage 2.5 kV, cone voltage 60 V, source temperature 150˚C, desolvation temperature 600˚C, cone gas flow 150 L/h, desolvation gas flow 800 L/h, collision gas flow 0.25 L/h and nebulizer gas flow 7 bar. Dwell time was set at 8 msec for each analyte. The quantitation of vitamin B5 and isotopically-labelled vitamin B5 (Sigma) was then performed using the multiple reaction monitoring (MRM) method described in Table 3. It is important to note that, for isotopically labelled pantothenate, treatment was performed in media deficient in pantothenate: HBSS media supplemented with N2 and B27 supplements, NEAA and L-glutamine, as in the above media recipes. Sample preparation. Cell pellets were reconstituted in 20 μL of buffer (pH 7.8), consisting of 100 mM Tris base, 6 M urea, 2 M thiourea and 2% ASB-14 (adjusted to pH 7.8 with HCl). Samples were shaken for 30 minutes (1000 rpm; 37˚C) before being diluted 1:100 with water and analyzed via UPLC-MS/MS. Inductively coupled plasma mass spectrometry (ICP-MS) Cellular iron content was analyzed by ICP-MS using the protocol previously reported [21]. Briefly, 150 μg of total protein, as measured by Bradford protein assay, was lyophilized before resuspension in 100 μl nitric acid (69% v/v; ultraclean grade, Aristar) overnight at room temperature (RT). Samples were then heated for 1 hour at 90˚C, before the addition of an equivalent volume of hydrogen peroxide (30%, Merck). Samples were incubated for 15 min at RT before a further 30 min at 70˚C. To evaluate metal content against calibration standards (#IMS-102; Ultra Scientific), samples were diluted in double-distilled water until within quantifiable parameters using a NexION 350X inductively coupled plasma mass spectrometer (PerkinElmer, Waltham, MA, USA). Each sample was measured in triplicate and normalized to total protein concentration. Live cell imaging For live cell imaging, iPSC cells were incubated with 25 nM TMRM (tetramethylrhodamine, methyl ester; a cell-permeant, cationic, red-orange fluorescent probe sensitive to mitochondrial membrane potential, used in the redistribution mode) for 40 minutes in a HEPES-buffered salt solution (HBSS) composed of (mM): 156 NaCl, 3 KCl, 2 MgSO4, 1.25 KH2PO4, 2 CaCl2, 10 glucose and 10 HEPES; pH 7.35. Images were obtained using a Zeiss 710 Laser Scanning Microscope (CLSM) with an integrated Meta detection system and a 40x oil-immersion objective. Illumination intensity was kept at the minimum of laser output and the pinhole was set to give an optical slice of ~1 μm. TMRM was excited using the 560 nm laser line and fluorescence measured above 580 nm. Calcein-AM based cell area measurements were used for normalization. The NADH autofluorescence was measured with excitation at 405 nm and emission at 440-480 nm. FAD autofluorescence was determined using the 458 nm Argon laser line and fluorescence was measured from 505 to 550 nm. FAD and NADH redox indexes and mitochondrial pools were estimated by sequentially applying 1 μM of the mitochondrial uncoupler FCCP (carbonyl cyanide p-trifluoromethoxyphenylhydrazone), followed by 1 mM of the complex IV inhibitor sodium cyanide [30]. Lipid peroxidation experiments were performed using C11-BODIPY (581/591, 2 μM, Molecular Probes), which was excited at the 488 and 565 nm laser lines, with fluorescence measured from 505 to 550 nm and above 580 nm (40x objective).
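As an illustrative aside, the FCCP/NaCN protocol above defines maximally oxidised and maximally reduced NADH signals, between which a basal reading can be expressed as a redox index. The sketch below assumes the conventional normalisation used with this kind of approach [30] (basal signal as a percentage of the FCCP-to-NaCN range); the function names and fluorescence values are hypothetical, not the authors' code.

```python
def nadh_redox_index(basal, fccp, nacn):
    """Express basal NADH autofluorescence as a percentage of the range spanned by
    the maximally oxidised and maximally reduced signals.

    FCCP stimulates respiration and depletes NADH (minimal signal);
    NaCN blocks respiration so NADH accumulates (maximal signal).
    For FAD the roles of FCCP and NaCN are reversed."""
    minimal, maximal = fccp, nacn
    return 100.0 * (basal - minimal) / (maximal - minimal)

def nadh_pool(fccp, nacn):
    """Approximate mitochondrial NADH pool: fully reduced minus fully oxidised signal."""
    return nacn - fccp

# Hypothetical fluorescence values (arbitrary units), for illustration only
print(nadh_redox_index(basal=550.0, fccp=400.0, nacn=650.0))   # -> 60.0 (%)
print(nadh_pool(fccp=400.0, nacn=650.0))
```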
GSH levels were assessed using monochlorobimane. Sanger sequencing gDNA was extracted from iPS cells and differentiated neuronal cells from all utilised patient and control clones using a standardized phenol extraction. Primers and touch-down PCR programmes, as given in Table 4, were used to Sanger sequence the specified mutations within the PANK2 gene, as given in Table 5, for the respective disease and control clones to confirm their presence/absence. Statistical analysis Data represent three control PSCs and at least two clones from three unrelated PKAN patients. The number of independent inductions is depicted in histograms. Where appropriate, statistical significance was calculated using the Student's t test or ANOVA, as stated. Histograms represent mean values and error bars represent standard error of the mean. *p<0.05, **p<0.01, ***p<0.001. Results Generation of patient-derived iPSCs and cortical neuronal cells iPSC lines from three NBIA patients with confirmed PANK2 mutations were generated to study the mutation effect in human neurons in vitro (Table 5). These patients each have compound mutations leading to atypical PKAN with later disease onset and less severe progression than classical PKAN (Table 5). Clinically, these patients display iron accumulation evident by T2* MRI and a generalised dystonia phenotype. Fibroblasts from the 3 patients were reprogrammed to iPSCs using the non-integrating episomal reprogramming method [26]. At least two iPSC clones from each patient were picked and expanded for further characterisation, alongside two control iPSC lines and one control hESC line. To confirm pluripotency in the newly derived iPSC lines, expression of the pluripotency markers OCT4 and SSEA4 was immunocytochemically identified (Fig 1A). PANK2 iPSCs displayed similar expression of these pluripotency markers and characteristic colony morphology to control lines. qPCR analysis also showed that the gene expression of the core pluripotency markers OCT4, NANOG and SOX2 was comparable to hESCs, in contrast to nascent fibroblasts (Fig 1B), and that the newly formed iPSCs faithfully silenced the fibroblast markers S100A4 and VIMENTIN (Fig 1C). The genomic integrity of the newly formed lines was confirmed by karyotype stability and G-band analysis (S1 Fig) and all disease lines were confirmed to carry the compound heterozygous mutations by Sanger sequencing (S1 Fig), as specified in Table 4. Absence of these mutations was confirmed in control PSC lines. To generate neurons, we subjected the iPSCs to a cortical differentiation protocol due to the very high efficiency of differentiation (>95%) [27,31]. All lines tested from the patient-derived iPSCs were able to faithfully generate forebrain-patterned neural precursor cells, expressing the telencephalic marker OTX2 (Fig 1A). After 100 days of neurogenesis, control and patient-derived PSCs generated deep layer (TBR1-positive), upper layer (SATB2-positive) and middle layer (CTIP2-positive) cortical neuronal cells (Fig 1A). The ability of lines to undergo cortical neurogenesis showed some variability but was comparable between all patient lines (Fig 1D), consistent with a non-developmental disease. Sanger sequencing was performed on genomic DNA from terminally differentiated cells and all patient-derived heterozygous mutations were confirmed (S1 Fig).
Western blot analysis demonstrated a reduction of mature PANK2 protein in patient-derived neuronal cultures in comparison to control cell cultures (patient 1: 32.8±4.1%; patient 2: 36.5±4.5%; patient 3: 30.2±2.1%), despite only one patient displaying a premature stop codon mutation (Table 5). CoA homeostasis in patient-derived neuronal cells is comparable to control Only a subset of mutations in PANK2 that lead to PKAN reduce enzyme activity and affect protein folding [32,33]. Indeed, changes in CoA levels in human cell lines or brain tissue have yet to be demonstrated [2]. Therefore, absolute levels of CoA and acetyl-CoA were investigated in the cultures via HPLC. Primary analysis of the free CoA to acetyl-CoA ratio in iPSC cultures and differentiated cells indicated an undefined switch in the metabolic states between the two cell types, seen via a relative decrease in free CoA and an increase in acetyl-CoA levels in neuronal cells. This ratio change between iPSCs and neurons was independent of PANK2 mutation (S2 Fig). No change in short-chain CoA species (free CoA (CoASH), acetyl-CoA and short-chain CoA esters) was observed by HPLC (Fig 2A and 2B). As it has been previously reported that PANK2 mutations could lead to the disruption of β-oxidation of fatty acids [33], long-chain CoA derivatives were also evaluated. While some variability arose between cell lines (Fig 2C), no overall significant difference was observed between control and mutant lines (Fig 2D). Together, these data signify a comparable steady state of total CoA levels in the patient-derived neuronal cells to control cultures. To infer changes in synthesis and breakdown of CoA in patient-derived neuronal cultures, stable isotope labelled vitamin B5 was used to measure neuronal uptake and handling (Fig 2E). Cells treated with labelled vitamin B5 (pantothenate) for up to 24 hours displayed a similar rate of uptake between control and patient-derived neuronal cultures (Fig 2E). The absence of change also suggested that the cellular lifetime of vitamin B5 was unaffected by the presence of the PANK2 mutation, as shown by similar homeostasis of the labelled pantothenate. Patient-derived neuronal cells exhibit dysfunctional mitochondrial respiration In order to test whether the mutations in PANK2 affect mitochondrial health in our iPSC-derived neuronal cells, mitochondrial membrane integrity and live cell imaging experiments were performed. Mitochondrial membrane potential (Δψm) is an indicator of mitochondrial function and was assessed using TMRM fluorescence (Fig 3A and 3B). The presence of the PANK2 mutation increased Δψm in both iPSCs (pooled controls: 470.3±72.2, n = 48, versus pooled patients: 1207.0±95.3, n = 72; ***p<0.0001) and neuronal cells (pooled controls: 1254.3±67.7, n = 48, versus pooled patients: 1685.6±135.1, n = 106; p = 0.005, Fig 3Aii). Changes in mitochondrial membrane potential have been reported in other PKAN models [7,16], albeit with opposing results. Here, altered maintenance of Δψm is likely to represent a compensatory mechanism to mitochondrial deficits. Some variability between cell lines was apparent, as demonstrated by patient 2 and 3 derived cells displaying significantly higher Δψm compared with controls, whereas patient 1 derived cells consistently demonstrated Δψm similar to control lines (Fig 3A). Due to this variability, data for individual cell lines are presented, and significant comparisons are depicted by the pooled data (Fig 3Aii).
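As an illustrative aside, pooled comparisons such as the TMRM example above can be approximately reproduced from the reported summary statistics. The sketch below assumes the quoted values are mean ± SEM (as stated in the statistical analysis section) and uses SciPy's two-sample t-test from summary statistics; it is not the authors' actual analysis pipeline.

```python
from math import sqrt
from scipy.stats import ttest_ind_from_stats

def compare_pooled(mean1, sem1, n1, mean2, sem2, n2):
    """Welch's two-sample t-test computed from summary statistics.
    Reported values are assumed to be mean +/- SEM, so the SEM is converted
    back to a standard deviation (std = SEM * sqrt(n)) before calling SciPy."""
    return ttest_ind_from_stats(mean1, sem1 * sqrt(n1), n1,
                                mean2, sem2 * sqrt(n2), n2,
                                equal_var=False)

# Pooled iPSC TMRM values quoted in the text: controls vs patients
print(compare_pooled(470.3, 72.2, 48, 1207.0, 95.3, 72))
```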
The known association of PKAN pathology with brain iron accumulation led us to examine whether iron was involved in the changes to the mitochondrial membrane potential observed in patient cells (Fig 3C and 3D); iron chelation produced a further increase in membrane potential above non-treated PKAN-associated levels (n = 49, ***p<0.001) (Fig 3C and 3D). The basis for differences in Δψm maintenance was further investigated by exposing the cells to known specific inhibitors of the electron transport chain complexes (Fig 3E). [Figure 1 legend, panels B-F: B) ... comparable in human embryonic stem cells; fibroblasts were included as a negative control. C) Downregulation of fibroblast-enriched gene expression, S100A4 and VIM, was assessed using qPCR; normalization to untransfected fibroblasts demonstrated low expression in newly derived iPSCs, similar to embryonic stem cells. D) Quantification of the relative proportions of TBR1, CTIP2 and SATB2 positive cells relative to total cell numbers in differentiated cultures; individual data represent 2 iPSC clones from patient 1, 3 clones from patient 2 and 1 clone from patient 3. E) Representative western blot analysis of PANK2 protein levels in whole cell lysates from control and patient neuronal cultures at day 100 of differentiation; PANK2 was observed at the predicted molecular weight of 48 kDa; actin was used as a loading control. F) Quantification of western blot data from three independent inductions showed consistently lower levels of PANK2 in patient neuronal cells. Scale bar represents 100 μm for all images, and 20 μm for A iv. Numbers in histogram bars represent experimental replicates. https://doi.org/10.1371/journal.pone.0184104.g001] A comparable decrease (~70% of basal levels) of the TMRM signal after oligomycin treatment (complex V inhibitor) in control and PKAN neuronal cells suggested that complex V in these cells was working in reverse as an ATPase to maintain mitochondrial membrane potential. Rotenone (a complex I inhibitor) also induced a decrease of TMRM fluorescence; however, PKAN cells responded more slowly and to a lesser extent than control cells (Fig 3E). This suggested that either complex I in PKAN neuronal cells was ineffective at generating a membrane potential or NADH substrate availability from the TCA cycle was decreased. To elucidate whether complex I was dysfunctional or starved of substrates, mitochondrial NADH redox levels were measured (Fig 3F and 3G) and found to be significantly reduced in PKAN neuronal cells under basal conditions (pooled controls: 60.1±9.5%, n = 24, versus pooled patients: 24.3±10.6%, n = 36; *p = 0.0139). The application of the mitochondrial uncoupler FCCP, to fully deplete NADH levels by stimulating mitochondrial respiration, and NaCN, to block mitochondrial respiration and thus NADH consumption, enabled the rate of NADH production and the maximal pool of NADH in mitochondria to be estimated (Fig 3F-3H). A trend towards a reduced total pool of NADH was evidenced in PKAN neuronal cells. Together, these observations explain the reduction in complex I activity by a lack of NADH substrate availability. Similar analysis of the complex II substrate FAD revealed that PKAN neuronal cells displayed a significantly lower FAD redox index (pooled controls: 89.5±8.2%, n = 24, versus pooled patients: 42.4±8.0%, n = 36; ***p = 0.0001) (Fig 3I-3K), indicating an inhibited rate of complex II-dependent respiration.
Altogether, these investigations provide evidence for a defective electron transport chain in neurons carrying the PANK2 mutation. Increased ROS production and oxidative stress in PKAN patient-derived neuronal cells Using the fluorescent dye monochlorobimane (MCB), levels of the reduced form of the antioxidant glutathione (GSH) were measured in control and patient-derived neuronal cells (Fig 4A and 4B). Compared with control cells, MCB fluorescence intensity was decreased in undifferentiated patient-derived iPSCs (pooled controls: 512.3±73.5, n = 90, versus pooled patients: 431.0±17.6, n = 127; p = 0.3022) and significantly decreased in PKAN neuronal cells (pooled controls: 1151.0±36.6, n = 105, versus pooled patients: 675.6±21.7, n = 120; ***p<0.0001), representing a decreased intracellular antioxidant pool. The lower reduced glutathione level in iPSCs compared to neuronal cells (Fig 4B) is consistent with the low oxidative phosphorylation status and ROS production of iPSCs, which metabolically correspond to an early embryo [34]. Basal rates of ROS production, quantified using the dihydroethidium assay, were increased in patient-derived neuronal cells compared with controls (Fig 4C and 4D). Lipid peroxidation, assessed via the ratiometric dye BODIPY-C11, was also found to be significantly higher in PKAN neuronal cells from two of the patients, whereas cells derived from patient 2 showed a similar trend but did not reach significance (0.59683±0.06744, n = 196 for pooled patients, compared to 0.26±0.022, n = 86 for pooled controls; ***p<0.0001, Fig 4E). Following the findings in Fig 3C, the relationship between iron and PKAN neuron-associated oxidative damage was explored. Surprisingly, 30 minutes pre-treatment of neuronal cells with DFO increased ROS generation in both control and patient-derived cells (pooled controls: 112.5±4.6, n = 46, versus pooled patients: 122.0±2.4, n = 53; p = 0.0782) (Fig 4D). Taken together, these data provide evidence that the mitochondrial dysfunction observed in patient-derived neuronal cells leads to increased ROS generation and downstream oxidative damage. In our conditions, iron chelation exacerbated ROS generation and worsened the PKAN neuronal phenotype. Cellular response to iron in PKAN patient-derived neuronal cells Iron accumulation is a progressive pathological characteristic of PKAN seen via MRI during life [35]; however, it is unclear whether this is a cause of neurodegeneration or a consequence. Here, the iron response pathway was assessed in control and patient-derived neuronal cells. The cellular iron-handling pathway was analyzed in response to 18 hours of iron treatment (50 μM ferric ammonium citrate (FAC)) (Fig 5A). Across all cell lines, and consistent with a reduction in cellular iron import, TfR expression was reduced at the transcript and protein levels (Fig 5B-5D). The cytosolic iron storage molecule Ferritin (FTH and FTL) exhibited increased protein levels in all lines, consistent with its translational control by iron response genes [36]. Interestingly, Ferroportin expression was significantly increased in patient-derived neuronal cells versus controls, in basal conditions and in response to iron treatment (Fig 5B), suggesting a cellular response for increased iron export. No significant transcriptional change in response to iron was observed for MFRN1/2 or PANK2; however, a trend to increased FTMT expression was witnessed in patient-derived cells, hinting at increased mitochondrial iron storage (Fig 5B).
It is noteworthy that PANK2 expression is similar between control and patient-derived neuronal cells, whereas protein levels are reduced in patient-derived cells (Fig 1E and 1F). [Figure 4 legend, panels C-E: C) Basal rates of ROS production were quantified in neurons using the dihydroethidium assay. D) Quantification showed an increased rate of ROS production in patient cells compared to controls and in all cell lines in response to 30 minutes pre-incubation with the iron chelator DFO (D ii depicts pooled data from D i). E) The level of lipid peroxidation was quantified in the neuronal cells using the ratiometric dye BODIPY C11, and two of the patient-derived neuronal cells displayed significantly higher levels of lipid peroxidation (pooled data shown in E ii). Scale bar represents 20 μm. Significance was calculated via one-way ANOVA with post-hoc Tukey's HSD correction for multiple comparisons; *p<0.05, **p<0.01, ***p<0.001, ns: not significant. https://doi.org/10.1371/journal.pone.0184104.g004] Total iron content is unchanged in patient-derived neuronal cultures To investigate whether the small differences in expression of FTMT and FPN in response to extracellular iron (Fig 5B) related to altered cellular iron content, absolute metal ion content was measured by inductively coupled plasma mass spectrometry (ICP-MS). In all neuronal lines, elevated intracellular iron content upon incubation with FAC (50 μM; 18 hours) was confirmed and no change was observed between control and PKAN neuronal cells (Fig 6). The data indicate a largely appropriate iron response from PKAN neuronal cells that preserves intracellular levels of iron even in an elevated extracellular iron environment. An exception is the consistent increase in FPN expression in patient-derived cells versus controls in both basal conditions and under iron stress, which may indicate a compensatory measure to maintain intracellular iron levels by enhancing iron export. Discussion In this study, PKAN patient-derived neuronal cells have been generated in an attempt to identify early mechanisms of neurodegeneration in NBIA. These cells represent a human model with appropriate gene dosage of clinically proven pathogenic mutations in which to study the earliest underlying consequences of PANK2 mutations in developmentally immature neuronal cells. Cortical neurons represent a relevant model of PKAN, as one of the cell types affected by axonal swellings and gliosis, in addition to the main site of pathology in the globus pallidus [19]. There currently exists no stem cell differentiation protocol towards pallidal neurons, and the cortical differentiation protocol employed here is highly efficient and generates very homogeneous cortical cultures due to the default nature of this developmental paradigm [27]. It should be noted that other cell types exist in the culture, for example glial cells and some progenitor cells that persist, together representing less than 5% of cells [27]. These cells represent a minority; however, we cannot discount contributions of these cells to our data, for example the provision of metabolites from astrocytes to neurons for oxidative phosphorylation. We observed a significant degree of variability in cellular responses to many experimental paradigms, including the fact that cells derived from patient 1 often responded similarly to controls.
These variabilities may represent a tissue culture artefact, a consequence of the cells being cultured outside of their natural context; one example of this is the substantial change in transcription witnessed in primary microglia just six hours after being placed in culture [37]. Our observation that the relative proportion of free CoA to acetyl-CoA changes through differentiation suggests a metabolic alteration between the stem cells and neurons. This finding calls for future investigations to describe the metabolic states of the two cell types and changes through neuronal differentiation. We find that point mutations associated with atypical PKAN lead to unaltered PANK2 expression in neuronal cultures but a reduction in PANK2 protein levels. Hypothetically, this could be explained via alterations in folding and degradation of the putative unfolded protein; further investigation is required. These findings are in line with iPSC-derived neurons harbouring premature stop codons, which show unaltered PANK2 transcription but a total lack of protein [17]. It is noteworthy that, biochemically, a number of peptides harbouring PANK2 mutations display normal enzyme function [32]. Thus protein dosage may be central to the disease mechanism, with juvenile onset displaying no PANK2 protein and atypical PKAN a reduced level. The data presented here demonstrate that pathogenic point mutations in PANK2 do not alter neuronal CoA levels or the metabolic flux of pantothenate. This suggests that there is either a distinct cellular function for mitochondrial PANK2 or that cytosolic PANK enzymes may be compensating for a PANK2 deficiency. The ability of mitochondrial human PANK2 and only the mitochondrial isoforms of Drosophila Pank to rescue phenotypes in Drosophila Pank knockouts [13] supports the former concept, whereas the ability of human PANK3 and PANK4 to partially rescue knockout flies supports the latter. Alternatively, CoA flux might respond in a faulty manner in relation to stress in the setting of the ageing or diseased brain. Dysfunctional mitochondrial oxidative phosphorylation may be a key component in the PKAN brain, and altered mitochondrial membrane potential has been seen in other PKAN models. Reduced mitochondrial membrane potential and defective ATP production have been described in PANK2 knockout mouse models and patient-derived fibroblasts as well as induced neuronal models [7,16]. The increased membrane potential seen in this study may represent a compensatory response of the cells to mitochondrial deficits, in the context of atypical disease mutations. Mitochondrial and metabolic deficiencies could explain the adult onset of neurodegeneration through altered environmental stresses and diet in this highly energy demanding cell type. This is reinforced by mutations in PANK1 being linked with hyperglycaemia [38,39]. We report a metabolically immature neuronal phenotype in PANK2 mutants as well as in control lines, as seen by depolarization of mitochondria in response to oligomycin application after 100 days of differentiation. This result highlights that all cultures analysed in this study are partially glycolytic, contrary to mature primary neurons in culture. Due to this observation, some metabolic consequences of PANK2 mutations may be masked by the fact that our cultures are not entirely dependent on mitochondrial oxidative phosphorylation. Closer analysis of the mitochondrial physiology demonstrated that the cells were defective in complex I of the electron transport chain.
Our data indicate that a lack of NADH substrate provided by the TCA cycle may be central to this deficiency. In addition to NADH, PANK2 mutations reduce FADH-linked substrate supply to complex II. This observation is reinforced by changes in NADH in tissue homogenate from Pank1/Pank2 double knockout mice [11] and validates the hypotheses gleaned from studies in iPSC-derived neurons [17]. Importantly, irrespective of PANK2 mutations, our cultures rely on glycolysis to compensate for inefficient oxidative phosphorylation and for increased ATP consumption by the ATPase to maintain the mitochondrial membrane potential. It is interesting that undifferentiated iPSCs display some mitochondrial abnormalities in addition to neuronal cells, namely increased TMRM fluorescence and reduced antioxidant levels. Levels of antioxidants have been shown to be altered in PKAN patient-derived fibroblasts and transdifferentiated neurons, including lower levels of reduced glutathione [15,16]. However, in the brain an increase in levels of glutathione-cysteine has been described, suggesting increased oxidised glutathione and potentially oxidative stress [40]. These consistencies again suggest a selective vulnerability of certain neurons to a consistent mutation-associated phenotype. The reported ultrastructural disruption of cristae organization and mitochondrial swelling in a PKAN transgenic model [14] and in iPSC-derived neurons [17] could occur as a cause or result of the metabolic dysfunction described here. This physical disruption of the respiratory chain could lead to electron leakage and could be another reason for the reversal of the ATP synthase to maintain mitochondrial membrane potential described here. ROS is a normal cellular signal for multiple physiological processes; however, prolonged exposure to high levels combined with environmental stresses will inevitably lead to damaged cellular components. Increased ROS generation and subsequent lipid peroxidation as a result of altered oxidative phosphorylation in PKAN neurons could potentially explain the post-developmental onset of the NBIA. Of note is the finding that iron chelation increases mitochondrial membrane potential and ROS generation in both control and mutant cells. This can be explained as DFO is a specific Fe(III) chelator, altering the equilibrium between the two redox states. In turn, DFO favours the oxidation of Fe(II) to Fe(III), leading to the release of electrons and the formation of ROS. This is an important finding with respect to the relevance of iron to mitochondrial homeostasis; however, it is important to consider that we do not see iron accumulation in our model. For this reason, we cannot comment on the effectiveness of iron chelation with respect to ongoing clinical trials in PKAN using the cell-permeable iron chelator deferiprone [4]. The current findings call for further investigations in other model systems that display iron accumulation and for elucidation of the role of iron in healthy mitochondria. Iron deposition is a characteristic feature of NBIAs such as PKAN and is increasingly apparent as an early pathological feature in other neurodegenerative diseases such as AD and PD [41]. It is still unclear whether defective iron homeostasis is a cause or consequence of the neuropathological events in these diseases, but brain imaging has identified that its accumulation is clearly progressive during life [42] and may occur prior to symptom onset [43].
The homeostatic response of iron regulatory proteins and total intracellular iron levels appear largely normal in patient-derived neuronal cells under these conditions. However, the altered FPN expression reported here and in PANK2 knockout cell lines [44] suggests an increase in the iron export pathway may exist. A trend to increased MTFT expression in PANK2 mutant neuronal cultures not only reinforces a mitochondrial defect, but also may indicate a further attempt by the cell to sequester excess iron safely. It is tempting to speculate that a mitochondrial defect could lead to altered iron storage in the mitochondrial matrix and increased iron export via FPN, leading to iron dyshomeostasis and a potential accumulation over time. Thus, mitochondrial phenotypes may theoretically precede iron accumulation and underlie PKAN disease progression. Orellana et al. performed investigations into PKAN in cortical neurons derived from iPSCs. This study focused on early-onset-associated mutations that lead to a lack of mature PANK2 protein. The authors also see mitochondrial abnormalities and increased ROS production in iPSC-derived cortical neuronal cultures, albeit with reduced mitochondrial membrane potential. This may hint at different compensatory mechanisms between early-onset- and late-onset-associated mutations in PANK2 [17]. The authors also hypothesise altered NADH supply to the mitochondria; here we have shown this link to be valid via reduced NADH redox ratios. Orellana et al. elegantly show that the wild-type PANK2 transgene can reverse the disease phenotypes, but also put forward extracellular CoA supplementation as a novel therapeutic avenue [17,45]. This strategy has also been shown to reverse disease phenotypes in model organisms of CoA imbalance, as the developmental, vascular and metabolic deficiencies of Coasy and Pank2 knockdown zebrafish are also reversed via CoA supplementation [46,47]. In conclusion, the generation and characterisation of PKAN patient-derived iPSC neuronal cells has provided new insights into the underlying mechanisms of NBIA with relevance to other diseases exhibiting iron accumulation. Reduced cofactor supply for oxidative phosphorylation can explain the mitochondrial defects in patient-derived neuronal cultures, which in turn may precede iron accumulation. Additionally, the effects of iron chelation described here call for careful consideration in future therapeutic strategies.
9,256.8
2017-09-01T00:00:00.000
[ "Biology", "Medicine" ]
Dynamic characteristics of water-lubricated journal bearings The increasing ecological awareness and stringent requirements for environmental protection have led to the development of water-lubricated journal bearings. For the investigation of water-lubricated journal bearings, a new structured mesh movement algorithm for the CFD model is developed, and based on this method, the nonlinear transient hydrodynamic force model is established. Then, with consideration of velocity perturbation, a method to determine dynamic coefficients and linear hydrodynamic forces is proposed. After validation of the static equilibrium position and stiffness coefficients, a comparative linear and nonlinear hydrodynamic force analysis of multiple grooves water-lubricated journal bearings (MGWJBs) is conducted. The calculation results indicate that the structured mesh movement algorithm is suitable for the dynamic characteristics investigation of water-lubricated journal bearings. The comparative study shows that there is a considerable difference between the linear and nonlinear hydrodynamic forces of MGWJBs. Further investigation should be carried out to evaluate the dynamic responses of rotors supported by MGWJBs under different force models. Introduction The increasing ecological awareness and stringent requirements for environmental protection have led to the development of water-lubricated journal bearings in many applications where oil was used as the lubricant [1]. The key applications of water-lubricated journal bearings include marine propulsion systems, water pumps and hydraulic turbines [2]. To facilitate the hydrodynamic process, a constant supply of water is commonly fed into the bearing clearance through longitudinal grooves to maintain the thin fluid film between the journal and bearing. Commonly, the dynamic characteristics of a rotor supported by water-lubricated journal bearings include the unbalance response and whirling instability. Those characteristics can be investigated by two methods: the linearized method and nonlinear transient analysis. For the linearized method, a small perturbation is given to the journal around the static equilibrium position and the stiffness and damping coefficients can be determined. Then, the unbalance response and stability threshold speed are related to those coefficients by solving the equations of motion. Using the linearized method, stability analyses of MGWJBs and hybrid water-lubricated journal bearings have been performed by Majumdar et al. [3] and Ren et al. [4]. The nonlinear transient analysis model of the rotor-journal bearing system gives the orbital trajectory within the clearance circle by solving the equations of hydrodynamic lubrication and motion iteratively. This model can provide a better understanding of the transient flow field and the stability of the rotor. For the transient analysis model of MGWJBs, Pai et al. [5,6] solved the time-dependent Reynolds equation and equations of motion to predict the transient behavior of the rotor. However, a comparative linear and nonlinear dynamic analysis of a cylindrical journal bearing has shown that the linear model delivers acceptable results at relatively small shaft unbalance [7]. Meanwhile, linear models can predict the imbalance response only when operating conditions are below the instability threshold speed at high eccentricities [8]. In addition, the difference between linear and nonlinear hydrodynamic forces is related to the bearing structure [9].
Therefore, understanding the difference between linear and nonlinear hydrodynamic forces of water-lubricated journal bearings is an important issue that can reveal the reason for the inherent stability and provide more precise and detailed predictions of bearing behavior in the future. Up to now, there are two possible modeling approaches for the numerical solution of the hydrodynamic flow problem of water-lubricated journal bearings: on the one hand, as listed above, there are models based on the classical Reynolds equation [3][4][5][6][7][8][9]. However, the low viscosity of water compared to oil increases the Reynolds number drastically and makes these bearings prone to significant fluid inertia effects. Dousi et al. [10][11][12] highlighted that the inertia, neglected by the classical Reynolds equation, has a considerable effect on the dynamic coefficients and stability. To eliminate this limitation, on the other hand, there are models based on the extended Reynolds equation considering inertia effects, or CFD programs solving the full Navier-Stokes equations. Although the extended Reynolds equation has the advantage of short computation time, it may need more work to meet requirements when a complex flow geometry is used or when a more detailed analysis is required. With the rapid development of computer technology, more and more researchers use CFD programs to predict the performance of water-lubricated journal bearings. The CFD method has already been proved to be a very useful tool in the lubrication analysis of water-lubricated journal bearings. Cabrera et al. [13] constructed a two-dimensional CFD model and the calculation results showed similarity to experimental results. Gao et al. [14,15] established a 3D CFD model to analyse the load-carrying capacity of water-lubricated journal bearings under hydrodynamic lubrication conditions. Based on the same method, Zhang et al. [16] provided a method for determining the stiffness coefficients of hydrodynamic plain journal bearings lubricated by water. Then, the load-carrying capacity under misaligned conditions was analyzed [17]. Using the CFD and FSI method, an elastohydrodynamic (EHD) model was established for water-lubricated journal bearings by Wang et al. [18]. Then, this method was used to analyze the performance of worn hydrodynamic water-lubricated plain journal bearings [19]. However, for the transient or dynamic characteristic investigation of journal bearings using CFD techniques, only a few studies have been carried out. Guo et al. [20] developed a CFD model for dynamic coefficients. Gertzos et al. [21] and Cheqamahi et al. [22] proposed transient analysis models of a journal bearing lubricated by a Bingham lubricant and of a turbocharger bearing using the dynamic mesh method, respectively. Liu et al. [23] and Lin et al. [24] established models for oil-lubricated journal bearings using the CFD-FSI methodology. However, due to the magnitude difference between the dimensions of the clearance and the journal bearing, the use of unstructured grids and dynamic mesh models in CFD will stop the calculation process due to numerical failure or negative grid volumes in the transient condition. Meanwhile, the viscosity of water is about 30-40 times lower than that of mineral oils [1]. This contributes to a lower hydrodynamic load-carrying capacity of a water-lubricated plain bearing, and a larger eccentricity ratio such as 0.6 or 0.7 is regarded as a proper value [15]. This also greatly increases the difficulty of the nonlinear transient CFD analysis. In recent years, Li et al.
proposed a structured mesh movement algorithm to calculate the 3D transient flow of classic circular journal bearings [25]. This method was then revised to study the transient flow field of tilting pad journal bearings [26] and the self-circulating oil bearing [27,28]. However, water-lubricated bearings have not been covered. In the present work, two weaknesses of the CFD model of water-lubricated journal bearings are overcome. Firstly, a new structured mesh movement algorithm for the CFD model of water-lubricated journal bearings is developed, and based on this method, the nonlinear transient hydrodynamic force model is established. Secondly, with consideration of velocity perturbation, an efficient method to determine dynamic coefficients and linear hydrodynamic forces is proposed based on the calculation of the 3D transient flow field. Figure 1 shows the assembly diagram of a multiple grooves water-lubricated journal bearing. Commonly, the bearing is submersed in water and water is fed from one end through longitudinal grooves. Then, hydrodynamic pressure is generated in the non-grooved region to maintain a thin water film between the shaft and bearing. The 3D diagram of the fluid domain, including the grooves domain and the clearance domain, is shown in Figure 2. When the steady state is reached (Fig. 3), the journal center is displaced from the bearing center by a distance (e), which is referred to as the journal eccentricity. The parameters of the journal bearing used in the numerical analysis are shown in Table 1. Physical and meshing models of a bearing To obtain a structured grid distribution in the clearance domain, the grooves are split by the outer circular surface while keeping the connection to the entire fluid domain (Fig. 2). The clearance fluid domain is meshed with hexahedral elements, and the grooves domain is meshed using the "cooper" method. Ten divisions are used across the film thickness [16], and the interval size used in the circumferential and axial directions is 0.5. The grid distribution at one of the grooves and in the clearance domain is shown in Figure 4. Structured mesh movement algorithm Transient simulations depend on the grid quality. Using unstructured grids and the existing dynamic mesh models will stop the calculation process due to numerical failure or negative grid volumes, as mentioned before. A new mesh movement method based on the structured grids described above is proposed. The following process is used to update the generated mesh: (1) find the coordinates (x_pt, y_pt) of each grid node p at time step t and determine which fluid domain p belongs to: if sqrt(x_pt^2 + y_pt^2) > R_j, the node belongs to the grooves domain, otherwise to the clearance domain. The nodes in the grooves domain need not be updated because the grooves domain is treated as rigid and the node positions do not change with the time step; the journal is moving under the action of the hydrodynamic force, so the nodes in the clearance domain are updated. (2) Calculate the displacement of the journal center (Δx_jt, Δy_jt) at time step t by solving the equations of motion and store the coordinates in memory temporarily. (3) Judge which radial reticulate layer N_i each node p in the clearance domain belongs to at time step t. (4) Calculate the position (x_p,t+1, y_p,t+1) of each node p in the clearance domain at the next time step t + 1 according to which layer it belongs to, where N_total is the total number of radial reticulate layers. This completes the motion update.
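As a rough illustration only, the sketch below implements one possible version of step (4). The extracted text does not preserve the paper's exact layer-weighting formula, so a simple linear weighting across the radial layers is assumed here, with nodes on the journal surface following the journal displacement fully and nodes on the bearing surface remaining fixed; all names and numbers are hypothetical and not taken from the paper's UDF code.

```python
import numpy as np

def update_clearance_nodes(nodes_xy, layer_index, n_total, dxj, dyj):
    """Move clearance-domain nodes after a journal-center displacement (dxj, dyj).

    nodes_xy    : (N, 2) array of node coordinates at time step t
    layer_index : (N,) array, radial layer 1 (journal surface) .. n_total (bearing surface)
    Assumed linear weighting: the journal-side layer moves with the journal,
    the bearing-side layer does not move, intermediate layers move proportionally.
    Grooves-domain nodes are treated as rigid and are not passed to this function."""
    weight = (n_total - layer_index) / (n_total - 1)   # 1 at journal surface, 0 at bearing surface
    displacement = np.column_stack((weight * dxj, weight * dyj))
    return nodes_xy + displacement

# Toy example: three nodes lying on layers 1, 5 and 10 of a 10-layer film
nodes = np.array([[0.0300, 0.0], [0.0305, 0.0], [0.0310, 0.0]])
print(update_clearance_nodes(nodes, np.array([1, 5, 10]), 10, 1e-6, -2e-6))
```

In the paper the update is implemented as FLUENT UDFs executed at every time step; this Python version is only meant to make the geometry of the update explicit.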
To describe the capabilities of the method clearly, Figure 5 models a simplified MGWJB with a larger clearance than in reality. It can be concluded that, at a larger eccentricity ratio, the existing mesh movement algorithm will result in poor mesh quality and may even stop the calculation, and the grid number decreases dramatically, which will produce a considerable calculation error. The above-mentioned mesh movement algorithm is based on a structured mesh distribution, as shown in Figure 5e. The grids after updating still keep a uniform distribution and the grid number is unchanged. The updating process is stopped manually without any mesh distortion or numerical failure. Therefore, the structured mesh movement algorithm proposed in this paper is suitable for the transient flow in MGWJBs. Assumption and governing equations A rigid aligned bearing with the geometry of Figure 2 is considered. The flow is assumed to be laminar and the water properties are treated as constant. The multiphase flow of the lubricant with cavitation is described by the "mixture model". The "mixture model" solves the continuity equation and momentum equation for the mixture, and the volume fraction equation for the secondary phases. The continuity equation for the mixture is ∂ρ_m/∂t + ∇·(ρ_m v_m) = 0, where v_m is the mass-averaged velocity, v_m = (Σ_k α_k ρ_k v_k)/ρ_m, and ρ_m is the mixture density, ρ_m = Σ_k α_k ρ_k, where α_k is the volume fraction of phase k. The momentum equation for the mixture can be expressed in its standard mixture-model form, where n is the number of phases, F is a body force, and μ_m is the viscosity of the mixture, μ_m = Σ_k α_k μ_k, with v_dr,k = v_k − v_m being the drift velocity for secondary phase k. The volume fraction equation for secondary phase k can be derived from the corresponding continuity equation. In a water-lubricated journal bearing, the clearance between the journal and bearing is not uniform because of the eccentricity of the shaft. Where the clearance is divergent, a subambient region may be formed. Once the pressure drops below the vaporization pressure, a gaseous phase begins to fill the divergent region and cavitation occurs. In the current study, the mass transfer between the liquid and vapor lubricant is based on the "full cavitation model" [29], which treats the mass transfer as a source term of equation (9); the transfer rate is calculated as given in equation (10). A numerical solution of equations (3)-(11) using the finite volume method (FVM), with the pressure-implicit with splitting of operators (PISO) scheme for pressure-velocity coupling, satisfying the boundary conditions, gives the pressure distribution. A time-dependent ("unsteady") solution is chosen to model the transient state of the lubrication field when the rotor is moving. Double precision calculations are employed to avoid the negative influence of the large aspect ratio. The PRESTO! scheme is used for pressure interpolation because of the high-speed rotating flows in the journal bearing. In addition, the other convection terms are discretized by the first-order upwind method to improve the convergence rate. The boundary conditions of the inlet and outlet are, respectively, "pressure inlet" and "pressure outlet" with 50 kPa and 42 kPa. The outer surface is defined as a stationary wall. The inner journal surface is defined as a moving wall with an absolute rotational speed, with the angular velocity given by ω = 2πN/60, where N is the rotational speed in r/min.
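For illustration, and not as the paper's implementation, the volume-fraction-weighted mixture quantities defined above can be computed as in the short sketch below; the numeric phase properties are placeholder values for liquid water and vapour.

```python
import numpy as np

def mixture_properties(alpha, rho, mu, v):
    """Volume-fraction-weighted mixture properties used by the mixture model.
    alpha : (n,) phase volume fractions (summing to 1)
    rho, mu : (n,) phase densities and viscosities
    v : (n, 3) phase velocities
    Returns the mixture density, mixture viscosity and mass-averaged velocity."""
    rho_m = np.sum(alpha * rho)
    mu_m = np.sum(alpha * mu)
    v_m = np.sum((alpha * rho)[:, None] * v, axis=0) / rho_m
    return rho_m, mu_m, v_m

# Liquid water and vapour phases, illustrative values only
alpha = np.array([0.95, 0.05])
rho = np.array([998.2, 0.554])
mu = np.array([1.003e-3, 1.34e-5])
v = np.array([[5.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
print(mixture_properties(alpha, rho, mu, v))
```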
Linear and nonlinear transient hydrodynamic forces For a small-amplitude motion (Δx, Δy, Δẋ, Δẏ), the expansion of the hydrodynamic force around the equilibrium position in a Taylor series, with only the linear terms retained, can be written as: F_x = F_x0 − k_xx Δx − k_xy Δy − c_xx Δẋ − c_xy Δẏ, F_y = F_y0 − k_yx Δx − k_yy Δy − c_yx Δẋ − c_yy Δẏ (13), where F_x and F_y are the hydrodynamic force components in the x- and y-axis directions, respectively, and F_x0 and F_y0 are the force components at the static equilibrium position. Those dynamic coefficients have significant effects on the stability of rotor systems and, to our knowledge, are still a challenge to determine experimentally. Therefore, it is necessary to develop an efficient numerical method. In reference [16], a method for the determination of the stiffness coefficients of a hydrodynamic water-lubricated plain journal bearing is proposed by neglecting the velocity perturbation, and the hydrodynamic force is linearized with the velocity-dependent terms omitted (equation (14)). However, it is then difficult to evaluate the influence of the velocity perturbation on the dynamic characteristics. Therefore, a linear velocity perturbation was used to determine the full dynamic coefficients in this study. Firstly, the journal is moved from the static equilibrium position in the x-direction and the y-direction with velocity perturbations Δẋ and Δẏ, respectively. The corresponding hydrodynamic forces can be written as: F_x1 = F_x0 − k_xx Δx − c_xx Δẋ, F_y1 = F_y0 − k_yx Δx − c_yx Δẋ (x-direction) (15); F_x2 = F_x0 − k_xy Δy − c_xy Δẏ, F_y2 = F_y0 − k_yy Δy − c_yy Δẏ (y-direction) (16). Then, equations (15) and (16) are fitted by a linear relation (equations (17) and (18)), and finally the stiffness coefficients k_xx, k_xy, k_yx, k_yy and damping coefficients c_xx, c_xy, c_yx, c_yy can be calculated (equations (19) and (20)). However, linear and nonlinear forces are in good agreement for small journal amplitudes only. The transient hydrodynamic force obtained using linearized dynamic coefficients is not accurate enough at large journal amplitudes. Solving the equations of motion (Eqs. (21)-(23)) successively, the acceleration (ẍ_t, ÿ_t), velocity (ẋ_t, ẏ_t) and displacement (x_t, y_t) can be calculated at each time step, with (x_t, y_t) and (ẋ_t, ẏ_t) set equal to zero at the initial time step. Then, the nonlinear transient hydrodynamic forces can be directly computed from the transient flow field (equation (24)). The flow chart of the numerical procedure is described in Figure 6. The CFD package FLUENT was used to analyze the flow field, and the structured mesh movement algorithm was implemented via UDFs. The hydrodynamic force was fitted with a linear relationship in MATLAB. Validation of the CFD model and structured mesh movement algorithm To facilitate the hydrodynamic process, a constant supply of water is commonly fed into the bearing clearance through longitudinal grooves to maintain the thin fluid film between the journal and bearing. However, the dynamic mesh zone, which is called the clearance domain in Figure 2, has a similar working condition and flow state to a plain water-lubricated journal bearing. Meanwhile, plain water-lubricated journal bearings have been relatively fully investigated by experimental measurements and numerical simulations. Therefore, validation is carried out for a plain water-lubricated journal bearing. The detailed description of the apparatus is given in reference [15]. The calculation is carried out in the transient state with a constant load, starting from the concentric position (e = 0). Under the action of the nonlinear hydrodynamic force, the journal is moved from the concentric position of the bearing and finally stabilizes at a certain position, which is called the static equilibrium position and is shown in Figure 7.
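To make the fitting step concrete, the sketch below shows one way the perturbed forces of equation (15) could be regressed to recover the direct coefficients: the slope of force versus displacement gives the stiffness, and the velocity-dependent offset gives the damping. This is only a schematic re-implementation under the stated linear model, not a reproduction of the paper's MATLAB fitting (equations (17)-(20)); all numbers in the demonstration are synthetic.

```python
import numpy as np

def fit_direct_coefficients(dx, fx, fy, fx0, fy0, vx):
    """Fit stiffness and damping from a transient sweep in which the journal
    moves in the x direction with constant perturbation velocity vx.

    dx       : displacements from the static equilibrium position
    fx, fy   : hydrodynamic force components sampled at those displacements
    fx0, fy0 : force components at the static equilibrium position
    Linear model (cf. Eq. 15): F_x = F_x0 - k_xx*dx - c_xx*vx,
                               F_y = F_y0 - k_yx*dx - c_yx*vx."""
    slope_x, intercept_x = np.polyfit(dx, fx, 1)
    slope_y, intercept_y = np.polyfit(dx, fy, 1)
    kxx, kyx = -slope_x, -slope_y
    cxx = (fx0 - intercept_x) / vx
    cyx = (fy0 - intercept_y) / vx
    return kxx, kyx, cxx, cyx

# Synthetic check: forces generated from known coefficients are recovered
dx = np.linspace(0.0, 1e-6, 6)
fx = 100.0 - 2.0e6 * dx - 4.0e3 * 2e-4
fy = -500.0 + 1.2e6 * dx - 1.0e3 * 2e-4
print(fit_direct_coefficients(dx, fx, fy, fx0=100.0, fy0=-500.0, vx=2e-4))
```

A y-direction sweep with perturbation velocity Δẏ gives k_xy, k_yy, c_xy and c_yy in the same way.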
As the speed increases, the static equilibrium position moves closer to the concentric position of the bearing. The calculation results (labelled "CFD") are compared with the experimental data (labelled "Exp.") to validate the nonlinear hydrodynamic force, and they are in good agreement, as shown in Table 2. The pressure and vapor-phase contours at the static equilibrium position are shown in Figures 8 and 9, respectively. With increasing speed, the maximum pressure decreases and the positive-pressure area increases. The cavitation region becomes narrower and the gaseous phase gathers in the rotational direction, in good agreement with references [30,31]. In summary, the 3D transient flow calculation of water-lubricated bearings based on the structured mesh movement algorithm is reliable. Validation of dynamic coefficients determination method As shown in Figure 10, the velocity perturbation increases the hydrodynamic force through the damping effect, which reference [16] neglected. As the rotational speed increases, the narrowing difference implies that the damping effect weakens. However, in a key application of water-lubricated journal bearings, marine propulsion systems, the rotational speed is commonly around 1000 r/min; the velocity perturbation therefore has a considerable influence on the dynamic characteristics of water-lubricated journal bearings. Although a larger perturbation velocity produces a more obvious damping effect, the influence of the solution initialization value also becomes more obvious, as shown in Figures 11 and 12. This influence moves the linearization of the hydrodynamic force further away from the given static equilibrium position, and because the dynamic coefficients vary with the static equilibrium position, a larger perturbation velocity introduces errors into the determination of those coefficients. The hydrodynamic forces are therefore fitted by a linear relationship with the initial values excluded; for comparison, the initial values in the displacement perturbation range 0.0-1.0 mm are neglected. The hydrodynamic force fitted by a linear relation under different perturbation velocities is shown in Figures 13 and 14. The square of the correlation coefficient remains larger than 0.9990, which means the fitting is relatively precise. Using equations (19) and (20), the full set of dynamic coefficients is determined in Table 3. The differences between the dynamic coefficients obtained under different perturbation velocities, which should be identical in theory, may be the result of the initial values. After balancing the damping effect against the influence of the solution initial value, 2e-4 m/s is chosen as the perturbation velocity. The comparison with reference [16] in terms of stiffness coefficients is shown in Figure 15. For a constant eccentricity ratio, the stiffness coefficients are proportional to the rotational speed and show good agreement with the reference values. The remaining difference may result from the different pressure-velocity coupling schemes: SIMPLEC is used in the reference and PISO in the present work, because PISO is more stable for transient calculations. A further validation between the linear and nonlinear hydrodynamic forces is shown in Figure 16; the linear and nonlinear hydrodynamic forces are calculated by equations (13) and (24), respectively.
When the rotor whirls in a circle with a radius of 1 mm, good agreement is achieved between the linear and nonlinear hydrodynamic force models. However, the linear and nonlinear forces agree only for small journal amplitudes; the transient hydrodynamic force obtained using linearized dynamic coefficients is not accurate enough at large journal amplitudes, as shown in Figures 17 and 18. Therefore, in the design phase, a designer should keep in mind that the inherent nonlinearity of the hydrodynamic force at large journal amplitudes has a considerable influence on designing an efficient water-lubricated journal bearing. Comparison of two hydrodynamic force models of MGWJBs Traditionally, investigations of the dynamic characteristics of MGWJBs have used the linear hydrodynamic forces calculated from the dynamic coefficients [6,32]. However, the nonlinear hydrodynamic force gives a more detailed understanding of the dynamic behavior of MGWJBs. To evaluate the influence of the nonlinear effect of the hydrodynamic force on rotors supported by MGWJBs, further comparisons of hydrodynamic forces were carried out for four-axial-groove and six-axial-groove bearings. The pressure contours on the bearing surface at the static equilibrium position are shown in Figure 19. For the comparative study of the linear and nonlinear hydrodynamic forces of MGWJBs, the motion of the journal is prescribed as circles of different radii to simulate whirling effects, with the whirling speed equal to the journal rotational speed. The variation of the hydrodynamic forces in the time domain during the whirling process is shown in Figures 20 and 21, and the spectrogram obtained by a fast Fourier transform (FFT) of the hydrodynamic forces is shown in Figures 22 and 23. In the time domain (Figs. 20 and 21), there is a considerable amplitude difference between the linear and nonlinear hydrodynamic forces even at the smaller whirling radius (R = 0.5 mm), but there is no phase difference: both force models reach their maximum and minimum points at almost the same time. In the frequency domain (Figs. 22 and 23), the frequency of both the linear and the nonlinear forces is equal to the whirling frequency, but the amplitude difference between the nonlinear and linear models in the y-direction is larger than that in the x-direction. With increasing orbit size, the amplitude of the linear hydrodynamic force becomes much larger than that of the nonlinear hydrodynamic force, especially in the y-direction. Conclusion In the present work, two weaknesses of the CFD model of water-lubricated journal bearings are overcome. Firstly, a new structured mesh movement algorithm for the CFD model of multiple-groove water-lubricated journal bearings is developed, and based on this method the nonlinear transient hydrodynamic force model is established. Secondly, taking the velocity perturbation into consideration, an efficient method to determine dynamic coefficients and linear hydrodynamic forces is proposed based on the calculation of the 3D transient flow field. After validation of the static equilibrium position and stiffness coefficients, a comparative linear and nonlinear hydrodynamic force analysis of multiple-groove water-lubricated journal bearings is conducted. The conclusions drawn from the study can be summarized as follows: a self-developed structured mesh movement algorithm is used to ensure the high quality of the grid in the transient calculation process of water-lubricated journal bearings.
After updating, the grid still keeps a uniform distribution and the number of cells is unchanged, and the updating process is stopped manually without any mesh distortion or numerical failure, showing that the structured mesh movement algorithm proposed in this paper is suitable for the transient flow of water-lubricated journal bearings; the velocity perturbation increases the hydrodynamic force through the damping effect, which was neglected by the previous study. Based on the transient flow calculation, a method to determine the stiffness and damping coefficients using linear fitting is proposed, and the stiffness coefficients show good agreement with the reference values; there is a considerable difference between the linear and nonlinear hydrodynamic forces even at the smaller whirling radius (R = 0.5 mm), and with increasing orbit size the amplitude of the linear hydrodynamic force becomes much larger than that of the nonlinear hydrodynamic force, especially in the y-direction. Although the transient hydrodynamic force models have been established and a comparative study carried out, the dynamic responses of a rotor supported by MGWJBs under the different force models have not yet been investigated. The proposed method will be employed for further studies of the dynamic characteristics of the grooved water-lubricated journal bearing-rotor system.
5,458.6
2019-01-01T00:00:00.000
[ "Engineering", "Environmental Science", "Materials Science" ]
Implementation of Hyyrö's bit-vector algorithm using Advanced Vector Extensions 2 The Advanced Vector Extensions 2 (AVX2) instruction set architecture was introduced with Intel's Haswell microarchitecture, which features improved processing power, wider vector registers, and a rich instruction set. This study presents an implementation of Hyyrö's bit-vector algorithm for pairwise Deoxyribonucleic Acid (DNA) sequence alignment that takes advantage of the Single-Instruction-Multiple-Data (SIMD) computing capabilities of AVX2 on modern processors. It investigated the effects of the lengths of the query and reference sequences on the I/O load time, computation time, and memory consumption. The results reveal that the experiment achieved an I/O load time of ϴ(n), a computation time of ϴ(n*⌈m/64⌉), and a memory consumption of ϴ(n). The implementation exhibited a longer time complexity than the expected ϴ(n) due to instructional and architectural limitations. Nonetheless, it was on par with other experiments in terms of computation time complexity and memory consumption. Introduction Deoxyribonucleic Acid (DNA) is a complex molecule that contains hereditary and biological information and is found in every organism [1]. A DNA sequence can be up to 3 billion bases in length and is composed of the nucleotide bases Adenine (A), Cytosine (C), Guanine (G), and Thymine (T). Each nitrogenous base holds genetic information, and its arrangement in a genome dictates the unique genetic characteristics possessed by a living being. Researchers discovered that the DNA sequences of all humans are nearly identical; thus, locating and analyzing the similarities or differences would yield more profound knowledge of the function of, or relationship between, the sequences [2] [3]. Understanding a sequence's structure and function has made significant impacts on scientific, biological, and medical advancements [4]. Bioinformatics is the science of applying computer science and mathematics to create computational techniques for the collection and analysis of biological data [3]. One of the major research topics in the field is pattern matching between DNA sequences, which leads to the discovery and understanding of biological relationships and can be used in higher-level processes such as phylogenetic trees, genetic structure prediction, and disease diagnosis [5] [6]. Given a reference sequence of length n and a query sequence of length m, the goal of sequence alignment is to compute the edit distance (score) between the sequences; the scores are then usually checked against a pre-defined k-error threshold to pinpoint regions of similarity that allow the analysis and assessment of the relationship between species and organisms [5]- [8]. In 2013, Intel introduced the Haswell microarchitecture, which featured Single-Instruction-Multiple-Data (SIMD) capabilities as it supported Advanced Vector Extensions (256-bit operators), an extension of Streaming SIMD Extensions (128-bit operators) [12]. These instructions exploit the parallelism of the data stream, allowing multiple data elements to be processed simultaneously with a single instruction and improving the throughput of floating-point operations [13] [14].
The addition of SIMD instructions to Intel processors offers a rich instruction set, making it possible to implement a DNA sequence alignment algorithm that runs on a GPP. In this study, the researchers implemented an existing bit-vector algorithm that performs DNA sequence alignment on a query sequence and a reference sequence. The study took advantage of modern processors' bit-parallel operation capabilities by utilizing Intel's SIMD technologies, specifically Advanced Vector Extensions 2 (AVX2), supported by 4th generation and later Intel processors (code-named "Haswell"). The correctness of the program was verified through multiple test cases. Furthermore, this paper also highlights the program's performance with various DNA sequences by measuring execution time and memory consumption. The study mainly focused on implementing Hyyrö's bit-vector algorithm [7] to utilize the AVX2 instruction set architecture for pairwise sequence alignment. The system is capable of handling query sequence lengths of up to 256, since the query sequence and the bit-vector variables are processed in the 256-bit vector registers. Real-world DNA sequences obtained from the National Center for Biotechnology Information (NCBI) online GenBank sequence database [9] were utilized as the data set for experimentation. Sequence Alignment Algorithms Several sequence alignment algorithms have been developed based on the dynamic programming approach; the most notable are the Needleman-Wunsch and Smith-Waterman algorithms [11]. Both algorithms are useful primarily for pairwise and global alignment. The advantage of using the Needleman-Wunsch and Smith-Waterman algorithms is the capability to locate the optimal alignment between the sequences. However, these algorithms demand more time to complete and run at ϴ(nm) [15] [16]. Shehab et al. [15] developed FDASA (Fast Dynamic Algorithm for Sequence Alignment), which executes the Needleman-Wunsch and Smith-Waterman algorithms with a faster time complexity of ϴ(3m+1) when the two sequences have equal length, or ϴ(3m+2) when their lengths differ. Tarhio and Ukkonen [17] showed that the Boyer-Moore algorithm generates optimal runtime for longer sequences, though increasing the k mismatch threshold slows down the computation compared to other dynamic programming algorithms. Gou [18] highlighted the differences between the Naïve, Knuth-Morris-Pratt, Boyer-Moore, and Rabin-Karp algorithms in terms of alignment speed for various sequence lengths. The results supported Tarhio and Ukkonen's [17] argument that the Boyer-Moore algorithm works best for longer sequences; on the other hand, it was revealed that the Rabin-Karp algorithm is suitable for shorter sequences. Other researchers have delved into finite state machines to develop sequence alignment algorithms. For instance, the Aho-Corasick algorithm is one of the most commonly used algorithms that take an automata approach for exact multiple string matching. Subsequently, the Commentz-Walter algorithm was introduced as a better alternative to the Aho-Corasick algorithm, since it is a combination of the Aho-Corasick and Boyer-Moore algorithms [19]. In a comparative study by Vidanagamachchi et al. [19], however, the results contradicted this prior belief: the Aho-Corasick algorithm attained a better runtime than the Commentz-Walter algorithm because the latter requires more pre-processing time to construct the finite state machine. Zhu et al.
[20] formulated the Bayes block aligner algorithm for local alignment that incorporates the statistics concept of Bayes inference, which involves probability and distribution, to mitigate the need of defining parameters and variables, such as gap penalties and scoring matrices [21]. The study shows that the Bayes block aligner algorithm outperformed the widely known SSEARCH algorithm on VAST in terms of the percentage of correctly identifying structural neighbors while achieving a time complexity of ϴ(n 2 ) [20]. Aside from the algorithm, the edit distance metric also plays an important role in sequence alignment performance. Pandiselvam et al. [16] conveyed that the simplest edit distance to compute is the Hamming distance because it merely counts the number of differences at every position between sequences with equal length. The Hamming distance is mainly used for exact sequence alignment since it requires the sequences to have the same length and it only performs substitution operation. Another study from Levenshtein [22] explored the use of binary information in which mismatches can be corrected using deletions, insertions, and substitutions. The scoring scheme is called the Levenshtein distance; this metric is used for approximate sequence alignment because it is not constrained by the length of the sequences and offers more edit operations. It follows a dynamic programming approach that counts the minimum cost that is required for two sequences be equal. Research contrasted the two edit distance metrics and the investigation has proven that although the Hamming distance generated more accurate alignment results, the Levenshtein distance proved to be faster by achieving ϴ(n+m) time complexity compared to the former's ϴ(nm) time complexity [16]. A number of researchers have implemented sequence alignment algorithms by utilizing the computing capabilities of the SIMD unit embedded in GPPs since it is much easier to program, more portable, and widely available [11]. Nataliani and Wellem [23] implemented Myer's bit-vector algorithm using MATLAB to investigate the similarity of Rhodopsin protein sequence of class Aves. To conduct the experimentation, the data set consists of the sequences of 25 species that have Rhodopsin protein from class Aves that were obtained from the Universal Protein Resource (UniProt) Consortium website and DNA Data Bank of Japan (DDBJ) website. The study mainly features a proof-of-concept implementation of the bit-vector algorithm using a high-level tool. However, it falls short of evaluating the speed and memory performance of the application. Fredriksson [24] featured an alternative method to perform string matching using Myers' bit-parallel algorithm. The researcher proposed a new arrangement for comparing short query sequences (m < w, where w is the computer word size) such that the computations are performed in a row-wise approach instead of a column-wise manner to minimize the wasted bits of the computer word. The algorithm was implemented on an Intel Pentium 4 processor, coded using Intel SSE2 instruction set architecture through C/C++ intrinsics. For experimentation, the researchers used a randomly generated DNA sequence of size 64Mb as reference sequence and short query sequences with varying lengths (i.e. 8,16,32,64,128) to investigate the effects of varying w. The results showed that the execution time of the whole sequence alignment process has a linear relationship with m, and subsequently, w. 
The researchers argued that their implementation is very fast, however, it is dependent on the architecture. Faro and Külekci [25] promoted an exact string-matching method, called Exact Packed String-Matching algorithm (EPSM), which aims to speed up the process for short query sequences. The idea is to exploit the bit-parallelism of the word RAM model; thus, the computations are performed on words of length w (assuming w is 32). The researchers utilized Intel SSE's specialized packed string matching intrinsics that includes: wscmp, wsmatch, wsblend, and wscrc. To evaluate the performance of the proposed algorithm, the reference sequences used were a genome sequence, a protein sequence, and an English natural language text, all of which are 4Mb in size; moreover, sets of 1000 query sequences were extracted from each corresponding reference sequence, where m would range from 2 to 32. The results revealed that their implementation has achieved a worst case of O(nm) time complexity and O(2 k ) memory consumption. Comparing it with other algorithms, the researchers argued that the EPSM algorithm is the fastest when m ≤ 32. Memeti and Pllana [6] presented a large-scale DNA analysis algorithm designed to be implemented on the Intel Xeon Phi 7120P coprocessor (code-named "Knights Corner"). The proposed algorithm was based on finite automata, it exploits thread-level parallelism by dividing and distributing the input DNA sequence across threads; moreover, it also takes advantage of bit-parallelism featured in AVX-512 instruction set architecture. The DNA sequences of mouse, cat, dog, chicken, human, and turkey obtained from the GenBank sequence database of NCBI composed the reference sequence data set, while regex-dna benchmark with a fixed number of errors composed the query sequence data set for evaluation. Each test case was executed 20 times to prove the consistency of its performance. The results reported a maximum speedup of 10x compared to a sequential implementation on the Intel Xeon ES-2695v2 processor. The researchers were interested to investigate the optimal number of threads for multiple sequence alignment. In contrast, since our research work focuses on pairwise sequence alignment, this approach is not applicable to our study. Sequence Alignment through Bit-vector Algorithm The prevailing method for aligning two sequences is via the dynamic programming method. Dynamic programming incorporates a recursive approach which usually requires an (m+1)(n+1) two-dimensional scoring matrix. However, the run time of the algorithms using this approach is highly dependent on both m and n, and sometimes even k-error threshold, and consumes ϴ(mn) space [26]. Myers [8] proposed an alternative solution in finding the local alignment between a query and a reference to solve for the Levenshtein distance, a sequence alignment metric that allows 3 edit operations, namely, insertion, deletion, and substitution [7]. Myers's algorithm, widely known as Myers bit-vector algorithm, follows a dynamic programming approach that takes advantage of bit-parallel operations featured in modern processors [27]. It assumes a register size of 32 or 64, therefore restricting the length of the query sequence to the word size w [8]. Generally, the approach of the algorithm is to solve the matrix in columns rather than computing each cell individually. 
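A compact reference version of this column-wise update may make the description that follows more concrete. The sketch below is the plain Levenshtein search variant of the Myers bit-parallel algorithm written in pure Python, with arbitrary-precision integers standing in for machine words; it omits the transposition (Damerau-Levenshtein) extension introduced by Hyyrö and, of course, all AVX2 specifics, and the function name is illustrative.

def myers_search(pattern, text):
    """Bit-parallel Levenshtein search (Myers-style): for each text position j,
    return the minimum edit distance between the pattern and any substring of
    the text ending at j."""
    m = len(pattern)
    mask = (1 << m) - 1
    high = 1 << (m - 1)
    # Preprocessing: bit i of Peq[c] is set when pattern[i] == c
    Peq = {}
    for i, ch in enumerate(pattern):
        Peq[ch] = Peq.get(ch, 0) | (1 << i)
    VP, VN, score = mask, 0, m
    scores = []
    for ch in text:
        Eq = Peq.get(ch, 0)
        D0 = (((Eq & VP) + VP) ^ VP) | Eq | VN          # diagonal-zero vector
        HP = VN | (~(D0 | VP) & mask)                   # positive horizontal delta
        HN = VP & D0                                    # negative horizontal delta
        if HP & high:
            score += 1
        elif HN & high:
            score -= 1
        HP = (HP << 1) & mask
        HN = (HN << 1) & mask
        VP = (HN | (~(D0 | HP) & mask)) & mask          # positive vertical delta
        VN = HP & D0                                    # negative vertical delta
        scores.append(score)
    return scores

For example, myers_search("ACTGAC", "TTACTGACGG") drops to 0 at the position where the query occurs exactly in the reference, which is the kind of score the implementation records when locating the most similar substrings.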
Each column is encoded using m-bits vector representation, namely, Pv for the positive vertical delta value, Mv for the negative vertical delta value, Ph for the positive horizontal delta value, Mh for the negative horizontal delta value, Xv for the current vertical column value, and Xh for the horizontal column value. This also follows an observation lemma that the difference between the adjacent values in each cell in the matrix has a value of either -1, 0, or +1. The matrix is completely solved once it has iterated through the whole reference sequence. Therefore, the algorithm can achieve a runtime of ϴ(n) assuming that operations will execute at ϴ(1), which is promoted to be the fastest sequence alignment algorithm as of now [27]. Hyyrö [7] modified Myers' [8] bit-vector algorithm to compute for the Damerau-Levenshtein distance between a query and a reference. The Damerau-Levenshtein distance extends the Levenshtein distance by including transposition between two adjacent characters, therefore, allowing a total of 4 edit operations [7]. The addition of transposition edit operation is achieved through the vector variable Xp. The algorithm consists of bit operations, namely, | (OR), & (AND), ^ (XOR), << (left shift), + (bitwise addition), including arithmetic and comparison operations [26]. The algorithm requires a pre-processing of the query sequence. It involves translating each character from the query into its corresponding bit-mask that represents its position in the text. The index in the vector will be set to 1 when the corresponding character occurs in the query at the specific index, and 0 otherwise. For example, the bitmask of character 'A' for the query "ACTGAC" is B['A'] = b'100010 [28]. Advanced Vector Extension 2 Instruction Set Architecture The SIMD computing capabilities featured in GPPs enabled vector operations to be executed within a single clock cycle [29]. In efforts to expand the Streaming SIMD Extensions (SSE) computing technology, Intel released the Advanced Vector Extensions (AVX) and AVX2 featured in the Sandy Bridge microarchitecture and Haswell microarchitecture respectively [30]. The AVX and AVX2 extend the SSE single-precision floating-point, double-precision floating-point, and integer commands to operate on 256-bits YMM vector registers while also increasing the peak double-precision ops per cycle [31]. Legacy SSE instructions can still be utilized to execute on the lower 128-bits of the YMM registers, this provides access to one of the key features of SSE, text string processing instructions. These instructions aim to speed up a number of string primitives whose process would usually entail nonoptimal utilization of the processor and its instruction pipelines. In addition, the Vector Extension (VEX) prefix instruction encoding format was introduced, enabling three-operand syntax, in some cases four-operand, using non-destructive source operands [32]. Although the AVX2 instruction set architecture offers a substantial amount of floating-point and integer instructions, it is not capable of performing 256-bit arithmetic addition and bit shift. Thus, the researchers must develop simulations of these operations to satisfy the requirements of the bit-vector algorithm. Research Design This study provides a discussion on the implementation of a bit-vector algorithm using AVX2 instruction set architecture as well as its performance evaluation with real-world DNA sequences. Fig. 
1 presents the algorithm used for this study, it was developed and presented by Hyyrö [7] in his own paper; Hyyrö did not present any performance evaluation since his study focused on the theories and framework of the algorithm. For the purposes of this study, the algorithm was modified (See lines 14 and 15 on Fig. 1) such that the computation for the Damerau-Levenshtein distance will continue regardless of when the k-error threshold has been reached. This not only enables the evaluation of similarity between the two sequences but also allows pinpointing highly similar regions. The preprocessing of the query sequence was also modified to obtain the reverse bitmask of each character. The algorithm was implemented on the Visual Studio 2017 and compiled with Microsoft Macro Assembler. The application is composed of 2 elements: the C++ program and the assembly program. The former handles the input and output (I/O) of the application which is interfaced with the latter that is responsible for computing the Damerau-Levenshtein distance between the query and the reference sequences. Initially, the application reads the text files that contain the query and reference sequences through a memory mapping method that involves allotting a chunk of memory space where the lengthy sequences will be placed in by the operating system and stores them in their corresponding string variable. The length of the query string will be determined which will be passed along with the addresses of the query and reference strings as arguments whenever the assembly program is invoked. The assembly program uses a flat memory model and C calling convention. The bit-vector variables of the algorithm are loaded into the YMM registers from memory whenever it is used for calculation allowing up to 256 query sequence length. The implementation requires the data to be shifted to the most significant bit of the register, like zero-extending, to avoid tampering of the higher-order bits during calculation which will affect the result. The flowchart for pre-processing the query sequence for a character is shown in Fig. 2. It utilizes a series of vpcmpistrm instructions to obtain the reverse bitmask of a character. The vpcmpistrm instruction can process at most 16 characters (resulting to 16 bits of the bitmask) at a time. Thus, requiring a total of ⌈m/16⌉ to obtain the whole bitmask of the query sequence. The upper half and the lower half of the bitmask must be obtained separately since they are processed in the 128-bit XMM registers. After looping through the whole query sequence, the upper and lower bitmasks are merged through the vperm2i128 instruction. Since the order of the word elements in the YMM vector register is reversed, the vpshufb instruction is utilized to shuffle the position of the word elements and accurately reflect the query sequence. It also follows that the data should be on the most significant bit of the register. The pre-processing stage is executed for characters 'A', 'C', 'G', and 'T'. The AVX2 provides a rich set of instructions allowing for a fairly straightforward implementation of the bit-vector algorithm. The | (OR) operation corresponds to the vorps instruction, the & (AND) operation corresponds to the vandps instruction, the ^ (XOR) operation corresponds to the vxorps instruction, and the ~ (NOT) operation can be performed simply by performing an XOR to the argument and all ones. 
However, the SIMD instruction set architecture does not support 256-bit wide addition and left shift because the vector elements are treated independently during calculation (i.e. No carry between vector elements). Thus, the researchers must simulate these two instructions. A combination of store, load, and 32-bit addition were utilized to perform 256-bit wide addition. Initially, the two 256-bit arguments are stored in memory and the carry flag is cleared. The arguments are treated as 32-bit chunks by loading them into the 32-bit general-purpose registers and added by executing adc instruction. This replicates the addition and carry-over between doubleword elements of the vector register. The process is repeated 8 times to accomplish 256-bit wide addition. Simulating the 256-bit wide left shift involves storing a copy of the argument prior to executing vpsllq instruction which will shift the quadword elements of the YMM register to the left by 1 bit. This would allow the retrieval of the most significant bit of each quadword element that would have been lost after performing the shift instruction. To reproduce the carryover, the most significant bit of every quadword element (i.e. bit 63, 127, 191) is checked to see if it is set, if so, the next quadword element is incremented, otherwise, no action is taken. After each iteration, the assembly program calls a C++ function passing the calculated score at the current index as parameter to check whether it is equal with the current lowest score, if so, the specific index is added into the array of indexes with the same score, if it is otherwise lower than the current lowest score, the previous score and the list of indexes is overwritten, otherwise, no action is taken. The application outputs the results of the computation by writing the summary in the console and text file. The summary includes the query string, length of the query, length of the reference, lowest score, and possible substrings where the lowest score may be located. Additionally, the implementation follows some optimization guidelines from the Intel® 64 and IA-32 Architectures Optimization Reference Manual [33] that includes: keeping code and data on separate pages, aligning data on natural operand size address boundaries, using test instruction instead of cmp whenever possible, using add or sub instructions instead of inc or dec, using logical instructions to zero a register, unrolling loops, arranging code to be consistent with the static branch prediction algorithm or to reduce branches, utilizing single-precision instructions instead of double-precision, taking advantage of zero-latency mov, organizing code to maximize micro-architectural resources, and enabling flush-tozero and denormals-are-zero mode. To evaluate the performance of the study's implementation, the DNA sequences of Homo sapiens (human), Mus Musculus (mouse), Solanum Pennellii (eudicots), Brachypodium Distachyon strain Bd21 (stiff brome), Ornithorhynchus Anatinus (platypus), Cajanus Cajan (pigeon pea), Pseudomonas Syringae (gproteobacteria), Chthonomonas Calidirosea (bacteria), Prochlorococcus Marinus str. MIT 9211 (cyanobacteria), and Mycoplasma Conjuctivae (mycoplasmas) were selected for experimentation. The reference sequence dataset is composed of chromosome 1 sequences from the chosen species which can be obtained from the GenBank sequence database of NCBI [9]. For this study, the researchers have omitted the instances of the wildcard character 'N' for all sequences. 
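As described above, AVX2 treats the vector lanes independently, so the 256-bit addition and the one-bit left shift required by the algorithm have to be emulated by propagating carries across the lanes by hand. The following Python sketch mirrors that idea on four 64-bit words (lane 0 least significant); it is only an illustration of the carry handling, not the adc/vpsllq assembly used in the implementation.

MASK64 = (1 << 64) - 1

def add256(a, b):
    """256-bit addition over four 64-bit lanes, propagating the carry
    between lanes as the adc chain does."""
    out, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry
        out.append(s & MASK64)
        carry = s >> 64
    return out

def shl256_by1(a):
    """256-bit left shift by one: shift every lane, then move each lane's
    lost most-significant bit into bit 0 of the next lane (cf. vpsllq plus
    the explicit carry-over check in the assembly version)."""
    out, carry = [], 0
    for x in a:
        out.append(((x << 1) & MASK64) | carry)
        carry = (x >> 63) & 1
    return out

# Example: adding 1 to a lane full of ones ripples a carry into the next lane
print(add256([MASK64, 0, 0, 0], [1, 0, 0, 0]))   # -> [0, 1, 0, 0]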
Table 1 shows the reference sequence datasets and their corresponding lengths, excluding the character 'N'. The query sequence dataset, on the other hand, is composed of generated DNA sequences with varying lengths of 32, 64, 92, 128, 160, 192, 224, and 256. Results and Discussion The experiment aims to investigate the effect of diverse sequence lengths on the I/O load time, computation time, process memory, and power consumption of the implementation. The procedure was performed on a Dell XPS 15 laptop equipped with an Intel Core™ i7-6700HQ 2.6 GHz 64-bit processor and 8 GB of RAM. Each test case was executed 10 times to average out the result. To validate the correctness of the implementation, the application was tested to return the score of aligning a query sequence against 3 variations of the same sequence: with no mismatches, with 5 mismatches, and with a random number of mismatches. The score was expected to correspond to the number of mismatches the researchers inserted. Moreover, the application was cross-validated with a Python implementation of the algorithm using the Mycoplasmas sequence as the reference against the query sequence dataset. The results are consistent with the expected scores, thus verifying the correctness of the implementation. To demonstrate that the implementation was optimized to enhance computation time, the performance of the optimized implementation was compared against the barebone implementation (i.e. with no optimizations applied). Fig. 3 to Fig. 6 show the comparison of the average computation times between the two versions. Based on the data, the improvement is apparent, as the optimized version outperformed the barebone version with a speedup of up to 1.36 times. The average I/O load time is illustrated in Fig. 7. The I/O load time measures how long it takes to read the query and reference sequence text files and store them in their corresponding memory space. It is evident that the length of the reference sequence has a linear effect on the I/O load time, while the length of the query sequence has little to no impact; thus, its time complexity is ϴ(n). The computation time consists of a pre-processing stage, the actual computation of the Damerau-Levenshtein distance, and the storing of indexes where the most similar substrings may be located. Fig. 8 shows the performance of the application in terms of computation time. It can be seen that it is heavily dependent on the size of the reference sequence. However, the data also show that the length of the query sequence affects the computation time, which rejects the expected time complexity of ϴ(n). Despite limiting the size of the query (m ≤ 256) and performing the calculations on the 256-bit YMM registers, the influence of the query length on the computation time is caused by the accumulation of some frequently repeated processes that depend on m. For example, the simulated 256-bit wide arithmetic addition completes faster for shorter query sequences because the carry out is cascaded less. Further investigation (shown in Fig. 9 and Fig. 10) reveals that the machine word size is 64 bits, since most of the AVX/AVX2 instructions used in the implementation operate on quadwords (64 bits). Therefore, since the computation time displays a linear relationship with the reference sequence size and is also affected by the length of the query sequence and the machine word size, it runs at ϴ(n*⌈m/64⌉).
Moreover, Fredriksson [24] had a similar finding in his study that supports this hypothesis wherein the researcher attributed it to be caused by hardware limitation such that the SIMD instructions still execute based on the native machine word size (denoted as w); therefore the bit-operations do not run at the expected ϴ(1), but rather ϴ(⌈m/w⌉). The memory consumption was consistent for each reference data-set regardless of the length of the query sequence. It consumes approximately the size of the reference sequence in bytes plus 40 MB. Therefore, it can be argued that it has achieved ϴ(n) memory consumption. Finally, the power consumption was investigated, and it reveals that the program consumes approximately 20 -25W regardless of sequence length. Conclusion This study presents an implementation of Hyyrö's bit-vector algorithm for pairwise DNA sequence alignment using AVX2 instruction set architecture to run on modern processors. To our knowledge, this is an initial attempt of developing the algorithm to take advantage of SIMD computing capabilities of AVX2 on recent processors which advances the idea of the possibility of implementing other computeintensive applications on GPP. Based on the results of the experimentation, the AVX2 implementation has achieved an I/O load time of ϴ(n) since it is mostly impacted by the length of the reference sequence. It can also be argued that the computation time complexity of the implementation is longer than the ideal ϴ(n) time complexity due to the simulation of the 256-bit addition and left shift which entails the carry out to be cascaded to the higher-order elements, and architectural limitations that causes instruction to operate based on its native machine word size (64-bits) and not on the actual SIMD vector size; thus, each operation runs at ϴ(⌈m/64⌉) and the implementation computes at ϴ(n*⌈m/64⌉), similar to Fredriksson's [24] implementation and on a par with Faro and Külekci's [25] implementation that reached a computation time of ϴ(nm). Furthermore, the implementation has a memory consumption of ϴ(n), wherein it requires approximately twice the size of the reference sequence in bytes plus 40 MB. This study's implementation displayed a linear growth of memory consumption. In contrast, Faro and Külekci's [25] implementation showed exponential growth. Performing pairwise sequence alignment using Hyyrö's algorithm is just the first step. Future research works may want to attempt extending or removing the limitations on the query sequence length or to explore multiple sequence alignment and multi-core or multi-threaded programming.
6,384
2019-10-29T00:00:00.000
[ "Computer Science" ]
BAT.jl - Upgrading the Bayesian Analysis Toolkit. In all but the simplest cases, performing data analysis based on Bayesian reasoning requires the use of advanced algorithms. The Bayesian Analysis Toolkit (BAT) provides a collection of algorithms and methods that facilitate the application of Bayesian statistics to user-defined problems of arbitrary complexity. With BAT.jl, we present a modern rewrite of BAT in the Julia programming language. Through the use of a modular software design that is capable of running parallel and distributed, and by extending the tool with new sampling and integration algorithms, BAT.jl is a high-performance framework for Bayesian inference, meeting the requirements of modern data analysis. Introduction Statistical inference is a key element in nearly all fields of scientific research, with the goal of gaining knowledge about models from observed data. Typical tasks of inference involve the estimation of unknown parameters, model comparisons and hypothesis testing. Various statistical methods have been developed for conducting statistical inference. Following Bayesian reasoning, by interpreting probabilities as a degree of belief, it is possible to update one's current knowledge about the parameters λ of a model M with regard to new data D using Bayes' theorem,

$$P(\vec{\lambda} \mid D, M) = \frac{P(D \mid \vec{\lambda}, M)\, P_0(\vec{\lambda} \mid M)}{\int P(D \mid \vec{\lambda}, M)\, P_0(\vec{\lambda} \mid M)\, \mathrm{d}\vec{\lambda}}, \qquad (1)$$

where P(D|λ, M) is the likelihood and P_0(λ|M) expresses prior knowledge about the distribution of the parameters. The posterior probability distribution of the parameters, P(λ|D, M), contains the updated knowledge when considering the data D. The denominator in Eq. (1) describes the probability to obtain the observed data when assuming the model M; it can also be written as P(D|M) and is sometimes referred to as the evidence. Only in the simplest cases, however, is it possible to evaluate the posterior distribution analytically. In most real-world problems this is not feasible due to the complexity of the likelihood and prior distribution, and a numerical evaluation of the posterior density is needed. As the phase space grows exponentially with the dimensionality of the problem (curse of dimensionality), efficient algorithms are needed for a numerical evaluation of Eq. (1) in problems with multiple parameters. A technique that allows efficient sampling from arbitrary distributions in a multidimensional parameter space, and that has therefore revolutionized Bayesian inference, is Markov Chain Monte Carlo (MCMC). Markov chains provide a systematic method to draw random samples that follow a target distribution by generating a sequence of points in the parameter space. Using MCMC methods therefore allows the evaluation of complex and high-dimensional posterior distributions in Bayesian inference. 2 From the Bayesian Analysis Toolkit to BAT.jl BAT - The C++ original The Bayesian Analysis Toolkit (BAT) [1] is a software package providing a collection of algorithms and methods for performing Bayesian inference, particularly focusing on the use of MCMC techniques. It offers the infrastructure for implementing user-defined problems in a general-purpose language, allowing likelihoods and prior distributions of arbitrary complexity to be specified without requiring a tool-specific modeling language. Through the use of the Metropolis-Hastings MCMC algorithm [2,3], BAT permits the sampling of posterior probability distributions in a multi-dimensional parameter space.
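The Metropolis-Hastings algorithm mentioned above is the workhorse sampler in both BAT and BAT.jl. As a reminder of what a single random-walk Metropolis step involves, here is a generic textbook sketch in Python (not BAT's or BAT.jl's implementation; the Gaussian proposal and the step size are arbitrary illustrative choices):

import numpy as np

def metropolis_hastings(log_posterior, x0, n_steps, step_size=0.5, rng=None):
    """Random-walk Metropolis sampler: propose a Gaussian step and accept it
    with probability min(1, posterior ratio). Generic sketch only."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    logp = log_posterior(x)
    samples = []
    for _ in range(n_steps):
        proposal = x + step_size * rng.standard_normal(x.shape)
        logp_prop = log_posterior(proposal)
        if np.log(rng.uniform()) < logp_prop - logp:   # accept/reject step
            x, logp = proposal, logp_prop
        samples.append(x.copy())
    return np.array(samples)

# Example: sample a 2-D standard normal "posterior"
chain = metropolis_hastings(lambda v: -0.5 * v @ v, x0=[0.0, 0.0], n_steps=5000)

Production samplers such as the one in BAT.jl add proposal tuning, burn-in handling and convergence diagnostics on top of this basic accept/reject step.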
Included optimization and integration algorithms allow marginalization and the estimation of parameters as well as their uncertainties and correlations. Methods for performing goodness-of-fit tests, error propagation and model comparisons are included as well as features for default outputs in the form of numerical results and plots. By offering templates for common analysis tasks, e.g. for histogram fitting, the usage of Bayesian methods is facilitated also for non-experienced users. Originally designed to be used in high-energy physics, BAT is written as a C++ library, depending on the ROOT framework [4]. BAT.jl -The new version in Julia As we recognized that there is a broader range of users interested in performing Bayesian analyses with BAT, not only from the high-energy physics (HEP) sector, we intended to improve BAT and make it more easily applicable in further fields of research. A major aspect for this was to remove its dependencies on HEP-specific software. Therefore, we aimed for a new software design that is adaptable and simple to use but at the same time also performant and tailored to the needs of modern scientific computation, such as running in parallel and distributed. In addition, we planned to enhance the toolkit character of BAT by including new features and extending its collection of algorithms to support Bayesian inference with BAT in more fields of application. For these reasons, we are currently developing BAT.jl [5], a successor of BAT, written in the Julia programming language [6]. Julia is a modern general-purpose programming language that is particularly suited for high-performance numerical and scientific computations. The first stable version (v.1.0) of Julia has been released in August 2018. Through its optional typing system and a multipledispatch paradigm, just-in-time compilation and numerous other features, Julia is a powerful language that offers a range of capabilities that support the design goals we pursue with the redevelopment of BAT. Julia's built-in package management system allows for a modular software design and the uncomplicated installation and use of packages. The fast growing Julia ecosystem provides a large collection of packages for various applications. Julia also offers great flexibility through its native interfaces to other languages, that allow to call code written in C/C++, python, FORTRAN and other languages. As Julia is designed for parallel and distributed computing, running code on computing clusters is inherently supported, meeting the requirements of modern data science software. All these (and further) aspects therefore make Julia well suited for our rewrite of BAT. Current status & future prospects of BAT.jl The first stable version of BAT.jl (v.1.0.1) has been released in December 2019. With this release we introduced the general infrastructure and user interface of the new tool and the main functionality for performing Bayesian inference on user-defined problems. A weighted Metropolis-Hastings MCMC algorithm is currently implemented as the default algorithm for sampling posterior distributions. Methods providing numerical estimates of the best-fit parameters, their uncertainties and correlations are included as well as functionality supporting an uncomplicated visualization of the outputs. 
Through the implementation of a novel algorithm for estimating integrals based on the samples, called Adaptive Harmonic Mean Integration (AHMI) [7], BAT.jl features the estimation of high-dimensional integrals of the posterior distribution. More detailed information on the structure and current features of BAT.jl is given in Sec. 4, where a simple example is presented. With the current release of BAT.jl, a first step towards a modern framework for userfriendly Bayesian inference has been accomplished. We are now working towards the next upgrade of the software, planning to extend its features and collection of methods. For example, we are in the process of including new sampling algorithms into BAT.jl, such as Hamiltonian Monte Carlo (HMC) [8] and an affine invariant ensemble sampler [9]. At the same time, we are aiming to speed up the sampling by supporting the parallelization of computations at different levels. A partitioning of the parameter space into several subspaces, for example, will allow to run independent Markov chains in each of them, making the sampling of multimodal distributions more efficient. In BAT.jl, this approach will become feasible as the AHMI algorithm permits to calculate the integrals in each of the subspaces, thus providing a proper reweighting when finally joining the samples of all individual chains. With these developments in progress and further to come, BAT.jl is going to be a performant toolkit that offers a variety of state-of-the-art algorithms for Bayesian inference, allowing the users to choose the approach that fits their problems best. Using BAT.jl -An Example The BAT.jl API facilitates a straight-forward implementation of user-defined problems. While the basic setup for performing Bayesian inference with BAT.jl requires only a minimal number of commands, options for advanced configurations are accessible and allow experienced users a detailed control over the algorithms. In the following, we will demonstrate the basic steps of using BAT.jl and highlight its current features by conducting a simple example. Installation & activation Before using BAT.jl for the first time, it needs to be installed. By being a registered Julia package, the installation is performed using Julia's built-in package manager: # installation via package manager (only once before first use) using Pkg pkg"add BAT" After the installation, BAT.jl can be used in Julia by loading the package. For our example, we also include some more packages providing helpful functionalities: # activate BAT.jl (and other useful packages) using BAT using IntervalSets, Distributions, Plots Model definition The most important task for the user is the implementation of the statistical model in terms of a (log)-likelihood function. In our example, we consider two parameters λ 1 and λ 2 with the likelihood following a linear combination of normal distributions: As demonstrated above, when defining the likelihood in BAT.jl, it is possible to implement custom functions as well as to refer to pre-implemented functions from other packages (e.g. from Distributions.jl [10]). Currently, the implementation needs to return the logarithm of the likelihood value. With the next upgrade, however, we will introduce a mechanism that allows to return either the logarithmic or the non-logarithmic value of the likelihood. When formulating the likelihood, the user is not restricted to a certain modeling language or to the use of differentiable functions, but can define models of any complexity. 
Since in many real-world applications, the likelihood itself is the result of sophisticated calculations and might even be distributed over several source codes, with BAT.jl it is straightforward to call external likelihoods via Julia's native interfaces to other programming languages, like python, C/C++, Fortran, R, Mathematica. It is also possible to use likelihoods from any other software running in separate processes using BAT.jl's lightweight binary communication protocol. In BAT.jl, the model parameters (in this example λ 1 and λ 2 ) are defined when specifying the prior distributions. In this step, it is possible to assign names to the parameters and formulate prior knowledge about their distributions: prior = NamedTupleDist( λ1 = Normal(6, 2.5), # normal-distributed prior for the first parameter λ2 = -30.0..30.0 # uniform prior in a given range for the second parameter ) It is again possible to use pre-defined distributions (e.g. from additional packages) or to provide custom implementations of distributions. BAT.jl also enables to use histograms as prior distributions. Following Bayes' theorem in Eq. (1), the posterior density is defined by the likelihood and a prior distribution for the corresponding parameters: posterior = PosteriorDensity(likelihood, prior) Sampling The algorithm to be used for generating samples of the posterior distribution needs to be chosen and the number of Markov chains, as well as the number of samples that should be generated per chain, need to be set: For advanced users, a fine-grained control over the settings of the MCMC algorithms allows to modify the default choices of the initialisation, burn-in and tuning strategies of the chains, the selection of the proposal function and the convergence criteria. Results & Output As a result of running the Markov chains with the selected algorithm, weighted samples of the posterior distribution are obtained and statistical estimates such as mode, mean and covariance matrix of the parameters are provided: The resulting plots for this example are shown in Fig. 1. Based on the samples, the integral of the posterior distribution (and a corresponding uncertainty estimate) can be computed using the AHMI algorithm: evidence = bat_integrate(samples) Among further use cases, this facilitates the calculation of Bayes factors for model comparisons. Conclusions Bayesian inference is a powerful technique for data analysis. In most cases, however, it requires the use of sophisticated algorithms, such as Markov Chain Monte Carlo. With the current release of BAT.jl v.1.0.1, we present a rewrite of the Bayesian Analysis Toolkit (BAT) that provides the infrastructure and algorithms for performing Bayesian inference within a contemporary and user-friendly tool. By developing and implementing new algorithms for efficient sampling and integration, we are currently in the progress of extending the functionalities of BAT.jl. Together with novel approaches for parallelization, these developments will make BAT.jl a modern high-performance tool providing a collection of sophisticated algorithms facilitating Bayesian inference on user-defined problems.
2,825.2
2020-01-01T00:00:00.000
[ "Computer Science" ]
Designing an Online Geospatial System for Forest Resource Management Geographic and geospatial information systems (GISs) have especially benefited from increased development of their inherent capabilities and improved deployment. These systems offer a wide range of services, for example, user-friendly forms that interact with the geospatial components for locational information and geographic extents. An online distributed platform was designed for forest resource management with map elements residing on a GIS platform. This system is accessible on non-authenticated browsers optimized for desktops, whereas the online resource management forms are also accessible on mobile platforms. The system was primarily designed to aid foresters in implementing resource management plans or tracking threats to forest resources. Baseline data from the system can be easily visualized and mapped. Other data from the system can provide input for stochastic analyses, especially with respect to forest resource management. Introduction Decision Support Systems (DSSs) are fundamental in addressing the complexity of making coherent, integrated, and interdependent resource management decisions. This is due to their inherent ability to cohesively formulate those parameters or pertinent pieces of information that otherwise cannot be processed effectively by human heuristic processes. Decisions formulated from DSSs must be defensible by stakeholders (e.g. [1]), factor in multi-scalability and temporal issues, factor in other relevant considerations, and aid in resolving potential conflicts, amongst other factors. Interactive computer-based systems that help decision makers utilize data and models to solve unstructured problems have been adopted as decision support systems [2] [3]. DSSs have evolved to encompass multi-component systems that include various combinations of simulation modeling, optimization techniques, heuristics and artificial intelligence techniques, geographic information systems (GIS), associated databases for calibration and execution, and user interface components [4]. Each of these six components may to some degree individually satisfy Sprague and Carlson's [3] generic DSS definition. An Adaptive Decision Support System (ADSS) may be interpreted to include any system that is capable of self-teaching, which is accomplished by integrating unsupervised inductive learning methods (e.g. [5]- [7]). ADSSs effectively reduce the need for implementing complex spatial analytical capabilities on an ArcGIS Server platform by generating the best result to a problem through refining an initial solution. This can be done by essentially incorporating the results of the spatial analyses as a layer with identifiable features. Adaptability of such a system may arise from GUI designs (dialog subsystem) with pertinent factors accounted for (e.g. [8]- [11]), the degree of interactivity of data (database subsystem), auxiliary information (knowledge subsystem), spatial analyses (problem processing subsystem), statistically derived data (model base subsystem) and lastly expert analyses (decision-based subsystem).
The primary aim of this study was to integrate an out-of-the-box ArcGIS Server system with hallmarks characteristic of an ADSS, essentially to: 1) provide a secure gateway for the North Dakota SAP-derived data layers for resource management by depicting lands rich in natural resources, vulnerable to threat, or both; 2) serve as a city, county, state and federal reporting mechanism for NDFS and affiliated partners' forestry management accomplishments, a resource locale identifier, a tracking and monitoring geospatial interface, and a public resource to monitor or track threats or vulnerabilities including invasive species, catastrophic wildfire, and climate change effects on forestry and forest conversion; 3) provide baseline data on the forest resources of North Dakota to model potential forest resource threats and offer management opportunities identified through the state forest resource assessment and/or vulnerabilities identified using the State and Private Forestry Redesign national assessment tool; 4) determine and identify tracts that were not included in the original spatial analysis project; and 5) provide a concise central repository and inventory of forestry programs identifiable by searchable attributes such as city, county or associated wildland-urban interface. The specific objective was to design a widely distributed online portal for forest resource management. Overview of North Dakota North Dakota, formerly the northern portion of Dakota Territory and located in the Midwestern region of the United States, became a US state in 1889. It borders Minnesota to the east, South Dakota to the south, Montana to the west, and the Canadian provinces of Manitoba and Saskatchewan to the north. It spans a latitudinal range of 45.93˚ to 49˚ and extends westwards from 96.55˚ to 104.05˚ longitude. Its areal coverage makes it the 19th most extensive US state, and it comprises 53 counties (Figure 1). The state capital, Bismarck, is located on the banks of the Missouri River just downstream from Lake Sakakawea, a large man-made lake behind Garrison Dam. The largest city is Fargo, on the banks of the Red River. North Dakota was once considered part of the Great American Desert. A precipitation gradient exists from east to west, with the eastern regions generally receiving more precipitation. In the past the area was swept by devastating prairie fires, making the establishment of arboreal ecosystems extremely difficult. The western half of the state consists of the hilly Great Plains, and the northern part of the Badlands lies to the west of the Missouri River [12] [13]. The state's high point, White Butte at 3506 feet (1069 m), and Theodore Roosevelt National Park [12] [13] are located in the Badlands. North Dakota is abundant in fossil fuels, for example natural gas, crude oil, and lignite coal, predominantly in the western part of the state. Fossil fuels form the primary economic activity in the western part of the state, whereas the eastern part of the state has a thriving agricultural sector. Natural tree cover in North Dakota includes riparian forests around perennial streams, forests around the Killdeer and Turtle Mountains, and significant plantings, for example managed forests and other areas such as shelterbelts. North Dakota forests comprise four major types: elm-ash-cottonwood, aspen, oak and ponderosa pine [12]. The North Dakota Forest Service (NDFS) was established in 1906 to practice sound land stewardship to enhance and preserve forests, grassland, and wetland ecosystems found within the state
boundaries [12]. By 1954, the total acreage of protection plantings was 89,000 acres (approximately 36,000 hectares), earning North Dakota the distinction of having more protection plantings than any other state in the United States. To date, the natural woodlands of North Dakota cover about 824,000 acres of forested land, including shelterbelts. Phase I: System Structure The system design entails three user levels accessible to: 1) local users and administrators, 2) registered foresters, and 3) rural fire departments, researchers, and the general public (Figure 2). Administrators are able to view log files, update databases, update the geospatial database elements, register new users, and perform system maintenance tasks. The design schema has two main databases accessible to 1) foresters (private database) and 2) the public (public database), with pertinent security and protection mechanisms instituted. The public database is accessible through non-authenticated browsers (Figure 3). Phase II: Management System Several databases were linked to web forms to capture resource management data. The forms were designed to capture user information, represent underlying features integral to the database (for example, associated FIPS codes), store the dialog or knowledge base, and provide on demand pertinent information from multiple sources for forestry resource management. For example, for a Forest Stewardship plan, a dynamic interaction between the user and the system enables, among other alternatives, the ability to determine whether a proposed plan would be in a prioritized area. We also implemented minimum data storage and applied security measures to the retrieval of non-sensitive data. Most of the forms designed are available to the general public, while the databases can only be accessed by authorized users. Types of forms and databases designed include: 1) Community Accomplishments Reporting System for Urban and Community Forestry Program (form), 2) North Dakota Rural Fire Department Wildland Fire Report (form and database), 3) Redesign-Innovation in State and Private Forestry (form), 4) Technical Forestry Assistance and Accountability Measures Report for Information and Education (form and database), and 5) Training Program/Presentation Template (form and database). The following forms require login: 1) Accountability Measures Report (form and database), 2) Forest Stewardship Program and Rural Forestry Assistance (form), 3) Sick Tree Assistance Form (form and database), 4) Forest Resource Management Plan (template), 5) Report a Forest Threat (template and database), and 6) Talk to a Forester (chat).
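The three user levels and the public/private database split described above amount to a role-based access-control policy. The short Python sketch below is one hedged illustration of such a policy; the role names and form names follow the text, while the enforcement function and exact groupings are assumptions.

```python
# Illustrative role-based access map for the user levels described above.
PUBLIC_FORMS = {
    "CARS report", "Wildland Fire Report", "Redesign-Innovation form",
    "Technical Forestry Assistance report", "Training Program template",
}
LOGIN_FORMS = {
    "Accountability Measures Report", "Forest Stewardship / Rural Forestry Assistance",
    "Sick Tree Assistance", "Forest Resource Management Plan",
    "Report a Forest Threat", "Talk to a Forester",
}
ROLES = {
    "administrator": PUBLIC_FORMS | LOGIN_FORMS | {"system maintenance"},
    "forester": PUBLIC_FORMS | LOGIN_FORMS,
    "public": PUBLIC_FORMS,
}

def can_access(role: str, resource: str) -> bool:
    """Return True if the given role may open the given form or task."""
    return resource in ROLES.get(role, set())

print(can_access("public", "Sick Tree Assistance"))  # False: this form requires login
```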
Community Accomplishments Reporting The CARS (Community Accomplishments Reporting System for Urban & Community Forestry Program) allows for measured outcomes on: 1) Percent of population living in communities managing programs to plant, protect and maintain their urban and community trees and forests.2) Percent of population living in communities developing programs and/or activities to plant, protect and maintain their urban and community trees and forests.The designed form allows for multi-year tracking of these percentages to gauge community participation and success of community based programs for various stakeholders.The form accepts all generic CARS related Microsoft Excel® comma delimited files.Various outputs can also be tracked for each fiscal year.These include: 1) Number of people living in communities provided educational, technical and/or financial assistance.2) Number of people living in communities that are developing programs/activities for their urban and community trees and forests.3) Number of people living in communities managing their urban and community trees and forests.4) Number of communities with active urban & community tree and forest management plans developed from professionally-based resource assessments/inventories. 5) Number of communities that employ or retain through written agreement the services of professional forestry staff who have at least one of these credentials: a) degree in forestry or related field and b) ISA certified arborist or equivalent professional certification.6) Number of communities that have adopted and can present documentation of local/statewide ordinances or policies that focus on planting, protecting, and maintaining their urban and community trees and forests.7) Number of communities with local advocacy/advisory organizations, such as, active tree boards, commissions, or non-profit organizations that are formalized or chartered to advise and/or advocate for the planting, protection, and maintenance of urban and community trees and forests.8) Number of hours of volunteer service logged.(An agency-wide consistent methodology to be developed to track volunteer hours).9) State offered community grant program in current fiscal year.10) Number of communities receiving financial assistance awarded during the Federal FY 2010 through a state managed community grant program.11) Amount of Federal (USFS) funding to States.From the database critical needs may be addressed especially for: a) communities that have the potential to develop management programs for their trees and forests with assistance from UCF technical, financial and/or educational program services, and b) communities that currently are not managing, or developing programs to manage, their urban and community trees and forests.Finally, an estimate of federal (USFS) dollar cost or expenditure per capita in community assisted can be tracked. 
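As a concrete illustration of how the two CARS outcome percentages could be computed from an uploaded comma-delimited file, the short Python sketch below aggregates community populations by management status. The column names are assumptions made for this example; the actual CARS export schema may differ.

```python
import csv

def cars_percentages(path):
    """Percent of population in 'managing' and 'developing' communities.

    Assumes each row describes one community with columns:
    'population' and 'status' in {'managing', 'developing', 'none'}.
    """
    totals = {"managing": 0, "developing": 0, "all": 0}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            pop = int(row["population"])
            totals["all"] += pop
            if row["status"] in ("managing", "developing"):
                totals[row["status"]] += pop
    pct_managing = 100.0 * totals["managing"] / totals["all"]
    pct_developing = 100.0 * totals["developing"] / totals["all"]
    return pct_managing, pct_developing
```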
Fire Report This form was designed to collate forest fire information for wildland fire occurrences within North Dakota. Pertinent data collected include: 1) fire discovery and containment dates, 2) fire size (acreage), 3) locational information (latitude and longitude) and land ownership, 4) cause of fire, 5) vegetation burned, and 6) structures lost or threatened. This information will be critical in modeling fire disturbance and spread, for example by defining a non-parametric separation index (SI) to determine which cover types are prone to fire disturbance. The cover types listed on the form include grass, cropland, pine forest, hardwood forest, brush, and any other category. North Dakota has a climatic gradient from the drier west to the well-watered eastern parts. The SI can be calculated from [14]:

$SI_{i,j} = \dfrac{A_{i \cap j}}{\min(A_i, A_j)}$,

where $SI_{i,j}$ is the separation index between cover types $i$ and $j$, $A_{i \cap j}$ is the overlap area between cover types $i$ and $j$, $A_i$ and $A_j$ are the areas of cover types $i$ and $j$, and $\min$ denotes the minimum function (the smaller of $A_i$ and $A_j$); a short computational sketch is provided below. Data from the GIS application and fire data can also be used to model spatio-temporal variability in fire return intervals using the Stambaugh and Guyette method [15]. Fire return intervals can be estimated by empirically determining a Mean Fire Interval (MFI) as a function of the topographic roughness index (TRI), the natural logarithm of human population density (POP), and river distance (RD) [15]. Flame height can be modeled as a function of $U_0$, the wind speed at a given height (m/s), $H_f^0$, the flame height (m) in the absence of wind, and $g$, the gravitational acceleration (m/s²) [16]. The tangent of the flame tilt angle is proportional to a dimensionless combination of $U_0$, $g$, and $H_f^0$ (a flame Froude number). Forestry Assistance and Accountability The North Dakota Forest Service, through its outreach programs, extends educational components through several programs. These include Natural Resource Conservation Education, Envirothon, Arbor Day, and Smokey Bear Poster Contests. The extension of this program is facilitated through numerous avenues, including the Teacher Learning Centers across the state, Project Learning Tree workshops, and other K-12 outreach programs. Some of these programs are funded under the auspices of the North Dakota Environmental Education Strategic State Plan. Organizers of these events can now report through the Forestry Assistance & Accountability form. This form is also dynamically linked to the Accountability Measures form (Figure 4). This information can be used to track educational programs across the state.
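The separation index defined in the Fire Report section can be computed directly from mapped cover-type areas. The following minimal Python sketch (the computational example referred to above) implements the formula; the function name and the example areas are illustrative assumptions, not values from the system.

```python
def separation_index(area_i, area_j, overlap_area):
    """Separation index SI between two cover types.

    area_i, area_j : total area of each cover type (same units, e.g. hectares)
    overlap_area   : area where the two cover types overlap
    Returns overlap_area / min(area_i, area_j).
    """
    if min(area_i, area_j) <= 0:
        raise ValueError("cover type areas must be positive")
    return overlap_area / min(area_i, area_j)

# Example: grass vs. hardwood forest (hypothetical areas in hectares)
si_grass_hardwood = separation_index(12_000.0, 3_500.0, 800.0)
print(round(si_grass_hardwood, 3))  # 0.229
```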
Training The training form populates a database with available training programs and cooperating agencies. Listed training includes 1) fire department training, 2) insect and disease training, 3) landowner education and training, and any other agency-based training program. Cooperating agencies that may be involved include the North Dakota Department of Agriculture (NDDA), ND Game & Fish, NDSU Extension, the Animal and Plant Health Inspection Service (APHIS), and Tribal organizations, among other potential agencies. The Forest Stewardship Program Original guidelines were meant to delineate potential stewardship tracts within states; provide tools necessary for the North Dakota Forest Service to effectively and efficiently address critical forest resource issues at state, regional, and community scales; and provide forest resource managers with unbiased means to address problems, opportunities, and objectives associated with intermingled federal, city, state, and private land ownership patterns within North Dakota. The form (Figure 5) designed for forest stewardship and rural forestry assistance helps track the number of landowners that participated in landowner assistance or education-based programs. The value for education-based programs comes from Section 3.6. The form also calculates the acreage for 1) new and/or revised Forest Stewardship Management Plans, and 2) new and/or revised Forest Stewardship Management Plans that are in prioritized areas. These areas include high priority, medium priority, and low priority areas (e.g., [10]). The designed stewardship template (Figure 5) provides map interactivity with locations, resource threats/potential, and associated base maps. The number of plans for any fiscal year can be queried within the database. The stewardship and land-cover data can also be modeled as a Markov process with transition density P. A Markovian process can be adequately determined by two functions, P(X, t) and the transition density P(X, t; X', t'), such that for t₁ < t₂ < t₃ the Chapman-Kolmogorov relation holds [17]:

$P(X_3, t_3; X_1, t_1) = \sum_{X_2} P(X_3, t_3; X_2, t_2)\, P(X_2, t_2; X_1, t_1)$,

which also holds for the full hierarchy of P. The time-stationary Markov chain can be determined by the Markov transition matrix for tₙ ∈ T, whose elements can be estimated as [13] [18]:

$p_{ij}(t) = \dfrac{n_{ij}(t)}{n_i(t-1)}$,

where n_i(t − 1) is the total number of cells transiting from category i during the t-th transition period and n_ij(t) is the number of cells transiting from category i to j in the t-th transition period (Wu et al., 2006). With the Kronecker delta δ_ij = 1 if i = j and 0 otherwise (so that P(0) = I, the identity matrix), for a short interval P(δt) ≈ I + Q δt, where Q is the infinitesimal generator that represents the rates of change of the transitions. The transition probabilities therefore satisfy the system of differential equations dP(t)/dt = Q P(t). Sick Tree Assistance, Forest Threat, Chat The sick tree assistance form was designed to aid private landowners in providing base data for managing any tree that exhibits disease symptoms. An exhaustive listing is provided for common tree species within the state, with the option of entering cultivars if known. The forest threat module was designed to provide concise details of potential threats in the following categories: 1) invasive plants, 2) insects, 3) diseases, 4) climate, 5) loss of open space, 6) pollution, 7) wildland fires, 8) other invasive species (for example, non-native earthworms), and 9) unmanaged recreation or any other unlisted threat. The form also provides, for each listed threat, an image and text description of the threat on mouse hover (Figure 6). A chat module was also created, primarily so that foresters from different geographic locales are able to dialogue on pertinent issues. In this way the chat module effectively provides an extra platform for communication.
Accountability Measures The accountability measures report is the most comprehensive management component designed. It comprises twelve sections and seven areas of accomplishments data that can be used in any combination. The sections include: 1) forest-based economic growth, 2) forestry-based economic benefits, 3) community wildfire protection planning, 4) rural fire department (RFD) capacity enhancement, 5) wildland fire awareness and prevention programs, 6) K-12 teacher and student education outreach, 7) arborist training and re-certification programs, 8) conifer (evergreen) conservation tree planting initiatives, 9) natural resource sustenance through stewardship programs, 10) community forestry programs, 11) forest health and sustainability programs, and 12) multiple-use management programs. The seven areas for accomplishments data include: 1) information and education, 2) community forestry, 3) forest resource management, 4) fire management, 5) tree production, 6) state forests, and 7) forest health. For each area of accomplishments data, a performance indicator is automatically calculated based on underlying factors within the seven areas and standardized units of measure. A database searchable by date is also generated. Result Analysis It is worthwhile to note that most wildland fire occurrences are in, or close to, riparian forests (Figure 7). This indicates that, even with relatively higher stand densities, lower canopy heights, and other factors expected to decrease the probability of torching, these forests still had a higher probability of burning than upland forests. Figure 8(a) shows average and maximum wind speeds for North Dakota Agricultural Weather Network (NDAWN) stations recorded on the same days that wildland fires were recorded. Thiessen polygons were generated using ArcGIS to determine which GPS locations of wildland fires corresponded to which NDAWN station. It is clear that maximum wind speeds were recorded where the burned areas were larger. At Bottineau the burned area exceeded 400 m² although the wind speed was comparatively lower; this can be attributed to the less fire-tolerant vegetation found in this area, which is dominated by needle-leaved trees. From Equation (3), the modeled H_f^0 values were chosen to cover a broad range of wildland vegetation, that is, the prairie grasses that cover most of North Dakota and intermittent shrub heights, and also to allow for an acceptable range of anthropogenically induced wildland fire heights (Figure 8).
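Matching each fire's GPS location to its NDAWN station via Thiessen polygons, as described above, is equivalent to a nearest-station assignment. The Python sketch below illustrates this with a simple planar approximation; the station list is a placeholder, and in practice a projected coordinate system or great-circle distances would be used.

```python
import math

# Hypothetical NDAWN stations as (name, latitude, longitude)
stations = [
    ("Bottineau", 48.83, -100.45),
    ("Hazen", 47.29, -101.62),
    ("Watford City", 47.80, -103.28),
]

def nearest_station(lat, lon):
    """Assign a fire location to the closest station (Thiessen-polygon rule)."""
    def dist(s):
        # Equirectangular approximation; adequate for station assignment at this scale.
        dlat = math.radians(lat - s[1])
        dlon = math.radians(lon - s[2]) * math.cos(math.radians(lat))
        return math.hypot(dlat, dlon)
    return min(stations, key=dist)[0]

print(nearest_station(48.9, -100.6))  # -> "Bottineau"
```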
NDAWN stations that had higher ranges of H_f/H_f^0 included Hazen, Turtle Lake, and Watford City; however, their burned areas were smaller in acreage. Figures 9(a)-(h) show the variation of flame height (in cm) when wind is considered, with respect to latitude and longitude. For the range of H_f^0 values used, the observed trend is that flame heights with respect to wind are higher in a band trending NW-SE along the central grasslands region. From Figure 7, in 2012 most fire occurrences reported for this region were in Stutsman County. It is also worth noting that there is no discernible uniform trend in flame heights for each location (such a trend would mimic the trend that can be generated from Figure 8(c)). All graphs were generated using SigmaPlot (Systat Software, San Jose, CA). Figures 10(a)-(h) are polar plots in which the distance from the origin is given by the wind-adjusted flame height ratio H_f/H_f^0 and the angle between the positive horizontal axis and the radius vector is the wind direction as recorded by the NDAWN stations. They represent an annual composite of wind direction roses with respect to recorded fire events. It can be clearly seen that, for locations where the prevailing directions are westward, there are correspondingly higher values of H_f/H_f^0. Baseline data from the system can also inform the selection of plant species that will improve environmental health and quality. In the 2001-2002 period, areas that were under floodwaters can be seen transiting to other "absorbing" states, for example categories 2 to 7. Categories that exhibited significant changes between 1999 and 2001 include other crops, urban/developed areas, and woodland. Considering this subset of the state, which contains two of the major metropolitan areas, we can assess the impacts of forest conversion. For example, from the transition probability graphs (Figures 11(d)-(f)), the woodland transition probabilities to the urban/developed category are 0.0095 in 1999-2000, 0.0467 in 2000-2001, and 0.0411 for the full period 1999-2001. These probabilities are low, probably because most development on the eastern side of North Dakota does not significantly affect the predominant riparian forests that exist in this region. Since this area is greatly affected by periodic flooding, for the 2000-2001 period there were more instances of the water category transiting to other states. Areas in the grains, hay, and seeds category displayed an almost uniform probability of staying unchanged over the three-year period. Conclusions The goal of our research was to provide a systematic, quantitative, and innovative tool that supports decision-makers in forestry management. The designed system can be used to access forest management program initiatives, especially where these programs are lacking. Furthermore, the system is an integral component in spatially displaying areas where the best forestry policies may achieve the best results. The ADSS was designed as an adaptive dynamic framework, modularly constructed to optimize system capabilities and provide flexibility for future expansion. The system provides for real-time assessment of information from all affiliated entities. The system has a base stage deployed using ArcGIS Web services.
Web applications and services utilize configured authentication methods and a .NET security standard over HTTP with Windows security systems recognition. From this ADSS, a secure gateway for the delivery of North Dakota Spatial Analysis Project (SAP) derived data layers was designed. From this platform, resource information and key management variables can be retrieved, queried, or catalogued over non-authenticated web browsers. The ADSS has arisen as a vital link in the city, county, state, and federal forestry reporting mechanism for the NDFS and affiliated partners. For example, in 2001 alone, wildland fire reporting peaked at over 930 records, marking a significant success. The general public can utilize the system by querying multiple layers of records and thereby decipher management accomplishments. Since the system also provides baseline data on other resources, for example watersheds and geo-corrected hydrologic datasets, these can be utilized to secondarily address water quality issues or management opportunities identified through other resources. Several resource management forms were incorporated into the ADSS; for example, the most intricate form designed was for tracking community accomplishments for urban and community forestry programs. Measurable outcomes include the impact of forestry programs on local communities within each fiscal year, the impact of professional advice offered to individuals or communities, and federal funding per capita, among other deliverables. Other forms include wildland fire, Innovation in State and Private Forestry, forestry assistance and accountability measures, forestry training programs, forest health, and an online chat. Integral to most of these forms are associated relational databases. The databases store retrievable pertinent information related to forestry resource management. Multi-faceted information can be retrieved from the system, from current reports to filed reports, and used in a myriad of ways. Figure 1. An overview map of North Dakota, USA. Figure 2. North Dakota Forest Service Decision Support System (NDFSDSS) system configuration. Figure 4. Multi-component Accountability Measures Report form designed to integrate several levels of data in a master RDBMS. Figure 6. Report a Forest Threat form, featuring forest threats in several categories: (a) invasive plants, (b) insects, (c) diseases, (d) climate, (e) loss of open space, (f) pollution, (g) wildland fires, (h) other invasive species, (i) unmanaged recreation, and an "other" category. Figure 7. A map depicting 2012 wildland fire locations with graduated symbols depicting burned areas. The hydrography layer shows all perennial, intermittent, and ephemeral streams.
Other critical areas that can be easily queried include: a) base Non-Industrial Private Forest (NIPF) acres in important forest resource areas, b) acres covered by current Forest Stewardship plans, c) acres in important forest resource areas covered by current Forest Stewardship plans, d) total number of acres in important forest resource areas being managed sustainably, as defined by a current Forest Stewardship Plan, and e) acres currently under an Environmental Quality Incentives Program (EQIP) management plan. Using Markovian random processes (e.g., [13]), we utilized the Forest Stewardship Program data to model the transition potentials and areal changes for eastern North Dakota. Our basic paradigm was to define forest transition as a first-order Markov process; a minimal computational sketch of the corresponding transition-matrix estimate is given below.
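As the computational sketch referred to above, the following Python/NumPy fragment estimates the first-order Markov transition matrix p_ij = n_ij / n_i from two integer-coded land-cover rasters and notes how multi-year matrices compose. It is a generic illustration under assumed data layouts, not code from the described system.

```python
import numpy as np

def transition_matrix(cat_t0, cat_t1, n_categories):
    """Estimate first-order Markov transition probabilities p_ij = n_ij / n_i.

    cat_t0, cat_t1 : integer-coded category arrays (same shape) at two dates
    n_categories   : number of land-cover categories (codes 0..n_categories-1)
    """
    counts = np.zeros((n_categories, n_categories), dtype=float)
    for i, j in zip(cat_t0.ravel(), cat_t1.ravel()):
        counts[i, j] += 1
    row_totals = counts.sum(axis=1, keepdims=True)
    # Avoid division by zero for categories absent at the first date.
    return np.divide(counts, row_totals,
                     out=np.zeros_like(counts), where=row_totals > 0)

# Multi-step composition, e.g. 1999-2000 and 2000-2001 matrices give 1999-2001:
# P_1999_2001 = P_1999_2000 @ P_2000_2001
```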
5,567.8
2014-05-30T00:00:00.000
[ "Computer Science" ]
The Neuroprotective Role of Protein Quality Control in Halting the Development of Alpha-Synuclein Pathology Synucleinopathies are a family of neurodegenerative disorders that comprises Parkinson’s disease, dementia with Lewy bodies, and multiple system atrophy. Each of these disorders is characterized by devastating motor, cognitive, and autonomic consequences. Current treatments for synucleinopathies are not curative and are limited to improvement of quality of life for affected individuals. Although the underlying causes of these diseases are unknown, a shared pathological hallmark is the presence of proteinaceous inclusions containing the α-synuclein (α-syn) protein in brain tissue. In the past few years, it has been proposed that these inclusions arise from the self-templated, prion-like spreading of misfolded and aggregated forms of α-syn throughout the brain, leading to neuronal dysfunction and death. In this review, we describe how impaired protein homeostasis is a prominent factor in the α-syn aggregation cascade, with alterations in protein quality control (PQC) pathways observed in the brains of patients. We discuss how PQC modulates α-syn accumulation, misfolding and aggregation primarily through chaperoning activity, proteasomal degradation, and lysosome-mediated degradation. Finally, we provide an overview of experimental data indicating that targeting PQC pathways is a promising avenue to explore in the design of novel neuroprotective approaches that could impede the spreading of α-syn pathology and thus provide a curative treatment for synucleinopathies. INTRODUCTION Maintaining protein homeostasis is essential for normal cellular function and viability. This is overseen by PQC mechanisms, through the control of protein synthesis, localization, folding/refolding, degradation and formation of protein inclusions. At the post-translational level, PQC is orchestrated by several mechanisms including chaperones that maintain correct protein conformation or help refold misfolded proteins; and the UPS and ALP, which degrade proteins that are irreversibly misfolded, damaged, or are no longer required by the cell. In this review, we focus on these aspects of PQC; with other PQC pathways reviewed elsewhere (Wolff et al., 2014;Dubnikov et al., 2017). In eukaryotes, protein chaperones are essential for ensuring the correct folding of nascent proteins and refolding of misfolded proteins. Hsps or heat shock chaperones (Hscs) are a prominent group of chaperones and they can be found in the ER, mitochondria, cytoplasm or extracellular space (Hartl et al., 2011;Wyatt et al., 2013). Protein degradation through the UPS is regulated by the sequential activity of E1, E2, and E3 enzymes that conjugate primarily K48-linked ubiquitin (Ub) chains onto lysine residues in proteins destined for elimination through the 26S proteasome (Passmore and Barford, 2004). The ALP acts mainly through macroautophagy and CMA. In macroautophagy, cytoplasmic content (including soluble and aggregated proteins) is engulfed by a double-membrane to form an autophagosome that fuses with the lysosome forming an autolysosome, degrading the autophagosomal content (Bento et al., 2016). In CMA, Hsc70 specifically binds to and targets proteins containing KFERQ-like motifs to the lysosomal receptor Lamp2A for client import through the lysosomal membrane, and subsequent degradation by lysosomal hydrolases (Cuervo and Wong, 2014). 
In addition, chaperones and ubiquitination systems promote the spatial sequestration of misfolded proteins into inclusions (aggresome/Q-bodies) and mediate the lysosomal degradation of toxic aggregates through the aggresomeautophagy and multivesicular body pathways (Johnston et al., 1998;Sahu et al., 2011;Escusa-Toret et al., 2013;Sontag et al., 2017). These pathways are regulated by K63-linked Ub chains and are critical for the degradation of aggregated proteins including α-synuclein (α-syn) (Tanaka et al., 2004;Filimonenko et al., 2007;Tofaris et al., 2011). Inefficient PQC is implicated in protein toxicity, gain-or loss-of-function in many pathologies, including several neurodegenerative diseases known as synucleinopathies. Synucleinopathies, which include PD, LBD and MSA, are characterized by the pathologic accumulation and aggregation of α-syn (McCann et al., 2014). As some mutations altering PQC machinery are associated with familial forms of synucleinopathies and α-syn pathologic aggregates impair PQC, targeting the PQC machinery has become a promising therapeutic strategy for opposing the toxic effects of misfolded α-syn aggregates (Figure 1). Aggregation of α-syn leads to the formation of proteinaceous inclusions termed Lewy bodies (LB) and Lewy neurites (Wakabayashi et al., 1998). In PD, α-syn pathology has been shown to spread from brainstem to neocortex following a specific pattern (Braak et al., 2003). Recent evidence suggests that this is due to the prion-like, cell-to-cell propagation of α-syn aggregates (Masuda-Suzukake et al., 2013;Goedert et al., 2016). This concept was established in cells containing α-syn fibrils that secrete α-syn seeds taken up by surrounding healthy cells. In these recipient cells, exogenous protofibrils seed the aggregation of endogenous soluble α-syn monomers, causing α-syn to adopt an insoluble β-sheet conformation. This results in the formation of new α-syn seeds, which spread into neighboring cells (Volpicelli-Daley et al., 2011;Luk et al., 2012a,b;Mougenot et al., 2012). TARGETING PQC DEFECTS AS POTENTIAL NEUROPROTECTIVE STRATEGIES AGAINST α-syn PATHOLOGY Chaperones The first indication that chaperones confers neuroprotection in α-syn-induced pathogenesis was Hsp70 overexpression protecting against α-syn toxicity in Drosophila . Accordingly, modulating chaperone function through chemical or genetic approaches holds great therapeutic promise for synucleinopathies. Chaperones ensure the correct folding of nascent and mature protein chains (Ebrahimi-Fakhari et al., 2011;Sharma and Priya, 2017). They also prevent seeding of new aggregates and fibrillization by occluding surfaces that may serve as platforms to induce misfolding of native proteins (Hartl et al., 2011). To some extent, Hsp110, Hsp70 and Hsp40 chaperones can disassemble α-syn fibrillary aggregates in vitro (Duennwald et al., 2012;Gao et al., 2015). Enhancing disaggregase activity genetically or with pharmacological modulators could counteract α-syn aggregation Jackrel and Shorter, 2015;Shorter, 2016;Sharma and Priya, 2017). Whether disaggregation occurs in vivo remains to be established, and since this process might generate soluble, potentially toxic forms of misfolded α-syn, simultaneous enhancement of α-syn degradation is likely necessary for beneficial effects. Targeting Hsp70/Hsp90 signaling is of prime interest, not only in synucleinopathies, but also in other adultonset proteinopathies (Pratt et al., 2015). 
These chaperones have opposing effects: Hsp90 stabilizes its clients, whereas Hsp70 directs them for proteasomal degradation upon Hsp90 dissociation. In yeast, cellular, or animal models of PD, inhibiting Hsp90 activity (Auluck et al., 2005; Putcha et al., 2010) or stimulating the activity of Hsp70 (McLean et al., 2002; Klucken et al., 2004; Zhou et al., 2004, 2011; Shin et al., 2005; Yu et al., 2005; Batelli et al., 2008; Outeiro et al., 2008) and that of its collaborator Hsp40 (McLean et al., 2002; Fan et al., 2006) reduces α-syn oligomerization, inclusion formation, and toxicity, and diminishes α-syn levels (Table 1). Although induction of chaperone expression in various cellular locations during proteotoxic stress and its associated stress response is observed in the brains of patients affected by synucleinopathies, α-syn aggregates still accumulate, indicating that the chaperone machinery is overwhelmed. This is supported by findings that many chaperones (including Hsp70, Hsp90 and Hsp40) or mediators of the heat-shock response (HDAC6) are found in LBs (Table 1), possibly reflecting a cellular attempt to sequester soluble, harmful misfolded species of α-syn (Escusa-Toret et al., 2013). Other chaperones can also mitigate α-syn aggregation and toxicity in various models (see Table 1). Overall, it appears evident that modulation of chaperone function is an innovative therapeutic approach against α-syn toxicity. In a clinical context, where widespread α-syn aggregation has already occurred, a global increase in chaperoning activity such as stimulation of the heat-shock response (Du et al., 2014) might have a greater impact than manipulation of individual chaperones. The use of pharmacological chaperones (e.g., flavonoids or polyphenols, Caruana et al., 2011; Ren et al., 2016; Gautam et al., 2017) to prevent or revert α-syn aggregation may also complement therapeutic modulation of endogenous chaperones. FIGURE 1 | Principal PQC mechanisms involved in α-syn homeostasis and potential therapeutic approaches. In physiologic conditions, misfolded α-syn protein is degraded by PQC machinery: the UPS is responsible for ubiquitination of α-syn leading to its proteasomal degradation, while macroautophagy and CMA both lead to lysosomal degradation of misfolded α-syn. In synucleinopathies, alterations in these protective mechanisms result in the accumulation of misfolded α-syn in aggregates and Lewy bodies (LB) that lead to neuronal dysfunction and death. Genetic or pharmaceutical approaches can restore altered PQC pathways or stimulate alternative PQC pathways: (1) Inhibition of α-syn expression could prevent its pathological accumulation. (2) Overexpression of the Lamp2A lysosomal receptor could increase the CMA of misfolded α-syn. (3) Pharmaceutical inhibition of mTOR, a negative regulator of autophagy, stimulates macroautophagy, preventing α-syn accumulation and aggregation. (4) Overexpression of the transcription factor NRF2 activates both macroautophagy and CMA and stimulates the lysosomal degradation of misfolded α-syn. (5) Improving lysosomal hydrolase activity or (6) stimulating lysosomal biogenesis could also enhance α-syn lysosomal degradation. α-syn aggregation can also be prevented by stimulation or overexpression of (7) endogenous or (8) secretory chaperones. Table 1 legend. Targets: main PQC pathways and biological targets with therapeutic potential. Physiologic function: function of the target with respect to α-syn-relevant biological pathways.
Implication in disease: pathologic evidence for the implication of the target in patients with synucleinopathies. Therapeutic strategies: describes experimental strategies used to manipulate a given target. Therapeutic effect: describes biologic effects on α-syn pathology observed upon application of the corresponding therapeutic strategies. Superscript numbers indicate the corresponding references for each model. The corresponding human Gene Symbol related to proteins of interest is indicated in parentheses. Since proteasomes can degrade α-syn (Bennett et al., 1999), and regulation of α-syn ubiquitination has been implicated in PD (Liani et al., 2004;Rott et al., 2008Rott et al., , 2011, enhancing UPS activity could stimulate α-syn degradation and reduce aggregation-linked pathology (Opattova et al., 2015). Non-aggregated α-syn could be specifically targeted to the proteasome, thereby preventing aggregated α-syn from further inhibiting proteasome catalytic activity (Stefanis et al., 2001;Snyder et al., 2003;Chen et al., 2006). Selective enhancement of α-syn targeting to proteasomes is a more desirable approach to broader enhancement of UPS activity, which may lead to serious adverse effects. This could be achieved by increasing the activity of the specific machinery that controls the ubiquitination of α-syn, such as the druggable deubiquitinase USP9X (Rott et al., 2011, Table 1), although only inhibitors have been reported so far (Peterson et al., 2015). Deubiquitination of α-syn might redirect the α-syn burden toward the ALP, which is generally recognized as a more efficient α-syn degradation pathway than the UPS (Vogiatzi et al., 2008). It should be noted that α-syn ubiquitination can serve as a signal for lysosome-dependent degradation (Tofaris et al., 2011;Braun, 2015;Alexopoulou et al., 2016), illustrating a complex cross-talk between post-translational modifications of α-syn and cellular degradation machineries (Choi et al., 2012;Haj-Yahya et al., 2013;Shahpasandzadeh et al., 2014;Tenreiro et al., 2014;de Oliveira et al., 2017). It remains unclear which α-syn degradation pathway is favored, therefore further study is needed before a viable therapeutic strategy can be designed to enhance UPS-mediated α-syn degradation. Autophagy-Lysosome Pathway The ALP is thought to be the most efficient pathway for degradation of α-syn (Vogiatzi et al., 2008), with dysfunction causing accumulation and aggregation of α-syn. Defects in the ALP have been linked with an increasing number of genetic variants identified as causative or associated with PD risk (Gan-Or et al., 2015), including Vps35, a component of the retromer that mediates retrograde transport from endosomes to Golgi, the lysosomal ATPase pump ATP13A2, and LRRK2 (Ramirez et al., 2006;Usenovic et al., 2012;Orenstein et al., 2013;Kong et al., 2014;Tsunemi and Krainc, 2014;Tang et al., 2015;Follett et al., 2016). Polymorphisms in genes encoding lysosomal enzymes, acid sphingomyelinase (SMPD1 gene), and β-glucocerebrosidase (GBA, GBA1 gene), are also risk factors for synucleinopathies (Neumann et al., 2009;Dagan et al., 2015;Gan-Or et al., 2015). A reduction in GBA expression and activity is observed in the substantia nigra and cerebellum of patients with sporadic PD (Gegg et al., 2012), and the inhibition of GBA or its transporter Limp2 is sufficient to stimulate α-syn aggregation through autophagic inhibition (Rothaug et al., 2014;Du et al., 2015). 
Polymorphisms in the lysosomal K+ channel-encoding gene TMEM175 are risk factors for PD (Nalls et al., 2014). TMEM175 deficiency causes ALP dysfunction and increased α-syn aggregation (Jinn et al., 2017). Additional ALP-related genes have recently been linked to PD (Chang et al., 2017), converging into a unifying theory for PD pathogenesis, where the ALP is challenged by defects in synaptic exocytosis, endocytosis, and endosomal trafficking, resulting in neuron dysfunction and death (Trinh and Farrer, 2013). Macroautophagy is responsible for degrading most of the aggregated, proteasome-resistant α-syn, and enhancing this process represents a promising therapeutic strategy (Figure 1 and Table 1). The mTOR inhibitor rapamycin activates macroautophagy, prevents α-syn accumulation and aggregation, and ameliorates motor symptoms, but adverse effects have been reported (Kahan, 2011; Li et al., 2014; Tian et al., 2016). More recently, it was shown that acetylation of α-syn increases macroautophagy-mediated degradation of α-syn aggregates, and knock-out of the α-syn deacetylase SIRT2 protects against α-syn-induced dopaminergic cell loss in vivo (de Oliveira et al., 2017, see Table 1). Independently of UPS targeting, modulation by the deubiquitinase USP8 and the Ub-ligase Nedd4 of α-syn modification by K63-linked Ub appears to control its autophagic degradation (Braun, 2015; Alexopoulou et al., 2016). A better understanding of the specific effects of various post-translational modifications will be necessary to appropriately modulate α-syn clearance by macroautophagy. CMA specifically degrades physiologic α-syn (which contains a KFERQ-like motif, VKKDQ), whereas pathologic α-syn inhibits CMA, thus enhancing aggregation of itself and other LB components (Martinez-Vicente et al., 2008; Vogiatzi et al., 2008; Xilouri et al., 2009). Accordingly, overexpression of certain PD-associated microRNAs is suspected to be responsible for pathologic CMA downregulation through decreased Hsc70 and Lamp2A expression. This correlates with α-syn accumulation in brains of patients with PD (Alvarez-Erviti et al., 2013; Murphy et al., 2015). CMA-mediated degradation of α-syn and LRRK2 is also impaired by mutants of these proteins that cause inherited PD (A53T and A30P α-syn mutants; G2019S and R1441C LRRK2 mutants). These mutants are recognized by Hsc70 and targeted to the lysosomal membrane, but fail to be translocated into the lysosome due to an aberrantly high affinity for Lamp2A. This impairs CMA-mediated degradation of these proteins and CMA activity, contributing to PD pathology (Cuervo et al., 2004; Orenstein et al., 2013). Deficiencies in CMA (caused, for example, by PD-associated LRRK2 or VPS35 mutations; Orenstein et al., 2013; Tang et al., 2015; Ho et al., 2016) cause accumulation of α-syn, favoring the emergence of aberrant α-syn species that hinder the function of the Lamp2A receptor. Lamp2A overexpression efficiently reduces the α-syn burden in cellular and animal models of PD and counteracts motor deficits (see Table 1). Whether this strategy can reverse α-syn pathology in a clinical context, in a safe and effective way, still needs to be determined, especially since CMA cannot mediate the degradation of aggregated species. The role of CMA in PD pathogenesis has been reviewed recently (Sala et al., 2016), and will not be discussed further here.
Notably, strategies aiming at activating both macroautophagy and CMA are also being explored (Gan et al., 2012; Lastres-Becker et al., 2016), such as overexpression of the transcription factor NRF2, which protects against α-syn pathology and increases its turnover through unknown ALP-dependent mechanisms (Skibinski et al., 2017). Other PQC Mechanisms Other PQC mechanisms exist that are less commonly referred to in the context of α-syn pathology. α-syn synthesis could be reduced in the first place to prevent its accumulation. Several microRNAs target α-syn mRNA to reduce its expression in cell culture and in vivo (Junn et al., 2009; Doxakis, 2010; Singh and Sen, 2017). The therapeutic potential of this mechanism remains to be evaluated, especially regarding potential adverse effects of a lack of functional α-syn on the dopamine system (Abeliovich et al., 2000). Finally, unconventional secretion of misfolded proteins (misfolding-associated protein secretion, MAPS) was recently suggested to protect individual cells from misfolded proteins by delivering them to the extracellular space (Lee et al., 2016). In the context of a multicellular organism, however, this secretion might be harmful by contributing to the prion-like spreading of misfolded proteins including α-syn. CONCLUDING REMARKS Extensive genetic and experimental evidence indicates that PQC deficiencies influence the development of synucleinopathies. Despite the potential of several experimental strategies targeting PQC to attenuate α-syn pathology, translation into therapy is still pending. Whether these approaches will be clinically effective, where synuclein pathology is pre-existing, remains unknown. No successful clinical trial has been reported for synucleinopathies, but targeting PQC bears great promise, as such strategies have proven effective in treating diseases such as cystic fibrosis or cancer (Teicher and Tomaszewski, 2015; Hegde et al., 2017). Further functional characterization of genes associated with synucleinopathies will provide important insights regarding the molecular mechanisms that can be targeted to enhance PQC function, boost α-syn degradation, and prevent its aggregation. For patients with familial synucleinopathies, the upcoming era of personalized medicine, including the use of patient-derived induced pluripotent stem cells and genome editing, might allow correction of patient-specific mutations or PQC impairments. However, in sporadic cases, where genetic contributions are unknown (the majority of PD cases), simultaneous enhancement of several components of the PQC machinery will likely be necessary to stop the progression, or even reverse the course, of these devastating neurodegenerative diseases. AUTHOR CONTRIBUTIONS D-LM, BV, and TD: conception and organization of content of the mini-review. D-LM: design and generation of Figure 1, writing of introduction, and sections on α-syn pathology and defective PQC in synucleinopathies. BV: design and generation of Table 1, writing of section on therapeutic strategies, the abstract and concluding remarks, and assembly of manuscript. EF: overall revision. TD: in-depth editing of manuscript, and overall revision.
3,994.2
2017-09-27T00:00:00.000
[ "Biology" ]
Universal chiral magnetic effect in the vortex lattice of a Weyl superconductor It was shown recently that Weyl fermions in a superconducting vortex lattice can condense into Landau levels. Here we study the chiral magnetic effect in the lowest Landau level: The appearance of an equilibrium current $I$ along the lines of magnetic flux $\Phi$, due to an imbalance between Weyl fermions of opposite chirality. A universal contribution $dI/d\Phi=(e/h)^2\mu$ (at equilibrium chemical potential $\mu$ relative to the Weyl point) appears when quasiparticles of one of the two chiralities are confined in vortex cores. The confined states are charge-neutral Majorana fermions. I. INTRODUCTION This paper combines two topics of recent research on Weyl fermions in condensed matter. The first topic is the search for the chiral magnetic effect in equilibrium [1][2][3][4][5][6][7][8][9]. The second topic is the search for Landau levels in a superconducting vortex lattice [10][11][12][13]. What we will show is that the lowest Landau level in the Abrikosov vortex lattice of a Weyl superconductor supports the equilibrium chiral magnetic effect at the universal limit of $(e/h)^2$, unaffected by any renormalization of the quasiparticle charge by the superconducting order parameter. Let us introduce these two topics separately and show how they come together. The first topic, the chiral magnetic effect (CME) in a Weyl semimetal, is the appearance of an electrical current I along lines of magnetic flux Φ, in response to a chemical potential difference µ₊ − µ₋ between Weyl fermions of opposite chirality. The universal value [14][15][16]

$\dfrac{dI}{d\Phi} = \left(\dfrac{e}{h}\right)^{2}(\mu_+ - \mu_-) \qquad (1.1)$

follows directly from the product of the degeneracy (e/h)Φ of the lowest Landau level and the current per mode of (e/h)(µ₊ − µ₋). A Weyl semimetal in equilibrium must have µ₊ = µ₋, hence a vanishing chiral magnetic effect, in accord with a classic result of Levitov, Nazarov, and Eliashberg [17,18] that the combination of Onsager symmetry and gauge invariance forbids a linear relation between electrical current and magnetic field in equilibrium. Because superconductivity breaks gauge invariance, a Weyl superconductor is not so constrained: As demonstrated in Ref. 8, one of the two chiralities can be gapped out by the superconducting order parameter. When a magnetic flux Φ penetrates uniformly through a thin film (no vortices), an equilibrium current appears along the flux lines, of a magnitude set by the equilibrium chemical potential µ± of the ungapped chirality. The renormalized charge e* < e determines the degeneracy (e*/h)Φ of the lowest Landau level in the superconducting thin film. The second topic, the search for Landau levels in an Abrikosov vortex lattice, goes back to the discovery of massless Dirac fermions in d-wave superconductors [19,20].
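Restating the mode-counting argument behind Eq. (1.1) above in equation form (this simply spells out the degeneracy-times-current-per-mode reasoning given in the text):

```latex
% Degeneracy of the lowest Landau level threaded by flux \Phi:
N_0 = \frac{e}{h}\,\Phi .
% Each chiral mode carries a current (e/h)(\mu_+ - \mu_-), so
I = N_0 \cdot \frac{e}{h}\,(\mu_+ - \mu_-)
  = \left(\frac{e}{h}\right)^{2}\Phi\,(\mu_+ - \mu_-)
\;\;\Longrightarrow\;\;
\frac{dI}{d\Phi} = \left(\frac{e}{h}\right)^{2}(\mu_+ - \mu_-).
```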
In that d-wave context, scattering by the vortex lattice obscures the Landau level quantization [21][22][23]; however, as discovered recently [13], the chirality of Weyl fermions protects the zeroth Landau level by means of a topological index theorem. The same index theorem enforces the (e/h)Φ degeneracy of the Landau level, even though the charge of the quasiparticles is renormalized to e* < e. Does this topological protection extend to the equilibrium chiral magnetic effect, so that we can realize Eq. (1.2) with e* replaced by e? That is the question we set out to answer in this work. The outline of the paper is as follows. In the next section we formulate the problem of a Weyl superconductor in a vortex lattice. We then show in Sec. III that a flux bias of the superconductor can drive the quasiparticles into a topologically distinct phase where one chirality is exponentially confined to the vortex cores. The unconfined Landau bands contain electron-like or hole-like Weyl fermions, while the vortex-core bands are charge-neutral Majorana fermions. The consequences of this topological phase transition for the chiral magnetic effect are presented in Sec. IV. We support our analytical calculations with numerical simulations and conclude in Sec. V. II. FORMULATION OF THE PROBLEM Figure 1 shows the system we are considering, a Weyl superconductor in a magnetic field, in either a flux-biased or a current-biased circuit. For the Weyl superconductor we take the heterostructure configuration of Meng and Balents [24]: a stack in the z-direction of layers of Weyl semimetal alternating with an s-wave superconductor. A magnetization β perpendicular to the layers separates the Weyl cones along k_z into opposite chiralities. Each Weyl cone is twofold degenerate in the electron-hole degree of freedom, mixed by the superconducting pair potential ∆_0. The Bogoliubov-De Gennes Hamiltonian H(k), Eq. (2.1), follows Refs. [1,24,25]. The Pauli matrices σ_i and τ_i (i = x, y, z, with i = 0 for the unit matrix) act on the spin and orbital degrees of freedom, respectively. The wave vector k = (k_x, k_y, k_z) is measured in units of the inverse of the lattice constant a_0 of a cubic atomic lattice. Energies are measured in units of the nearest-neighbor hopping energy t_0 (taken isotropic for simplicity). The chemical potential is µ, the vector potential is A, and the pair potential has amplitude ∆_0 and phase φ. We set ℏ ≡ 1 and take the electron charge e > 0. For definiteness we also fix the sign β > 0. The Fermi velocity v_F = a_0 t_0/ℏ is unity for our chosen units. The superconductor has length L parallel to the applied magnetic field B = ∇ × A. The dimensions in the perpendicular direction are W × W, large compared to the London penetration length λ. This is the key difference with Ref.
8, where W < λ was assumed in order to prevent the formation of Abrikosov vortices. For W λ l m ξ 0 (with l m = /eB the magnetic length and ξ 0 = v F /∆ 0 the superconducting coherence length) we are in the vortex phase of a strong-type-II superconductor, where the magnetic field penetrates in the form of vortices of magnetic flux Φ 0 = h/2e. The vortex lattice has two vortices per unit cell, we take the square array (lattice constant d 0 ) indicated in Fig. 1. In the gauge with ∇·A = 0 the superconducting phase is determined by The first equation specifies a 2π winding of the phase around each vortex core at R n , and the second equation ensures that the superconducting velocity has vanishing divergence. Since the vortex cores occupy only a small fraction (ξ 0 /l m ) 2 of the volume, we may take a uniform pair potential amplitude |∆| = ∆ 0 and a uniform magnetic field strength |B| = B 0 . The dominant effect of the vortex lattice is the purely quantum mechanical scattering of quasiparticles by the superconducting phase [22]. The vector potential contains a constant contribution A z = Λ/e in the z-direction controlled by either the flux bias or the current bias [26]: (2.4) III. CHIRALITY CONFINEMENT IN A VORTEX LATTICE In the absence of a vortex lattice, for W < λ, it was shown in Ref. 8 that a flux bias or current bias confines Weyl fermions of one definite chirality to the surfaces parallel to the magnetic field, gapping them out in the bulk. Here we consider the opposite regime W λ in which a vortex lattice forms in the Weyl superconductor. We will show that effect of the Λ bias is qualitatively different: both chiralities remain gapless in the bulk, but one of the two chiralities is confined to the vortex cores. The analytics is greatly simplified if the magnetic field is along the same z-axis as the separation of the Weyl cones. The corresponding vector potential is where for definiteness we take Λ ≥ 0. This is the fluxbiased geometry of Fig. 1b. Numerical simulations indicate that the current-biased geometry of Fig. 1c, with B along the y-axis, is qualitatively similar -but we have not succeeded in obtaining a complete analytical treatment in that geometry. A. Landau bands We have calculated the eigenvalues and eigenfunctions of the tight-binding Hamiltonian (2.1) using the Kwant code [28] as described in Ref. 13. We take parameters β = t 0 , ∆ 0 = 0.5 t 0 , µ = 0. We arrange h/2e vortices on the square lattice shown in Fig. 1a. The lattice constant d 0 = N a 0 of the vortex lattice determines the magnetic field B 0 = (h/e)d −2 0 . In the numerics the full nonlinear k-dependence of H(k) is used, while for the analytical expressions we expand near k = 0. The zero-field spectra in Figs. 2a and 2b reproduce the findings of Ref. 8: For small Λ and provided that ∆ 0 < β one sees two pairs of oppositely charged gapless Weyl cones, symmetrically arranged around k z = 0 at momenta K ± and −K ± given by The pair at |k z | = K − is displaced relative to the other pair at |k z | = K + by the flux bias Λ, becoming gapped when Λ is in the critical range Application of a magnetic field in Figs. 2c and 2d shows the formation of chiral zeroth-order Landau bands: a pair of electron-like Landau levels of opposite chirality and a similar pair of hole-like Landau levels. The Landau bands have a linear dispersion in the z-direction, along the magnetic field, while they are dispersionless flat bands in the x-y plane. 
For k_z near K_± the electron-like and hole-like dispersions are given by Eq. (3.4) [13], and similar expressions hold near −K_±. A k_z-dependent factor cos θ(k) renormalizes the charge and velocity of the quasiparticles [8,27]. The degeneracy of a Landau band is not affected by charge renormalization [13]: each electron-like or hole-like Landau band contains N_0 = (e/h)Φ chiral modes, determined by the ratio of the enclosed flux Φ = B_0 W² and the bare single-electron flux quantum h/e. While the dispersion of a Landau band in the Brillouin zone changes only quantitatively with the flux bias, the flux bias does have a pronounced qualitative effect on the spatial extension in the x-y plane. As shown in Fig. 3, the intensity profile |ψ_±(x, y)|² of a zeroth-order Landau level at |k_z| = K_± peaks when r = (x, y) approaches a vortex core at R_n. The dependence on the separation δr = |r − R_n| is a power law, Eq. (3.7) [13]. When Λ enters the critical range (3.3) this power-law decay applies only to one of the two chiralities: the two Landau bands at k_z = K_+ and k_z = −K_+ with dE/dk_z < 0 still have the power-law decay (3.7), but the other two bands with dE/dk_z > 0 merge at k_z = 0 and become exponentially confined to a vortex core, as we derive in the next subsection. [Fig. 3 caption:] In the insets in panel c the same data is presented using a log-log scale (for the zeroth Landau level) and a log-linear scale (for the vortex-core band). The Landau band is spread over the magnetic unit cell, with an algebraic divergence at the vortex cores, whereas the vortex-core band is exponentially localized at the vortices. The profiles were calculated for the same set of parameters as the spectra in Fig. 2, with the Landau band corresponding to the state marked with a square, and the vortex-core band corresponding to the state marked with a circle. To improve the spatial resolution, we used a larger ratio d_0/a_0 = 102. B. Vortex core bands To demonstrate the exponential confinement in a vortex core of the τ_z = +1 chirality we expand the Hamiltonian (2.1) to first order in k_x, k_y at k_z = 0, µ = 0. The applied magnetic field does not contribute on length scales below l_m, so we only need to include the constant eA_z = Λ term in the vector potential. The winding of the superconducting phase is accounted for by the factor e^{iϕ}, in polar coordinates (x, y, z) = (r cos ϕ, r sin ϕ, z) centered on the vortex core. A. Charge renormalization We summarize the formulas from Ref. 8 that show how charge renormalization by the superconductor affects the CME. The equilibrium expectation value I_z of the electrical current in the z-direction is given by a thermally weighted sum over transverse modes n with energies E_n [8]. The expectation values that we need are those of the velocity operator v_z = ∂H/∂k_z and the charge operator Q = −e ∂H/∂µ. Following Ref. 8 we also define the "vector charge", which may be different from the average (scalar) charge Q_0 ≡ ⟨Q⟩_E because the average of the current as the product of charge and velocity may differ from the product of the averages. The CME is a contribution to I_z that is linear in the equilibrium chemical potential µ, measured relative to the Weyl points. We extract this contribution by taking the derivative ∂_µ I_z in the limit µ → 0. Two terms appear, an on-shell term from the Fermi level and an off-shell term from energies below the Fermi level. At low temperatures, when −f′(E) → δ(E) becomes a delta function, the on-shell contribution J_on-shell involves only Fermi surface properties.
It is helpful to rewrite it as a sum over modes at E = 0. For that purpose we replace the integration over k z by an energy integration weighted with the density of states: (4.6) In the T → 0 limit a sum over modes remains, where we have restored the units of = h/2π. B. On-shell contributions We apply Eq. (4.7) to the vortex lattice of the fluxbiased Weyl superconductor. Derivatives with respect to A z are then derivatives with respect to the flux bias Λ. According to the dispersion relation (3.4a), the electronlike Landau band near K + has renormalized charges in the limit k z → K + , µ → 0. The charge renormalization factors cancel, so this Landau band with sign v z < 0 contributes to J on-shell an amount − 1 2 e/h times the degeneracy N 0 = (e/h)Φ, totalling − 1 2 (e/h) 2 Φ. Similarly, for the hole-like Landau band near −K − Eq. (3.4a) gives for the same contribution of − 1 2 (e/h) 2 Φ. The total onshell contribution for this chirality is (4.10) We can repeat the calculation for the electron-like band near K − and the hole-like band near −K − , the only change is the sign v z > 0, resulting in J on-shell (|k z | = K − ) = (e/h) 2 Φ. (4.11) We conclude that the Dirac fermions in the Landau bands of opposite chirality give identical opposite on-shell contributions ±(e/h) 2 Φ to ∂ µ I z . The net result vanishes when Λ is outside of the critical region (Λ c1 , Λ c2 ). When Λ c1 < Λ < Λ c2 one of the two chiralities is transformed into unpaired Majorana fermions confined to the vortex cores. The vortex-core bands have Q 0 = 0 at E = 0, so they have no on-shell contribution, resulting in (4.12) The coefficient (e/h) 2 contains the bare charge, unaffected by the charge renormalization. C. Off-shell contributions Turning now to the off-shell contributions (4.5c), we note that the Landau bands do not contribute in view of Eq. (3.4): For the vortex-core bands, off-shell contributions cancel because of particle-hole symmetry. This does not exclude off-shell contributions from states far below the Fermi level, where our entire lowenergy analysis no longer applies. In fact, as we show in Figs. 4 and 5, we do find a substantial off-shell contribution to ∂ µ I z in our numerical calculations (see App. A for details). Unlike the on-shell contribution (4.12), which has a discontinuity at Λ = Λ c1 , Λ c2 , the off-shell contribution depends smoothly on the flux bias and can therefore be extracted from the data. V. CONCLUSION In summary, we have demonstrated that a flux bias in a Weyl superconductor drives a confinement/deconfinement transition in the vortex phase: For weak flux bias the subgap excitations are all delocalized in the plane perpendicular to the vortices. With increasing flux bias a transition occurs at which half of the states become exponentially localized inside the vortex cores. The localized states have a definite chirality, meaning that they all propagate in the same direction along the vortices. (The sign of the velocity is set by the sign of the external magnetic field B 0 .) As a physical consequence of this topological phase transition we have studied the chiral magnetic effect. The states confined to the vortex cores are charge-neutral Majorana fermions, so they carry no electrical current. The states of opposite chirality, which remain delocalized, are charged, and because they all move in the same direction they can carry a nonzero current density j parallel to the vortices. 
This is an equilibrium supercurrent, proportional to the magnetic field B_0 and to the chemical potential µ (measured relative to the Weyl point). We have calculated that the supercurrent along the vortices jumps at the topological phase transition by an amount that, for a large system size, tends to a universal limit of (1/2) e/h per vortex [cf. Eq. (4.12)]. Remarkably, the proportionality constant contains the bare electron charge e, even though the quasiparticles carry a renormalized charge e* < e.

The chiral fermions confined in the vortex cores are a superconducting realization of the "topological coaxial cable" of Schuster et al. [29], where the fermions are confined to vortex lines in a Higgs field. There is one difference: the chiral fermions in the Higgs field are charge-e Dirac fermions, while in our case they are charge-neutral Majorana fermions. The difference manifests itself in the physical observable that serves as a signature of the confinement: for Schuster et al. it is a quantized current dI/dV = e^2/h per vortex out of equilibrium; in our case it is a quantized current dI/dµ = (1/2) e/h per vortex in equilibrium.

[Figure caption fragment: as in Fig. 4, but now for a fixed flux bias eA_z = 1.05/a_0 in the two-cone regime, showing the contributions to ∂µI_z from different momenta k_z along the magnetic field. The on-shell contribution (total minus off-shell) peaks at the momenta where the Fermi level crosses the chiral Landau bands; the vortex-core bands at k_z = 0 have vanishing on-shell contribution.]

[Figure caption fragment: top panel, low-energy dispersion relation for the corresponding system. The on-shell contribution to the current response appears only at momenta for which a band crosses the Fermi energy. In the four-cone regime four peaks are present, whose contributions cancel out; in the two-cone regime the vortex-core band at k_z = 0 has a vanishing on-shell contribution, while the contribution of the other two Landau levels remains unchanged. The colored data points give the total response together with the off-shell and on-shell contributions; the dotted line µe^2Φ/h^2 is the theoretical prediction (4.12) for the on-shell contribution to first order in µ, a good approximation for small µ. System size N = 18.]
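As a short consistency check on the per-vortex value quoted above (not part of the original derivation), one can count vortices explicitly. Assuming each vortex of the lattice carries one superconducting flux quantum h/2e, the number of vortices threading the area W^2 follows from Φ = B_0 W^2, and combining it with the on-shell result (4.12) reproduces the per-vortex slope:

```latex
% vortex counting; the h/2e flux quantum per vortex is an assumption of this sketch
\Phi = B_0 W^2, \qquad N_v = \frac{\Phi}{h/2e} = \frac{2e\Phi}{h}, \qquad
\left.\frac{\partial I_z}{\partial \mu}\right|_{\Lambda_{c1}<\Lambda<\Lambda_{c2}}
  = \Bigl(\frac{e}{h}\Bigr)^{2}\Phi
  = \frac{1}{2}\,\frac{e}{h}\,N_v ,
```

i.e. (1/2) e/h per vortex, with the bare charge e appearing even though the quasiparticle charge is renormalized.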
5,268.2
2019-11-01T00:00:00.000
[ "Physics" ]
Maximum a posteriori natural scene reconstruction from retinal ganglion cells with deep denoiser priors Visual information arriving at the retina is transmitted to the brain by signals in the optic nerve, and the brain must rely solely on these signals to make inferences about the visual world. Previous work has probed the content of these signals by directly reconstructing images from retinal activity using linear regression or nonlinear regression with neural networks. Maximum a posteriori (MAP) reconstruction using retinal encoding models and separately-trained natural image priors offers a more general and principled approach. We develop a novel method for approximate MAP reconstruction that combines a generalized linear model for retinal responses to light, including their dependence on spike history and spikes of neighboring cells, with the image prior implicitly embedded in a deep convolutional neural network trained for image denoising. We use this method to reconstruct natural images from ex vivo simultaneously-recorded spikes of hundreds of retinal ganglion cells uniformly sampling a region of the retina. The method produces reconstructions that match or exceed the state-of-the-art in perceptual similarity and exhibit additional fine detail, while using substantially fewer model parameters than previous approaches. The use of more rudimentary encoding models (a linear-nonlinear-Poisson cascade) or image priors (a 1 /f spectral model) significantly reduces reconstruction performance, indicating the essential role of both components in achieving high-quality reconstructed images from the retinal signal. Introduction A torrent of visual information arrives at each of our eyes, but only a small portion of it is transmitted to the brain via the optic nerve, which is comprised of the axons of the retinal ganglion cells (RGCs). Elucidating the nature of this encoded information, and the inference process the brain uses to interpret it, is fundamental to understanding biological vision. Image reconstruction provides a method of visualizing the information encoded in RGC signals, evaluating it using standard image quality metrics, and reasoning about how the brain might interpret it [1,2,3,4]. The fidelity and quality of reconstructed images also provides a useful objective function for optimizing the design of electrical stimulation patterns delivered by devices implanted to restore vision [5,6]. The simplest and most well-studied image reconstruction method is linear regression [1,3,7]. Optimal reconstruction kernels are learned for each RGC using least-squares regression of recorded responses to many visual images, and the reconstruction of a new incident image is computed with the sum of the filters weighted by the response of each cell. The quality of linearly reconstructed images can be enhanced by applying an autoencoder neural network to leverage natural image priors [8], or by using deep neural networks to non-linearly recover additional high spatial frequency image components [4]. Neural networks can also be directly trained (supervised) for reconstruction, but this is data-intensive, and to date has limited their use to simulated data, or low-dimensional stimuli and small numbers of cells [9]. These regression approaches leave substantial room for improvement and interpretation. 
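As a concrete reference point for the linear-regression baseline described above, here is a minimal sketch of the least-squares decoder (illustrative variable names and shapes, not the authors' code; bias terms and regularization are omitted):

```python
import numpy as np

def fit_linear_decoder(R, X):
    """Least-squares reconstruction kernels.

    R : (n_trials, n_cells)  recorded spike counts for the training images
    X : (n_trials, n_pixels) flattened training images
    Returns W : (n_cells, n_pixels), one reconstruction kernel per cell.
    """
    W, *_ = np.linalg.lstsq(R, X, rcond=None)
    return W

def linear_reconstruct(r, W):
    """Reconstruction of a new image: sum of kernels weighted by each cell's response."""
    return r @ W  # (n_pixels,)
```

The Bayesian approach introduced next replaces this single regression step with an explicit encoding likelihood and an explicit image prior.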
A Bayesian formulation, in which encoding model and prior probabilities are made explicit and are separately fitted, could provide a more flexible and interpretable solution, and could potentially improve the fidelity of reconstructed images. Here we present a method for approximate maximum a posteriori (MAP) image reconstruction from RGC spikes, that combines a retinal encoding model that accurately captures retinal responses [10] with state-of-the-art image priors that are implicitly embedded in deep denoising networks [11,12,13,14,15,16]. By separating the effects of image prior and retinal spiking response likelihood, our method offers two primary advantages over existing methods for reconstruction: (1) any pre-trained or closed-form natural image prior can be used, and the effects of different priors can be compared; and (2) any model of RGC encoding that provides an explicit likelihood can be used, and the method can quantify the relative importance of different model components in representing the visual signal, including spike train history, cell-to-cell correlations and output nonlinearities. We apply our method to reconstruct static flashed natural images from responses of several hundred macaque RGCs of identified types recorded with large-scale electrode arrays. We compare our method directly to published state-of-the-art linear and neural network regression methods (Section 4.1). The new method matches or significantly outperforms previous methods, producing sharper, more naturalistic reconstructions (Figure 2), and similar or greater perceptual similarity to ground truth ( Figure 3, Tables 1 and 2). However, our method also produces some reconstructions with distinctive spurious image structure, as would be expected when RGC signals are noisy and image priors dominate the reconstruction process. Finally, comparisons to more conventional encoding models and image priors reveal that both aspects of the approach are important for the most accurate reconstructions. Retinal data and stimuli Extracellular recordings from RGCs in the peripheral macaque retina were performed ex vivo using a 512-electrode system [17] as described previously [3]. Retinas were obtained from terminally anesthetized macaque monkeys used by other laboratories, in accordance with Institutional Animal Care and Use Committee requirements. Spikes from individual RGCs were identified with the YASS [18] spike-sorter. A 30-minute spatiotemporal white noise stimulus [19] was used to compute spatio-temporal receptive fields, and to identify cells of distinct types. Analysis focused on the four major RGC types of the primate retina (ON parasol, OFF parasol, ON midget, and OFF midget) [20,21], totaling roughly 700 cells per recording. The receptive fields [22] of all four cell types formed regular mosaics, uniformly covering a region of visual space ( Figure 1). Natural images were presented to the retina as described previously [3]. Images from the ImageNet database [23] were converted to grayscale and cropped to 256x160. Each pixel measured approximately 11 ⇥ 11 µm at the retina. Each stimulus image was displayed for 100 ms, followed by a 400 ms uniform gray display, allowing each image presentation to be treated as an independent trial. This trial design does not fully mimic natural vision because it does not account for eye movements [24], and because the temporal component of the stimulus is known to the reconstruction algorithm. 
Two retinas from different animals were used, with 19,000 and 10,000 image/response pairs, respectively. Details are summarized in Tables 4 and 5 in the Appendix. Figure 1: Receptive field mosaics from one retina for the four major RGC types (ON parasol, OFF parasol, ON midget, OFF midget) used in the image reconstructions. Image quality metrics were computed over the shaded blue region, to exclude areas that were insufficiently covered by receptive fields of recorded RGCs. MAP image reconstruction from RGC spikes MAP reconstruction estimates the stimulus image x from observed RGC spike trains s by minimizing the negative log of the posterior, log p(x | s), which can be expressed using Bayes' Rule as: Both terms in equation (1) have intuitive interpretations in the context of reconstruction from RGC spikes. The first, log p(s | x), is the negative log likelihood (NLL) of an encoding model describing the probabilistic spiking of RGCs given a stimulus. The parameters of this model can be learned from experimental data. The second is the negative log prior of the stimulus image x and can be learned from natural images independently of retinal responses. Because encoding models with varying levels of fidelity and detail can be mixed and matched with priors of varying sophistication, the MAP approach allows us to probe the distinct roles of these two components in image reconstruction [25,26]. RGC encoding models Encoding models for each RGC must be learned from the experimental data before performing MAP reconstruction. Two types of encoding models were fitted to the data: (1) a linear-nonlinear-Poisson (LNP) cascade model with an exponential nonlinearity, the most commonly used model of RGC responses to visual stimuli [7]; and (2) a generalized linear model (GLM) that augments the LNP model with a feedback loop and cross-connections between neighboring cells, and which can accurately capture fine spike timing structure and cell-cell correlations [10]. Encoding models were fitted on the entire training partition, and regularization hyperparameters were tuned by evaluating the test partition NLL for a small subset of RGCs of each type. Linear-nonlinear-Poisson (LNP) encoding model The LNP model is the de facto standard model for describing the probabilistic spiking of RGCs in response to visual stimuli [7]. The model parameters for a single RGC consist of a linear spatial filter m, and a scalar bias b. In the model, a scalar generator signal m T x + b is passed through a nonlinearity to compute a spike rate . The RGC spike count in a 150 ms interval is modeled as a draw from a Poisson distribution with rate . Assuming an exponential nonlinearity, the encoding NLL for a single RGC is which is convex in m and b. In practice, to ensure that the spatial filters were spatially compact and corresponded approximately with the receptive fields obtained using reverse correlation with white noise, the MAP objective was augmented with two regularization terms: an L1 sparsity-inducing penalty, and an L2 penalty enforcing similarity to the receptive field. The complete objective function with both regularization terms is described in A.4. The parameter optimization was solved separately for each cell using FISTA [27]. MAP reconstruction requires the joint encoding NLL of the observed spikes from every RGC given the image. 
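For reference, a minimal sketch of the single-cell LNP objective just described, with the exponential nonlinearity, Poisson spike counts, and the two regularizers (variable names are illustrative; additive constants are dropped):

```python
import numpy as np

def lnp_single_cell_nll(m, b, X, s, lam_l1=0.0, lam_l2=0.0, rf=None):
    """Negative log-likelihood of one RGC under the LNP model.

    m  : (d,) spatial filter          b : scalar bias
    X  : (n_trials, d) flattened stimulus images
    s  : (n_trials,) spike counts in the 150 ms counting window
    rf : (d,) white-noise receptive field used by the L2 similarity penalty
    """
    gen = X @ m + b                    # generator signal
    rate = np.exp(gen)                 # exponential nonlinearity -> Poisson rate
    nll = np.sum(rate - s * gen)       # Poisson NLL up to a constant (log s! dropped)
    nll += lam_l1 * np.sum(np.abs(m))  # L1 sparsity penalty on the filter
    if rf is not None:
        nll += lam_l2 * np.sum((m - rf) ** 2)  # L2 similarity to the receptive field
    return nll
```

The joint NLL used for reconstruction then follows by summing this objective over cells, as described next.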
Since the LNP model assumes that the RGC responses are statistically independent, this is simply the sum of the single-cell NLLs, which is convex in the image x: Generalized linear encoding models The generalized linear model (GLM) is an augmentation of the LNP model that incorporates the effects of spiking history and cell-cell correlations on neural response [10]. In the GLM, each RGC (indexed by i) is parameterized by a spatio-temporal stimulus , and a bias b i . The GLM was fitted to spike counts measured within 1 ms time bins, approximately matched to the refractory period of the cells [28,29]. To limit the number of parameters and improve computational efficiency, the stimulus filter was assumed to be space-time separable Using a sigmoidal nonlinearity and Bernoulli spiking, the encoding NLL used to fit a single cell (see A.5.2 for complete derivation) is To simplify the GLM, the filters h i , f i , and c (j) i were each represented as weighted sums over a set of cosine "bump" functions [10]. As with the LNP model, L1 and L2 regularization terms were added to constrain the spatial filters, and an additional L 1,2 group sparsity penalty for the coupling filters was added to eliminate spurious cell-cell correlations. The complete objective function is described in detail in A.5.3. Model parameters were found by alternating between spatial and temporal filter convex minimization steps for each RGC, using FISTA [27,30] for each step. The joint encoding NLL over all of the cells used for image reconstruction (see A.5.4 for derivation) is again convex in the image x: MAP with Gaussian 1/F priors The Gaussian 1/F prior is the among the simplest and most commonly used image priors [31], and is the basis for many classical image processing algorithms. The 1/F prior assumes that pixels of the image are drawn from a stationary jointly Gaussian distribution (and thus that the spatial covariance matrix is diagonalized by the 2D Fourier basis) and that spectral power (variance of each spatial frequency component) falls off in inverse proportion to the square of the frequency F 2 . Discarding terms that do not depend on the image x, the negative log prior can be written is the amplitude of the k th Fourier coefficient at frequency f k . MAP image reconstruction using the 1/F prior can be performed using standard unconstrained convex minimization methods as both the negative log prior and the RGC encoding NLLs described in (3) and (6) are convex. Approximate MAP with denoising convolutional neural network (dCNN) priors Modern denoising convolutional neural networks can represent powerful image priors, but these priors are implicit [12,32]: they are not expressed in closed form, and their values cannot be computed explicitly, making exact MAP inference difficult. The "plug-and-play" methodology provides an approximate iterative procedure for using such denoisers in MAP estimation problems [11], by incorporating them into variable-splitting optimization methods such as half-quadratic splitting (HQS) [33] or alternating direction method of multipliers (ADMM) [34,35]. Here, we adopt a method based on the HQS method presented in [15] to perform MAP reconstruction from RGC spikes. 
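Schematically, the HQS iteration described in the next paragraph alternates a likelihood (data-fidelity) step with a single pass of the denoiser. A minimal sketch under stated assumptions: `nll_grad` and `denoiser` are hypothetical callables standing in for the gradient of the GLM encoding NLL and the pretrained DRUNet, and plain gradient descent stands in for the momentum/backtracking scheme of the paper:

```python
import numpy as np

def map_reconstruct_hqs(spikes, nll_grad, denoiser, x0,
                        rho_schedule, lam_p=0.1, inner_steps=50, lr=1e-3):
    """Plug-and-play HQS sketch: alternate a data-fidelity step and a denoising step.

    nll_grad(x, spikes) -> gradient of the joint encoding negative log-likelihood
    denoiser(z, sigma)  -> one forward pass of a pretrained Gaussian denoiser
    x0                  -> initialization (e.g., the linear reconstruction)
    rho_schedule        -> increasing penalty weights rho^(k), one per outer iteration
    """
    z = x0.copy()
    for rho in rho_schedule:
        # (1) x-update: minimize NLL(x) + (rho/2)*||x - z||^2 by gradient descent
        x = z.copy()
        for _ in range(inner_steps):
            g = nll_grad(x, spikes) + rho * (x - z)
            x -= lr * g
        # (2) z-update: Gaussian denoising of x with noise std sqrt(lam_p / rho)
        z = denoiser(x, sigma=np.sqrt(lam_p / rho))
    return z
```

Here `rho_schedule` plays the role of the log-spaced ρ^(k) values used in the paper.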
As in [15], we introduce an auxiliary variable z, split the original problem in (1) into two sub-problems, incorporate a regularization parameter p to control the prior term, and solve by alternating between two complementary optimization problems: Since the problem in equation (8) has the same objective function as MAP Gaussian denoising with known noise variance p /⇢ (k) , we solve it approximately with a single forward pass of a pre-trained unblind DRUNet denoiser network [15], resulting in Algorithm 1. Unlike most applications of HQS, the encoding term in equation (7) is non-quadratic in x and hence (7) was solved iteratively (gradient descent with momentum and backtracking line search) rather than in closed form. z (1) was initialized as the linear solution, though using random Gaussian initialization does not significantly affect the results (see Appendix A.6). Because convergence in the mathematical sense is not necessary for most imaging applications [36], K = 25 iterations were used in Algorithm 1. As in [15], ⇢ (k) was increased per-iteration on a log-spaced schedule. The hyperparameters p and [⇢ (1) , ⇢ (25) ] were determined by performing a grid search and evaluating reconstruction quality on an 80-image subset of the test partition. Variations in hyperparameters over a reasonable range (⇢ (1) 2 [10 2 , 10 1 ], ⇢ (25) 2 [30, 500], p ⇡ 0.1) produced similar reconstruction quality, and optimal hyperparameters were similar across the two retinas and across LNP and GLM encoding models. Approximate MAP reconstructions using this algorithm are termed MAP-GLM-dCNN and MAP-LNP-dCNN for the GLM and LNP encoding models, respectively. Benchmark: nonlinear regression with artificial neural networks Current state-of-the-art methods for reconstruction of natural images from RGC spikes rely on an initial linear reconstruction step [1,3], followed by ad hoc application of nonlinear neural networks. Specifically, Parthasarathy et al. Approximate MAP with GLM/dCNN matches or exceeds state-of-the-art results To test whether our MAP-GLM-dCNN method outperforms state-of-the-art approaches, image reconstructions were generated from the test partitions of the datasets, and were compared both qualitatively and quantitatively. Example reconstructions are shown in Figure 2 for the L-CAE [8], for Kim et al. [4], and for our method. MAP-GLM-dCNN reconstructions are seen to be sharper than those of L-CAE, and contain additional image details (especially extended contours, as in rows C, E, G, H, I, and L). When compared to Kim et al., MAP-GLM-dCNN tended to recover more content, particularly straight edges (rows E, G, H, I, and L), but sometimes exaggerated the contrast (rows A, H, and I). MAP-GLM-dCNN produced qualitatively different artifacts than the other methods. In particular, it sometimes hallucinated naturalistic structure not present in the stimulus images (rows J, K, and N), including striking irregularities in contours (rows D, K, L, and M). Quantitative comparisons between MAP-GLM-dCNN and the two benchmark regression methods were also made. Scatter plots comparing MS-SSIM and PSNR on the test partition of one retina are shown in Figure 3A and 3B for Kim et al. and the L-CAE, respectively, and summary statistics over the test and heldout partitions for both retinas are presented in Tables 1 and 2. 
On an image-forimage basis, MAP-GLM-dCNN reconstructions have greater MS-SSIM than those of L-CAE (3B), demonstrating that the new method systematically achieves greater perceptual similarity to ground truth. The MAP-GLM-dCNN method resulted in comparable MS-SSIM perceptual similarity to the much more complicated Kim et al. method (3A). The PSNR of MAP-GLM-dCNN reconstructions was systematically worse than either benchmark. This is not surprising, as the MAP optimization procedure does not necessarily minimize MSE. These results held for both retinas (Tables 1 and 2). Deep denoiser prior substantially improves image quality over 1/F prior To test the importance of the image prior, MAP-GLM-dCNN results were compared against reconstruction using the GLM encoding model with the classical 1/F Gaussian prior (MAP-GLM-1F). Example reconstructed images using the denoiser prior and 1/F prior are shown in columns 5 and 7, respectively, of Figure 2. Images reconstructed with the denoiser prior are less "grainy", and tend to have better-defined edges and smoother surfaces. The artifacts seen in the 1/F examples are expected, since this simple prior does not constrain phase [31], whose alignment is essential for generating sharp spatially-localized features. Scatterplots of image quality on the test partition using MS-SSIM and PSNR are shown for one retina in Figure 3C, and mean values for both retinas are summarized in Tables 1 and 2. Consistent with the visual appearance, PSNR and MS-SSIM were systematically higher when using the denoiser prior, in both retinas. Thus, using the more sophisticated denoiser image prior substantially increased the perceptual similarity of the reconstructions to ground truth. GLM encoding model recovers additional image structure over LNP encoding model To test the importance of the encoding model, we compared images reconstructed using the GLM and LNP encoding models, both using the same denoiser prior. Example images are shown in columns 5 and 6 of Figure 2. Images reconstructed using both models exhibit natural image structure like smooth surfaces and well-defined edges, but the GLM-reconstructed images tended to have more realistic-looking textures, whereas the LNP-reconstructed images tended to be overly simplified. Moreover, the GLM method recovered more high spatial frequency details (e.g., the legs of the insect in row C, the horizontal stripes on the tape cassette in row D, and the details on the hammock in row E, and the structure on the file cabinets in row G). The quality of image reconstructions for each image/response pair in the test partition for one retina were compared using MS-SSIM and PSNR in Figure 3D, and their mean values over the test and held out partitions for both retinas are summarized in Tables 1 and 2. In both retinas, images reconstructed using the GLM encoding model had systematically greater MS-SSIM scores, indicating greater perceptual similarity to ground truth, than those reconstructed using the LNP encoding model. This demonstrates that the choice of encoding model significantly affects reconstruction quality, and that the inclusion of temporal spike dependencies and cell-to-cell correlations in the more sophisticated GLM encoding model provides important constraints on the information encoded by the RGC spikes. This finding is consistent with previous work showing that decoding using the GLM (without priors) can access more information than simplified models lacking the cell-cell correlations or spiking history [10]. 
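The PSNR values referenced throughout these comparisons follow the standard definition; a minimal helper is shown below (MS-SSIM requires a separate multi-scale implementation and is not reproduced here):

```python
import numpy as np

def psnr(reference, reconstruction, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, data_range]."""
    mse = np.mean((reference - reconstruction) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```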
Discussion This paper presents a novel approximate MAP method for reconstructing natural images from the simultaneously recorded spikes of several hundred RGCs, using an accurate probabilistic model of retinal encoding and a natural image prior implicit in a pre-trained denoising neural network. The method matches or outperforms the current state-of-the-art in terms of recovering naturalistic image structure and/or the perceptual similarity of reconstructions to ground truth, while also being more principled and interpretable due to the explicit Bayesian separation of the encoding model and prior. The new approach uses substantially fewer parameters than previous state-of-the-art methods based on CNNs, and does not require training CNNs on retinal data (the prior is obtained from a network trained exclusively on image denoising). We showed that both encoding model and image prior contributed to the high-quality image reconstructions: removal of either substantially degraded performance. Thus, we expect that cell-cell correlations and temporal structure of spike trains, as well as image priors, will prove important in understanding how the retinal signal is used by the brain. Several previous studies have used GLM encoding models for stimulus reconstruction from experimentally-recorded retinal signals, revealing the significance of cell-cell correlations for decoding temporal structure in white noise stimuli [10,38], and the significance of the temporal structure of spike trains in tracking moving features [2]. By including a complex natural image prior into a Bayesian reconstruction method, the present work more efficiently exploits both the GLM and experimental data to produce state-of-the-art natural image reconstructions. The enhanced reconstructions and interpretability obtained with our method could lead to improved function of retinal implants for restoring vision. Previous work [5] has suggested that electrical stimulation with a retinal implant can be guided by minimizing the expected MSE of linearly reconstructed images. This method ignores potentially important cell-cell correlations and fine temporal structure in RGC spike trains, and assumes that image priors captured by linear regression are sufficient for high performance. The method presented here offers an alternative approach to choosing simulation patterns to produce higher-fidelity artificial vision, while potentially being more robust than ad hoc neural network methods. However, achieving this in real time with minimal latency presents a substantial technical challenge. Though the present work is limited to reconstruction of flashed static natural images from RGC spikes, extensions of our approximate MAP reconstruction method could be used to probe how neurons encode visual information under more natural conditions. For example, a central problem is understanding how the visual system achieves high-acuity perception in the presence of "jitter" in eye position, even when fixated [24]. Previous computational efforts have probed this question, but have been largely limited to simulated data with simple encoding models and stimuli [39,40,41]. Combining the methods put forth here with modern algorithms for image deblurring and motioncorrection [42,15] could yield more powerful methods to decode images from jittered retinal inputs. 
A related problem is understanding how the retina encodes the information contained in complex naturalistic movies [2], including movement of objects within a scene and other non-rigid transformations over time. The dimensionality of such stimuli and the consequent data requirements are high, so the ability to capture stimulus priors using modern machine learning tools [43,44] separately from the retinal data, as was done here, will be important for understanding reconstruction in more naturalistic visual contexts.
5,110
2022-05-20T00:00:00.000
[ "Computer Science" ]
Derivative-free neural network for optimizing the scoring functions associated with dynamic programming of pairwise-profile alignment Background A profile-comparison method with position-specific scoring matrix (PSSM) is among the most accurate alignment methods. Currently, cosine similarity and correlation coefficients are used as scoring functions of dynamic programming to calculate similarity between PSSMs. However, it is unclear whether these functions are optimal for profile alignment methods. By definition, these functions cannot capture nonlinear relationships between profiles. Therefore, we attempted to discover a novel scoring function, which was more suitable for the profile-comparison method than existing functions, using neural networks. Results Although neural networks required derivative-of-cost functions, the problem being addressed in this study lacked them. Therefore, we implemented a novel derivative-free neural network by combining a conventional neural network with an evolutionary strategy optimization method used as a solver. Using this novel neural network system, we optimized the scoring function to align remote sequence pairs. Our results showed that the pairwise-profile aligner using the novel scoring function significantly improved both alignment sensitivity and precision relative to aligners using existing functions. Conclusions We developed and implemented a novel derivative-free neural network and aligner (Nepal) for optimizing sequence alignments. Nepal improved alignment quality by adapting to remote sequence alignments and increasing the expressiveness of similarity scores. Additionally, this novel scoring function can be realized using a simple matrix operation and easily incorporated into other aligners. Moreover our scoring function could potentially improve the performance of homology detection and/or multiple-sequence alignment of remote homologous sequences. The goal of the study was to provide a novel scoring function for profile alignment method and develop a novel learning system capable of addressing derivative-free problems. Our system is capable of optimizing the performance of other sophisticated methods and solving problems without derivative-of-cost functions, which do not always exist in practical problems. Our results demonstrated the usefulness of this optimization method for derivative-free problems. Introduction The profile comparison alignment method with a positionspecific scoring matrix (PSSM) [1] is one of the most accurate alignment methods. The PSSM is a two dimensional vector (matrix) for sequence length. Each element in the vector consists of a 20 dimensional numerical vector, in which each value represents the likelihood of the existence of each amino acid position in a biological sequence. Here, we designed the vector inside PSSM as a position-specific scoring vector (PSSV). In a profile alignment, cosine similarity or correlation coefficient is generally calculated against the PSSVs to calculate similarity or dissimilarity between the two sites in the sequences of interest on dynamic program-ming (DP) [2,3]. Profile alignment methods using these functions have been successful for a long time [4], although cosine similarity or correlation coefficient cannot capture the non-linear relationship between two vectors and the similarity between two sites is not always expressed by linear relationships. The performance of profile sequence alignment has been improved by various studies in the past decades. 
For example, HHalign improved alignment quality using profiles constructed with the hidden Markov model, which provided more information than PSSM [5], MUSTER incorporated protein structural information in a profile [3], and MRFalign utilized the Markov random fields to improve alignment quality [6]. Although various methods have been devised from different perspectives, studies to develop the scoring function itself with sophisticated technologies are lacking. Neural networks are computing system, which mimic biological nervous system of animal brains. Theoretically, it can approximate any function regardless of linearity of the functions [7]. Neural networks are attracting attention from various areas of research, including bioinformatics, due to the availability of improved computational methods and the explosive increase in available data. In recent years, these algorithms have been vigorously applied to bioinformatics. For example, several studies applied a deep neural network model to predict protein-protein interaction [8,9], protein structure [10,11] and various other biological conditions such as residue contact map, backbone angles, and solvent accessibility [12,13]. These algorithms basically used the backpropagation method, which requires derivation of a cost function for searching optimal parameters, and few studies implemented derivative free neural network. In this study, we utilized the neural network to optimize a scoring function. In the process, we first combined two PSSVs (for which we wanted to calculate similarity) derived from two sites and set it as an input vector. A target vector was required to implement supervised learning. However, in this case, we did not have the target vector because the ideal function and an ideal similarity score for each site were unknown, and thus, the scoring function could not be directly optimized. Instead, we calculated the entire DP table for the input sequences and the difference between the resultant alignment and the correct alignment was used for calculating cost. In this case, we could not use the backpropagation method for optimal weight search because we lacked the derivation of the cost function required for this search. Namely, we could not incorporate our idea in the conventional neural network framework. Therefore, we newly utilized the covariance matrix adaptation evolution strategy (CMA-ES) [14], which is an adaptive optimization method modifying the basic evolutionary strategy [15], as the search method for neural network to realize derivative free neural network calculation. Using this framework, we attempted to produce higher performance scoring function for remote sequence alignment in this study. Dataset We downloaded the non-redundant subset of SCOP40 (1.75 release) [16], in which sequence identity between any sequence pair is less than 40%, from ASTRAL [17]. We selected the remote sequence subset since we wanted to improve the remote sequence alignment quality. The SCOP is a protein domain dataset where sequences are classified in hierarchical manner by class, fold, superfamily, and family. All notations of the superfamily in the dataset were sorted by alphabetical order and all superfamilies, the ordered numbers of which were multiples of three, were classified into a learning dataset, whereas the others were classified into a test dataset. We obtained 3,726 and 6,843 sequences in the learning and test datasets, respectively. 
Next, we randomly extracted a maximum of 10 pairs of sequences from each superfamily to negate a bias induced by different volumes of each superfamily and used these sequence pairs for subsequence construction of PSSM. We confirmed that sequences in each pair were from the same family to obtain decent reference alignment. Finally, we obtained 1,721 and 3,195 sequence pairs in the learning and test datasets. Figure 1 shows the learning network computed in this study. We calculated similarity scores between two PSSVs using the neural network. At first, the summation of matrix products between xa (the PSSV A) and W1a, xb (the other PSSV B) and W1b, and 1 (bias) and b1 in the neural network were calculated. The resultant vector was transformed by an activating function, φ(). Finally, the summation of the dot products between the transformed vector and w2, and 1 and b2 was calculated. The resultant value was used as the similarity score for the two sites. Namely, the forward calculation was computed by the following equation. Here, y is the similarity score. PSSV B Score The complete DP table was calculated using the similarity score and a final pairwise alignment was produced. The pairwise alignment and its corresponding reference alignment were compared to each other and an alignment sensitivity score, described below, was calculated. The subtraction of the alignment sensitivity score from 1 was used as cost for searching optimum weight by the neural network with CMA-ES. We set the weights W1a and W1b equal to each other (shared weight) so that the network outputs same value even though the input order of the two PSSVs were opposite. The number of units of the middle layer was set to 144. The rectified linear unit was utilized as the activation function. We set σ, λ, and µ as 0.032, 70, and 35, respectively, as parameters for CMA-ES. Here, σ is almost equivalent to step size of the gradient descent method, and λ and µ indicate the number of descendant and survival individuals in evolutional process. In actual learning, we read training datasets in batch manner. The learning loop was stopped using the early stopping criteria by checking the dissociation between the training and validating curves. The initial weight was derived from parameters that mimicked the correlation coefficient. To generate the initial weight, we randomly generated 200,000 PSSM pairs and learned them using multilayer perceptron with hyperparameters (the dimension of weight and activating function) identical to the above hyperparameters. In addition to the weights, we simultaneously optimized the open and extension gap penalties. The initial values of open and extension gap penalties were set as -1.5 and -0.1. Alignment algorithm In this study, we implemented the semi-global alignment method, namely global alignment with free end-gaps method [18,19]. Metrics of alignment quality The alignment quality was evaluated using alignment sensitivity and precision [20]. The alignment sensitivity was calculated by dividing the number of correctly aligned sites by the number of non-gapped sites in a reference alignment. In contrast, alignment precision was calculated by dividing the number of correctly aligned sites by the number of nongapped sites in a test alignment. Calculation of residue interior propensity The relative accessible surface area (rASA) for residues of all proteins in the learning and test dataset was calculated by areaimol in CCP4 package version 6.5.0 [21]. 
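Referring back to the network of Figure 1, a minimal sketch of the forward pass that produces the similarity score for a pair of PSSVs (shapes follow the 20-dimensional PSSVs and the 144 hidden units; names are illustrative):

```python
import numpy as np

def nepal_score(xa, xb, W1, b1, w2, b2):
    """Similarity score between two 20-dimensional PSSVs (forward-pass sketch).

    xa, xb : (20,) position-specific scoring vectors
    W1     : (20, 144) shared input weights (W1a = W1b, applied to both PSSVs)
    b1     : (144,) hidden bias
    w2     : (144,) output weights
    b2     : scalar output bias
    """
    hidden = np.maximum(0.0, xa @ W1 + xb @ W1 + b1)  # ReLU(xa W1a + xb W1b + b1)
    return float(hidden @ w2 + b2)
```

Because the same W1 multiplies both inputs, the score is unchanged when the two PSSVs are supplied in the opposite order, as intended.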
The residues of which rASA is less than 0.25 were counted as an interior residue and the other residues were counted as surface residue, according to a previous study [22]. We divided the ratio of the interior residues by the background probability of residues to calculate the residue interior propensity. The residue interior propensity is the likelihood of a residue existing inside a protein. Namely, propensity greater than 1 signifies that the probability of the residue to be inside the protein is high. Gap optimization of existing functions At first, we conducted gap penalty optimization of the existing scoring functions such as cosine similarity and correlation coefficient on the learning dataset. We computed both alignment sensitivity and precision for aligners using these functions, changing open and extension gap penalties by 0.1 increments from -2.0 to -0.6 and from -0.4 to -0.1, respectively. The best alignment sensitivity was selected as the optimum combination among the combinations of open and extension gap penalties. As shown in Table 1, the best gap penalty combination for cosine similarity and correlation coefficient was (-1.0, -0.1) and (-1.5, -0.1). Optimization of scoring function of the neural network Next, we conducted optimization of scoring function on the neural network with CMA-ES. During learning, we randomly divided the learning dataset into two subsets, namely, the training and validation datasets, which included 1,536 and 160 pairwise PSSV sets and its corresponding reference alignments as targets, respectively. Since calculation of CMA-ES in our parameter settings requires more than 100,000 times DP (the size of training dataset × λ) per epoch, the consumption of computer resources was large and calculation time was long even when 24 threads were used with the C++ program; therefore, we set the maximum limit for epoch to a small number such as 150. We selected the best scores from the validation scores of the last fifth part of an entire epoch (which was derived from 145th epoch) and obtained final weight and bias matrices, namely, the substance of a novel scoring function and optimal gap penalty combination, respectively. As a result, optimal combination of open and extension gap penalty for the final weight and bias matrix were approximately -1.7 and -0.2. Finally, we implemented the pairwise profile aligner with the weight and bias matrices as novel scoring function and named it as neural network enhanced profile alignment library (Nepal). Our aligner and scoring function (weight and bias matrices) can be downloaded from https://github.com/yamada-kd/nepal. Benchmark of Nepal and other aligners with existing function on the test dataset Next, we conducted benchmark test of Nepal and other aligners with existing functions on the test dataset. In addition to profile comparison methods, we examined the performance of sequence comparison aligners with difference substitution matrices such as BLOSUM62 [23] and MIQS [24] for reference. We used -10 and -2 as open and extension gap penalties, respectively, based on a previous study [24]. When calculating alignment qualities, the test dataset was further categorized into remote and medium subset depending on pairwise sequence identity of the reference alignments. The remote and medium subset includes sequence pairs, of which each sequence identity was not lower than 0% and less than 20%, and not lower than 20% and less than 40%, respectively. 
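Looping back to the CMA-ES optimization described above, training is conceptually an ask/tell cycle in which each candidate parameter vector (network weights plus gap penalties) is scored by aligning the training pairs with dynamic programming. A minimal sketch using the pycma package; the alignment routine, data handling, and early stopping are placeholders, not the authors' implementation:

```python
import cma  # pycma

def train_scoring_function(theta0, training_pairs, align_and_score,
                           sigma0=0.032, popsize=70):
    """Derivative-free optimization of the scoring function with CMA-ES.

    theta0          : initial flattened parameters (network weights + gap penalties)
    align_and_score : callable(theta, pair) -> alignment sensitivity in [0, 1],
                      obtained by running DP alignment with the candidate parameters
    popsize=70 matches lambda; pycma's default parent number popsize//2 matches mu=35.
    """
    es = cma.CMAEvolutionStrategy(theta0, sigma0, {"popsize": popsize})
    while not es.stop():
        candidates = es.ask()
        costs = []
        for theta in candidates:
            sens = [align_and_score(theta, pair) for pair in training_pairs]
            costs.append(1.0 - sum(sens) / len(sens))  # cost = 1 - mean sensitivity
        es.tell(candidates, costs)
    return es.result.xbest
```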
Generally, a pairwise alignment between sequences of lower identity such as those in the twilight zone is more difficult [25]. Table 2 shows alignment quality scores for each method. Results show that among the existing methods, including sequence comparison methods, the method with the best performance from all perspectives was the profile comparison method with correlation coefficient scoring function. In contrast, Nepal improved both alignment sensitivity and precision compared to this method. Actually, these improvements were statistically significant according to Wilcoxon signed rank test with Bonferroni correction even when significance level (α) is set to 0.01. Comparison between sequencebased methods with different substitution matrices such as MIQS and BLOSUM62 showed that the gain of improvement of MIQS compared to BLOSUM62 was more significant for the remote subset than the medium subset. This was expected since MIQS was originally developed to improve remote homology alignment. This trend was observed regarding the relationship between Nepal and correlation coefficient implemented aligner, where Nepal improved both alignment sensitivity and precision by about 4% and 1% in remote and medium subsets, respectively. This indicated that the novel scoring function was optimized for remote sequence alignment. This is expected because sequence alignment between sequences with closer identities was easier than those with remote identities. Therefore, during optimization, the novel scoring function would be optimized to be naturally advantageous for remote sequence alignments. Since the problem regarding remote relationship holds true for sequence similarity search [24,26], the novel scoring function of our method could be useful for improving the performance of remote similarity search methods. Importance of attributes using the connection weight method Finally, we calculated the importance of 20 attributes using the connection weight method [27]. As shown in Figure 2A, the connection weights against each attribute, namely each amino acid, were distributed to various values. This indicated that our developed scoring function discerned the importance of the attributes depending on the variety of amino acids. According to the results, the connection weight of hydrophobic residues such as Leu, Ile, and Val were of higher value. These residues are located mostly inside the hydrophobic cores of proteins. In addition, as shown in Figure 2B, the other residues which also tend to locate inside proteins, such as Ala, Cys, and Tyr, were of higher importance. In contrast, residues which tend to locate on protein surface, such as Asp, Pro, Lys, and Asn, were of lower importance. The Spearman's rank correlation coeffi- cient between the connection weight and interior propensity was approximately 0.6 and the value was statistically significant (p-value < 0.05). While residues which are exposed on the protein surface are subject to higher mutation pressures, interior residues are less susceptible to mutation [28]. This is because the protein structure is disrupted if mutations in the interior residues collapse the hydrophobic core [29]. The scoring function constructed in this study was optimized for alignment of remote homologous sequences. According to the previous study based on substitution matrices [30], hydrophobicity of residues was the dominant property of remote sequence substitution rather than simple mutability. 
This fact partially represents that for remote sequence alignment, residues occupying interior locations in a protein higher order structure with less susceptibility to mutation pressure are considered more meaningful. Since our scoring function was also optimized for remote sequence alignment, the above property would be observed and this fact paradoxically suggests that our scoring function was optimized for remote sequence alignment. Collectively, this property is one of the reasons for the superiority of our method to the existing ones. In addition, although the connection weight consisted of various values, it would at least contribute to increasing the expressive power of the novel scoring function. For example, we wanted to calculate the similarity score between PSSV A (a) and B (b) as shown in Figure 3. The original scores are 0.488207 and 0.387911 when calculated using the correlation coefficient and Nepal score, respectively, (middle panel Figure 3). The scores calculated by correlation coefficient did not change when the 1st and 18th sites or the 4th and 19th sites were swapped. This was unexpected since the converted PSSV obtained after swapping was not identical to the original one. This could be one of the drawbacks of us- ing unweighted linear function such as cosine similarity and correlation coefficient. In contrast, Nepal scores changed after the swapping, which varied with the change in PSSV. Actually, there were about 290,000 overlaps when we calculated similarity score to six places of decimal against randomly generated one million PSSVs using correlation coefficient, whereas there were approximately 180,000 overlaps when Nepal was used. These overlaps would negatively affect DP computation because higher overlap scores would cause difficulty in deciding the correct path, especially during the computation of maximum three values derived from up, diagonal, and left side of the DP cell. Collectively, the different weights based on amino acid variety presented by the connection weight method is one of the reasons why Nepal score improved the alignment quality compared to the existing scoring functions. CONCLUSION In this study, we developed a new derivative free neural network with CMA-ES. Using this framework, we devel-oped a novel scoring function for profile comparison and Nepal, a pairwise profile aligner with the scoring function. Large computational resources were required by our learning procedure with the derivative free neural network; thus, we could not examine whether the learning was converged enough because of our limited computational environment. Nevertheless, Nepal significantly improved alignment quality of profile alignment, especially for alignment of remote relationships, compared to the existing scoring functions. Nepal improved alignment quality because of adaptation to remote sequence alignment and increasing the expressive power of similarity score. The novel scoring function can be realized using a simple matrix operation and the parameters are provided on https://github.com/yamada-kd/nepal. In future, the performance of distant homology detection method or that of multiple sequence alignment method for remote homologous sequences may be further improved with our scoring function. Funding This work was supported in part by the Top Global University Project from the Ministry of Education, Culture, Sports, Science, and Technology of Japan (MEXT)
4,333
2017-08-30T00:00:00.000
[ "Computer Science" ]
Reconfigurable Radiation Angle Continuous Deflection of All-Dielectric Phase-Change V-Shaped Antenna All-dielectric optical antenna with multiple Mie modes and lower inherent ohmic loss can achieve high efficiency of light manipulation. However, the silicon-based optical antenna is not reconfigurable for specific scenarios. The refractive index of optical phase-change materials can be reconfigured under stimulus, and this singular behavior makes it a good candidate for making reconfigurable passive optical devices. Here, the optical radiation characteristics of the V-shaped phase-change antenna are investigated theoretically. The results show that with increasing crystallinity, the maximum radiation direction of the V-shaped phase-change antenna can be continuously deflected by 90°. The exact multipole decomposition analysis reveals that the modulus and interference phase difference of the main multipole moments change with the crystallinity, resulting in a continuous deflection of the maximum radiation direction. Thus, the power ratio in the two vertical radiation directions can be monotonically reversed from −12 to 7 dB between 20% and 80% crystallinity. The V-shaped phase-change antenna exhibits the potential to act as the basic structural unit to construct a reconfigurable passive spatial angular power splitter or wavelength multiplexer. The mechanism analysis of radiation directivity involving the modulus and interference phase difference of the multipole moments will provide a reference for the design and optimization of the phase-change antenna. Introduction The optical antenna builds the connection between the local electromagnetic field mode and the free space far-field radiation energy distribution at the sub-wavelength scale [1][2][3][4], and its applications involve advanced photon manipulation [5,6], optical communication [7,8], and biomedical sensing [9,10]. The design of a high-performance optical nanoantenna requires the simultaneous regulation of the electrical and magnetic parts of the local electromagnetic field mode to realize the high efficiency of light manipulation with controllable direction [11,12] and specific reflection or transmittance [13,14]. The local electromagnetic field mode of the nanoantenna is highly sensitive to its geometry and material composition. The direction of light can be manipulated by carefully designing specific geometric plasmonic metal nanoantennas, such as the YagI-Uda antenna [15], split-ring resonator [16], and V-shaped nanoantennas [17]. V-shaped metal nanoantennas have been used as basic structural units to construct metasurfaces and metalenses with specific properties [18,19]. In comparison to the metal nanoantennas, the all-dielectric nanoantenna with a high refractive index allows the formation of multiple Mie modes, so the radiation direction can be flexibly adjusted. Moreover, the refractive index imaginary part of the dielectric is low and the intrinsic absorption losses under the electromagnetic field are minimized, which can achieve efficient optical regulation with minimal absorption losses [13,[20][21][22]. V-shaped all-dielectric silicon-based nanoantennas exhibit the efficient optical radiation characteristics of wavelength bidirectional scattering with multiple Mie modes and lower inherent ohmic loss [23]. However, as the refractive index of silicon is difficult to reconfigure in practical applications, silicon-based optical devices cannot be initialized according to specific scenarios. 
Unconventionally, the optical properties of all-dielectric phase-change materials can be significantly altered by solid-state phase transition [24][25][26]. The Ge-Sb-Te (GST) is a typical phase-change material, which has been exploited in a wide range of photonic devices, including optical switches [27,28], reconfigurable meta-optics [24,[29][30][31][32], tunable emitters and absorbers [33][34][35][36], and nonvolatile display [37]. The nanostructure of GST can be prepared in the amorphous phase by magnetron sputtering and gradually transformed into a crystalline phase after annealing. In addition, by controlling the specific annealing temperature and time, semicrystalline states with distinct optical properties can be obtained after annealing. After the removal of the stimulus, the refractive index of GST in the amorphous, semicrystalline, and crystalline states is high and distinct, and the phase remains stable [38,39]. Recently, the optimized alloy, Ge 2 Sb 2 Se 4 Te 1 (GSS4T1), combines broadband transparency (1-18.5 µm), large optical contrast (∆n = 2.0), and significantly improved glass forming ability, making it a better candidate for reconfigurable passive optical devices [40]. The flexibility, compatibility, and passivity of optical devices based on all-dielectric phase-change materials make them very suitable for optical applications [41,42]. In this paper, the feasibility of the V-shaped GSS4T1 antenna for reconfigurable radiation angular power splitter is explored, and variation of the optical radiation angle with phase-change crystallinity is theoretically investigated. For micro/nanostructures of phase-change materials, strong absorption based on anapole mode and full backward or forward scattering based on the Kerker condition has been studied [28,36]. Herein, we systematically analyze the radiation angle continuous deflection of the phase-change antenna, including the influence of each scattering multipole moment with different modulus and phase angle. By the finite element method (FEM) and current density-based multipole decomposition [43,44], the relationship between the continuous deflection of the antenna radiation directivity and the change of multipole moments with the crystallinity is investigated. The results show that the maximum radiation direction of the V-shaped phase-change antenna can continuously be deflected by about 90°with the material phase change. The power ratio in two vertical radiation directions can be monotonically reversed from −12 to 7 dB between 20% and 80% crystallinity. Multipole decomposition reveals that the continuous deflection of radiation direction of V-shaped phase-change antenna with crystallinity is due to the change of complex coefficient of the main multipole moment, including modulus and interference phase difference. Especially, the interference phase differences of main multipole moments are the key to the radiation direction continuous deflection. Finally, the consistency of the far-field radiation pattern reconstructed from the multipole scattering coefficient and the one calculated by FEM demonstrates the reliability of the mechanism analysis. We designed the V-shaped phase-change antenna as a promising candidate for reconfigurable passive spatial angular power splitter or wavelength multiplexer. 
Theoretical and Methods To investigate the feasibility of a V-shaped phase-change antenna for a reconfigurable radiation angular power splitter, the numerical calculation of the electromagnetic field is performed based on the FEM with commercially available software (COMSOL Multiphysics 5.6, COMSOL Inc., Sweden). As shown in Figure 1a, the V-shaped phase-change antenna is symmetric about the x-axis with its center section in the xy-plane, in which the length L is 2.0 µm, the width W is 0.70 µm, the height H is 0.75 µm, and the included angle α is 75°. A y-polarized plane light wave with amplitude E 0 = 1 V/m propagates along the −z-direction. The antenna is embedded in a homogeneous air host medium with relative permittivity ε air = 1. Taking the perfect matching layer (PML) as the boundary condition, the Helmholtz equation of electric field E is calculated [45]: where k 0 is the wave vector and ε r = (n − ik) 2 . The n and k are the real and imaginary parts of the complex refractive index of the antenna material, respectively. As shown in Figure 1b, the complex refractive index of amorphous and crystalline GSS4T1 phase-change materials is the fitting value of the experimental data of GSS4T1 in Ref. [40]. In addition, the permittivity of GSS4T1 varies with crystallinity C using the following relation [36]: where ε aGSS4T1 and ε cGSS4T1 are the permittivities of amorphous (0%) and crystalline (100%) GSS4T1, respectively. Figure 1c shows the concept that, for the V-shaped phase-change antenna with a fixed geometric size, its radiation directivity can be continuously reconfigured by adjusting the crystallinity with stimulus. The numerically calculated scattering field is the difference between the total field and the incident light field: According to the above scattering field, the scattering cross section can be calculated by using the following relations [46]: where P scat is the Poynting vector of the scattered field, n is the unit normal vector of the far-field boundary S, and η = √ µ 0 /ε 0 . Based on the scattering field, the Stratton-Chu formula is adopted to calculate the far-field radiation electric field of the angular point p [47]: where n r is the unit vector in the direction of the radius vector r. According to the far-field intensity I(θ, ϕ), the directivity of the positive and negative x-axis is calculated by: the directivity of the positive and negative z-axis is calculated by: and the directivity of specific radiation angle and window size is calculated by: where θ 0 and δ are taken to be 135°and 10°, respectively. It is difficult to clarify the physical mechanism by the numerically calculated results, and the scattering multipole decomposition is an essential theoretical analysis for the in-depth study of the radiation mechanism of antennas. Beyond the long-wavelength approximation, the exact expressions for the multipole moments are valid for any wavelength and size dimensions [43,44]. To clarify the mechanism of the variation of the radiation angle of the V-shaped phase-change antenna with the crystallinity, the multipole decomposition with exact expressions is performed. Firstly, the current density can be calculated according to E: Here, the dipole and quadrupole are mainly considered. Then, we calculate the electric dipole (ED), magnetic dipole (MD), electric quadrupole (EQ), and magnetic quadrupole (MQ) by the exact expressions: where α, β = x, y, z, and j n (ρ) denotes the spherical Bessel function. 
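As a brief aside before these multipole moments are assembled into the scattered far field below: the crystallinity dependence used throughout this section enters through the interpolated permittivity of Eq. (2). A minimal sketch, assuming the Lorentz-Lorenz (Clausius-Mossotti-type) mixing rule commonly used for GST-family phase-change materials; the paper's exact relation may differ:

```python
import numpy as np

def eps_vs_crystallinity(C, eps_amorphous, eps_crystalline):
    """Effective complex permittivity at crystallinity C in [0, 1].

    Assumes a Lorentz-Lorenz mixing of the amorphous and crystalline
    permittivities (an assumption of this sketch, not taken from the paper).
    """
    L = (C * (eps_crystalline - 1) / (eps_crystalline + 2)
         + (1 - C) * (eps_amorphous - 1) / (eps_amorphous + 2))
    return (1 + 2 * L) / (1 - L)

# Illustrative numbers only: refractive index n = sqrt(eps) at 50% crystallinity
eps_50 = eps_vs_crystallinity(0.5, (4.0 + 0.00j) ** 2, (6.0 + 0.05j) ** 2)
print(np.sqrt(eps_50))
```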
Using the multipole moments, the sum of the scattering contributions from different multipole moments is written as [43]: The scattering far-field from the V-shaped phase-change antenna described up to quadrupole order in Cartesian coordinates can be defined as [44]: where R = 1 m is the radius of the far-field radiation receiving spherical surface, α ED α , α MD α , α EQ αβ , and α EQ αβ are the complex coefficients of the multipole moments. Results and Discussion Firstly, we calculate the electromagnetic field of a V-shaped phase-change antenna in the wavelength range of 2.0 to 5.0 µm at the crystallinity of 20%, 50%, and 80%. Then, multipole decomposition based on the current density is performed to analyze the antenna radiation. The scattering cross sections of ED, MD, EQ, MQ, their summations (Sum), and the total scattering cross sections calculated from the scattering field (Scat) at the crystallinity of 20%, 50%, and 80% in the wavelength range of 2.0 to 5.0 µm are shown in Figure 2a,c,e, respectively. It can be seen that the peak shapes of Sum and Scat are almost the same, which indicates that the multipole decomposition described up to the quadrupole order is reliable. The multipole scattering cross sections are redshifted with increasing crystallinity, which results from the refractive index of GSS4T1 increasing with crystallinity. The electric and magnetic field distributions in the xy-plane of the V-shaped phase-change antenna are shown in Figure S1. It shows that at the 3.6 µm wavelength, the V-shaped antennas with crystallinity of 20%, 50%, and 80% produce different near-field electromagnetic resonance modes. It leads to different far-field scattering. Consequently, based on the calculated scattering field, the directivities of V-shaped antennas at the crystallinity of 20%, 50%, and 80% are calculated, including the x-axis positive-negative (X/-X) directivity, z-axis positive-negative (Z/-Z) directivity, as well as the specific angle and window size (D: θ 0 = 135°, δ = 10°) directivity, which are shown in Figure 2b,d,f, respectively. Obviously, the three directivity curves are redshifted with increasing crystallinity. Note that for V-shaped phase-change antennas at 3.6 µm wavelength, when the crystallinity increases between 20% and 80%, the X/-X or D directivity reverses, while the Z/-Z directivity is almost negative. In particular, the D directivity could be reversed from −12 dB to 7 dB by changing the crystallinity at 3.6 µm wavelength. Furthermore, multipole scattering cross sections and directivities of the amorphous and crystalline V-shaped phase-change antennas are shown in Figure S2. For the amorphous (0%) V-shaped antenna, the X/-X or D directivity reverses in the wavelength range of 2.8 to 3.4 µm ( Figure S2b). In addition, for crystalline (100%) V-shaped antennas, the X/-X or D directivity reverses approximately in the wavelength range of 4.0 to 4.7 µm ( Figure S2d). These results suggest that X/-X or D directivity could be reversed by changing the crystallinity at a selected specific wavelength in the intersecting range of 3.4 to 4.0 µm. To further investigate the continuous change of the V-shaped phase-change antenna's scattering with the crystallinity, we calculated the multipole scattering cross sections and directivities of the V-shaped phase-change antenna in the crystallinity between 0% and 100% at 3.6 µm wavelength, which are shown in Figure 3a,b, respectively. 
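Before examining the directivity curves in detail, the windowed D directivity used throughout these results can be evaluated numerically from any sampled far-field intensity. The sketch below uses a plain discrete sum with a sin(theta) solid-angle weight over the two observation windows (theta0 = 135 deg, phi = 0 deg and 180 deg, half-width delta = 10 deg); this is only one reasonable reading of the paper's window definition, the test pattern is synthetic, and the function names window_power and d_directivity_db are ours.

```python
import numpy as np

def window_power(I, theta, phi, theta0, phi0, delta):
    """Sum intensity over a (2*delta x 2*delta) angular window around (theta0, phi0),
    weighted by sin(theta) for the solid angle (angles in radians)."""
    dphi = np.angle(np.exp(1j * (phi - phi0)))     # wrap phi difference to (-pi, pi]
    mask = (np.abs(theta - theta0) <= delta) & (np.abs(dphi) <= delta)
    return float(np.sum(I[mask] * np.sin(theta[mask])))

def d_directivity_db(I, theta, phi, theta0=np.deg2rad(135.0), delta=np.deg2rad(10.0)):
    """Windowed power ratio in dB between (theta0, phi = 0) and (theta0, phi = pi)."""
    p_plus = window_power(I, theta, phi, theta0, 0.0, delta)
    p_minus = window_power(I, theta, phi, theta0, np.pi, delta)
    return 10.0 * np.log10(p_plus / p_minus)

# Synthetic test pattern tilted toward (135 deg, 0 deg); replace with FEM output in practice.
theta, phi = np.meshgrid(np.linspace(0.0, np.pi, 181),
                         np.linspace(0.0, 2.0 * np.pi, 361), indexing="ij")
I = (1.0 + 0.9 * np.sin(theta) * np.cos(phi)) ** 2
print(f"D directivity of the test pattern: {d_directivity_db(I, theta, phi):+.1f} dB")
```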
It can be seen that the X/-X directivity reverses monotonically from −12 to 7 dB in the range of 20% to 80% crystallinity. Based on the scattering field, the Stratton-Chu formula is used to calculate the far-field radiation of the V-shaped phase-change antenna at crystallinities of 20%, 35%, 50%, 65%, 80%, and 90%, and the modulus-normalized results are shown in Figure 3c. The maximum radiation direction of the V-shaped phase-change antenna reverses by about 90° with increasing crystallinity. These results demonstrate the theoretical feasibility of continuously controllable angular power splitting with a V-shaped phase-change antenna based on a reconfigurable phase transition. To clarify the mechanism behind the change in radiation directivity of the V-shaped phase-change antenna, we analyze in detail how the multipole moments change with crystallinity. The calculation results indicate that the non-zero multipole moments are ED y, MD x, MD z, EQ xy, EQ yz, MQ xx, MQ xz, MQ yy, and MQ zz. The complex coefficient of each multipole moment comprises a modulus and a phase angle. The modulus determines the radiation amplitude of the multipole moment; the normalized moduli of the complex coefficients of these multipole moments are shown in Figure 4a. It can be seen that α ED y, α MD x, α MD z, α EQ xy, and α EQ yz are relatively large, while α MQ xx, α MQ xz, and α MQ zz are relatively small, indicating that ED y, MD x, MD z, EQ xy, and EQ yz make relatively large contributions to the far-field radiation of the V-shaped phase-change antenna, while MQ xx, MQ xz, and MQ zz make relatively small contributions. In addition, the intrinsic far-field radiation patterns of the unit multipole moments can be seen in Figure S3. The D directivity is the key to achieving continuously reconfigurable angular power control with the V-shaped phase-change antenna, and it is closely related to the ratio of the radiated field moduli in the two directions (θ: 135°, ϕ: 0°) and (θ: 135°, ϕ: 180°), which can be simplified from the far-field expression given above. The simplified ratio indicates that the multipole moments affecting the D directivity are ED y, MD z, EQ xy, EQ yz, MQ xx, and MQ zz. Comparing the far-field moduli in the numerator and denominator of this ratio, the coefficients of α ED y and α EQ yz are the same, while the coefficients of α MD z, α EQ xy, α MQ xx, and α MQ zz are opposite in sign. This indicates that MD z, EQ xy, MQ xx, and MQ zz produce the radiation difference between the two directions and are the key moments in the lateral deflection of the radiation direction. The interference of the multipole moments forms the final far-field radiation pattern, and the moduli and interference phase differences of the multipole scattering coefficients together determine that pattern. The phase-angle differences between the interfering multipole moments are critical to the direction of the far-field radiation. To investigate how the interference phase differences of the multipole moments affect the far-field radiation patterns, we calculate the interference far-field radiation patterns of the unit multipole moments with different phase differences (see Figure S4). In the supplementary material, we analyze and compare in depth the influence of each multipole moment on the far-field radiation pattern; this analysis clearly indicates that ED y, MD x, MD z, and EQ xy make the major contributions to the change of the D directivity of the V-shaped antenna.
Consequently, the interference phase differences between MD x and ED y, MD z and ED y, and EQ xy and ED y have been calculated for crystallinities from 0% to 100% and are shown in Figure 4b. The figure shows that these three interference phase differences vary differently with crystallinity. In addition, the interference phase difference between MD x and ED y mainly varies in the range of −π/4 to π/4, which produces forward scattering along the z-axis, as shown in Figure S4. To understand how the interference phase differences cause the continuous deflection of the radiation angle of the V-shaped phase-change antenna, we first analyze the interference far-field radiation of unit ED y and MD x with phase angle 0 (i.e., α ED y, α MD x = exp(i · 0)) together with unit MD z and EQ xy carrying phase angle ϕ (i.e., α MD z, α EQ xy = exp(iϕ)). As shown in Figure 5a, the intrinsic far-field radiation patterns of unit ED y, MD x, MD z, and EQ xy do not vary with their respective phase angles, but when they interfere with each other, their phase differences change the direction of the final far-field radiation. As shown in Figure 5b, both D directivities of ED y + MD x + exp(iϕ)MD z and ED y + MD x + exp(iϕ)EQ xy reverse at phase differences of −π/2 and π/2. In addition, the interference far-field radiation patterns corresponding to the points numbered 1-10 in Figure 4b are shown in Figure 5c. As shown in Figure 4b, the calculated phase difference between MD z and ED y changes continuously around ϕ = π/2 with crystallinity; when the crystallinity is 20%, the phase difference is about 3π/4 and the far-field radiation contribution of MD z corresponds to case number 2 in Figure 5c, and when the crystallinity is 50%, the phase difference is about π/2 and the far-field radiation contribution of MD z corresponds to case number 3 in Figure 5c. In contrast, the calculated phase difference between EQ xy and ED y changes continuously around ϕ = −π/2 with crystallinity, and when the crystallinity increases from 20% to 80%, the far-field radiation contribution of EQ xy corresponds to cases number 7 to 10 in Figure 5c. Furthermore, MD z and EQ xy contribute in the same direction near 20% crystallinity, while they contribute in opposite directions near 60% crystallinity. This explains why the D directivity calculated by FEM changes strongly near 20% crystallinity but changes slowly near 80% crystallinity (see Figure 3b). According to the calculated complex coefficients of the major multipole moments, the X/-X and D directivities of the interference far-field radiation have been obtained and are shown in Figure 6. The interference of ED y, MD x, and MD z is in good agreement with the FEM results in the crystallinity range of 0% to 50%, but differs considerably from the FEM result in the range of 50% to 100%. In contrast, the interference of ED y, MD x, and EQ xy is relatively consistent with the FEM result in the crystallinity range of 50% to 100%, but differs considerably in the range of 0% to 50%. Moreover, the interference of ED y, MD x, MD z, and EQ xy is more consistent with the FEM result over the whole crystallinity range.
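The role of the interference phase difference described above can be reproduced with textbook point-multipole far fields. The sketch below assumes a Jackson-style combination E ~ (n x p) x n - n x m + (n x (Q.n)) x n with unit placeholder coefficients rather than the FEM-extracted coefficients of the V-shaped antenna; sign and normalization conventions differ between references and only relabel which phase deflects which way. The helper names nhat, e_far, and lateral_ratio_db are ours.

```python
import numpy as np

def nhat(theta, phi):
    """Unit observation direction."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def e_far(n, p=None, m=None, Q=None):
    """Transverse far field of point multipoles at the origin, up to a constant.
    Convention assumed here: E ~ (n x p) x n - n x m + (n x (Q.n)) x n."""
    E = np.zeros(3, dtype=complex)
    if p is not None:
        E += np.cross(np.cross(n, p), n)
    if m is not None:
        E -= np.cross(n, m)
    if Q is not None:
        E += np.cross(np.cross(n, Q @ n), n)
    return E

def lateral_ratio_db(phase, partner="MDz"):
    """Power ratio (dB) between (135 deg, 0 deg) and (135 deg, 180 deg) for a unit
    ED_y interfering with a unit MD_z or EQ_xy that carries phase `phase`."""
    p = np.array([0.0, 1.0, 0.0], dtype=complex)
    m = Q = None
    if partner == "MDz":
        m = np.array([0.0, 0.0, np.exp(1j * phase)])
    else:                                               # "EQxy"
        Q = np.zeros((3, 3), dtype=complex)
        Q[0, 1] = Q[1, 0] = np.exp(1j * phase)
    th = np.deg2rad(135.0)
    I_fwd = np.sum(np.abs(e_far(nhat(th, 0.0), p, m, Q)) ** 2)
    I_bwd = np.sum(np.abs(e_far(nhat(th, np.pi), p, m, Q)) ** 2)
    return 10.0 * np.log10(I_fwd / I_bwd)

for dphi in (0.0, np.pi / 2, np.pi):
    print(f"phase {dphi:+.2f} rad:  ED_y+MD_z {lateral_ratio_db(dphi, 'MDz'):+6.1f} dB,"
          f"  ED_y+EQ_xy {lateral_ratio_db(dphi, 'EQxy'):+6.1f} dB")
```

With this convention, the lateral power ratio is largest near phase differences of 0 and π (with opposite signs) and crosses zero near ±π/2, consistent with the reversal behaviour described for Figure 5b.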
These results indicate that the V-shaped antenna's D directivity that changes continuously from 0% to 50% crystallinity is mainly the contribution of MD z , and its D directivity that changes continuously from 50% to 100% crystallinity is mainly the contribution of EQ xy . The above analysis shows that the continuous change in radiation direction of V-shaped phase-change antenna with crystallinity is due to the change of complex coefficient of the main multipole moment ED y , MD x , MD z , and EQ xy , including modulus and interference phase difference. Moreover, we also consider the influence of the minor moments on the directivities of interference far-field radiation (see Figure S5). Although the minor moments MQ xx and MQ zz cannot cause significant changes in directivities, it shows a tendency to approach the FEM results. To verify the reliability of the above multipole scattering analysis, we use the calculated multipole scattering coefficients to reconstruct the interference far-field radiation pattern and compare it with the far-field radiation pattern calculated by FEM. For the V-shaped phase-change antenna at a wavelength of 3.6 µ m with crystallinity of 20%, 35%, 50%, 65%, 80%, and 90%, the multipole moments ED y , MD x , MD z , and EQ xy make a major contribution to D directivity, and their modulus normalized coefficients are expressed in the complex coordinate system (Figure 7a), and the corresponding reconstructed interference far-field radiation patterns are shown in Figure 7b. Obviously, the relative change of radiation in (θ: 135°, ϕ: 0°) and (θ: 135°, ϕ: 180°) shows the angular power splitting function of the V-shaped phase-change antenna, which can be reconfigured by the controllable phase transition. However, because multipole moments in other directions are not considered, the reconstructed far-field radiation pattern is different from the far-field radiation pattern calculated by FEM (Figure 3c). Correspondingly, we further consider all the above non-zero multipole moments and express their modulus normalized scattering coefficients in the complex coordinate system (Figure 7c), and the corresponding reconstructed interference far-field radiation patterns are shown in Figure 7d. Obviously, the reconstructed far-field radiation pattern considering all multipole moments is close to the result of the FEM calculation ( Figure 3c). This comparison fully demonstrates the reliability of the above mechanism analysis of multipole scattering. Conclusions The radiation direction of the V-shaped phase-change antenna deflects continuously by 90°with increasing crystallinity. In-depth analysis of multipole decomposition reveals that ED y , MD x , MD z , and EQ xy make the major contributions to the change in D directivity of the V-shaped antenna. In addition, the continuous change in radiation direction of V-shaped phase-change antenna with crystallinity is due to the change in the complex coefficient of the main multipole moment ED y , MD x , MD z , and EQ xy , including the modulus and interference phase difference. In particular, the interference phase differences between MD z and ED y , and between EQ xy and ED y that change with crystallinity cause the radiation angle continuous deflection of V-shaped phase-change antenna. 
The D directivity of the V-shaped phase-change antenna can be monotonically reversed from −12 to 7 dB over the crystallinity range of 20-80%, so the antenna can serve as the basic structural unit of a reconfigurable passive optical angular power splitting device or wavelength multiplexer. The mechanism analysis involving the moduli and interference phase differences of the multipole moments can provide a reference for the design and optimization of phase-change antennas that realize a specific bidirectional scattering power splitter or wavelength multiplexer.
Supplementary Materials: The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/nano12193305/s1, Figure S1: Electric (|E|) and magnetic (|H|) field distributions in the xy-plane of the V-shaped phase-change antenna at 3.6 µm wavelength; Figure S2: Multipole scattering cross sections and directivities of the V-shaped phase-change antenna at the crystallinity of 0% and 100%; Figure S3: Far-field radiation patterns of unit multipole moments; Figure S4: Interference far-field radiation patterns of the unit multipole moments with different phase differences; Figure S5: Directivities of interference far-field radiation according to the complex coefficients of major and minor multipole moments at 3.6 µm wavelength; Figure S6: D directivities of the V-shaped phase-change antenna with different geometric angles.
Author Contributions: Conceptualization, P.T.; methodology, J.X. and S.L.; software, S.L.; investigation, P.T. and J.X.; resources, L.Z. and S.L.; writing-original draft preparation, P.T. and Q.T.; writing-review and editing, P.T. and L.Z.; visualization, P.T. and Q.T.; supervision, L.Z. and Y.Q.; project administration, L.Z.; funding acquisition, P.T., L.Z., and Y.Q. All authors have read and agreed to the published version of the manuscript.
Acknowledgments: The authors acknowledge the support of the Guangdong Provincial Key Laboratory of Information Photonics Technology and thank J.X. and S.L. for their help with the methodology.
Conflicts of Interest: The authors declare no conflict of interest.
Abbreviations The following abbreviations are used in this manuscript:
GST: Ge-Sb-Te
GSS4T1: Ge2Sb2Se4Te1
FEM: finite element method
PML: perfect matching layer
ED: electric dipole
MD: magnetic dipole
EQ: electric quadrupole
MQ: magnetic quadrupole
5,710.6
2022-09-22T00:00:00.000
[ "Physics" ]
Statistical measurement of trees’ similarity Diagnostic theories are fundamental to Information Systems practice and are represented in trees. One way of creating diagnostic trees is by employing independent experts to construct such trees and compare them. However, good measures of similarity to compare diagnostic trees have not been identified. This paper presents an analysis of the suitability of various measures of association to determine the similarity of two diagnostic trees using bootstrap simulations. We find that three measures of association, Goodman and Kruskal’s Lambda, Cohen’s Kappa, and Goodman and Kruskal’s Gamma (J Am Stat Assoc 49(268):732–764, 1954) each behave differently depending on what is inconsistent between the two trees thus providing both measures for assessing alignment between two trees developed by independent experts as well as identifying the causes of the differences. Introduction Diagnostic theories are theories about the appropriate corrective action to take when given a set of observable conditions (Reiter 1987).They are fundamental to Information Systems (IS) practise (Webster and Watson 2002;Rooney and Van den Heuvel 2004;Clauset et al. 2008).For example, when an Information Technology (IT) system fails, it can fail for a myriad of reasons.The tree of symptoms and diagnoses associated with an IT system failure is a diagnostic theory.Similarly, when a new IT product is launched, there can be many reasons why users do not adopt it.Again, that tree of possible causes is a diagnostic theory.Indeed, many expert systems operate based on diagnostic theories.For instance, Mycin (Shortliffe 2012) and other expert systems navigate a decision tree to identify the root cause of a problem.Despite the fact that diagnostic theories are core to IS practise, little attention has been paid in IS research to diagnostic theories.Perhaps this is because diagnostic theories are not explanatory theories like variance or process theories (Webster and Watson 2002), but instead are prescriptive theories-they directly inform decision making.Nevertheless, like all theory, diagnostic theories need to be validated. One way of creating diagnostic theories is to have an expert creating the theory.To validate the diagnostic theory, a second expert in the same area creates another diagnostic theory, and the two are compared.However, good measures for the correspondence of two diagnostic theories are essentially non-existent.This study aims to develop measures useful for comparing two diagnostic theories.Existing measures for trees such as edit-distance are not suitable for diagnostic theories because they are sample size dependent.An edit-distance of 20 is very bad when comparing two trees with 40 nodes each but is not so bad if the two trees have over 1000 nodes.Ratios of edit-distance (e.g., 10% of the tree are different) are also not suitable, because a lack of correspondence near the root of a diagnostic tree is a more severe issue than a lack of correspondence near the base of the tree-an idea a ratio does not capture. 
To address our problem, we performed a set of bootstrap simulations to measure how various statistics change as a hypothetical diagnostic tree deviates from a "true" version.We apply traditional statistical measures in a new way to measure tree similarity.In particular, we transform the tree into a contingency table and employ traditional contingency table statistics to evaluate similarity.Our contribution is the discovery that three measures of association, Goodman and Kruskal's Lambda (λ), Cohen's Kappa (ƙ), and Goodman and Kruskal's Gamma (γ) (Goodman and Kruskal 1954) together provide information useful for assessing the similarity of two diagnostic theories.Each of these three statistics behaves differently depending on what is inconsistent between the two trees thus providing both metrics for assessing alignment between two diagnostic theories developed by different experts as well as identifying the causes of the differences. The paper is constructed in the following manner.First, we present the limitations of previous work.Then, we attempt to address those problems by providing a process for developing good thresholds for the construct validity of diagnostic tree and diagnosing their differences.We conclude with a discussion of diagnosing inter-rater reliability. Diagnostic theory A diagnostic theory is represented by a tree.For instance, Hopp et al. (2007) used a diagnostic tree for evaluating and improving production line performance.A diagnostic tree consists of a root, which corresponds to the problem domain (Geoffrion 1989).The root of the tree is unpacked to represent broad classes of diagnoses.As one traverses down the tree, the classes become narrower until we reach the tree's base, where specific potential solutions are identified.For example, consider Fig. 1 which presents a diagnostic tree to identify why users have low Instagram self-efficacy-i.e., what is it about Instagram they find most hard to use?In this example, the top-level nodes encompass the different dimensions of Instagram skill.Each top-level node, in turn, links to more specific areas a user can experience problems in. Comparison of trees Diagnostic trees are typically built by experts and have certain properties.First, they can have hundreds of nodes, where nodes concerning higher-level concepts are mapped to nodes with greater precision.The nodes then have a parent-child relationship.Second, nodes higher up the tree are more important than those lower in the tree.Nodes lower in the tree are sub-nodes of those higher in the tree.This means that any errors or disagreement in the higher levels propagate to lower levels. One way to validate diagnostic trees is to compare, the similarity of two diagnostic trees, created by two independent experts in the same domain.However, appropriate measures and "good enough" thresholds for demonstrating the similarity of two diagnostic trees are unknown.In addition, good measures for identifying problematic nodes in the tree are undiscovered.As an example, if two experts disagree on the mapping of two nodes, we would want to know whether the experts think that the nodes belong to different parents, or whether the experts disagree on the precision of the node in the hierarchy.In effect, measures akin to the modification indices of variance-based structural equation models need to be formulated (Gefen et al. 2000). 
The remainder of this section reviews the principal existing methods of measuring tree similarity, which are edit-distance and statistics-based.We demonstrate the limitations of both methods and identify elements that can provide a foundation for creating a threshold for diagnostic trees. Edit-distance based techniques Edit-distance is a poor general comparator for diagnostic trees for several reasons.One is that existing algorithms do not take into account that nodes in the tree are not equally important (Jiang et al. 1995;Weinberg and Last 2017).To illustrate, consider Fig. 2, where Trees B and C are each inconsistent with Tree A in exactly one way.Tree B swaps nodes 2 and 5, while Tree C swaps nodes 2 and 3.The typical edit-distance algorithm treats both inconsistencies equally.However, Tree B suggests the expert considered node 5 as a parent to node 2, while the expert for Tree B considered nodes 2 and 3 to be completely different nodes from the expert creating Tree A. The difference represented in Tree C is more serious than Tree B, as the expert (1) considered the nodes to effectively be two different nodes (as opposed to different levels in the same node), and (2) the issue occurred at a relatively high level in the tree, suggesting there are further problems lower in the tree that were undiscovered.In comparing trees, a technique that identifies such differences is necessary. Also, edit-distance measures are often sample size sensitive.Clearly if there are two trees, each having 50 nodes, where 10 changes are required to transform one into the other, this is different from two trees, each having 500 nodes where only 10 changes are required.In statistical thinking, we want to compare the statistic to some probability distribution to standardize results according to "sample size".We then calculate confidence intervals or p values of significance, where the threshold (typically 0.05 or 0.01) is sample size independent.The tree edit-distance literature has no equivalent analogue. Finally, as a corollary to the above two points, we would like measures of tree similarity to systematically identify where the differences are between the trees.Edit-distance algorithms do this for individual nodes-they identify that to transform one tree into another, these are the nodes that must be changed and how (Grassi et al. 2015;Green and Ricca 2015).However, they do not, for example, tell us that most errors occur in the top of the tree (very bad) or at the bottom of the tree (not so serious)-or tell us that most of the errors are occurring in the children of node 1. Statistics based techniques Existing statistics-based methods are not suitable for several reasons.One is that existing measures and statistics are employed for generally "flat" question structures, and not the hierarchical structure of trees.For example, the traditional factor analytic concepts of convergent and divergent validity are assessed with correlations (Sartori 2006;Sartori and Pasini 2007;Hair et al. 
1998). However, in diagnostic trees, nodes have a parent-child relationship. If the nodes behave correctly, the parent correlates highly with at least one child but is unlikely to correlate with all. For example, if a respondent answers that she is dissatisfied with food quality, the respondent might be unhappy about the way the food was prepared but be satisfied with portion size. Factor loadings do not take this into account. The inferential statistics tradition in several academic disciplines such as IS is to employ thresholds to evaluate whether two things are the same (Boudreau et al. 2001). For example, we regularly consider a p value under 0.05 to be "good enough". In cases where thresholds are unknown, research is done to identify them. As an example, Hu and Bentler (1999) examine the adequacy of the "rules of thumb" of conventional cutoff criteria and propose new alternatives for various fit indexes in structural equation models. However, such techniques have not been applied to trees. In this study, we apply traditional statistical measures in a new way to measure tree similarity. Essentially, we map the tree into a contingency table and employ traditional contingency table statistics to evaluate similarity. The three measures used are Goodman and Kruskal's Lambda (λ), Cohen's Kappa (ƙ), and Goodman and Kruskal's Gamma (γ) (Goodman and Kruskal 1954). Lambda (λ) measures the proportional reduction in the error of predicting one classification when the other classification is known (Goodman and Kruskal 1954). We chose Lambda (λ) because it has a meaning akin to r in a regression (Anderson and Gerbing 1988), i.e., Lambda (λ) is a measure of the strength of association in a contingency table (Everitt 1992; Goodman and Kruskal 1963). We chose Kappa (ƙ) because it is the observed proportion of agreement between the assigners after chance agreement is removed from consideration (Cohen 1968). Kappa (ƙ) is widely used as a measure of association for contingency tables (Hambleton and Zaal 2013; Rudick et al. 2013; Sengupta and Te'eni 1993; You et al. 2012). In addition, Landis and Koch (1977) proposed the English-language meanings of Kappa (ƙ) thresholds featured in Table 1. We chose Gamma (γ) because it is explicitly designed for data with ordinal values (Higham and Higham 2019; Nelson 1984), and hierarchies are ordered data structures. Goodman and Kruskal (1954, p. 749) interpret Gamma (γ) as how much more probable it is to get like than unlike orders in the two classifications when two individuals are chosen at random from the population (Davis 1967; Göktaş and İşçi 2011). The value of the Gamma (γ) coefficient ranges from −1 to +1, where the latter value indicates perfect agreement between the two classifications (Baker 1974). Foundation for threshold building To build suitable thresholds for comparing and assessing diagnostic trees, we first generate a hypothetical "perfect" tree. We then make a copy of the tree, systematically change the copy, and measure the statistic. We make a second change on the tree and measure the statistic again, repeating the process many times to get a good appreciation for how the statistics vary as two trees diverge. We then do the same with other "perfect" trees of various sizes. There is one constraint on modifications: each parent cannot have just one child node, because with only one child node there is no "branching". Below, we formally define the terms employed in the remainder of the paper.
Level is the distance of a node from the root.A node is on the n + 1 level of its parent node.As an example, a node located on the 3rd level is three levels below the root node and its parent is on the 2nd level.Levels closer to the root are considered higher levels and levels further from the root are considered lower levels.Root is the node with no parent.The root of the tree is on level 0. Degree given two nodes a and b of level m and n such that a is the ancestor of b or b is the ancestor of a.The degree of the pair d(a,b) = |m − n|.Descendant is the nth degree child of an ancestor node.As an example, 3rd degree descendant of a node is located three levels below its ancestor node.A first-degree descendant of a node is also called a child node.Top-level are first-degree descendants of the root.The top level of the tree has a level of 1. Ancestor an ancestor is the nth degree parent of a descendant node where n > 0. As an example, the ancestor of 4th degree is located four levels above its descendant node.A first-degree ancestor of a node is also called a parent node.Relative Given two nodes, a and b, either a and b share an ancestor which is not the root, or a is an ancestor of b, or b is an ancestor of a. Non-relative is any node whose only ancestor to another node is the root.Modification given two trees, one of the differences between the two trees.Movement Given a tree T with nodes labelled from 1 to n.A movement M(a,b) where a and b are nodes in T such that 0 ≤ a ≤ n, 0 ≤ b ≤ n, a <> b and b has descendants and a is not a first-degree descendant of b, is defined as: T′ such that a is the child of b, i.e., a is a first-degree descendant of b.There are three types of movements: • Type 1 movement is a movement such that a in T is a childless node. • Type 2 movement is a movement such that in T, a is a parent node.In T', all descendants of a in T become descendants of a's parent.• Type 3 movement with child(ren) is a movement such that in T, a is a parent node.In T′ all descendants of a in T have the same parents. Direction of movement given two nodes a and b in tree T, a can move in three possible directions to become a child of b in T′, (1) within relatives (up or down), (2) within its level (left or right), or (3) both within relatives and levels.Movements can occur with any kind of node (with or without descendant). • Hierarchy movement is a direction movement M(a,b) where a and b are nodes in T and a is a relative of b.In T′ a becomes a first-degree descendant of b.If b is descendant of a in T, then a hierarchy movement type 3 is not possible, because effectively, nothing happens to b.A hierarchy movement is effectively a movement up or down the tree.We care about hierarchy movements, because these suggest a certain type of error.In a diagnostic tree, the top-level nodes are unpacked and their descendants are mapped.This type of error indicates that experts disagree on the mapping of their direct relative nodes.As an example, consider Fig. 3 which presents two trees from experts 1 and 2. 
The experts disagree on the parent of node 6 as expert 1 has mapped node 6 to node 2, while expert 2 has mapped node 6 to node 5.In addition, expert 2 sees node 6 on a lower level than expert 1, as expert 1 has mapped node 6 to node 2 which is on a higher level.In this example, as node 6 (a) moved to become a child of node 5 (b), and does not have any descendants, we call this a type 1 hierarchy movement.• Level movement is a direction movement M(a,b) a is in the same level as a child of b, is defined as: T' such that a is the child of b.We care about level movements, because these suggest that while experts agree on the level of the node, they disagree on the "family" of nodes the question relates to.As an example, consider Fig. 4a where in Tree 1, node 6 is on the same level as nodes 7, 8, and 9. Figure 4b presents level movement type 1 for node 6 (a) as it moved from node 2 to node 3 (b).The level of node 6 has not changed, however, experts disagree on the direct parent node.This indicates that the experts are confused between nodes 2 and 3. • Diagonal movement is a direction movement M(a,b) that is both a hierarchy and level movement.Diagonal movements suggest two experts thought of a node in very different ways, as they disagree on both the level of the mapping and their direct relative nodes.We consider swap distinctive from movements because this reflects a single cognitive difference between two experts rather than two or more cognitive differences.Similar to direction movements, there are three types of swaps, which are hierarchy, level, and diagonal.• Hierarchy swap is where a is a relative of b in T. For example, consider Fig. 5, which presents a hierarchy swap between nodes 11 and 2. In Fig. 5b, node 11 is closer to the root (node 1), hence it becomes the ancestor of node 2. This shows that the experts disagree on the mapping of the direct relative nodes of nodes 2 and 11. • Level swap is a swap where a and b in T are located in the same level and if both a and b do not have any descendants, they must not share a first-degree ancestor (direct parent) as tree T' will be the same as T. As an example, in Fig. 6, nodes 2 and 3 are on the same level and have been swapped.This shows that while the experts agree on the level of the nodes, they disagree on the mapping of the parent node.• Diagonal swap is a swap where a and b in T are located on different levels and are not relatives.Diagonal swaps suggest the two experts are confused with two nodes in very different ways, as they disagree on both the level of the mapping and their direct relatives. In addition, we want to perform analyses comparing the result of moving nodes at higher levels of the tree versus moving nodes at lower levels of the tree.Changing nodes at higher levels of the tree should have a greater impact, because this suggests problems with more important nodes.As an example, consider Fig. 7, which presents Trees A, B and C. In Trees A and B, experts disagree in the mapping of node 3 and in Trees A and C the experts disagree on the mapping of node 14.The disagreement between Trees A and B is more serious than the disagreement between Trees A and C. 
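To make these definitions concrete, the following minimal sketch represents a tree as a child-to-parent dictionary (our own representation, not one prescribed by the paper) and implements a Type 1 movement and a general swap. The example reproduces the node-6-to-node-5 hierarchy movement discussed for Fig. 3 and a level swap of nodes 2 and 3 as in Fig. 6; the helper names children, type1_move, and swap are ours.

```python
def children(tree, node):
    """All first-degree descendants of `node` in a child -> parent map."""
    return [c for c, p in tree.items() if p == node]

def type1_move(tree, a, b):
    """Type 1 movement M(a, b): a childless node a is re-parented to node b."""
    assert not children(tree, a) and a != b and tree[a] != b
    new_tree = dict(tree)
    new_tree[a] = b
    return new_tree

def swap(tree, a, b):
    """Swap nodes a and b: each takes over the other's parent and children.
    Covers hierarchy, level, and diagonal swaps depending on where a and b sit."""
    relabel = {a: b, b: a}
    return {relabel.get(c, c): relabel.get(p, p) for c, p in tree.items()}

# 3-level 3-branch example tree (root = node 1), in the style of Fig. 3 and Table 3.
tree = {2: 1, 3: 1, 4: 1,
        5: 2, 6: 2, 7: 2,
        8: 3, 9: 3, 10: 3,
        11: 4, 12: 4, 13: 4}

moved = type1_move(tree, 6, 5)     # hierarchy movement: node 6 becomes a child of node 5
swapped = swap(tree, 2, 3)         # level swap: nodes 2 and 3 exchange positions
print("parent of 6 after movement:", moved[6])                      # -> 5
print("children of 2 after swap:", sorted(children(swapped, 2)))    # -> [8, 9, 10]
```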
To simulate these conditions, we perform analyses where we restrict the levels where movements and swaps occur.For every perfect tree with levels 0,…,n, we introduce a variable x where 2 < x < n.Using the perfect tree as a base, we perform a set of swaps and movements between levels 1 and x.We then use the perfect tree as a base again, and perform a second set of swaps and movements between levels x and n and we compare the difference in scores.To distinguish the two, the swaps and movements performed between levels 1 and x are called movements and swaps on the "top" of the tree, and those between x and n as on the "bottom" of the tree. Insertion and deletion Finally, in some cases, one expert may not choose to map all pre-determined nodes and the two trees could have different numbers of nodes.Hence, we assess the impact of an insertion/deletion of a node in a tree.As deletion is the reverse of insertion, we only assess the impact of insertions.We consider two types of insertion as there are only two ways to insert a node to a tree, (1) insertion in levels where there is an increase in the number of branches per node and (2) insertion in a hierarchy where there is an increase in the number of levels. Threshold building and diagnostic process For the diagnosing process, we systematically identified all the ways a node can move in a tree hierarchy which has 1 to n-levels and 1 to m branches per node.We generate 12 (i.e., 3 × 4) perfect trees to test.Each tree has between three and five (i.e., three possibilities) branches per node and three to six levels (i.e., four possibilities) as tabulated in Table 3.We did not perform the simulation on trees with more than 200 nodes, because the computations required to simulate these trees become exponentially complex (Goldreich 2011).Our analysis of smaller trees suggests that statistics are similar regardless of the size of the tree.In addition, we drop the 3-branch 3-level tree as the number of nodes is too small to run a simulation for 100 rounds.Hence, six trees remain for the diagnosing process.These are identified as the bold cells in Table 2. To diagnose each type of disagreement among the experts, each perfect tree is compared to a series of 27 possible modifications.Each modification is performed 100 times on each perfect tree.The total number of tests is therefore 16,200 (27 × 6 × 100).These modifications are: • Nine possible direction movements comprising a combination of a movement type (Types 1-3) and direction (level, hierarchy, diagonal).• Three possible movements where we keep the type constant, and allow random directions.• Three possible movements where we keep the direction constant, and allow random types.• Three possible swaps (level, hierarchy, diagonal). • Eight top and bottom movements, where we restrict one half of a tree.Consider an example with tree T which has five levels.We first limit movements and swaps for only levels two and three and then for only levels four and five.It should be noted that by definition, the scores on the top half of the tree will change more than on the bottom half of the tree, given there are fewer nodes on the top half, and thus any change will have a greater effect.However, we wanted to know what the magnitude of the difference would be.• One random movement/swap where a random change (either one of the 12 movements or 3 swaps) is performed.Each change is equally likely.The aim is to compare the results and evaluate how the statistics change and identify a suitable threshold. 
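The "perfect" trees used in these simulations are simple to generate programmatically. The helper below builds a breadth-first-numbered perfect tree as a child-to-parent map, interpreting an "L-level B-branch" tree the way Table 3 does (root on level 0, so an L-level tree has L − 1 generations of children); under that reading, the printed node counts for the six retained trees all stay under the 200-node limit mentioned above. The function name perfect_tree is ours.

```python
def perfect_tree(branches, levels):
    """Child -> parent map of a perfect diagnostic tree.
    `levels` counts the root level, as in the paper's Table 3 (root = node 1)."""
    tree, frontier, next_id = {}, [1], 2
    for _ in range(levels - 1):
        next_frontier = []
        for parent in frontier:
            for _ in range(branches):
                tree[next_id] = parent
                next_frontier.append(next_id)
                next_id += 1
        frontier = next_frontier
    return tree

for branches, levels in [(3, 4), (3, 5), (4, 3), (4, 4), (5, 3), (5, 4)]:
    n_nodes = len(perfect_tree(branches, levels)) + 1        # +1 for the root
    print(f"{levels}-level {branches}-branch tree: {n_nodes} nodes")
```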
Finally, for the insertion process, we assess our trees by first creating a perfect tree, T. Next, we make a copy of the tree, as T'.Then for each type of insertion, we randomly add one node to T and map it to a node.Contingency table analyses are unable to be performed to compare two different sample sizes.To address this, for every missing node in tree T, a node is represented in T' in the same location with a number not found in T'.We repeat this 20 times.As an example, consider a 3-level 3-branch tree as illustrated in Fig. 8, node 14 is in tree T and mapped to node 1. Node 14 does not exist in tree T', hence, for tree T', we insert the dummy node 100 to represent node 14 of tree T. As this increases the number of branches per node and not the number of levels, we consider this a level insertion. Data collection Each variation of the tree is represented in a contingency table as follows.First, every node is given a number from 1 to n. 1 is the root node.The tree is then translated into a twocolumn table.The first column denotes the parent node, and the second denotes the child.Table 3 presents a 3-level 3-branch tree transformed into two columns.As seen, in Table 3, there are 12 child nodes and each parent node has 3 children.Each row represents a child and a parent.As an example, row 11 shows that child node 11 belongs to parent node 4. Simulation analysis The perfect tree is placed alongside the modified tree and a statistical comparison between the two is performed.Each pair of trees is compared on three statistics, Goodman and 1 3 Kruskal's Lambda (λ), Cohen's Kappa (ƙ), and Goodman and Kruskal's Gamma (γ) (Goodman and Kruskal 1954).Recall that we analyse six possible perfect trees varying in number of levels and branches.Next, for the first 100 runs of each type of movement or swap, the six trees are transferred into a table, each child and parent is combined into an individual column and the means and standard deviations of the three statistics are calculated.In addition, the mean change (i.e., how much each statistic changes from one run to the next) and standard deviations of the mean change are calculated for the first 100 runs of each movement and swap (a total of 98 mean changes).Lambda (λ), Kappa (ƙ), and Gamma (γ) of the 100 rounds for six trees are recorded in each column and a paired sample t-test for each pair of the measures is calculated.Finally, for each type of insertion process, we calculate Lambda (λ), Kappa (ƙ), and Gamma (γ) of the 20 rounds. Results To build suitable thresholds for comparing and assessing diagnostic trees, we compare each of our hypothetical "perfect" trees to the modified tree and measured the statistic, repeating this process many times.Our results demonstrate Lambda (λ), Kappa (ƙ), and Gamma (γ) change at different rates depending on the kind of movement and swap performed.Table 4 presents a summary of these changes.There are several insights for each movement or swap, which we discuss below. Movements There are several insights for each direction or type of movement.Table 5 presents the means, standard deviations, and Cohen's distance for the first 100 runs of each movement for the six trees.Cohen's distance provides a measure of the strength of the difference in a t-test (Cohen 1988).In addition, Table 6 presents the mean changes in the measures for each directional movement. 
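Before turning to the detailed results, one full simulation round can be expressed compactly: flatten each tree into the two-column (parent, child) form of Table 3, pair the parent assigned to each child across the two trees, and score the paired assignments. The sketch below assumes the perfect_tree and swap helpers from the earlier sketches are in scope; pairing rows on the child label is one reading of the comparison procedure, the asymmetric variant of Lambda is assumed (the paper does not state which variant is used), and Kappa is taken from scikit-learn's cohen_kappa_score.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import cohen_kappa_score

def gk_lambda(x, y):
    """Asymmetric Goodman-Kruskal Lambda: proportional reduction in the error of
    predicting y once x is known (assumed variant; the symmetric form averages both)."""
    table = pd.crosstab(pd.Series(x), pd.Series(y)).to_numpy()
    n = table.sum()
    errors_without_x = n - table.sum(axis=0).max()   # always guess the modal y
    errors_with_x = n - table.max(axis=1).sum()      # guess the modal y within each x
    return (errors_without_x - errors_with_x) / errors_without_x

def gk_gamma(x, y):
    """Goodman-Kruskal Gamma: (concordant - discordant) / (concordant + discordant)
    over all pairs, treating node labels as ordinal and ignoring tied pairs."""
    x, y = np.asarray(x), np.asarray(y)
    concordant = discordant = 0
    for i in range(len(x) - 1):
        s = np.sign(x[i] - x[i + 1:]) * np.sign(y[i] - y[i + 1:])
        concordant += int(np.sum(s > 0))
        discordant += int(np.sum(s < 0))
    return (concordant - discordant) / (concordant + discordant)

def to_table(tree):
    """Two-column (parent, child) representation of a child -> parent map (Table 3 style)."""
    return pd.DataFrame(sorted((p, c) for c, p in tree.items()),
                        columns=["parent", "child"])

def compare_trees(tree_a, tree_b):
    """Pair the parent assigned to each child in the two trees and score the pairing."""
    merged = to_table(tree_a).merge(to_table(tree_b), on="child", suffixes=("_a", "_b"))
    pa, pb = merged["parent_a"].tolist(), merged["parent_b"].tolist()
    return {"lambda": round(float(gk_lambda(pa, pb)), 3),
            "kappa": round(float(cohen_kappa_score(pa, pb)), 3),
            "gamma": round(float(gk_gamma(pa, pb)), 3)}

# One simulation round: a 4-level 3-branch perfect tree versus a copy with one level swap.
perfect = perfect_tree(3, 4)          # helper from the earlier sketch
modified = swap(perfect, 2, 3)        # helper from the earlier sketch
print(compare_trees(perfect, modified))
# The full simulation repeats such modifications 100 times per tree and records the
# mean, standard deviation, and mean change of each statistic across the runs.
```

In this toy round, the pure swap leaves Lambda at 1.0 while Kappa and Gamma fall, in line with the finding reported below that Lambda does not change in swaps.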
Results indicate that for all hierarchy movement types, Gamma (γ) decreases more dramatically than the other two measures.In addition, the mean for Gamma (γ) is lower than the other two measures throughout all types of hierarchy movements.As an example, in a 4-level 3-branch diagnostic tree as illustrated in Fig. 9a, Gamma (γ) in run 20 drops from 0.978 to 0.683 in hierarchy movements while Kappa (ƙ) drops from 0.966 to 0.839.Paired sample t-tests between Gamma (γ) and Kappa (ƙ) (the next lowest measure) are all statistically significant. In level movements, results indicate the mean for Kappa (ƙ) for the six trees is lower than the other two measures.In addition, Kappa (ƙ) decreases at the fastest rate of all three measures.All changes are statistically significant when Kappa (ƙ) is compared to Gamma (γ), the next lowest measure.As an example, in a 4-level 3-branch diagnostic tree as presented in Fig. 9b, the mean change for Kappa (ƙ) is 0.0036, while Gamma (γ) is only 0.0016 in level movement.In addition, in level movements, for a 4-level 3-branch diagnostic tree, Kappa (ƙ) in run 20, drops from 0.991 to 0.635, while Gamma (γ) drops from 0.999 to 0.761. In diagonal movements, Lambda (λ) decreases at a faster rate than for any other movement as shown in Table 6.As an example, in a 4-level 3-branch diagnostic tree, Lambda (λ) in diagonal movements, in run 20, drops from 0.982 to 0.77, while in level movement it drops from 0.9871 to 0.8423 and in hierarchy movements it drops from 0.991 to 0.866.We ran a paired sample t-test on six different diagnostic trees to compare the raw scores of Lambda (λ) with the next lowest measure (Kappa (ƙ) or Gamma (γ)) for different diagnostic trees.The results for each pair was significant which indicates that the measures change at different rates. Finally, as shown in Table 5 in type movements, Lambda (λ) is more sensitive to type 2 movements; the mean for Lambda (λ) is lower compared to other movement types (1 and 3).Type 2 movements consist of two steps, (1) a move of the parent node and, (2) a move of the child nodes to the former parent's parent node.These two steps have a bigger impact on Lambda (λ) than other measures, as more than one node is impacted. Swaps Our insights, which are shown in Tables 7 and 8 concerning swaps are as follows: • Lambda (λ) does not change in swaps, as both the mean and mean change are zero. • For hierarchy swaps, the mean of Gamma (γ) is lower than Kappa (ƙ), and mean changes for Gamma (γ) are higher than the mean difference for Kappa (ƙ), which indicates that Gamma (γ) drops faster than Kappa (ƙ).In the example shown in Fig. 10a, in a 4-level 3-branch tree, Gamma (γ) in run 20, for hierarchy swap drops from 0.9092 to 0.231, while Kappa (ƙ) drops from 0.974 to 0.520.The difference between Kappa (ƙ) and Gamma (γ) is statistically significant.This is consistent with Kappa (ƙ) and Gamma's (γ) behaviour for hierarchy movements.• In level swaps, Kappa (ƙ) tends to decrease faster than Gamma (γ) as the mean change of Kappa (ƙ) is higher than Gamma (γ).As an example, as shown in Fig. 10b, in a 4-level 3-branch tree, Kappa (ƙ) drops from 0.949 to 0.72 while Gamma (γ) drops from 0.992 Fig. 9 The difference between Kappa (ƙ) and Gamma (γ) in level and hierarchy movements for a 4-level 3-branch tree to 0.889 in level swaps.The difference between Kappa (ƙ) and Gamma (γ) is statistically significant.This is consistent with Kappa (ƙ) and Gamma's (γ) behaviour for level movements. 
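The per-run summaries quoted in these results (means, standard deviations, mean changes, and paired t-tests between measures) are straightforward to compute once the per-round scores are stored. The sketch below uses synthetic placeholder score series whose decay rates only loosely mimic the reported level-movement behaviour (Kappa dropping roughly twice as fast as Gamma); in the real simulation the arrays would be filled from compare_trees after every modification.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
# Placeholder per-round scores for one tree and one movement type (100 rounds each).
kappa_runs = np.clip(1.0 - 0.004 * np.arange(100) + rng.normal(0, 0.01, 100), 0.0, 1.0)
gamma_runs = np.clip(1.0 - 0.002 * np.arange(100) + rng.normal(0, 0.01, 100), 0.0, 1.0)

def summarize(scores):
    """Mean, SD, mean per-round change, and SD of the change for one statistic."""
    change = np.diff(scores)
    return {"mean": round(float(scores.mean()), 4),
            "sd": round(float(scores.std(ddof=1)), 4),
            "mean_change": round(float(change.mean()), 4),
            "sd_change": round(float(change.std(ddof=1)), 4)}

print("kappa:", summarize(kappa_runs))
print("gamma:", summarize(gamma_runs))
t_stat, p_value = ttest_rel(kappa_runs, gamma_runs)   # paired comparison of raw scores
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3g}")
```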
Top and bottom movements and swaps Given that the sample size in a top movement/swap is always smaller than in the equivalent bottom movement/swap, our results unsurprisingly indicated that the measures decrease at a faster rate in top movements and swaps than in bottom movements and swaps. As an example, Table 9 demonstrates the mean changes and standard deviations of the top, bottom, and general hierarchy and level movements and swaps for a 4-level 3-branch diagnostic tree. As presented in Table 9, Gamma (γ) decreases the fastest in top hierarchy movements and swaps, as its mean change is higher. In contrast, Kappa (ƙ) decreases the fastest in top-level movements and swaps. Insertion process in diagnostic trees Depending on the type of insertion, Lambda (λ), Kappa (ƙ), and Gamma (γ) change differently. Consider Table 10, which presents the results of level and hierarchy insertion for a 3-level tree with 3-5 branches. In total, 20 nodes were added to trees T and T'. In insertion to levels, Kappa (ƙ) is lower than Gamma (γ), while in insertion to the hierarchy, Gamma (γ) is lower than Kappa (ƙ). Lambda (λ) drops faster in insertion to the hierarchy than to a level. The difference between Kappa (ƙ) and Gamma (γ) is statistically significant. This is consistent with Kappa (ƙ) and Gamma's (γ) behaviour for hierarchy and level movements and swaps. In all three cases, Gamma (γ) is lower than Kappa (ƙ) in hierarchy changes, and Kappa (ƙ) is lower than Gamma (γ) in level changes. Threshold properties for empirical use Many academic disciplines employ threshold values for "satisfactory" levels of inter-rater reliability. For example, the typical threshold for both Cronbach's alpha and Cohen's Kappa (ƙ) is 0.7 (Nunnally 1978; Watkins and Pacheco 2000). We believe suitable thresholds for comparing two diagnostic trees are Lambda (λ) > 0.7, Kappa (ƙ) > 0.4, and Gamma (γ) > 0.3. These thresholds are established for the following reasons: • It is important to consider all three measures, because each measure signals different kinds of issues. Changes in Lambda (λ) signify movements are occurring, changes in Gamma (γ) suggest hierarchical inconsistencies, while changes in Kappa (ƙ) suggest level inconsistencies. • The three thresholds combined suggest that, regardless of sample size, two trees that score above threshold differ in no more than 30% of their nodes. This has been assessed by testing the thresholds in random movements and swaps. The question remains as to what happens and how efficient the measures are if (1) only two of the three thresholds are used and (2) small changes are made to the thresholds. We first consider the case where only two of the three thresholds are used. If only two thresholds are applied, it is possible for two trees to meet the thresholds when they have substantial differences from each other. For example, if only λ > 0.7 and ƙ > 0.4 are required, then it is possible for our 4-level trees to differ by up to 60% of nodes.
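One way to operationalize these thresholds is sketched below: all three cutoffs are checked, and if the trees fail, the lowest raw measure is reported as a hint about the dominant type of disagreement (the heuristic the worked example later in the paper also uses). The hint strings merely restate the bullet above, the example scores are hypothetical rather than taken from the paper's tables, and the names THRESHOLDS, HINTS, and assess are ours.

```python
THRESHOLDS = {"lambda": 0.7, "kappa": 0.4, "gamma": 0.3}   # ~30% modification tolerance

HINTS = {
    "lambda": "movements are occurring (Lambda is insensitive to pure swaps)",
    "kappa": "level inconsistencies: same level, different parent",
    "gamma": "hierarchical inconsistencies: nodes placed at different depths",
}

def assess(scores, thresholds=THRESHOLDS):
    """Apply all three thresholds; if the trees fail, report the lowest raw measure
    as a hint about the dominant type of disagreement."""
    if all(scores[name] > cutoff for name, cutoff in thresholds.items()):
        return "Similar enough: all three measures exceed their thresholds."
    lowest = min(scores, key=scores.get)
    return f"Not similar enough; lowest measure is {lowest}: {HINTS[lowest]}"

# Hypothetical scores for two expert trees before and after revising problem nodes:
print(assess({"lambda": 0.62, "kappa": 0.38, "gamma": 0.55}))
print(assess({"lambda": 0.82, "kappa": 0.71, "gamma": 0.64}))
```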
Table 12 presents what other thresholds mean when comparing two trees.As an example, consider a 0.1 change of Lambda (λ) from 0.7 to either Lambda (λ) > 0.6 or Lambda (λ) > 0.8 while holding Kappa (ƙ) > 0.4 and Gamma (γ) > 0.3.We count the number of modifications for each tree and calculate the percentages.Table 12 presents the results of the impact of such 0.1 sized changes with each threshold and the standard deviation of the percentages demonstrates the accuracy of the thresholds for identifying the estimates. Results indicated that Kappa (ƙ) and Gamma (γ) are especially sensitive to changes to their threshold values.As an example, when only Gamma (γ) drops from 0.3 to 0.2, the standard deviation for the percentage of modifications is 13.82 while an increase from 0.3 to 0.4, the standard deviation is 1.21.However, when lambda (λ) drops from 0.7 to 0.6, the standard deviation of the percentage of modifications is 5.83 and with an increase from 0.7 to 0.8 the standard deviation is 5.14.There are several reasons.Firstly, the thresholds are tested in randomised movements and swaps, as Lambda (λ) does not change in swaps, hence small changes to Lambda (λ) would be less dramatic.Secondly, in random movements either or both levels and the hierarchy of nodes are affected, which makes each measure more sensitive to small changes, as each measure not only changes with both movements and swaps but changes more dramatically in swaps.Hence, Kappa (ƙ) and Gamma (γ) must be simultaneously adjusted to find suitable thresholds. In addition, different combinations of measures can identify different levels of modification between two trees.Table 13 presents four thresholds for when the percentage of modifications are at 15, 20, 25, and 30% between two trees.As an example, a threshold of λ > 0.75, ƙ > 0.5, and γ > 0.4 can identify an estimate of 25% of modifications between two diagnostic trees, while a threshold of λ > 0.85, ƙ > 0.7, γ > 0.5 is suitable for identifying an estimate of 15% of modifications of two trees.In addition, Table 13 presents less strict thresholds such as a λ > 0.65, ƙ > 0.35, and γ > 0.25 which can identify an estimate of 40% of modifications between two diagnostic trees. An example of assessing the similarity of diagnostic trees As an example, consider a top-level node "Other social networks" from a perceived Instagram skill diagnostic tree presented in Fig. 
1.Table 14 presents the two trees created by the experts (expert 1 and 2) and its transformation to tables and Table 15 presents the initial results of the measures.We have set the thresholds at 30% which suggests there is no more than 30% modification across the two trees.The actual scores for the measures are 0.531 for Lambda (λ), 0.474 for Kappa (ƙ), and 0.673 for Gamma (γ).The measures provide several insights.Firstly, the trees are not similar enough, and the problematic nodes will need to be edited accordingly.Secondly, Kappa (ƙ) being the lowest measure suggests the principal problem is the number of disagreements of mapping of nodes of the same level.Comparing across trees, we can see that the experts disagree on the parent nodes of nodes 12, 13, 14, 15, 17, 18, and 20 which are all located on the same level.Assume we correct this problem so that experts agree on the mapping of those nodes, the statistics become 0.85 for Lambda (λ), 0.76 for Kappa (ƙ), and 0.76 for Gamma (γ) which indicates a strong inter-rater agreement.At present, the only alternative to employing our measures is the use of edit-distance algorithms.As previously mentioned, these algorithms are neither sensitive to sample size nor to the various kinds of differences that can occur in two trees. To illustrate, in our "Other social networks" Instagram efficacy instrument, the editdistance of the two trees would have been 12 or 6% (i.e., edit-distance/total number of nodes).Observe that while this provides some measure of the non-correspondence between the two trees, it doesn't provide any useful diagnostic information.Furthermore, the reported level of difference-6% does not appear too severe.In contrast, our statistical measures identified a systematic (level) difference across the two trees.If we were to correct the errors across nodes 12-18 and 20, the edit-distance jumps to 6 or 3%. Contrast this example against another hypothetical one where we had the same number of nodes, but the problem was with the mapping of the nodes of the same level of the trees, as the experts disagree with the mapping of the parent/child nodes. Edit-distance provides exactly the same statistics, but our measures provide additional information as each behaves differently based on the types of modifications occurred in the trees.Thus, Kappa (ƙ) would decrease faster in changes within the same level, such as after 20 modifications Kappa (ƙ) would drop from 0.991 to 0.635.Thus, as can be seen, our measures provide substantially more information than edit-distance and allow us to identify and target the principal problem first.Fixing the principal problem allowing us to quickly achieve satisfactory inter-rater agreement. Limitations and conclusion Our analysis reveals several limitations with using Lambda (λ), Gamma (γ), and Kappa (ƙ) as measures of trees.First, the thresholds are inapplicable once the number of branches is greater than seven.To demonstrate this limitation, consider different thresholds for 3-level trees with 3-10 branches per node as presented in Table 16.Once there are eight or more branches, the measures are less effective for providing a threshold.As an example, in a 3-level 9-branch tree, the thresholds misrepresent the number of modifications, such as when the thresholds are set to identify 20% of the modifications, they only identify 10%, hence underestimating the number of modifications. 
In addition, similar to other studies (van der Ark and van Aert 2015), we found Gamma (γ) too unstable to provide a reasonable threshold for small samples sizes such as trees with a total number of nodes below 25.However, for trees with a total number of nodes above 25, Gamma (γ) appears more stable.The point of trees is to facilitate choice between hundreds of options.Thus, for the purposes of assessing trees' similarity, Gamma (γ) remains a reasonable measure. Furthermore, due to the exponentially complex computations required, we were unable to run simulations of trees with a total number of nodes above 200, hence could not make any conclusions.However, in our analysis, the measures have been fairly consistent as the growth of trees has been linear as the number of nodes per tree increased.Thus, the threshold results will most likely stay the same in trees with a total number of nodes above 200. Conclusion This study presents an analysis of the use Lambda (λ), Gamma (γ), and Kappa (ƙ) as measures of the similarity of diagnostic trees and tools for diagnosing their differences.To build suitable thresholds for comparing and assessing diagnostic trees, we first generated a hypothetical "perfect" tree.We then made a copy of the tree and systematically modified the tree.We created two general types of modifications, movements and swaps. We repeated the modifications many times and did this for other "perfect" trees of various sizes.We found that: • Gamma (γ) is useful for identifying disagreements with the hierarchy of the nodes. • Kappa (ƙ) is useful for identifying disagreements on the mapping of nodes of the same level.• Lambda (λ) is useful for determining two things.The first is whether the principal problem is disagreement among single nodes (type 2 movements), which indicates that while experts agree on the grouping of child nodes, they disagree on the parent of the child nodes.The second is that a high Lambda (λ) concurrent with a low Kappa (ƙ) or Gamma (γ) is useful to detect swaps. We then proposed thresholds for various levels of inter-rater reliability, as an example, a threshold for when Lambda (λ) > 0.7, Kappa (ƙ) > 0.4, and Gamma (γ) > 0.3, suggests there is no more than 30% modification between two trees.This work is particularly useful for assessing the node and content validity of two diagnostic trees.As future research, we hope to explore and evaluate diagnostic trees in several areas.One, very little research has been done on measuring other types of validities for diagnostic trees.For example, we do not yet have clear techniques for assessing the nomological validity of diagnostic trees.Two, we intend to compare several popular measures used to compare trees with our statistical method to further demonstrate the use of this study's method. 
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
10,015.6
2020-01-03T00:00:00.000
[ "Computer Science", "Mathematics" ]